X-LINUX-AI-CV OpenSTLinux expansion package

From the 100ask embedded Linux wiki (revision by Zhouyuebiao, 8 May 2020).

This article describes the content of the X-LINUX-AI-CV Expansion Package and explains how to use it.

Description

X-LINUX-AI-CV is the STM32 MPU OpenSTLinux Expansion Package that targets artificial intelligence for computer vision.
This package contains AI and computer vision frameworks, as well as application examples to get started with some basic use cases.

File:STM32 MPU Embedded Software And X-LINUX-AI-CV Expansion Package.png
STM32MPU Embedded Software with the X-LINUX-AI-CV OpenSTLinux expansion package

Current version

X-LINUX-AI-CV v1.1.0

Contents

Software structure

File:X-LINUX-AI-CV software structure.png
X-LINUX-AI-CV v1.1.0 Expansion Package Software structure

Supported hardware

Like any software expansion package, X-LINUX-AI-CV is supported on the whole STM32MP1 Series and is compatible with the following boards:

  • STM32MP157C-DK2[6]
  • STM32MP157C-EV1[7]
  • STM32MP157A-EV1[8]
  • STM32MP157 Avenger96 board[4]

How to use the X-LINUX-AI-CV Expansion Package

Software installation

Please refer to the STM32MP1 artificial intelligence expansion packages article to build and install the X-LINUX-AI-CV software.

Material needed

To use the X-LINUX-AI-CV OpenSTLinux Expansion Package, choose one of the following hardware setups:

  • STM32MP157C-DK2[6] + a UVC USB webcam
  • STM32MP157C-EV1[7] with the built-in camera
  • STM32MP157A-EV1[8] with the built-in camera
  • STM32MP157 Avenger96 board[4] + a UVC USB webcam or the OV5640 CSI camera mezzanine board[5]

AI application examples

Application examples are provided in two flavors:

  • C/C++ application
  • Python application
Info.png Python applications are good for prototyping but are less efficient than C/C++ applications
Warning.png Python scripts can take up to 30 seconds before displaying camera frames

C/C++ TensorFlowLite applications

This part provides information about the C/C++ application examples based on TensorFlow Lite, GStreamer and OpenCV.
The applications integrate a camera preview (or test data pictures) that is fed to the chosen TensorFlow Lite model.
Two C/C++ application examples are available and are described below:

Image classification application

Description

The image classification[9] neural network model allows identification of the subject represented by an image. It classifies an image into various classes.

File:Cpp image classification application screenshot.png
C/C++ image classification application

The label_tfl_gst_gtk application (located in the userfs partition: /usr/local/demo-ai/ai-cv/bin/label_tfl_gst_gtk) is a C/C++ application for image classification.
The application demonstrates a computer vision use case for image classification where frames are grabbed from a camera input (/dev/videox) and analyzed by a neural network model interpreted by the TensorFlow Lite framework.
A GStreamer pipeline is used to stream camera frames (using v4l2src), to display a preview (using waylandsink) and to execute neural network inference (using appsink).
The result of the inference is displayed on the preview. The overlay is drawn with a GTK widget using Cairo.
This combination is quite simple and efficient in terms of CPU consumption.

How to use it

The application label_tfl_gst_gtk accepts the following input parameters:

Usage: ./label_tfl_gst_gtk -m <model .tflite> -l <label .txt file>                                                                                             
                                                                                                                                                             
-i --image <directory path>:          image directory with image to be classified                                                                              
-v --video_device <n>:                video device (default /dev/video0)                                                                                       
--frame_width  <val>:                 width of the camera frame (default is 640)                                                                               
--frame_height <val>:                 height of the camera frame (default is 480)                                                                              
--framerate <val>:                    framerate of the camera (default is 15fps)                                                                               
-m --model_file <.tflite file path>:  .tflite model to be executed                                                                                             
-l --label_file <label file path>:    name of file containing labels                                                                                           
--input_mean <val>:                   model input mean (default is 127.5)                                                                                      
--input_std  <val>:                   model input standard deviation (default is 127.5)                                                                        
--help:                               show this help 
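The --input_mean and --input_std options control how 8-bit pixel values are normalized before being fed to a floating-point model (a quantized model such as the default one consumes raw uint8 values directly). With the defaults of 127.5, pixels are mapped into the range [-1, 1]. A minimal sketch of that preprocessing step, in pure Python (the real application does the equivalent on the camera buffer):

```python
def normalize(pixel, input_mean=127.5, input_std=127.5):
    """Map an 8-bit pixel value in [0, 255] to a float suitable for a
    floating-point model input. With the default mean/std of 127.5 the
    output range is [-1.0, 1.0]."""
    return (pixel - input_mean) / input_std

# Extremes of the 8-bit range with the default parameters:
print(normalize(0), normalize(255))  # → -1.0 1.0
```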
Testing with MobileNet V1
Default model: MobileNet V1 0.5 128 quant

The default model used for tests is mobilenet_v1_0.5_128_quant.tflite, downloaded from the TensorFlow Lite hosted models[10].


To ease launching of the application, two shell scripts are available:

  • launch image classification based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/bin/launch_bin_label_tfl_mobilenet.sh
  • launch image classification based on the picture located in /usr/local/demo-ai/ai-cv/models/mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/bin/launch_bin_label_tfl_mobilenet_testdata.sh
Info.png Note that you need to populate the testdata directory with your own data sets.

Pictures are then read randomly from the testdata directory.
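The random selection from testdata can be sketched as follows (the helper name is hypothetical; the default path matches the directory mentioned above):

```python
import os
import random

def pick_test_picture(testdata_dir="/usr/local/demo-ai/ai-cv/models/mobilenet/testdata"):
    """Return the path of one randomly chosen picture from the testdata directory."""
    pictures = [f for f in os.listdir(testdata_dir)
                if f.lower().endswith((".jpg", ".jpeg", ".png", ".bmp"))]
    if not pictures:
        raise FileNotFoundError("no pictures found in %s" % testdata_dir)
    return os.path.join(testdata_dir, random.choice(pictures))
```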

Testing another MobileNet v1 model

You can test other models by downloading them directly to the STM32MP1 board. For example:

Board $> curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C /usr/local/demo-ai/ai-cv/models/mobilenet/
Board $> /usr/local/demo-ai/ai-cv/bin/label_tfl_gst_gtk -m /usr/local/demo-ai/ai-cv/models/mobilenet/mobilenet_v1_1.0_224_quant.tflite -l /usr/local/demo-ai/ai-cv/models/mobilenet/labels.txt -i /usr/local/demo-ai/ai-cv/models/mobilenet/testdata/
Testing with your own model

The label_tfl_gst_gtk application supports the TensorFlow Lite model format for image classification: any model with a .tflite extension plus a label file can be used with it.
You are free to update the label_tfl_gst_gtk source to perfectly fit your needs.
The label_tfl_gst_gtk application source code is located here:

meta-st-stm32mpu-ai/recipes-samples/demo/tensorflow-lite-cv-apps/bin
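For a classification model, the post-processing essentially picks the top-scoring entries of the output tensor and maps them to lines of the label file. A sketch of that step (the function name is hypothetical; the application does the equivalent in C++):

```python
def top_k_labels(scores, labels, k=3):
    """Return the k (label, score) pairs with the highest scores.
    `scores` is the flattened model output tensor; `labels` is the label
    file read as a list of lines, index-aligned with `scores`."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i]) for i in ranked[:k]]

scores = [0.02, 0.90, 0.05, 0.03]
labels = ["background", "goldfish", "shark", "ray"]
print(top_k_labels(scores, labels, k=2))  # → [('goldfish', 0.9), ('shark', 0.05)]
```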

Object detection application

Description

The object detection[11] neural network model allows identification and localization of a known object within an image.

File:Cpp object detection application screenshot.png
C/C++ Object detection application

The objdetect_tfl_gst_gtk application (located in the userfs partition: /usr/local/demo-ai/ai-cv/bin/objdetect_tfl_gst_gtk) is a C/C++ application for object detection.
This application demonstrates a computer vision use case for object detection where frames are grabbed from a camera input (/dev/videox) and analyzed by a neural network model interpreted by the TensorFlow Lite framework.
A GStreamer pipeline is used to stream camera frames (using v4l2src), to display a preview (using waylandsink) and to execute neural network inference (using appsink).
The result of the inference is displayed on the preview. The overlay is drawn with a GTK widget using Cairo.
This combination is quite simple and efficient in terms of CPU consumption.
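An SSD-style detection model produces, for each candidate detection, a bounding box, a class index and a confidence score; the application keeps only detections above a confidence threshold before drawing them. A minimal sketch of that filtering step (the names and the 0.5 threshold are illustrative, not taken from the application source):

```python
def filter_detections(boxes, classes, scores, threshold=0.5):
    """Keep only detections whose confidence score reaches the threshold.
    boxes:   list of [ymin, xmin, ymax, xmax] in normalized coordinates
    classes: list of class indices, index-aligned with boxes
    scores:  list of confidence scores in [0, 1]"""
    return [(b, c, s) for b, c, s in zip(boxes, classes, scores) if s >= threshold]

boxes = [[0.1, 0.1, 0.5, 0.5], [0.2, 0.6, 0.4, 0.9]]
classes = [17, 1]
scores = [0.87, 0.30]
kept = filter_detections(boxes, classes, scores)
print(kept)  # only the first detection passes the 0.5 threshold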

How to use it

The application objdetect_tfl_gst_gtk accepts the following input parameters:

Usage: ./objdetect_tfl_gst_gtk -m <model .tflite> -l <label .txt file>                                                                                             
                                                                                                                                                             
-i --image <directory path>:          image directory with image to be classified                                                                              
-v --video_device <n>:                video device (default /dev/video0)                                                                                       
--frame_width  <val>:                 width of the camera frame (default is 640)                                                                               
--frame_height <val>:                 height of the camera frame (default is 480)                                                                              
--framerate <val>:                    framerate of the camera (default is 15fps)                                                                               
-m --model_file <.tflite file path>:  .tflite model to be executed                                                                                             
-l --label_file <label file path>:    name of file containing labels                                                                                           
--input_mean <val>:                   model input mean (default is 127.5)                                                                                      
--input_std  <val>:                   model input standard deviation (default is 127.5)                                                                        
--help:                               show this help 
Testing with COCO SSD MobileNet V1

The model used for tests is detect.tflite, downloaded from the object detection overview[11].

To ease launching of the application, two shell scripts are available:

  • launch object detection based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/bin/launch_bin_objdetect_tfl_coco_ssd_mobilenet.sh
  • launch object detection based on the picture located in /usr/local/demo-ai/ai-cv/models/coco_ssd_mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/bin/launch_bin_objdetect_tfl_coco_ssd_mobilenet_testdata.sh
Info.png Note that you need to populate the testdata directory with your own data sets.

Pictures are then read randomly from the testdata directory.

Python TensorFlowLite applications

This part provides information about the Python application examples based on TensorFlow Lite and OpenCV.
The applications integrate a camera preview (or test data pictures) that is fed to the chosen TensorFlow Lite model.
Two Python application examples are available and are described below:

Image classification application

Description

The image classification[9] neural network model allows identification of the subject represented by an image. It classifies an image into various classes.

File:Image classification application screenshot.png
Python image classification application

The label_tfl_multiprocessing.py script (located in the userfs partition: /usr/local/demo-ai/ai-cv/python/label_tfl_multiprocessing.py) is a multi-process Python application for image classification.
The application enables OpenCV camera streaming (or test data pictures) and the TensorFlow Lite interpreter running the NN inference on the camera (or test data picture) inputs.
The user interface is implemented using Python GTK.
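The multi-process structure separates frame capture from inference in distinct processes that exchange data through queues, so the camera preview is not blocked while the NN runs. A stripped-down sketch of that pattern using only the Python standard library (the real script adds OpenCV capture, the TensorFlow Lite interpreter and the GTK UI; names here are illustrative):

```python
import multiprocessing as mp

def inference_worker(frame_queue, result_queue):
    """Consume frames from frame_queue and push back a (fake) result.
    A real worker would run the TensorFlow Lite interpreter here."""
    while True:
        frame = frame_queue.get()
        if frame is None:              # sentinel value: stop the worker
            break
        result_queue.put(("label-for-frame-%d" % frame, 0.9))

if __name__ == "__main__":
    frame_q, result_q = mp.Queue(), mp.Queue()
    worker = mp.Process(target=inference_worker, args=(frame_q, result_q))
    worker.start()
    for frame_id in range(3):          # a real capture loop would grab camera frames
        frame_q.put(frame_id)
    frame_q.put(None)                  # tell the worker to stop
    results = [result_q.get() for _ in range(3)]
    worker.join()
    print(results)
```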

How to use it

The Python script label_tfl_multiprocessing.py accepts the following input parameters:

-i, --image          image directory with images to be classified
-v, --video_device   video device (default /dev/video0)
--frame_width        width of the camera frame (default is 320)
--frame_height       height of the camera frame (default is 240)
--framerate          framerate of the camera (default is 15fps)
-m, --model_file     tflite model to be executed
-l, --label_file     name of file containing labels
--input_mean         input mean
--input_std          input standard deviation
Testing with MobileNet V1
Default model: MobileNet V1 0.5 128 quant

The default model used for tests is mobilenet_v1_0.5_128_quant.tflite, downloaded from the TensorFlow Lite hosted models[10].


To ease launching of the Python script, two shell scripts are available:

  • launch image classification based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_label_tfl_mobilenet.sh
  • launch image classification based on the picture located in /usr/local/demo-ai/ai-cv/models/mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_label_tfl_mobilenet_testdata.sh
Info.png Note that you need to populate the testdata directory with your own data sets.

Pictures are then read randomly from the testdata directory.

Testing another MobileNet v1 model

You can test other models by downloading them directly to the STM32MP1 board. For example:

Board $> curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C /usr/local/demo-ai/ai-cv/models/mobilenet/
Board $> python3 /usr/local/demo-ai/ai-cv/python/label_tfl_multiprocessing.py -m /usr/local/demo-ai/ai-cv/models/mobilenet/mobilenet_v1_1.0_224_quant.tflite -l /usr/local/demo-ai/ai-cv/models/mobilenet/labels.txt -i /usr/local/demo-ai/ai-cv/models/mobilenet/testdata/
Testing with your own model

The label_tfl_multiprocessing.py script supports the TensorFlow Lite model format for image classification: any model with a .tflite extension plus a label file can be used with it.
You are free to update the label_tfl_multiprocessing.py script to perfectly fit your needs.

Object detection application

Description

The object detection[11] neural network model allows identification and localization of a known object within an image.

File:Object detection application screenshot.png
Python object detection application

The objdetect_tfl_multiprocessing.py script (located in the userfs partition: /usr/local/demo-ai/ai-cv/python/objdetect_tfl_multiprocessing.py) is a multi-process Python application for object detection.
The application enables OpenCV camera streaming (or test data pictures) and the TensorFlow Lite interpreter running the NN inference on the camera (or test data picture) inputs.
The user interface is implemented using Python GTK.
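Detection boxes come out of the model in normalized [ymin, xmin, ymax, xmax] coordinates and must be scaled to the preview frame size before the overlay can draw them. A sketch of that conversion (the helper name is hypothetical; the 320x240 defaults match the script's default frame size):

```python
def box_to_pixels(box, frame_width=320, frame_height=240):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to pixel
    coordinates (x, y, width, height) for drawing on the preview."""
    ymin, xmin, ymax, xmax = box
    x = int(xmin * frame_width)
    y = int(ymin * frame_height)
    w = int((xmax - xmin) * frame_width)
    h = int((ymax - ymin) * frame_height)
    return x, y, w, h

print(box_to_pixels([0.25, 0.25, 0.75, 0.75]))  # → (80, 60, 160, 120)
```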

How to use it

The Python script objdetect_tfl_multiprocessing.py accepts the following input parameters:

-i, --image          image directory with images to be classified
-v, --video_device   video device (default /dev/video0)
--frame_width        width of the camera frame (default is 320)
--frame_height       height of the camera frame (default is 240)
--framerate          framerate of the camera (default is 15fps)
-m, --model_file     tflite model to be executed
-l, --label_file     name of file containing labels
--input_mean         input mean
--input_std          input standard deviation
Testing with COCO SSD MobileNet V1

The model used for tests is detect.tflite, downloaded from the object detection overview[11].

To ease launching of the Python script, two shell scripts are available:

  • launch object detection based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_objdetect_tfl_coco_ssd_mobilenet.sh
  • launch object detection based on the picture located in /usr/local/demo-ai/ai-cv/models/coco_ssd_mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_objdetect_tfl_coco_ssd_mobilenet_testdata.sh
Info.png Note that you need to populate the testdata directory with your own data sets.

Pictures are then read randomly from the testdata directory.

Enjoy running your own CNN

The above examples demonstrate how to easily execute TensorFlow Lite CNNs on the STM32MP1.

You are free to update the C/C++ applications or Python scripts for your own purposes, using your own TensorFlow Lite CNN models.

The C/C++ application source code is located here:

meta-st-stm32mpu-ai/recipes-samples/demo/tensorflow-lite-cv-apps/bin

Python scripts are located here:

meta-st-stm32mpu-ai/recipes-samples/demo/tensorflow-lite-cv-apps/python
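If you write your own variant, the command-line interface shared by these scripts can be mirrored with argparse; the flags below follow the options documented above (this is an illustrative skeleton, not the scripts' actual parser):

```python
import argparse

def build_parser():
    """Argument parser mirroring the options documented for the example scripts."""
    p = argparse.ArgumentParser(description="TensorFlow Lite computer vision demo")
    p.add_argument("-i", "--image", help="image directory with images to be classified")
    p.add_argument("-v", "--video_device", default="/dev/video0", help="video device")
    p.add_argument("--frame_width", type=int, default=320, help="camera frame width")
    p.add_argument("--frame_height", type=int, default=240, help="camera frame height")
    p.add_argument("--framerate", type=int, default=15, help="camera framerate")
    p.add_argument("-m", "--model_file", required=True, help=".tflite model to execute")
    p.add_argument("-l", "--label_file", required=True, help="file containing labels")
    p.add_argument("--input_mean", type=float, default=127.5, help="model input mean")
    p.add_argument("--input_std", type=float, default=127.5, help="model input std dev")
    return p

args = build_parser().parse_args(["-m", "model.tflite", "-l", "labels.txt"])
print(args.video_device, args.frame_width)  # → /dev/video0 320
```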