Image Classification

The classification application runs on the Atlas 200 DK or an AI acceleration cloud server. It performs inference using a common classification network and outputs the top n inference results.
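The "top n" selection over a model's output scores can be sketched as follows. This is a minimal illustration only, not code from the application; the labels and scores are hypothetical.

```python
def top_n(scores, labels, n):
    """Return the n highest-scoring (label, score) pairs, best first."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

# Hypothetical class scores for a 4-class model.
labels = ["cat", "dog", "car", "plane"]
scores = [0.10, 0.65, 0.20, 0.05]
print(top_n(scores, labels, 2))  # [('dog', 0.65), ('car', 0.2)]
```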

Prerequisites

Before using this open-source application, ensure that:

  • Mind Studio has been installed.
  • The Atlas 200 DK developer board has been connected to Mind Studio, the cross compiler has been installed, the SD card has been prepared, and basic information has been configured.

Software Preparation

Before running the application, obtain the source code package and configure the environment as follows.

  1. Obtain the source code package.

    Download all the code in the sample-classification repository at https://gitee.com/Atlas200DK/sample-classification to any directory on Ubuntu Server where Mind Studio is located as the Mind Studio installation user, for example, /home/ascend/sample-classification.

  2. Obtain the source network model required by the application.

    Obtain the source network model and its weight file used in the application by referring to Table 1, and save them to any directory on the Ubuntu server where Mind Studio is located (for example, $HOME/ascend/models/classification).

    Table 1 Models used in the general classification network application

    | Model Name | Model Description | Model Download Path |
    | ---------- | ----------------- | ------------------- |
    | alexnet | Image classification inference model. An AlexNet model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/alexnet. |
    | caffenet | Image classification inference model. A CaffeNet model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/caffenet. |
    | densenet | Image classification inference model. A DenseNet121 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/densenet. |
    | googlenet | Image classification inference model. A GoogLeNet model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/googlenet. |
    | inception_v2 | Image classification inference model. An Inception V2 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/inception_v2. |
    | inception_v3 | Image classification inference model. An Inception V3 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/inception_v3. |
    | inception_v4 | Image classification inference model. An Inception V4 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/inception_v4. |
    | mobilenet_v1 | Image classification inference model. A MobileNet V1 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/mobilenet_v1. |
    | mobilenet_v2 | Image classification inference model. A MobileNet V2 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/mobilenet_v2. |
    | resnet18 | Image classification inference model. A ResNet-18 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/resnet18. |
    | resnet50 | Image classification inference model. A ResNet-50 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/resnet50. |
    | resnet101 | Image classification inference model. A ResNet-101 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/resnet101. |
    | resnet152 | Image classification inference model. A ResNet-152 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/resnet152. |
    | vgg16 | Image classification inference model. A VGG16 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/vgg16. |
    | vgg19 | Image classification inference model. A VGG19 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/vgg19. |
    | squeezenet | Image classification inference model. A SqueezeNet model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/squeezenet. |
    | dpn98 | Image classification inference model. A DPN-98 model based on Caffe. | Download the source network model file and its weight file by referring to README.md in https://gitee.com/HuaweiAscend/models/tree/master/computer_vision/classification/dpn98. |

  3. Convert the source network model to a Da Vinci model.

    1. Choose Tool > Convert Model from the main menu of Mind Studio. The Convert Model page is displayed.

    2. On the Convert Model page, set Model File and Weight File to the model file and weight file downloaded in 2, respectively.

      • Set Model Name to the model name in Table 1.

      • For the GoogLeNet and Inception V2 models, the general classification network application processes one image at a time. Therefore, set the value of N in Input Shape to 1 during conversion.

        Figure 1 Input Shape configuration reference

    3. Click OK to start model conversion.

      After successful conversion, a .om Da Vinci model is generated in the $HOME/tools/che/model-zoo/my-model/xxx directory.
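Once conversion succeeds, the generated models can be located programmatically. A minimal sketch, assuming the default Mind Studio model-zoo layout described above (one subdirectory per converted model); adjust the root path for your installation user:

```python
import glob
import os

def find_om_models(root):
    """List .om Da Vinci model files one directory level below root."""
    return sorted(glob.glob(os.path.join(root, "*", "*.om")))

# Default Mind Studio conversion output location (xxx is the model name).
model_zoo = os.path.expanduser("~/tools/che/model-zoo/my-model")
print(find_om_models(model_zoo))
```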

  4. Log in to the Ubuntu server where Mind Studio is located as the Mind Studio installation user and set the environment variables DDK_HOME and LD_LIBRARY_PATH.

    vim ~/.bashrc

    Add the following lines at the end of the file:

    export DDK_HOME=/home/XXX/tools/che/ddk/ddk

    export LD_LIBRARY_PATH=$DDK_HOME/uihost/lib

    NOTE:

    • XXX indicates the Mind Studio installation user, and /home/XXX/tools indicates the default installation path of the DDK.
    • If the environment variables have been added, skip this step.

    Enter :wq! to save and exit.

    Run the following command for the environment variables to take effect:

    source ~/.bashrc
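After sourcing, you can verify that the variables are consistent with each other. The following is an illustrative sketch only; the paths mirror the exports above, with XXX standing in for the installation user:

```python
def check_ddk_env(env):
    """Return True if DDK_HOME is set and its uihost/lib dir is on LD_LIBRARY_PATH."""
    ddk_home = env.get("DDK_HOME")
    if not ddk_home:
        return False
    return ddk_home + "/uihost/lib" in env.get("LD_LIBRARY_PATH", "")

# Hypothetical environment mirroring the exports above.
demo_env = {
    "DDK_HOME": "/home/XXX/tools/che/ddk/ddk",
    "LD_LIBRARY_PATH": "/home/XXX/tools/che/ddk/ddk/uihost/lib",
}
print(check_ddk_env(demo_env))  # True

# To check the real shell environment instead:
# import os; print(check_ddk_env(os.environ))
```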

Deployment

  1. Access the root directory where the classification application code is located as the Mind Studio installation user, for example, /home/ascend/sample-classification.

  2. Run the deployment script to prepare the project environment, including compiling and deploying the ascenddk public library and application.

    bash deploy.sh host_ip model_mode

    • host_ip: For the Atlas 200 DK developer board, this parameter indicates the IP address of the developer board. For the AI acceleration cloud server, it indicates the IP address of the host.
    • model_mode: indicates the deployment mode of the model file. The default value is internet.
      • local: If the Ubuntu system where Mind Studio is located is not connected to the network, use this mode. In this case, download the dependent common code library ezdvpp to the sample-classification/script directory by referring to Downloading Dependent Code Library.
      • internet: If the Ubuntu system where Mind Studio is located is connected to the network, use this mode. The dependent code library ezdvpp is downloaded online.

    Example command:

    bash deploy.sh 192.168.1.2 internet

  3. Upload the generated Da Vinci offline model and images to be inferred to the directory of the HwHiAiUser user on the host. For details, see Table 1.

    For example, upload the model file alexnet.om to the /home/HwHiAiUser/models directory on the host.

    The image requirements are as follows:

    • Format: JPG, PNG, or BMP.
    • Width of the input image: an integer ranging from 16px to 4096px.
    • Height of the input image: an integer ranging from 16px to 4096px.
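These constraints can be checked before uploading. A minimal sketch; the filenames and dimensions below are hypothetical, and the application may perform its own validation as well:

```python
ALLOWED_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
MIN_SIDE, MAX_SIDE = 16, 4096

def is_valid_input(filename, width, height):
    """Check a candidate image against the format and size limits above."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    return (ext in ALLOWED_EXTS
            and MIN_SIDE <= width <= MAX_SIDE
            and MIN_SIDE <= height <= MAX_SIDE)

print(is_valid_input("example.jpg", 227, 227))   # True
print(is_valid_input("example.tiff", 227, 227))  # False: unsupported format
print(is_valid_input("huge.png", 8000, 227))     # False: width out of range
```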

Running

  1. On the Ubuntu server where Mind Studio is located, log in to the host as the HwHiAiUser user over SSH.

    ssh HwHiAiUser@host_ip

    For the Atlas 200 DK, the default value of host_ip is 192.168.1.2 (USB connection mode) or 192.168.0.2 (NIC connection mode).

    For the AI acceleration cloud server, host_ip indicates the IP address of the server where Mind Studio is located.

  2. Go to the directory containing the executable files of the classification application.

    cd ~/HIAI_PROJECTS/ascend_workspace/classification/out

  3. Run the application.

    Run the run_classification.py script to print the inference result on the execution terminal.

    Example command:

    python3 run_classification.py -m ~/models/alexnet.om -w 227 -h 227 -i ./example.jpg -n 10

    • -m/--model_path: path of the offline model file.
    • -w/--model_width: width of the model input image. The value is an integer ranging from 16px to 4096px. Obtain the input width and height required by each model from the README.md of the corresponding model repository on Gitee. For details, see Table 1.
    • -h/--model_height: height of the model input image. The value is an integer ranging from 16px to 4096px. Obtain the input width and height required by each model from the README.md of the corresponding model repository on Gitee. For details, see Table 1.
    • -i/--input_path: path of the input image. It can be a directory, in which case all images in that directory are used as input. Multiple inputs can be specified.
    • -n/--top_n: number of top inference results to output.

    For other parameters, run the python3 run_classification.py --help command. For details, see the help information.
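A parser for these options might look like the following sketch. This is an illustration, not the actual run_classification.py; note that reusing -h for the model height requires disabling argparse's built-in help flag:

```python
import argparse

# add_help=False frees up -h, which this CLI uses for the model height.
parser = argparse.ArgumentParser(prog="run_classification.py", add_help=False)
parser.add_argument("-m", "--model_path", required=True, help="offline model (.om) path")
parser.add_argument("-w", "--model_width", type=int, help="model input width (16-4096)")
parser.add_argument("-h", "--model_height", type=int, help="model input height (16-4096)")
parser.add_argument("-i", "--input_path", nargs="+", help="input image(s) or directory")
parser.add_argument("-n", "--top_n", type=int, default=5, help="number of results to print")
parser.add_argument("--help", action="help", help="show this help message")

# Parse the example command from above.
args = parser.parse_args(
    "-m models/alexnet.om -w 227 -h 227 -i example.jpg -n 10".split()
)
print(args.model_path, args.model_width, args.model_height, args.top_n)
# models/alexnet.om 227 227 10
```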

Downloading Dependent Code Library

Download the dependent software libraries to the sample-classification/script directory.

Table 2 Dependent software library

| Module Name | Module Description | Download Address |
| ----------- | ------------------ | ---------------- |
| EZDVPP | Encapsulates the DVPP interface and provides image and video processing capabilities, such as color gamut conversion and image/video conversion. | https://gitee.com/Atlas200DK/sdk-ezdvpp |

After the download, keep the folder name ezdvpp.
