Training & Quantization Tools For Embedded AI Development - in PyTorch.
This code provides a set of low-complexity deep learning examples and models for low-power embedded systems. Low-power embedded systems often require balancing complexity against accuracy, a difficult task that demands a significant amount of expertise and experimentation; we call this process complexity optimization. In addition, we would like to bridge the gap between deep learning training frameworks and real-time embedded inference by providing ready-to-use examples. Scripts for training, validation, and complexity analysis are also provided.
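Complexity is commonly gauged by parameter and multiply-accumulate counts. As a minimal illustration of counting parameters in PyTorch (a hypothetical toy model, not this repository's complexity-analysis scripts):

```python
import torch.nn as nn

# A small illustrative convolutional model (hypothetical, not from this repo)
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),   # 3*16*3*3 + 16 = 448 params
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3),  # 16*32*3*3 + 32 = 4640 params
)

# Total trainable parameter count, one common complexity metric
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)  # 5088
```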
This code also includes tools for Quantization Aware Training (QAT) that can output an 8-bit quantization-friendly model. These tools can be used to improve quantized accuracy and bring it close to floating-point accuracy. For more details, please refer to the section on Quantization.
Several of these models have been verified to work on TI's Jacinto7 Automotive Processors. These tools and software are primarily intended as examples for learning and research.
These instructions are for installation on Ubuntu 18.04.
Install Anaconda with Python 3.7 or higher from https://www.anaconda.com/distribution/
After installation, make sure that your Python is indeed Anaconda Python 3.7 or higher by typing:
python --version
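The same check can also be done from within Python itself:

```python
import sys

# Fail early if the interpreter is older than the Python 3.7 minimum
assert sys.version_info >= (3, 7), "Python 3.7 or higher is required"
print(sys.version.split()[0])
```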
Clone this repository into your local folder
Execute the following shell script to install the dependencies:
./setup.sh
Below are some of the examples that are currently available. Click on each of the links below for the full description of the example.
Object Detection - this link will take you to another repository, where we have our object detection training scripts.
Object Keypoint Estimation - coming soon.
Quantization (especially 8-bit Quantization) is important to get best throughput for inference. Quantization can be done using either Post Training Quantization (PTQ) or Quantization Aware Training (QAT).
TI Deep Learning Library (TIDL), part of the Processor SDK RTOS for Jacinto7, natively supports PTQ: TIDL can take floating-point models and quantize them using advanced calibration methods.
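TIDL performs this calibration inside its own import tools. Purely as a generic illustration of the PTQ idea (and not of TIDL's flow), PyTorch's dynamic post-training quantization converts the weights of a trained floating-point model to 8-bit without any retraining:

```python
import torch
import torch.nn as nn

# A trained floating-point model would go here; a fresh toy model is used for illustration
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = qmodel(torch.randn(1, 32))
print(out.shape)  # torch.Size([1, 10])
```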
We provide guidelines on how to choose models and how to train them to get the best accuracy with quantization. If these guidelines are followed, a significant accuracy drop with PTQ is unlikely. For models that still show a significant accuracy drop with quantization, QAT can be used to improve the accuracy. Please read more details in the documentation on Quantization.
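The QAT tools in this repository have their own API (see the Quantization documentation). For readers unfamiliar with the general technique, a minimal sketch using plain PyTorch eager-mode QAT (a hypothetical toy model, not this repo's tooling) looks like:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Small illustrative model; QuantStub/DeQuantStub mark the quantized region."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().train()
# Insert fake-quantization observers so training sees 8-bit rounding effects
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

# ... the normal training loop runs here; forward passes are fake-quantized ...
out = model(torch.randn(1, 3, 32, 32))

# After training, convert to a true int8 model for deployment
qmodel = torch.quantization.convert(model.eval())
```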
Some of the common training and validation commands are provided in shell scripts (.sh files) in the root folder.
Landing Page: https://github.com/TexasInstruments/jacinto-ai-devkit
Actual Git Repositories: https://git.ti.com/jacinto-ai
Each of the repositories listed at the above link has an "about" tab with documentation and a "summary" tab with git clone/pull URLs.
Our source code uses parts of the following open source projects. We would like to sincerely thank their authors for making their code bases publicly available.
| Module/Functionality | Parts of the code borrowed/modified from |
|---|---|
| Datasets, Models | https://github.com/pytorch/vision, https://github.com/ansleliu/LightNet |
| Training, Validation Engine/Loops | https://github.com/pytorch/examples, https://github.com/ClementPinard/FlowNetPytorch |
| Object Detection | https://github.com/open-mmlab/mmdetection |
Please see the LICENSE file for more information about the license under which this code is made available.