Project author: TNTWEN

Project description:
This is an implementation of YOLOv4, YOLOv4-relu, YOLOv4-tiny, YOLOv4-tiny-3l, Scaled-YOLOv4 and INT8 quantization in OpenVINO 2021.3
Programming language: Python
Project address: git://github.com/TNTWEN/OpenVINO-YOLOV4.git
Created: 2020-07-17T15:36:32Z
Project community: https://github.com/TNTWEN/OpenVINO-YOLOV4

License: MIT License



OpenVINO-YOLOV4

Introduction

This is a full implementation of the YOLOv4 series in OpenVINO 2021.3.

Based on https://github.com/mystic123/tensorflow-yolo-v3

Supported model

Supported device

  • Intel CPU
  • Intel GPU
  • HDDL VPU
  • NCS2
  • … …

Supported model precision

Supported inference demo

  • Python demo: all models
  • C++ demo: YOLOv4, YOLOv4-relu, YOLOv4-tiny, YOLOv4-tiny-3l

Development log

FAQ

Environment

How to use

★ This repository provides Python inference demos for different OpenVINO versions (see the pythondemo folder).

★ Choose the right demo before you run object_detection_demo_yolov3_async.py.

★ You could also use the C++ inference demo provided by OpenVINO.

(OpenVINO 2021.3 default C++ demo path: C:\Program Files (x86)\Intel\openvino_2021.3.394\deployment_tools\open_model_zoo\demos\multi_channel_object_detection_demo_yolov3\cpp)

YOLOV4

Download yolov4.weights.

```bash
# Windows default OpenVINO path
python convert_weights_pb.py --class_names cfg/coco.names --weights_file yolov4.weights --data_format NHWC
"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
python "C:\Program Files (x86)\Intel\openvino_2021.3.394\deployment_tools\model_optimizer\mo.py" --input_model frozen_darknet_yolov4_model.pb --transformations_config yolov4.json --batch 1 --reverse_input_channels
python object_detection_demo_yolov3_async.py -i cam -m frozen_darknet_yolov4_model.xml -d CPU
```
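The last command runs the provided demo, which decodes the YOLO outputs and draws the boxes. If you only want to sanity-check the converted IR outside the demo, a minimal sketch with the OpenVINO 2021 Python API might look like the following (the image path is an assumption; YOLO box decoding and NMS are what object_detection_demo_yolov3_async.py implements and are omitted here):

```python
# A minimal sketch (not the repo's demo): load the converted IR with the
# OpenVINO 2021 Python API and run one inference.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov4_model.xml",
                      weights="frozen_darknet_yolov4_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape   # e.g. 1x3x416x416

frame = cv2.imread("test.jpg")                              # any BGR test image (assumed path)
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)         # HWC -> CHW
blob = blob.reshape(1, c, h, w).astype(np.float32)

outputs = exec_net.infer({input_blob: blob})
for name, data in outputs.items():
    print(name, data.shape)   # YOLOv4 exposes three YOLO output layers
```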

[Figure: YOLOv4 detection result in OpenVINO]

Compared with darknet:
[Figure: YOLOv4 detection result in darknet]

YOLOV4-relu

Prepare yolov4.weights.

```bash
# Windows default OpenVINO path
cd yolov4-relu
python convert_weights_pb.py --class_names cfg/coco.names --weights_file yolov4.weights --data_format NHWC
"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
python "C:\Program Files (x86)\Intel\openvino_2021.3.394\deployment_tools\model_optimizer\mo.py" --input_model frozen_darknet_yolov4_model.pb --transformations_config yolov4.json --batch 1 --reverse_input_channels
python object_detection_demo_yolov3_async.py -i cam -m frozen_darknet_yolov4_model.xml -d CPU
```

YOLOV4-tiny

Download yolov4-tiny.weights.

```bash
# Windows default OpenVINO path
python convert_weights_pb.py --class_names cfg/coco.names --weights_file yolov4-tiny.weights --data_format NHWC --tiny
"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
python "C:\Program Files (x86)\Intel\openvino_2021.3.394\deployment_tools\model_optimizer\mo.py" --input_model frozen_darknet_yolov4_model.pb --transformations_config yolo_v4_tiny.json --batch 1 --reverse_input_channels
python object_detection_demo_yolov3_async.py -i cam -m frozen_darknet_yolov4_model.xml -d CPU
```
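As an optional sanity check (a sketch, not part of the repo), you can load the converted tiny IR and confirm it exposes two YOLO output layers, versus three for full YOLOv4:

```python
# Optional sanity check: inspect the converted YOLOv4-tiny IR.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov4_model.xml",
                      weights="frozen_darknet_yolov4_model.bin")
print("inputs :", {name: info.input_data.shape for name, info in net.input_info.items()})
print("outputs:", list(net.outputs.keys()))   # YOLOv4-tiny: 2 output layers
```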

[Figure: YOLOv4-tiny detection result in OpenVINO]

Compared with darknet:
[Figure: YOLOv4-tiny detection result in darknet]

INT8 Quantization

Thanks to Jacky for the excellent work!

Ref: https://docs.openvinotoolkit.org/latest/pot_README.html

Environment:

  • OpenVINO2021.3
  • Ubuntu 18.04/20.04 ★
  • Intel CPU/GPU

Step 1: Dataset Conversion

We need to convert the YOLO dataset to an OpenVINO-supported format first.

```
├── annotations
│   └── output.json   # output of convert.py, COCO-JSON format
├── images
│   └── *.jpg         # put all the images here
├── labels
│   └── *.txt         # put all the YOLO-format .txt labels here
└── classes.txt
```

We use coco128 as an example:

```bash
cd INT8
python3 convert.py --root_dir coco128 --save_path output.json
```
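The conversion essentially turns each YOLO label line (class x_center y_center width height, all normalized) into a COCO annotation with an absolute-pixel [x, y, w, h] box. Below is a hedged sketch of that mapping over the directory layout shown above; the repo's INT8/convert.py is the authoritative script and its category-id numbering may differ:

```python
# Hedged sketch of a YOLO -> COCO-JSON conversion (not the repo's convert.py).
import json
import os
from PIL import Image

root = "coco128"
classes = [c.strip() for c in open(os.path.join(root, "classes.txt")) if c.strip()]
images, annotations, ann_id = [], [], 0

for img_id, fname in enumerate(sorted(os.listdir(os.path.join(root, "images")))):
    w, h = Image.open(os.path.join(root, "images", fname)).size
    images.append({"id": img_id, "file_name": fname, "width": w, "height": h})
    label = os.path.join(root, "labels", os.path.splitext(fname)[0] + ".txt")
    if not os.path.exists(label):
        continue
    for line in open(label):
        cls, xc, yc, bw, bh = line.split()
        bw, bh = float(bw) * w, float(bh) * h                    # normalized -> pixels
        x, y = float(xc) * w - bw / 2, float(yc) * h - bh / 2    # center -> top-left
        annotations.append({"id": ann_id, "image_id": img_id,
                            "category_id": int(cls), "bbox": [x, y, bw, bh],
                            "area": bw * bh, "iscrowd": 0})
        ann_id += 1

coco = {"images": images, "annotations": annotations,
        "categories": [{"id": i, "name": name} for i, name in enumerate(classes)]}
with open("output.json", "w") as f:
    json.dump(coco, f)
```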

Step 2: Install Accuracy-checker and POT

```bash
sudo apt-get install python3 python3-dev python3-setuptools python3-pip
cd /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/tools/accuracy_checker
sudo python3 setup.py install
cd /opt/intel/openvino_2021.3.394/deployment_tools/tools/post_training_optimization_toolkit
sudo python3 setup.py install
```

Step 3: INT8 Quantization using POT

Prepare your YOLO IR model (FP32/FP16) first.

```bash
source '/opt/intel/openvino_2021.3.394/bin/setupvars.sh'
pot -c yolov4_416x416_qtz.json --output-dir backup -e
```

Parameters you need to set in yolov4_416x416_qtz.json (see the sketch after this list for how these fields fit together):

  • Lines 4-5: path of the FP32/FP16 YOLO IR model

    "model": "models/yolov4/FP16/frozen_darknet_yolov4_model.xml",
    "weights": "models/yolov4/FP16/frozen_darknet_yolov4_model.bin"

  • Lines 29-30: image width and height

    "dst_width": 416,
    "dst_height": 416

  • Line 38: annotation file (COCO JSON file)

    "annotation_file": "./coco128/annotations/output.json"

  • Line 40: path of the images

    "data_source": "./coco128/images",

  • There are many other quantization strategies to choose from, and the relevant parameters are annotated in yolov4_416x416_qtz.json. Select the strategy you want, replace the default one, and try it yourself!
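For orientation only, here is a sketch of how a POT configuration of this kind is commonly structured, following the documented POT config schema. The line numbers above refer to the repo's actual yolov4_416x416_qtz.json, which embeds the engine/dataset settings (dst_width, annotation_file, data_source) directly, so its exact layout will differ:

```json
{
    "model": {
        "model_name": "yolov4",
        "model": "models/yolov4/FP16/frozen_darknet_yolov4_model.xml",
        "weights": "models/yolov4/FP16/frozen_darknet_yolov4_model.bin"
    },
    "engine": {
        "config": "./yolov4_416x416_coco.yml"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```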

Step 4: Test the IR model's mAP using Accuracy-checker

```bash
#source '/opt/intel/openvino_2021.3.394/bin/setupvars.sh'
accuracy_check -c yolov4_416x416_coco.yml -td CPU   # -td GPU will be faster
```

Parameters you need to set in yolov4_416x416_coco.yml (see the sketch after this list for how these fields fit together):

  • Lines 5-6: path of the IR model

    model: models/yolov4/FP16/frozen_darknet_yolov4_model.xml
    weights: models/yolov4/FP16/frozen_darknet_yolov4_model.bin

  • Line 12: number of classes

    classes: 80

  • Line 25: image size

    size: 416

  • Line 38: annotation file (COCO JSON file)

    annotation_file: ./coco128/annotations/output.json

  • Line 39: path of the images

    data_source: ./coco128/images
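For orientation only, a heavily abbreviated sketch of how an Accuracy-checker configuration of this kind is usually laid out, assuming the standard Accuracy Checker schema. The repo's yolov4_416x416_coco.yml is the authoritative file and contains additional required fields (YOLO anchors, output layer names, post-processing) that are omitted here:

```yaml
models:
  - name: yolov4
    launchers:
      - framework: dlsdk
        device: CPU
        model: models/yolov4/FP16/frozen_darknet_yolov4_model.xml
        weights: models/yolov4/FP16/frozen_darknet_yolov4_model.bin
        adapter:
          type: yolo_v3        # YOLOv4 reuses the yolo_v3 adapter
          classes: 80
          # anchors, anchor masks and output layer names omitted in this sketch
    datasets:
      - name: coco128
        data_source: ./coco128/images
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: ./coco128/annotations/output.json
        preprocessing:
          - type: resize
            size: 416
        metrics:
          - type: map
```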