Project author: ucbdrive

Project description:
Official implementation of Joint Monocular 3D Vehicle Detection and Tracking (ICCV 2019)
Language: Python
Clone URL: git://github.com/ucbdrive/3d-vehicle-tracking.git
Created: 2019-05-17T06:14:54Z
Project community: https://github.com/ucbdrive/3d-vehicle-tracking

License: BSD 3-Clause "New" or "Revised" License


Joint Monocular 3D Vehicle Detection and Tracking

We present a novel framework that jointly detects and tracks 3D vehicle bounding boxes.
Our approach leverages 3D pose estimation to learn 2D patch association over time and uses temporal information from tracking to
obtain stable 3D estimation.

Joint Monocular 3D Vehicle Detection and Tracking


Hou-Ning Hu,
Qi-Zhi Cai,
Dequan Wang,
Ji Lin,
Min Sun,
Philipp Krähenbühl,
Trevor Darrell,
Fisher Yu.


In ICCV, 2019.

Paper
Website

Prerequisites

  • ! NOTE: this repo targets PyTorch 1.0+ compatibility; the generated results may differ from the original.
  • Linux (tested on Ubuntu 16.04.4 LTS)
  • Python 3.6
    • 3.6.4 tested
    • 3.6.9 tested
  • PyTorch 1.3.1
    • 1.0.0 (with CUDA 9.0, torchvision 0.2.1)
    • 1.1.0 (with CUDA 9.0, torchvision 0.3.0)
    • 1.3.1 (with CUDA 10.1, torchvision 0.4.2)
  • nvcc 10.1
    • 9.0.176, 10.1 compiling and execution tested
    • 9.2.88 execution only
  • gcc 5.4.0
  • Pyenv or Anaconda

and the Python dependencies listed in 3d-tracking/requirements.txt
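Before installing, the prerequisites above can be sanity-checked from Python. A minimal sketch, assuming only that the listed packages should be importable (`check_prereqs` is a hypothetical helper, not part of the repo, and it does not verify exact versions):

```python
import sys
from importlib import util

def check_prereqs(min_py=(3, 6), packages=("torch", "torchvision")):
    """Report whether the interpreter and key packages satisfy the
    prerequisites listed above (a sketch, not part of this repo)."""
    report = {"python_ok": sys.version_info[:2] >= min_py}
    for pkg in packages:
        # find_spec only checks importability; it does not check the version
        report[pkg + "_installed"] = util.find_spec(pkg) is not None
    return report

print(check_prereqs())
```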

Quick Start

In this section, you will train a model from scratch, test our pretrained models, and reproduce our evaluation results.
For more detailed instructions, please refer to DOCUMENTATION.md.

Installation

  • Clone this repo:

    ```bash
    git clone -b pytorch1.0 --single-branch https://github.com/ucbdrive/3d-vehicle-tracking.git
    cd 3d-vehicle-tracking/
    ```
  • Install PyTorch 1.0.0+ and torchvision from http://pytorch.org and other dependencies. You can create a virtual environment as follows:

    ```bash
    # Add pyenv to bashrc
    echo -e '\nexport PYENV_ROOT="$HOME/.pyenv"\nexport PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bashrc

    # Install pyenv
    curl -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash

    # Restart the shell; open a new terminal if `exec $SHELL` doesn't work
    exec $SHELL

    # Install and activate Python in pyenv
    pyenv install 3.6.9
    pyenv local 3.6.9
    ```

  • Install requirements, create folders and compile binaries for detection:

    ```bash
    cd 3d-tracking
    bash scripts/init.sh
    cd ../faster-rcnn.pytorch
    bash init.sh
    ```

NOTE: For faster-rcnn.pytorch compilation problems
[1], please compile the COCO API and replace pycocotools.

NOTE: For object-ap-eval compilation problems: it only supports Python 3.6+ and needs numpy, scikit-image (skimage), numba, and fire. If you have Anaconda, just install cudatoolkit in Anaconda; otherwise, please refer to this page to set up LLVM and CUDA for numba.

Data Preparation

For a quick start, we suggest using the GTA val set as a starting point. You can get all the needed data via the following script.

```bash
# We recommend using the GTA `val` set (the `mini` flag) to get familiar with
# the data pipeline first, then using the `all` flag to obtain all the data
python loader/download.py mini
```

More details can be found in 3d-tracking.
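After the download finishes, a quick existence check can catch a silently failed download before the pipeline runs. A minimal sketch; `data` as the root directory name is an assumption, since the actual layout is defined in 3d-tracking:

```python
from pathlib import Path

def data_ready(root="data"):
    """Return True if the (assumed) data root exists and is non-empty."""
    p = Path(root)
    return p.is_dir() and any(p.iterdir())

print(data_ready())
```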

Execution

To run the whole pipeline (2D proposals, 3D estimation and tracking):

```bash
# Generate predicted bounding boxes for object proposals
cd faster-rcnn.pytorch/

# Step 00 (Optional) - Training on GTA dataset
./run_train.sh

# Step 01 - Generate bounding boxes
./run_test.sh
```
```bash
# Given object proposal bounding boxes and 3D centers from the faster-rcnn.pytorch directory
cd 3d-tracking/

# Step 00 - Data Preprocessing
# Collect features into json files (check variables in the code)
python loader/gen_pred.py gta val

# Step 01 - 3D Estimation
# Run the single-task scripts mentioned below and train by yourself,
# or alternatively, use multiple GPUs and processes to run through all 100 sequences
python run_estimation.py gta val --session 616 --epoch 030

# Step 02 - 3D Tracking and Evaluation
# 3D helps the tracking part. For tracking evaluation,
# use multiple GPUs and processes to run through all 100 sequences
python run_tracking.py gta val --session 616 --epoch 030

# Step 03 - 3D AP Evaluation
# Convert tracking output to evaluation format
python tools/convert_estimation_bdd.py gta val --session 616 --epoch 030
python tools/convert_tracking_bdd.py gta val --session 616 --epoch 030

# Evaluation of 3D estimation
python tools/eval_dep_bdd.py gta val --session 616 --epoch 030

# 3D helps the tracking part
python tools/eval_mot_bdd.py --gt_path output/616_030_gta_val_set --pd_path output/616_030_gta_val_set/kf3doccdeep_age20_aff0.1_hit0_100m_803

# Tracking helps the 3D part
cd tools/object-ap-eval/
python test_det_ap.py gta val --session 616 --epoch 030
```
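The 3d-tracking steps above can also be chained from a small driver script. A sketch that only builds the commands in order (the session and epoch values are the ones used above; actually executing each command, e.g. via `subprocess.run`, is left to the reader):

```python
import shlex

def pipeline_cmds(dataset="gta", split="val", session="616", epoch="030"):
    """Build the 3d-tracking commands listed above, in order (a sketch)."""
    args = f"{dataset} {split} --session {session} --epoch {epoch}"
    steps = [
        f"python loader/gen_pred.py {dataset} {split}",    # Step 00: preprocessing
        f"python run_estimation.py {args}",                # Step 01: 3D estimation
        f"python run_tracking.py {args}",                  # Step 02: tracking
        f"python tools/convert_estimation_bdd.py {args}",  # Step 03: convert outputs
        f"python tools/convert_tracking_bdd.py {args}",
        f"python tools/eval_dep_bdd.py {args}",            # 3D estimation evaluation
    ]
    return [shlex.split(s) for s in steps]

for cmd in pipeline_cmds():
    print(" ".join(cmd))
```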

Note: If you face a ModuleNotFoundError: No module named 'utils' problem, please prepend PYTHONPATH=. to python {script} {arguments}.
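Equivalently, the repo root can be placed on the module search path from inside a script. A sketch of what the `PYTHONPATH=.` prefix does, assuming the script is launched from the repo root:

```python
import os
import sys

# Prepend the current directory (the repo root containing `utils`) to the
# module search path, the in-process equivalent of running with PYTHONPATH=.
repo_root = os.path.abspath(".")
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

print(repo_root in sys.path)
```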

Citation

If you find our code/models useful in your research, please cite our paper:

```
@inproceedings{Hu3DT19,
  author = {Hu, Hou-Ning and Cai, Qi-Zhi and Wang, Dequan and Lin, Ji and Sun, Min and Krähenbühl, Philipp and Darrell, Trevor and Yu, Fisher},
  title = {Joint Monocular 3D Vehicle Detection and Tracking},
  booktitle = {ICCV},
  year = {2019}
}
```

License

This work is licensed under BSD 3-Clause License. See LICENSE for details.
Third-party datasets and tools are subject to their respective licenses.

Acknowledgements

We thank faster-rcnn.pytorch for the detection codebase, pymot for their MOT evaluation tool, and kitti-object-eval-python for the 3D AP calculation tool.