Project author: ChenyanWu

Project description:
Code for "MEBOW: Monocular Estimation of Body Orientation In the Wild", CVPR 2020
Primary language: Python
Project address: git://github.com/ChenyanWu/MEBOW.git
Created: 2020-03-29T08:13:40Z
Project community: https://github.com/ChenyanWu/MEBOW


MEBOW

Human Body Orientation Estimation

Introduction

This is the official PyTorch implementation of MEBOW: Monocular Estimation of Body Orientation In the Wild.
In this work, we present COCO-MEBOW (Monocular Estimation of Body Orientation in the Wild), a new large-scale dataset for orientation estimation from a single in-the-wild image. Based on COCO-MEBOW, we established a simple baseline model for human body orientation estimation; this repo provides the code.
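Concretely, the baseline treats orientation estimation as classification over discretized angle bins and decodes the predicted distribution back to an angle. The snippet below is a minimal sketch assuming the 72-bin (5° per bin) discretization described in the paper; the exact bin count and decoding rule used in this repo may differ.

  import numpy as np

  def bins_to_degrees(prob, bin_size=5.0):
      # prob: 1-D array of per-bin probabilities (e.g. 72 bins covering 360 degrees).
      # Returns the orientation of the most likely bin, in [0, 360).
      return (float(np.argmax(prob)) * bin_size) % 360.0

  # Example: a distribution peaking at bin 18 decodes to 90 degrees.
  p = np.zeros(72)
  p[18] = 1.0
  print(bins_to_degrees(p))  # 90.0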

Installation

  1. Install PyTorch >= v1.0.0 following the official instructions.
  2. Clone this repo; we will refer to the cloned directory as ${HBOE_ROOT}.
  3. Install dependencies:

    pip install -r requirements.txt
  4. Install COCOAPI:

    # COCOAPI=/path/to/clone/cocoapi
    git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
    cd $COCOAPI/PythonAPI
    # Install into global site-packages
    make install
    # Alternatively, if you do not have permissions or prefer
    # not to install the COCO API into global site-packages
    python3 setup.py install --user

    Note that instructions like # COCOAPI=/path/to/clone/cocoapi indicate that you should pick a path where you would like the software cloned and then set an environment variable (COCOAPI in this case) accordingly.
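    After installation, a quick way to confirm the COCO API is importable (a sanity check, not one of the official steps):

    # Should run without an ImportError if pycocotools installed correctly.
    from pycocotools.coco import COCO
    print(COCO)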
  5. Init the output (training model output) and log (TensorBoard log) directories:

    mkdir output
    mkdir log

    Your directory tree should look like this:

    ${HBOE_ROOT}
    ├── data
    ├── experiments
    ├── lib
    ├── log
    ├── models
    ├── output
    ├── tools
    ├── README.md
    └── requirements.txt
  6. Download pretrained models from the model zoo provided by HRNet (GoogleDrive or OneDrive):

    ${HBOE_ROOT}
    `-- models
        `-- pose_hrnet_w32_256x192.pth
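
    To sanity-check the download before training, you can load the checkpoint on the CPU (a quick check, assuming the file is a plain state_dict, as HRNet checkpoints typically are):

    import torch

    # Loading on CPU confirms the file is intact without needing a GPU.
    state = torch.load('models/pose_hrnet_w32_256x192.pth', map_location='cpu')
    print(len(state), 'entries in the checkpoint')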

Quick Demo

  • Download our trained HBOE model (OneDrive: /g/personal/czw390_psu_edu/EoXLPTeNqHlCg7DgVvmRrDgB_DpkEupEUrrGATpUdvF6oQ?e=CQQ2KY) and place it under the models folder.
    ${HBOE_ROOT}
    `-- models
        `-- model_hboe.pth
  • Run python tools/demo.py --cfg experiments/coco/segm-4_lr1e-3.yaml images/demo.jpg. You may replace images/demo.jpg with the path to your own image. For better performance, the image should be cropped around the person, with a height-to-width ratio of about 4:3; a padding sketch follows below.
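
If your image is far from 4:3, one option is to pad it before running the demo. The helper below is an illustrative sketch using Pillow; the file names are placeholders, not files shipped with the repo.

  from PIL import Image

  def pad_to_4_3(src_path, dst_path):
      # Pad with black borders so that height:width becomes 4:3,
      # keeping the original image centered.
      img = Image.open(src_path).convert('RGB')
      w, h = img.size
      target_w = max(w, round(h * 3 / 4))
      target_h = max(h, round(target_w * 4 / 3))
      canvas = Image.new('RGB', (target_w, target_h))
      canvas.paste(img, ((target_w - w) // 2, (target_h - h) // 2))
      canvas.save(dst_path)

  # 'person.jpg' is a placeholder for your own cropped image.
  pad_to_4_3('person.jpg', 'images/my_demo.jpg')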

Data preparation

For the MEBOW dataset, please download the images, bounding boxes, and keypoints from the COCO download page. Please email czw390@psu.edu for access to the human body orientation annotations. Note: academic researchers should use their educational email address and will be granted access to the annotations directly. Researchers at companies should send a formal letter (with the company name and your signature) promising that the annotations will not be used for commercial purposes. Sorry for the inconvenience.

Put the images and all the annotations under ${HBOE_ROOT}/data, laid out like this:

  ${HBOE_ROOT}
  `-- data
      `-- coco
          |-- annotations
          |   |-- train_hoe.json
          |   |-- val_hoe.json
          |   |-- person_keypoints_train2017.json
          |   `-- person_keypoints_val2017.json
          `-- images
              |-- train2017
              |   |-- 000000000009.jpg
              |   |-- 000000000025.jpg
              |   |-- 000000000030.jpg
              |   |-- ...
              `-- val2017
                  |-- 000000000139.jpg
                  |-- 000000000285.jpg
                  |-- 000000000632.jpg
                  |-- ...
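
Once the files are in place, a quick load confirms the orientation annotation files are readable. The exact schema is not documented here, so this sketch only prints the top-level structure:

  import json

  # Load the validation-split orientation annotations and show their shape.
  with open('data/coco/annotations/val_hoe.json') as f:
      hoe = json.load(f)
  if isinstance(hoe, dict):
      print('top-level keys:', list(hoe)[:5])
  else:
      print('number of entries:', len(hoe))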

For the TUD dataset, please download the images from the TUD web page, which also provides the 8-bin orientation annotations. Continuous orientation annotations for the TUD dataset can be found here. We provide our processed TUD annotations here (OneDrive: /g/personal/czw390_psu_edu/EqU8hWh-NgFOoNmIBEgE5RYBn61ZsFudKHCgbEH9-_V9DA?e=PZzshY).
Put the TUD images and our processed annotations under ${HBOE_ROOT}/data, laid out like this:

  ${HBOE_ROOT}
  `-- data
      `-- tud
          |-- annot
          |   |-- train_tud.pkl
          |   |-- val_tud.pkl
          |   `-- test_tud.pkl
          `-- images
              |-- train
              |-- validate
              `-- test
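
As with the COCO-side annotations, a quick load confirms the pickle files are readable (the internal schema is not documented here, so only the type and size are printed):

  import pickle

  # Open in binary mode; the .pkl files are Python pickles.
  with open('data/tud/annot/val_tud.pkl', 'rb') as f:
      ann = pickle.load(f)
  print(type(ann), len(ann))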

Trained HBOE model

We also provide the trained HBOE model, with MEBOW as the training set (OneDrive: /g/personal/czw390_psu_edu/EoXLPTeNqHlCg7DgVvmRrDgB_DpkEupEUrrGATpUdvF6oQ?e=CQQ2KY).

Training and Testing

Training on MEBOW dataset

  python tools/train.py --cfg experiments/coco/segm-4_lr1e-3.yaml

Training on TUD dataset

  python tools/train.py --cfg experiments/tud/lr1e-3.yaml
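
With the directory setup from the installation step, checkpoints are written under output/ and TensorBoard logs under log/, so training progress for either dataset can be monitored with:

  tensorboard --logdir log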

Testing on MEBOW dataset

Set TEST:MODEL_FILE in experiments/coco/segm-4_lr1e-3.yaml to the path of your own trained model. If you want to test with our trained HBOE model, point TEST:MODEL_FILE at the downloaded model instead; the relevant part of the config would look roughly like the sketch below.
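
Assuming the downloaded model sits under models/ as in the Quick Demo section, and that the option name TEST:MODEL_FILE maps onto a nested YAML key (inferred from the option name, so double-check against the actual file):

  TEST:
    MODEL_FILE: 'models/model_hboe.pth'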

  python tools/test.py --cfg experiments/coco/segm-4_lr1e-3.yaml

Acknowledgement

This repo is based on HRNet.

Citation

If you use our dataset or models in your research, please cite:

  @inproceedings{wu2020mebow,
    title={MEBOW: Monocular Estimation of Body Orientation In the Wild},
    author={Wu, Chenyan and Chen, Yukun and Luo, Jiajia and Su, Che-Chun and Dawane, Anuja and Hanzra, Bikramjot and Deng, Zhuo and Liu, Bilan and Wang, James Z and Kuo, Cheng-hao},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={3451--3461},
    year={2020}
  }