Project author: carpedm20

Project description:
TensorFlow implementation of "Learning from Simulated and Unsupervised Images through Adversarial Training"
Language: Python
Project URL: git://github.com/carpedm20/simulated-unsupervised-tensorflow.git
Created: 2016-12-27T08:51:15Z
Project community: https://github.com/carpedm20/simulated-unsupervised-tensorflow

License: Apache License 2.0

Simulated+Unsupervised (S+U) Learning in TensorFlow

TensorFlow implementation of Learning from Simulated and Unsupervised Images through Adversarial Training.

(Figure: model architecture)

Requirements

Usage

To generate the synthetic dataset:

  1. Run UnityEyes with the resolution changed to 640x480 and the Camera parameters set to [0, 0, 20, 40].
  2. Move the generated images and .json files into data/gaze/UnityEyes (a small copy helper is sketched after this list).
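
If UnityEyes writes its numbered .jpg/.json pairs to its own output folder, a small helper along these lines can copy them into place. This is a hedged sketch, not part of the repository; the source path unityeyes_out below is a placeholder for wherever UnityEyes saved its output on your machine.

  # Copy UnityEyes output (paired .jpg/.json files) into the layout this repo expects.
  # "unityeyes_out" is a placeholder path, not something the repository defines.
  import glob
  import os
  import shutil

  src = "unityeyes_out"            # placeholder: UnityEyes output folder
  dst = "data/gaze/UnityEyes"
  os.makedirs(dst, exist_ok=True)

  for path in glob.glob(os.path.join(src, "*.jpg")) + glob.glob(os.path.join(src, "*.json")):
      shutil.copy(path, dst)       # copies 1.jpg, 1.json, 2.jpg, 2.json, ...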

The data directory should look like this (a small layout check is sketched after the tree):

  data
  ├── gaze
  │   ├── MPIIGaze
  │   │   └── Data
  │   │       └── Normalized
  │   │           ├── p00
  │   │           ├── p01
  │   │           └── ...
  │   └── UnityEyes # contains images of UnityEyes
  │       ├── 1.jpg
  │       ├── 1.json
  │       ├── 2.jpg
  │       ├── 2.json
  │       └── ...
  ├── __init__.py
  ├── gaze_data.py
  ├── hand_data.py
  └── utils.py
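
Before training, a quick check that the layout above is in place can catch path mistakes early. This is a hedged sketch, not part of the repository:

  # Sanity-check the expected data layout before running main.py.
  import os

  required = [
      "data/gaze/MPIIGaze/Data/Normalized",
      "data/gaze/UnityEyes",
  ]
  for path in required:
      if not os.path.isdir(path):
          raise SystemExit(f"missing directory: {path}")

  n_jpg = len([f for f in os.listdir("data/gaze/UnityEyes") if f.endswith(".jpg")])
  print("UnityEyes images found:", n_jpg)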

To train a model (samples will be generated in the samples directory; a sketch of the refiner loss follows the commands):

  $ python main.py
  $ tensorboard --logdir=logs --host=0.0.0.0
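
Per the paper, the refiner R is trained with an adversarial realism loss on D(R(x)) plus lambda times an L1 self-regularization loss that keeps the refined image close to the synthetic input x; the --reg_scale flag sets lambda. Below is a hedged TensorFlow 2 sketch of that combined loss, not the repository's original TF 0.12 code; refined_logits is assumed to be the discriminator's output on refined images, and the "real" label is taken to be 1 here.

  # Hedged sketch of the refiner objective from the paper (not the repo's exact code):
  #   L_R = realism loss on D(R(x)) + lambda * || R(x) - x ||_1
  import tensorflow as tf

  def refiner_loss(synthetic, refined, refined_logits, reg_scale=1.0):
      # Realism term: push the discriminator to label refined images as "real" (label 1
      # under the sigmoid convention assumed here).
      realism = tf.reduce_mean(
          tf.nn.sigmoid_cross_entropy_with_logits(
              labels=tf.ones_like(refined_logits), logits=refined_logits))
      # Self-regularization term: keep the refined image close to the synthetic input.
      self_reg = tf.reduce_mean(tf.abs(refined - synthetic))
      return realism + reg_scale * self_reg

A larger reg_scale keeps the refiner closer to the synthetic input; the experiments below compare 1.0, 0.5, and 0.1.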

To refine all synthetic images with a pretrained model:

  $ python main.py --is_train=False --synthetic_image_dir="./data/gaze/UnityEyes/"
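
Conceptually, this step just maps every synthetic image through the trained refiner and writes the result to disk. The sketch below is an assumption-heavy illustration: it pretends the refiner is available as a Keras model at the placeholder path refiner_model_path and that inputs are grayscale; the repository itself restores its own TF checkpoint through main.py as shown above.

  # Illustrative only: apply a saved refiner to every UnityEyes image.
  import glob, os
  import numpy as np
  import tensorflow as tf
  from PIL import Image

  refiner = tf.keras.models.load_model("refiner_model_path")  # placeholder path
  os.makedirs("refined", exist_ok=True)

  for path in glob.glob("data/gaze/UnityEyes/*.jpg"):
      img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
      out = refiner(img[None, ..., None]).numpy()[0, ..., 0]  # add/strip batch+channel dims
      Image.fromarray((out * 255).astype(np.uint8)).save(
          os.path.join("refined", os.path.basename(path)))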

Training results

Differences from the paper

  • Used the Adam and Stochastic Gradient Descent (SGD) optimizers.
  • Used only 83K synthetic images from UnityEyes (14% of the 1.2M used in the paper).
  • Manually chose the hyperparameters B and lambda because they are not specified in the paper (see the history-buffer sketch after this list).
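
In the paper, B is the size of a history buffer of previously refined images, and half of each discriminator mini-batch is drawn from it to stabilize adversarial training. The following NumPy sketch of that mechanism is illustrative; the names and shapes are assumptions, not the repository's code.

  # Hedged sketch of the paper's history buffer of refined images (size B).
  import numpy as np

  class ImageHistoryBuffer:
      def __init__(self, buffer_size, image_shape):
          self.buffer = np.zeros((buffer_size,) + image_shape, dtype=np.float32)
          self.size = 0
          self.capacity = buffer_size

      def push(self, refined_batch):
          # Insert newly refined images, overwriting random old entries once full.
          for img in refined_batch:
              if self.size < self.capacity:
                  self.buffer[self.size] = img
                  self.size += 1
              else:
                  self.buffer[np.random.randint(self.capacity)] = img

      def sample(self, n):
          # Return n previously refined images for the discriminator's mini-batch.
          idx = np.random.randint(self.size, size=n)
          return self.buffer[idx]

Each training step would push the current batch of refined images and mix buffer.sample(batch_size // 2) into the discriminator's inputs alongside freshly refined ones.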

Experiments #1

For these synthetic images,

(Figure: UnityEyes synthetic samples)

Result of lambda=1.0 with optimizer=sgd after 8,000 steps.

  $ python main.py --reg_scale=1.0 --optimizer=sgd

(Figure: refined samples, lambda=1.0)

Result of lambda=0.5 with optimizer=sgd after 8,000 steps.

  $ python main.py --reg_scale=0.5 --optimizer=sgd

(Figure: refined samples, lambda=0.5)

Training loss of discriminator and refiner when lambda is 1.0 (green) and 0.5 (yellow).

(Figure: training-loss curves)

Experiments #2

For these synthetic images,

(Figure: UnityEyes synthetic samples)

Result of lambda=1.0 with optimizer=adam after 4,000 steps.

  $ python main.py --reg_scale=1.0 --optimizer=adam

(Figure: refined samples, lambda=1.0)

Result of lambda=0.5 with optimizer=adam after 4,000 steps.

  $ python main.py --reg_scale=0.5 --optimizer=adam

(Figure: refined samples, lambda=0.5)

Result of lambda=0.1 with optimizer=adam after 4,000 steps.

  $ python main.py --reg_scale=0.1 --optimizer=adam

(Figure: refined samples, lambda=0.1)

Training loss of discriminator and refiner when lambda is 1.0 (blue), 0.5 (purple) and 0.1 (green).

(Figure: training-loss curves)

Author

Taehoon Kim / @carpedm20