Project author: Daisy-Zhang

Project description: General video classification framework implemented in PyTorch for all video classification tasks.
Language: Python
Repository: git://github.com/Daisy-Zhang/Video-Classification-Pytorch.git
Created: 2020-11-28T06:08:35Z
Project community: https://github.com/Daisy-Zhang/Video-Classification-Pytorch

License: MIT License


Pytorch Video Classification

General video classification framework implemented in PyTorch for all video classification tasks.

(Remember to extract all frames of your videos first and put the frames in the corresponding video data directory.)
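
If you need a starting point for frame extraction, one common way is OpenCV; the sketch below is not part of this repo, and the paths and file-naming scheme are placeholders you should adapt to your own layout:

    import os
    import cv2  # pip install opencv-python

    def extract_frames(video_path, out_dir):
        """Dump every frame of one video as numbered .jpg files into out_dir."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, "%05d.jpg" % idx), frame)
            idx += 1
        cap.release()
        return idx

    # e.g.: extract_frames("data/train/ClassA/video1.mp4", "data/train/ClassA/video1")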

structure

/checkpoints

This directory stores all the models you train.

/data

Please put all your training and test data in this directory, following the original directory structure.

/log

All log files will be stored in this directory.

/models

You can put any network models you design in this directory. I already provide four classic networks: ALSTM, CNNLSTM, LRCN, and RNN; a rough sketch of the CNN+LSTM idea is shown below.
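
For orientation only, a CNN+LSTM video classifier in PyTorch typically looks like this minimal sketch; it is not the repo's CNNLSTM implementation, and the backbone, hidden size, and shapes are illustrative:

    import torch
    import torch.nn as nn
    from torchvision import models

    class SimpleCNNLSTM(nn.Module):
        """Per-frame CNN features fed into an LSTM; classify from the last hidden state."""
        def __init__(self, num_classes, hidden_size=256):
            super().__init__()
            cnn = models.resnet18(pretrained=True)
            cnn.fc = nn.Identity()                      # keep the 512-d pooled features
            self.cnn = cnn
            self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, num_classes)

        def forward(self, x):                           # x: (batch, seq_len, 3, H, W)
            b, t = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.lstm(feats)
            return self.fc(h[-1])

    # logits = SimpleCNNLSTM(num_classes=5)(torch.randn(2, 16, 3, 224, 224))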

/simple_test

A simple test dataset I created to validate the test function. Users can keep this directory and modify it to use their own test data, or simply delete it.

This directory contains the original videos, their frames, and the extracted features.

env

You can use the following command to install all dependencies:

    pip install -r requirements.txt

PS: for PyTorch, earlier versions may still work.

data

I implement the Dataset class in dataset.py. All videos and their frames should be put in the /data/train and /data/test directories. Each child directory corresponds to one class, e.g. /data/test/ClassA; an illustrative layout is shown below.
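
For example, the layout could look like the tree below. The class names, video names, and per-video frame folders are placeholders; the exact frame placement is illustrative, so keep it consistent with how you extracted your frames:

    data/
      train/
        ClassA/
          video1.mp4
          video1/          # extracted frames of video1 (illustrative)
            00001.jpg
            ...
        ClassB/
          ...
      test/
        ClassA/
          ...
        ClassB/
          ...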

config

Users can change the settings in conf.py as needed, such as IMAGE_SIZE, EPOCH, etc.
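
As an illustration, the settings in conf.py are constants along these lines; only IMAGE_SIZE and EPOCH are mentioned above, and the other names as well as all values here are placeholders:

    # illustrative excerpt, not the actual conf.py
    IMAGE_SIZE = 224       # crop/resize size for frames
    EPOCH = 50             # number of training epochs
    BATCH_SIZE = 16        # placeholder: adjust to your GPU memory
    LEARNING_RATE = 1e-4   # placeholder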

feature extraction

After you have placed your videos and their frames in /data, run extract_features.py first to extract the features of your video data:

    >> python extract_features.py -h
    usage: extract_features.py [-h] -model MODEL -seq_length SEQ_LENGTH
                               -image_size IMAGE_SIZE -sample_frames SAMPLE_FRAMES
                               -data_dir DATA_DIR [-gpu]

    optional arguments:
      -h, --help            show this help message and exit
      -model MODEL          extractor model type
      -seq_length SEQ_LENGTH
                            sample frames length
      -image_size IMAGE_SIZE
                            crop image size
      -sample_frames SAMPLE_FRAMES
                            sampled frames
      -data_dir DATA_DIR    whole image folder dir
      -gpu                  use gpu or not
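
A typical invocation might look like the following; the model name and numeric values are only examples, so check the extractor implementations and conf.py for what your setup actually accepts:

    python extract_features.py -model resnet50 -seq_length 40 -image_size 224 -sample_frames 40 -data_dir ./data -gpu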

Once feature extraction is done, there will be a /sequences directory in /data containing all the features and the meta files.

train

Users can run train.py to start training:

    >> python train.py -h
    usage: train.py [-h] -model MODEL -seq_dir SEQ_DIR -seq_length SEQ_LENGTH
                    -cnn_type CNN_TYPE [-gpu]

    optional arguments:
      -h, --help            show this help message and exit
      -model MODEL          model type
      -seq_dir SEQ_DIR      features dir
      -seq_length SEQ_LENGTH
                            sequences length
      -cnn_type CNN_TYPE    features extractor cnn type
      -gpu                  use gpu or not
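
For example (the -model and -cnn_type values below are placeholders; use the names defined in /models and the extractor you used for feature extraction):

    python train.py -model lstm -seq_dir ./data/sequences -seq_length 40 -cnn_type resnet50 -gpu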

test

Users can run test.py to evaluate their models:

    >> python test.py -h
    usage: test.py [-h] -model MODEL -weights WEIGHTS [-gpu] -data_path DATA_PATH
                   -image_size IMAGE_SIZE -cnn_type CNN_TYPE -seq_length
                   SEQ_LENGTH

    optional arguments:
      -h, --help            show this help message and exit
      -model MODEL          model type
      -weights WEIGHTS      the weights file you want to test
      -gpu                  use gpu or not
      -data_path DATA_PATH  test data path
      -image_size IMAGE_SIZE
                            input image size
      -cnn_type CNN_TYPE    lstm feature extractor type
      -seq_length SEQ_LENGTH
                            sequences length
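
For example (the weights filename and parameter values are placeholders and should match what you used for feature extraction and training):

    python test.py -model lstm -weights ./checkpoints/xxx.pth -data_path ./simple_test -image_size 224 -cnn_type resnet50 -seq_length 40 -gpu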

others

utils.py: some utility functions used in train.py and test.py. Users can modify this file for their convenience.

If this repo does you a favor, a star is my pleasure :)

And if you find any problem, please contact me or open an issue.