Tumor cell segmentation with Inception-v3 and an FCN model
The original images are 2048 x 2048 TIFF files, and the tumor images carry region annotations in SVG format.
Source Code:
python main_cancer_annotation.py # process tumor images
python main_non_cancer_annotation.py # process normal images
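The two scripts above are project-specific. As a rough sketch of what patch extraction does, with hypothetical file names, and assuming the SVG regions have already been rasterized into a binary mask:

import os
import numpy as np
from PIL import Image

PATCH = 256  # hypothetical patch size; the real value is set in the scripts above

def extract_patches(image_path, mask_path, out_dir):
    """Crop a 2048 x 2048 slide into patches and sort them by label.

    mask_path is assumed to be the SVG annotation already rasterized
    into a binary image (255 inside tumor regions, 0 elsewhere).
    """
    image = np.array(Image.open(image_path))  # the 2048 x 2048 TIFF
    mask = np.array(Image.open(mask_path))
    for label in ("0", "1"):
        os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    for y in range(0, image.shape[0], PATCH):
        for x in range(0, image.shape[1], PATCH):
            patch = image[y:y + PATCH, x:x + PATCH]
            # call the patch tumor (1) if most of it lies inside an annotated region
            label = "1" if mask[y:y + PATCH, x:x + PATCH].mean() > 127 else "0"
            Image.fromarray(patch).save(os.path.join(out_dir, label, f"{y}_{x}.jpg"))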
All patches end up in two directories, 0 and 1: 0 is normal, 1 is tumor. Both directories sit inside the images directory, where we also write labels.txt; each of its lines is 0 or 1, naming the two classes present in the images directory. Then count the patches, reserve 1/10 of them for validation, and change the validation value in process_medical.sh accordingly.
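A minimal sketch (not the repo's actual code) of writing labels.txt and computing the 1/10 validation count to plug into process_medical.sh; it assumes the labels file lists the two class names one per line, as in the stock Inception build_image_data pipeline:

import os

images_dir = "/path/to/input/images"
counts = {label: len(os.listdir(os.path.join(images_dir, label))) for label in ("0", "1")}

# labels.txt names the two classes, one per line
with open(os.path.join(images_dir, "labels.txt"), "w") as f:
    f.write("0\n1\n")

total = sum(counts.values())
print(f"{total} patches; set validation in process_medical.sh to about {total // 10}")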
Compile:
cd /path/to/models/inception
bazel build //inception:process_medical
Run:
bazel-bin/inception/process_medical /path/to/input/images
Train: batch size = 32 x number of GPUs, initial learning rate = 0.05, learning rate decay factor = 0.5 (the decay schedule is sketched after the run command). Compile:
cd /path/to/models/inception
bazel build //inception:medical_train_with_eval
Run:
bazel-bin/inception/medical_train_with_eval \
--num_gpus=2 \
--batch_size=64 \
--train_dir=/path/to/save/checkpoints/and/summaries \
--data_dir=/path/to/input/images \
--pretrained_model_checkpoint_path=/path/to/restore/checkpoints \
--initial_learning_rate=0.05 \
--learning_rate_decay_factor=0.5 \
--log_file=/path/to/log_file.txt
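For reference, the stock Inception training script applies staircase exponential decay, so with these settings the effective learning rate would follow the sketch below; whether the forked medical_train_with_eval script keeps the default num_epochs_per_decay is an assumption:

def learning_rate(step, batches_per_epoch, epochs_per_decay=30):
    # staircase exponential decay: halve the rate every epochs_per_decay epochs
    # (30 is the stock Inception default and may differ in this fork)
    decay_steps = batches_per_epoch * epochs_per_decay
    return 0.05 * 0.5 ** (step // decay_steps)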
Attention: after training, all files in train_dir need to be copied to pretrained_model_checkpoint_path, and the checkpoint file must be edited so that every ckpt path points into pretrained_model_checkpoint_path.
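The checkpoint file is a small text file in which every quoted string is a path to a ckpt; a minimal sketch that repoints each path to the new directory (assuming the standard TensorFlow checkpoint file layout):

import os
import re

ckpt_file = "/path/to/restore/checkpoints/checkpoint"
new_dir = "/path/to/restore/checkpoints"

with open(ckpt_file) as f:
    text = f.read()

# lines look like: model_checkpoint_path: "/old/dir/model.ckpt-12345"
def repoint(match):
    return '"%s"' % os.path.join(new_dir, os.path.basename(match.group(1)))

with open(ckpt_file, "w") as f:
    f.write(re.sub(r'"([^"]+)"', repoint, text))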
The log is written to log_file and includes accuracy (ACC) and loss information.
Evaluation. Compile:
cd /path/to/models/inception
bazel build //inception:medical_eval
Run:
bazel-bin/inception/medical_eval \
--checkpoint_dir=/path/to/restore/checkpoints \
--eval_dir=/path/to/save/summaries \
--data_dir=/path/to/input/images \
--num_example=30000 \
--subset=validation \
--eval_file=/path/to/eval_file.txt
Attention: the log is written to eval_file and includes accuracy (ACC) and loss information.
Train: the input is a patch and its annotation; the annotation is the crop produced in step 1. The FCN is based on VGG Net: the last fully connected layers are replaced with 1 x 1 convolutions, and the outputs of pooling layers 3 and 4 are fused while upsampling, so the output is a picture the same size as the input patch (see the sketch after the command). Learning rate = 0.0001, batch size = 32. Run:
python FCN.py \
--batch_size=32 \
--checkpoint_dir=/path/to/pretrained/models/ \
--logs_dir=/path/to/save/logs/ \
--data_dir=/path/to/data/ \
--mode=train
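The head described above is the classic FCN-8s construction. A simplified illustration with TF 1.x layers, not the repo's FCN.py; pool3, pool4, and pool5 stand for the corresponding VGG feature maps:

import tensorflow as tf  # TF 1.x API

def fcn8s_head(pool3, pool4, pool5, num_classes=2):
    # fc6/fc7 become convolutions; the classifier is a 1 x 1 convolution
    fc6 = tf.layers.conv2d(pool5, 4096, 7, padding="same", activation=tf.nn.relu)
    fc7 = tf.layers.conv2d(fc6, 4096, 1, activation=tf.nn.relu)
    score = tf.layers.conv2d(fc7, num_classes, 1)

    # upsample 2x and fuse with the pool4 prediction
    up2 = tf.layers.conv2d_transpose(score, num_classes, 4, strides=2, padding="same")
    up2 += tf.layers.conv2d(pool4, num_classes, 1)

    # upsample 2x again and fuse with the pool3 prediction
    up4 = tf.layers.conv2d_transpose(up2, num_classes, 4, strides=2, padding="same")
    up4 += tf.layers.conv2d(pool3, num_classes, 1)

    # final 8x upsampling back to the size of the input patch
    return tf.layers.conv2d_transpose(up4, num_classes, 16, strides=8, padding="same")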
Evaluation: the input is a patch and its annotation; the output is the predicted annotation. Calculate the proportion of the predicted regions, compare them against the real regions, and compute the accuracy (see the sketch after the command). Run:
python FCN.py \
--batch_size=32 \
--checkpoint_dir=/path/to/models/ \
--logs_dir=/path/to/save/logs/ \
--data_dir=/path/to/data/ \
--mode=test \
--subset=validation
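The accuracy described above reduces to agreement between the predicted and real annotations; a minimal sketch with hypothetical helpers, assuming the masks are integer label arrays where 1 marks tumor:

import numpy as np

def tumor_proportion(mask):
    # proportion of the patch covered by tumor pixels
    return (np.asarray(mask) == 1).mean()

def pixel_accuracy(pred, truth):
    # fraction of pixels where the predicted annotation matches the real one
    return (np.asarray(pred) == np.asarray(truth)).mean()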