Project author: arthur801031

Project description: Official PyTorch implementation of the paper "3D Instance Segmentation Framework for Cerebral Microbleeds using 3D Multi-Resolution R-CNN."

Language: Python

Repository URL: git://github.com/arthur801031/3d-multi-resolution-rcnn.git

Created: 2020-12-15T07:31:45Z

Project home: https://github.com/arthur801031/3d-multi-resolution-rcnn

License: Apache License 2.0



3D Instance Segmentation Framework for Cerebral Microbleeds using 3D Multi-Resolution R-CNN

Official PyTorch implementation of the paper “3D Instance Segmentation Framework for Cerebral Microbleeds using 3D Multi-Resolution R-CNN” by I-Chun Arthur Liu, Chien-Yao Wang, Jiun-Wei Chen, Wei-Chi Li, Feng-Chi Chang, Yi-Chung Lee, Yi-Chu Liao, Chih-Ping Chung, Hong-Yuan Mark Liao, Li-Fen Chen. The paper is currently under review.

Keywords: 3D instance segmentation, 3D object detection, cerebral microbleeds, convolutional neural networks (CNNs), susceptibility weighted imaging (SWI), 3D Mask R-CNN, magnetic resonance imaging (MRI), medical imaging, pytorch.

Usage Instructions

Requirements

  • Linux (tested on Ubuntu 16.04 and Ubuntu 18.04)
  • Conda or Miniconda
  • Python 3.4+
  • PyTorch 1.0
  • Cython
  • mmcv

Tested on CUDAs

  • CUDA 11.0 with Nvidia Driver 450.80.02
  • CUDA 10.0 with Nvidia Driver 410.78
  • CUDA 9.0 with Nvidia Driver 384.130

Installation

  1. Clone the repository.
     git clone https://github.com/arthur801031/3d-multi-resolution-rcnn.git
  2. Install Conda or Miniconda (see the Miniconda installation instructions).
  3. Create a conda environment from conda.yml.
     cd 3d-multi-resolution-rcnn/
     conda env create --file conda.yml
  4. Install the pip packages.
     pip install -r requirements.txt
  5. Compile the CUDA extensions.
     ./compile.sh
  6. Create a “data” directory and move your dataset into it.
     mkdir data

Training Commands

  # single-GPU training with validation during training
  clear && python setup.py install && CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/3d-multi-resolution-rcnn.py 1 --validate

  # multi-GPU training with validation during training
  clear && python setup.py install && CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_train.sh configs/3d-multi-resolution-rcnn.py 2 --validate

  # resume training from a checkpoint
  clear && python setup.py install && CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_train.sh configs/3d-multi-resolution-rcnn.py 2 --validate --resume_from work_dirs/checkpoints/3d-multi-resolution-rcnn/latest.pth

Testing Commands

  # perform evaluation on bounding boxes only
  clear && python setup.py install && CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/3d-multi-resolution-rcnn.py work_dirs/checkpoints/3d-multi-resolution-rcnn/latest.pth --gpus 1 --out results.pkl --eval bbox

  # perform evaluation on bounding boxes and segmentations
  clear && python setup.py install && CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/3d-multi-resolution-rcnn.py work_dirs/checkpoints/3d-multi-resolution-rcnn/latest.pth --gpus 1 --out results.pkl --eval bbox segm

Test Image(s)

Refer to test_images.py for details.
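For orientation only, the sketch below shows what single-volume inference typically looks like in an mmdetection-1.x-based codebase such as this one. The init_detector/inference_detector entry points and the .npy input are assumptions and may not match this fork's actual API; test_images.py remains the authoritative reference.

  # Hypothetical sketch only; see test_images.py for the real entry points.
  # init_detector/inference_detector follow the mmdetection 1.x API that this
  # codebase builds on, but the 3D fork may expose different helpers.
  import numpy as np
  from mmdet.apis import init_detector, inference_detector  # assumed API

  config = 'configs/3d-multi-resolution-rcnn.py'
  checkpoint = 'work_dirs/checkpoints/3d-multi-resolution-rcnn/latest.pth'
  model = init_detector(config, checkpoint, device='cuda:0')

  # Input volumes are stored as .npy arrays (see the annotation format below).
  volume = np.load('data/A002-26902603_instance_v1.npy')  # illustrative path
  result = inference_detector(model, volume)
  print(result)  # detections: per-class boxes (and masks, if the model predicts them)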

COCO Annotation Format

  {
    "info": {
      "description": "Dataset",
      "url": "https://",
      "version": "0.0.1",
      "year": 2019,
      "contributor": "arthur",
      "date_created": "2020-10-29 17:12:12.838644"
    },
    "licenses": [
      {
        "id": 1,
        "name": "E.g. Attribution-NonCommercial-ShareAlike License",
        "url": "http://"
      }
    ],
    "categories": [
      {
        "id": 1,
        "name": "microbleed",
        "supercategory": "COCO"
      }
    ],
    "images": [
      {
        "id": 1,
        "file_name": "A002-26902603_instance_v1.npy",
        "width": 512,
        "height": 512,
        "date_captured": "2020-10-29 15:31:32.060574",
        "license": 1,
        "coco_url": "",
        "flickr_url": ""
      },
      {
        "id": 2,
        "file_name": "A003-1_instance_v1.npy",
        "width": 512,
        "height": 512,
        "date_captured": "2020-10-29 15:31:32.060574",
        "license": 1,
        "coco_url": "",
        "flickr_url": ""
      }
    ],
    "annotations": [
      {
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        "iscrowd": 0,
        "area": 196,
        "bbox": [300, 388, 7, 7, 65, 4],
        "segmentation": "data/Stroke_v4/COCO-full-vol/train/annotations_full/A002-26902603_instance_v1_1.npy",
        "segmentation_label": 1,
        "width": 512,
        "height": 512
      },
      {
        "id": 2,
        "image_id": 2,
        "category_id": 1,
        "iscrowd": 0,
        "area": 1680,
        "bbox": [334, 360, 15, 14, 33, 8],
        "segmentation": "data/Stroke_v4/COCO-full-vol/train/annotations_full/A003-1_instance_v1_1.npy",
        "segmentation_label": 1,
        "width": 512,
        "height": 512
      },
      {
        "id": 3,
        "image_id": 2,
        "category_id": 1,
        "iscrowd": 0,
        "area": 486,
        "bbox": [380, 244, 9, 9, 51, 6],
        "segmentation": "data/Stroke_v4/COCO-full-vol/train/annotations_full/A003-1_instance_v1_10.npy",
        "segmentation_label": 10,
        "width": 512,
        "height": 512
      },
      {
        "id": 4,
        "image_id": 2,
        "category_id": 1,
        "iscrowd": 0,
        "area": 256,
        "bbox": [340, 300, 8, 8, 61, 4],
        "segmentation": "data/Stroke_v4/COCO-full-vol/train/annotations_full/A003-1_instance_v1_11.npy",
        "segmentation_label": 11,
        "width": 512,
        "height": 512
      },
      {
        "id": 5,
        "image_id": 2,
        "category_id": 1,
        "iscrowd": 0,
        "area": 550,
        "bbox": [367, 196, 10, 11, 65, 5],
        "segmentation": "data/Stroke_v4/COCO-full-vol/train/annotations_full/A003-1_instance_v1_12.npy",
        "segmentation_label": 12,
        "width": 512,
        "height": 512
      }
    ]
  }
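Note that "bbox" carries six numbers instead of COCO's usual four [x, y, width, height]. Given the paired extents in the examples above (7/7, 15/14), a plausible reading is [x, y, width, height, z, depth], extending each box into the slice dimension; that ordering is an assumption here, not something the repository documents. A minimal sketch for loading and sanity-checking a file in this format (all paths illustrative):

  # Minimal sketch for sanity-checking an annotation file in the format above.
  # The [x, y, w, h, z, d] reading of "bbox" is an assumption.
  import json
  import numpy as np

  with open('annotations.json') as f:  # illustrative path
      coco = json.load(f)

  images = {img['id']: img for img in coco['images']}
  for ann in coco['annotations']:
      assert ann['image_id'] in images, 'annotation must reference a known image'
      x, y, w, h, z, d = ann['bbox']  # assumed 3D box layout
      assert 0 <= x and x + w <= ann['width']
      assert 0 <= y and y + h <= ann['height']
      # Each instance's mask is stored as a separate .npy volume;
      # "segmentation_label" presumably identifies the instance within it.
      mask = np.load(ann['segmentation'])
      print(ann['id'], mask.shape, ann['segmentation_label'])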

This codebase is based on MMDetection, the OpenMMLab Detection Toolbox and Benchmark.