Project author: Karol-G

Project description:
Gcam is an easy-to-use PyTorch library that makes model predictions more interpretable for humans. It allows the generation of attention maps with multiple methods such as Guided Backpropagation, Grad-Cam, Guided Grad-Cam and Grad-Cam++.
Programming language: Python
Project URL: git://github.com/Karol-G/Gcam.git
Created: 2020-05-29T06:51:33Z
Project community: https://github.com/Karol-G/Gcam

License: MIT License

Gcam (Grad-Cam)

New version of this repo at https://github.com/MECLabTUDA/M3d-Cam

Gcam is an easy-to-use PyTorch library that makes model predictions more interpretable for humans.
It allows the generation of attention maps with multiple methods such as Guided Backpropagation,
Grad-Cam, Guided Grad-Cam and Grad-Cam++.

All you need to add to your project is a single line of code:

  model = gcam.inject(model, output_dir="attention_maps", save_maps=True)

Features

  • Works with classification and segmentation data / models
  • Works with 2D and 3D data
  • Supports Guided Backpropagation, Grad-Cam, Guided Grad-Cam and Grad-Cam++
  • Attention map evaluation against given ground-truth masks
  • Option for automatic layer selection (a configuration sketch for these options follows this list)
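
The attention method, target layer and evaluation against ground-truth masks are controlled when injecting the model. The sketch below is only illustrative: apart from output_dir and save_maps, which appear in this README, the keyword names backend, layer, evaluate and metric are assumptions and should be checked against the linked documentation.

  # Hedged sketch -- keyword names other than output_dir/save_maps are assumed,
  # not confirmed by this README; consult the Gcam documentation for the real signature.
  model = gcam.inject(
      model,
      output_dir="attention_maps",
      save_maps=True,
      backend="gcam",    # assumed: attention method, e.g. "gbp", "gcam", "ggcam", "gcampp"
      layer="auto",      # assumed: let Gcam pick the target layer automatically
      evaluate=True,     # assumed: evaluate maps against ground-truth masks
  )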

Installation
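
This section of the README does not list an install command. Since the project is published on PyPI, a typical installation (assuming the package name gcam) would be:

  pip install gcam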

Documentation

Gcam is fully documented; you can view the documentation at:

https://karol-g.github.io/Gcam

Examples

Example attention maps (input image, Guided Backpropagation, Grad-Cam, Guided Grad-Cam and Grad-Cam++) are shown in the repository for three cases: #1 classification (2D), #2 segmentation (2D) and #3 segmentation (3D). The example images are not reproduced here.

Usage

  # Import gcam
  from gcam import gcam
  from torch.utils.data import DataLoader

  # Initialize your model and dataloader
  model = MyCNN()
  data_loader = DataLoader(dataset, batch_size=1, shuffle=False)

  # Inject the model with gcam
  model = gcam.inject(model, output_dir="attention_maps", save_maps=True)

  # Continue to do what you're doing...
  # In this case, inference on some new data
  model.eval()
  for batch in data_loader:
      # Every time forward is called, attention maps are generated and saved in the directory "attention_maps"
      output = model(batch)
      # more of your code...

Demos

Classification

You can find a Jupyter Notebook on how to use Gcam for classification with a resnet152 at demos/Gcam_classification_demo.ipynb, or open it directly in Google Colab.
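
As a rough, self-contained sketch of what such a classification setup looks like (assuming a pretrained torchvision resnet152 and a single preprocessed image tensor; the notebook itself may differ in details):

  import torch
  from torchvision import models
  from gcam import gcam

  # Assumed setup: a pretrained 2D classification model from torchvision
  model = models.resnet152(pretrained=True)
  model = gcam.inject(model, output_dir="attention_maps", save_maps=True)
  model.eval()

  # "image" stands in for a preprocessed input of shape (batch, channels, height, width)
  image = torch.randn(1, 3, 224, 224)
  logits = model(image)  # attention maps are generated and saved during this forward pass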

2D Segmentation

TODO

3D Segmentation

You can find a Jupyter Notebook on how to use Gcam with the nnUNet for handling 3D data at demos/Gcam_nnUNet_demo.ipynb, or open it directly in Google Colab.
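
The nnUNet setup in the notebook is fairly involved; as a rough sketch of the 3D case, any PyTorch model that consumes volumetric tensors of shape (batch, channels, depth, height, width) can be injected in the same way. The tiny network below is a placeholder, not the nnUNet from the demo:

  import torch
  import torch.nn as nn
  from gcam import gcam

  # Placeholder 3D segmentation network (stand-in for the nnUNet used in the demo)
  model = nn.Sequential(
      nn.Conv3d(1, 8, kernel_size=3, padding=1),
      nn.ReLU(),
      nn.Conv3d(8, 2, kernel_size=1),  # two output classes
  )
  model = gcam.inject(model, output_dir="attention_maps_3d", save_maps=True)
  model.eval()

  # Volumetric input: (batch, channels, depth, height, width)
  volume = torch.randn(1, 1, 32, 64, 64)
  output = model(volume)  # attention maps are saved during this forward pass (assuming Gcam's defaults handle this model)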