Project author: greydanus

Project description:
Visualizing how deep networks make decisions
Primary language: Jupyter Notebook
Repository: git://github.com/greydanus/excitationbp.git
Created: 2017-07-18T21:50:20Z
Project page: https://github.com/greydanus/excitationbp


excitationbp: visualizing how deep networks make decisions

Sam Greydanus. March 2018. MIT License.

Oregon State University College of Engineering. Explainable AI Project. Supported by DARPA.

Written in PyTorch

imagenet-ceb.png

About

This is a PyTorch implementation of contrastive excitation backprop (EB) (see this paper and the original Caffe code). The idea of EB is to visualize what causes a given neuron to fire. We backprop only through positive weights and keep the signal normalized so it sums to 1. The resulting signal on the original image can then be loosely interpreted as the probability that a given pixel will excite that neuron.
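The per-layer rule can be sketched in plain NumPy. This is a minimal illustration of the idea for a single fully-connected layer, not the repo's PyTorch implementation; the function name, layer sizes, and neuron index are made up:

```python
import numpy as np

def eb_linear(p_out, W, a, eps=1e-12):
    """One excitation-backprop step through a fully-connected layer.

    p_out: relevance over the layer's outputs (sums to 1)
    W:     weight matrix of shape (out_features, in_features)
    a:     the layer's non-negative input activations
    """
    W_pos = np.maximum(W, 0.0)           # keep only positive weights
    z = W_pos @ a + eps                  # "excitation" each output receives
    return a * (W_pos.T @ (p_out / z))   # redistribute relevance to inputs

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))           # e.g. a 784 -> 10 classifier layer
a = rng.random(784)                      # non-negative input activations
p_out = np.zeros(10); p_out[3] = 1.0     # inspect output neuron 3

p_in = eb_linear(p_out, W, a)            # non-negative, still sums to ~1
```

Because each output's relevance is split among its inputs in proportion to their positive contribution, the signal stays non-negative and normalized as it flows down, which is what lets it be read as a probability.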

Contrastive EB is a little different. We backprop both a positive and a negative activation of the neuron of interest through the layer immediately below it. Then we sum these two signals and perform EB over the remaining layers as usual. This signal can be loosely interpreted as the probability that a given pixel will:

  • excite the neuron.
  • not inhibit the neuron.
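The contrastive step can be sketched the same way: run one EB step through the top layer with the weights as-is, run it again with the weights negated, and take the difference before continuing down the network as usual. Again a NumPy toy with hypothetical names and sizes, not the repo's implementation:

```python
import numpy as np

def eb_linear(p_out, W, a, eps=1e-12):
    # One EB step: redistribute output relevance through positive weights only.
    W_pos = np.maximum(W, 0.0)
    z = W_pos @ a + eps
    return a * (W_pos.T @ (p_out / z))

rng = np.random.default_rng(1)
W_top = rng.normal(size=(10, 256))       # top (classification) layer
a = rng.random(256)                      # activations feeding the top layer
p_out = np.zeros(10); p_out[3] = 1.0     # neuron of interest

# Backprop a positive and a negated copy of the signal through the top
# layer, then combine; EB then proceeds normally through the layers below.
p_pos = eb_linear(p_out, W_top, a)       # "excites the neuron"
p_neg = eb_linear(p_out, -W_top, a)      # "inhibits the neuron"
p_contrastive = p_pos - p_neg            # net signal, sums to ~0
```

Since both branches are individually normalized to 1, their difference sums to roughly zero: pixels with positive values excite the neuron more than they inhibit it.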

We performed experiments on ImageNet and on a noisy MNIST task (results below).

How to use

A minimal example of how to install/use this project:

```
# clone the repo
git clone https://github.com/greydanus/excitationbp.git

# enter the repo
cd excitationbp

# install the package
python setup.py install

# enter the python command line
python

# import the EB module
import excitationbp as eb

# enter excitation backprop mode
# (this replaces some of PyTorch's internal autograd functions)
eb.use_eb(True)

# perform excitation backprop
#   model: a PyTorch module
#   inputs: a PyTorch Variable that will be passed to the model
#   prob_outputs: probability distribution over outputs
#     (usually all zeros except for a 1 on the neuron you want to inspect)
#   contrastive: boolean, whether to use EB or contrastive EB
#   target_layer: int, the layer we want to visualize; 0 refers to the input
prob_inputs = eb.utils.excitation_backprop(model, inputs, prob_outputs, contrastive=False, target_layer=0)
```

Check out the two Jupyter notebooks for detailed examples.
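The `prob_outputs` argument is usually just a one-hot distribution over the model's output neurons. A minimal sketch of building one (NumPy here for illustration; the class count and neuron index are hypothetical, and in the notebooks this would be a PyTorch tensor/Variable):

```python
import numpy as np

num_classes = 1000                 # e.g. a 1000-way classifier (hypothetical)
neuron_of_interest = 281           # index of the output neuron to inspect

prob_outputs = np.zeros(num_classes)
prob_outputs[neuron_of_interest] = 1.0   # all zeros except the target neuron
```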

ImageNet Results

Regular EB (has a hard time separating neuron-specific signals)

imagenet-eb.png

Contrastive EB (separates neuron-specific signals well)

imagenet-ceb.png

Contrastive EB to a mid-level conv. layer

imagenet-pool-ceb.png

Noisy-MNIST Results

I trained a simple fully-connected network on MNIST data + noise. Regular EB, again, had a hard time separating neuron-specific signals.

mnist-eb.png

Contrastive EB separated the ‘1’ vs ‘5’ signals.

mnist-ceb.png

Runtime

Computing a regular EB signal requires a single forward pass and a single backward pass. As of this update, the same holds for the contrastive EB signal.

Dependencies

All code is written in Python 3.6. You will need PyTorch, plus Jupyter to run the example notebooks.