Deep Inside Convolutional Networks

Paper: Simonyan, K., Vedaldi, A., Zisserman, A. 2013. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.


This project implements the paper on the VGG16 model using Keras with a TensorFlow backend:
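As a starting point, a minimal sketch of loading VGG16 in Keras the way the project does. Note the hedge: `weights=None` is used here only so the sketch runs without downloading the pretrained weights; the project itself uses `weights="imagenet"`.

```python
import numpy as np
import tensorflow as tf

# weights=None keeps the sketch light; the project itself loads the
# ImageNet-pretrained weights with weights="imagenet".
model = tf.keras.applications.VGG16(weights=None, include_top=True)

# VGG16 expects 224x224 RGB inputs, zero-centred with the ImageNet means.
x = np.zeros((1, 224, 224, 3), dtype=np.float32)
x = tf.keras.applications.vgg16.preprocess_input(x)
preds = model.predict(x, verbose=0)
print(preds.shape)  # (1, 1000) -- one score per ImageNet class
```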

  1. Visualise the filters of the first convolutional layer
  2. Visualise the activation maps of an input image
  3. Visualise an image that maximises a given class score
  4. Visualise the saliency map of an input image

Items 1, 2 and 4 have been implemented successfully; however, I could not get item 3 to work well.
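For item 4, the paper defines the saliency map as the gradient of the class score with respect to the input image, reduced to a single channel by taking the maximum absolute value across colour channels. A minimal sketch of that computation (not the notebook's exact code; a tiny stand-in model is built here so the sketch runs, where the project would pass VGG16 instead):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in classifier so the sketch is self-contained.
model = tf.keras.Sequential([
    tf.keras.Input((224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1000),  # class scores (logits), as the paper uses
])

def saliency_map(model, image):
    """image: (H, W, 3) float array -> (H, W) saliency map."""
    x = tf.convert_to_tensor(image[np.newaxis], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scores = model(x)
        top = int(tf.argmax(scores[0]))   # highest-scoring class
        score = scores[0, top]
    grads = tape.gradient(score, x)[0]    # (H, W, 3) gradient w.r.t. pixels
    # Max of |gradient| over colour channels, as in the paper.
    return tf.reduce_max(tf.abs(grads), axis=-1).numpy()

smap = saliency_map(model, np.random.rand(224, 224, 3).astype(np.float32))
print(smap.shape)  # (224, 224)
```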

I use formula (1) given in the paper, with a zero image plus the ImageNet mean values as the starting input. I also computed the result with and without image preprocessing, as well as deprocessing using the zero-centred method with the ImageNet mean values. Nevertheless, the generated image is still unrecognisable.
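Formula (1) in the paper asks for an image maximising the class score minus an L2 penalty, found by gradient ascent from a zero image. A hedged sketch of that loop follows; the model must output unnormalised class scores (logits), not softmax probabilities, and a tiny random-weight stand-in is used here so the sketch is self-contained (the project applies the same loop to VGG16).

```python
import tensorflow as tf

# Tiny stand-in classifier with raw class scores (no softmax anywhere).
model = tf.keras.Sequential([
    tf.keras.Input((64, 64, 3)),
    tf.keras.layers.Conv2D(4, 3),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def maximise_class_score(model, class_idx, steps=30, lr=1.0, l2_reg=1e-4,
                         shape=(64, 64, 3)):
    image = tf.Variable(tf.zeros((1,) + shape))       # zero-image start
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(image)[0, class_idx]
            # Formula (1): S_c(I) - lambda * ||I||_2^2
            objective = score - l2_reg * tf.reduce_sum(tf.square(image))
        grads = tape.gradient(objective, image)
        image.assign_add(lr * grads)                  # gradient *ascent*
    return image[0].numpy()

img = maximise_class_score(model, class_idx=3)
print(img.shape)  # (64, 64, 3)
```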

Therefore, I use the methods mentioned in [3] instead. In addition, I initialise a zero image with e * 20 noise per pixel, where each e is sampled from np.random.random(), and then add the ImageNet mean values. For deprocessing, I also use the method from [3].
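The initialisation described above can be sketched in a few lines of NumPy. One assumption is flagged in the comments: the values added back are interpreted as the per-channel ImageNet means used by VGG-style zero-centring.

```python
import numpy as np

# Per-channel ImageNet means used by VGG-style zero-centring (BGR order);
# assumed here to be the values added back after the noise.
IMAGENET_MEAN = np.array([103.939, 116.779, 123.68])

def noisy_init(shape=(224, 224, 3)):
    e = np.random.random(shape)      # e sampled per pixel from [0, 1)
    image = e * 20.0                 # e * 20 noise per pixel
    return image + IMAGENET_MEAN     # add the means back per channel

img = noisy_init()
print(img.shape)  # (224, 224, 3)
```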

You can find more details in `Visualising Image Classification Models and Saliency Maps.ipynb`.

If you have any suggestions or questions, please open an issue. Thank you!
