Add support for interpretable neural network models with iNNvestigate

Hi,

I recently started using the fastai library, which does many things right: by providing a uniform interface, it makes it easy to experiment with various settings and try out what works best.
Before that, I used TensorFlow together with the iNNvestigate library: https://github.com/albermax/innvestigate

iNNvestigate helps you understand what your model has learned and why it may not perform as expected, by rendering heatmaps that highlight which features were relevant for the model's classification of a specific image. The results are quite intuitive, especially for the DeepTaylor analyzer.

Many other attribution algorithms are implemented as well, all behind a single interface, which makes the library ideal for comparing them with each other. Much like fastai, this uniform interface makes trying out various approaches far more efficient.
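
For anyone who has not seen it, the interface looks roughly like this (a sketch adapted from the iNNvestigate README; it assumes a pretrained Keras VGG16, since the library currently targets Keras/TensorFlow, and the random input is just a stand-in for a real image batch):

```python
import numpy as np
import innvestigate
import innvestigate.utils
import keras.applications.vgg16 as vgg16

# Load a pretrained Keras classifier and its preprocessing function
model, preprocess = vgg16.VGG16(), vgg16.preprocess_input

# Analyzers work on the pre-softmax scores, so strip the softmax layer
model = innvestigate.utils.model_wo_softmax(model)

# The uniform interface: swap "deep_taylor" for "lrp.z", "gradient",
# "smoothgrad", ... to compare different attribution methods
analyzer = innvestigate.create_analyzer("deep_taylor", model)

# Stand-in for a real image batch of shape (batch, height, width, channels)
x = np.random.rand(1, 224, 224, 3) * 255

# The result has the same shape as the input and assigns a
# relevance score to every input pixel
heatmaps = analyzer.analyze(preprocess(x))
```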

Would anyone be willing to help get this effort going? I only started using fastai recently, so I can't yet comment on the technical details the author has asked about. Anybody who wishes to help out, or to comment on adding PyTorch/fastai support, would be very welcome to do so.

There is also a live in-browser demo (e.g., draw a digit and see which parts get highlighted as especially relevant for the classification):

https://lrpserver.hhi.fraunhofer.de/

I think that once they support PyTorch, supporting fastai should follow quite easily, since fastai models are plain PyTorch modules. We'll see when PyTorch support officially lands!
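
In the meantime, the simplest of the methods iNNvestigate implements, a plain-gradient saliency map, is easy to reproduce directly in PyTorch. Here is a minimal sketch of what that looks like; `gradient_saliency` and the toy model below are my own placeholders, not part of either library:

```python
import torch

def gradient_saliency(model, x, target_class):
    """Heatmap of |d score / d pixel|: the plain-gradient attribution method."""
    model.eval()
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]  # pre-softmax score of the target class
    score.backward()
    # Collapse the channel dimension to get one relevance value per pixel
    return x.grad.abs().max(dim=1)[0]

# Placeholder model and input, just to show the call shape
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
heatmap = gradient_saliency(model, x, target_class=3)
```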