Heatmaps in plot_top_losses()

Hi. I noticed that plot_top_losses() now has code for generating heatmaps automatically. However:

  1. It seems to be undocumented. Is it perhaps only present in the dev version of the library? Indeed, updating to 1.0.43 does not give me a learner.py that matches the one in the repo.
  2. I’d like to keep plot_top_losses() and my plot_multi_top_losses() well coordinated. If you want, you can take over development of the multi version (I’m not jealous :stuck_out_tongue:); otherwise, let’s try to stay coordinated (I’d like the interpretation API to be as polished as possible) :wink:
  3. I’ll start working on integrating Grad-CAM into the multi version ASAP. Maybe I’ll try Grad-CAM++, which has the advantage of producing the CAM from a weighted average of the gradients, making it more faithful to the true activations.

OT (not worth opening a separate thread): people complained, with reason, that multi takes figsz as one of its arguments instead of the standard figsize. Could you just correct this, so I can avoid forking from scratch and opening a PR just for that?

Thanks!
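The Grad-CAM++ weighting mentioned in point 3 can be sketched in a few lines of NumPy. This is an illustrative sketch of the formulas from the papers, not library code: plain Grad-CAM global-average-pools the gradients of the class score per channel, while Grad-CAM++ replaces that with a spatially weighted average of the positive gradients.

```python
import numpy as np

def grad_cam_weights(grads):
    # Plain Grad-CAM: global-average-pool the gradients per channel
    return grads.mean(axis=(1, 2))

def grad_cam_pp_weights(acts, grads):
    # Grad-CAM++: spatially weighted average of the positive gradients
    g2, g3 = grads ** 2, grads ** 3
    denom = 2 * g2 + (acts * g3).sum(axis=(1, 2), keepdims=True)
    safe = np.where(denom != 0, denom, 1)
    alpha = np.where(denom != 0, g2 / safe, 0)  # per-pixel weighting factors
    return (alpha * np.maximum(grads, 0)).sum(axis=(1, 2))

def cam(acts, weights):
    # Weighted sum of the activation maps, then ReLU (as in both papers)
    return np.maximum((weights[:, None, None] * acts).sum(axis=0), 0)

rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 7, 7))   # C x H x W feature maps
grads = rng.standard_normal((8, 7, 7))  # d(class score)/d(acts)
heat = cam(acts, grad_cam_pp_weights(acts, grads))
print(heat.shape)  # (7, 7)
```

In a real model, acts and grads would come from the last convolutional layer during a backward pass on the class score; here they are random arrays just to show the shapes.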

3 Likes

It is actually documented, see here. Note that it’s an external contribution, and that Grad-CAM can’t really be used as is for the multi-class case (it only uses the activations of the wrongly predicted class).
If you want to do something similar for plot_multi_top_losses you can definitely suggest a PR, I’m not taking over anything :wink: .
I’ll make the change for figsz later today.
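The class-specific behaviour mentioned above can be demonstrated in plain PyTorch. This is a minimal sketch, not fastai’s implementation, and TinyNet is a hypothetical stand-in for a real CNN: the heatmap is built from the gradients of a single class’s score, so backpropagating a different class index gives a different map.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy CNN: conv features, global average pool, linear classifier."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(8, n_classes)

    def forward(self, x):
        a = self.features(x)                      # B x 8 x H x W activations
        return self.head(a.mean(dim=(2, 3))), a  # logits and activations

def grad_cam(model, x, class_idx):
    logits, acts = model(x)
    acts.retain_grad()                  # keep gradients on a non-leaf tensor
    logits[0, class_idx].backward()     # backprop ONE class's score
    w = acts.grad.mean(dim=(2, 3))      # Grad-CAM channel weights
    heat = (w[:, :, None, None] * acts).sum(dim=1)
    return torch.relu(heat).detach()

torch.manual_seed(0)
model, x = TinyNet(), torch.randn(1, 3, 7, 7)
heat = grad_cam(model, x, class_idx=1)
print(heat.shape)  # torch.Size([1, 7, 7])
```

Using the predicted (wrong) class as class_idx highlights what the model took as evidence for its mistake, which is why the heatmap is not directly meaningful when several labels are in play at once.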

1 Like

Thanks! But how come I don’t find it in my learner.py, despite having updated to 1.0.43?

Yes, I was wondering if one could use it to help figure out why the model gave that wrong prediction, by highlighting the features the model considers distinctive of that (wrong) class.
I’m interested in your opinion here: do you think it’s worthwhile?

Thanks a lot!

I think it would be good to provide a tutorial that walks through the methodology used to generate the heatmap. This would give insight into how to interpret it, as well as how to customize it. People can always read the paper, but a tutorial would be nice to explain it in the context of the library.

3 Likes

I wrote one for myself as a notebook, but it’s in Italian. I’ll translate it ASAP and publish it on Medium. I’ll keep you posted (meanwhile, note that the Grad-CAM theory is simpler than you might think; you should at least try to read the paper).

I think I may have read it (the paper), if it’s the same one mentioned in F. Chollet’s book? If you have one ready, that’s great. I don’t mind reading it again from a new perspective.

1 Like

Try this one: https://arxiv.org/abs/1710.11063

1 Like

Thanks, this one has ++ in it, so it must be more recent. I’ll read it.

1 Like

Thanks, I was wondering why my plots of images had heatmap gradients. I turned this parameter off and can now see the actual images that are misclassified.

I wanted to use this resource (the heatmap) on all the images, not only on the top losses. Does anyone know how to do it?
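One route, sketched here with NumPy and illustrative shapes rather than any fastai function: once you have the last-conv-layer activations and gradients for a batch (however you extract them, e.g. with hooks), the Grad-CAM heatmaps for every image can be computed at once instead of only for the top-loss ones.

```python
import numpy as np

def batch_grad_cam(acts, grads):
    """acts, grads: arrays of shape (N, C, H, W) -> heatmaps of shape (N, H, W)."""
    w = grads.mean(axis=(2, 3))                      # channel weights per image
    maps = (w[:, :, None, None] * acts).sum(axis=1)  # weighted sum over channels
    return np.maximum(maps, 0)                       # ReLU, as in the paper

rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8, 7, 7))   # 4 images, 8 channels, 7x7 maps
grads = rng.standard_normal((4, 8, 7, 7))
heatmaps = batch_grad_cam(acts, grads)
print(heatmaps.shape)  # (4, 7, 7)
```

The resulting maps are then upsampled to the input resolution and overlaid on the images, exactly as the top-losses plot does.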

2 Likes

I want that too; did you figure out a way?

1 Like

Hello @divyansh,
I still haven’t found a fastai function that does it, but in lesson 6 of the course @jeremy showed how to implement it.

You can also refer to @quan.tran repository (https://github.com/anhquan0412/animation-classification) where he implemented it (https://nbviewer.jupyter.org/github/anhquan0412/animation-classification/blob/master/gradcam-usecase.ipynb).

If you find a fastai function that does it, please let me know.

Hope it helps.

2 Likes

Hey @sgugger, I wrote a function for generating heatmaps for every single image for my work, and I wonder whether I could contribute it to the fastai library.

Hello! Great contribution! Is there a summary of what exactly this heatmap represents, i.e. the colors and the areas? Does it show which regions of the image made the classification go wrong? What does the color represent in this case? Thanks again!

Any news on gradcam for multi_top_loss?