A walk with fastai2 - Study Group and Online Lectures Megathread

For image regression, what types of explanation mechanisms can we use for DL models? I can think of CAM, layer visualization, ROC, AIC, but not a confusion matrix, etc… Any suggestions are welcome…

Grad-CAM and layer visualization are pretty much it for the most part, focusing on where the attention is. You could also isolate what each point in the output corresponds to: we'd assume that y1 in the prediction maps to y1 in our ground truth, etc., so we could see which point has the highest difficulty.
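Since the question is about regression, here is a minimal Grad-CAM sketch for a scalar regression output, assuming PyTorch. The tiny network and the choice of target layer are made up purely for illustration, not any model from the course:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny regression CNN, made up purely for illustration.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

store = {}
target = net[2]  # last conv layer: where we read activations and gradients
target.register_forward_hook(lambda m, i, o: store.update(acts=o))
target.register_full_backward_hook(lambda m, gi, go: store.update(grads=go[0]))

x = torch.randn(1, 3, 16, 16)
net(x).squeeze().backward()          # backprop the scalar regression output

w = store['grads'].mean(dim=(2, 3), keepdim=True)   # per-channel weights
cam = F.relu((w * store['acts']).sum(dim=1))        # (1, 16, 16) heatmap
print(cam.shape)
```

For multi-point regression you would call `backward()` on one output component at a time instead of the whole scalar, giving one heatmap per predicted point.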


Attention is also a good one… By the way, I was looking not only for point regression but more generally for numeric outputs…


Here is a paper that presents another technique for comparing and interpreting DL models:
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability


So, heatmaps were not where I wanted them to be by this point, so I'm subbing in a different lecture that week while we discuss GANs.


Thanks !

Awesome @foobar8675 !! I will take a look at it later today. Big bounty!! :moneybag:

How should I deal with multimodal data? I have one image block and two text parameters; is this doable using DataBlock, or should I look at Pipeline?

Is anybody else having trouble watching the last recording (lesson 5) on YouTube? It may be because I am on a public network (namely, a Starbucks); I just want to double-check. :wink:

splits = (self.splitter or RandomSplitter())(items)
so all our splitters return a function _inner, which is applied to items and outputs the splits. We need to subset these splits.
So we need a function (subsetSplitter) that wraps these predefined splitter functions (which return a function) and performs some operation on the splits.

The idea of wrapping makes sense to me, but I have no clue how to do it :slight_smile:
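A minimal sketch of the wrapping idea in plain Python. The names random_splitter and subset_splitter are made up for illustration; random_splitter just mimics the shape of what fastai's RandomSplitter gives you (a function over items returning train/valid index lists):

```python
import random

def random_splitter(valid_pct=0.2, seed=None):
    "Mimics the shape of fastai's RandomSplitter: returns an _inner(items) function."
    def _inner(items):
        rng = random.Random(seed)
        idxs = list(range(len(items)))
        rng.shuffle(idxs)
        cut = int(valid_pct * len(items))
        return idxs[cut:], idxs[:cut]          # (train idxs, valid idxs)
    return _inner

def subset_splitter(splitter, pct=0.5, seed=None):
    "Wraps any splitter function and keeps only `pct` of each split's indices."
    def _inner(items):
        train, valid = splitter(items)
        rng = random.Random(seed)
        keep = lambda s: sorted(rng.sample(s, int(len(s) * pct)))
        return keep(train), keep(valid)
    return _inner

train, valid = subset_splitter(random_splitter(0.2, seed=42), pct=0.5, seed=42)(list(range(100)))
print(len(train), len(valid))  # 40 10
```

The key point is that the wrapper itself returns a function, so it can be dropped in anywhere a normal splitter is expected, e.g. `splits = (self.splitter or RandomSplitter())(items)`.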

Haven’t done it yet, sorry :sweat_smile: the week turned out busier than anticipated. I’ll stream it today; I’ll have more time.


It is not yet uploaded, I think, @mgloria

@muellerzr if possible, could you post here what time you plan on live streaming? I will try to join :slight_smile:

Sure. Let’s rock with 3pm CST. I’ll post the streaming link around 2


Link should still be the same:


We’re live :slight_smile:


Thanks to everyone who came and watched :slight_smile: I hope that some of you more familiar with the technique can provide more input (@lgvaz) on where I may have misspoken or gotten something wrong. Otherwise, here is a list of the resources I mentioned today:

Lucas’s Repository for style transfer: https://github.com/lgvaz/projects/tree/master/vision/style

Residual Blocks: https://arxiv.org/abs/1512.03385
Upsample ConvLayer: http://distill.pub/2016/deconv-checkerboard/

The new fastai2 paper:

Jeremy’s lecture on style transfer: https://www.youtube.com/watch?v=xXXiC4YRGrQ (starts ~59:14)
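On the deconv-checkerboard link above: the usual fix that article motivates is to replace a transposed convolution with an upsample followed by a regular conv. A minimal PyTorch sketch (the helper name upsample_conv is mine, not fastai's API):

```python
import torch
import torch.nn as nn

def upsample_conv(ni, nf, scale=2):
    "Nearest-neighbour upsample followed by a plain conv, instead of ConvTranspose2d."
    return nn.Sequential(
        nn.Upsample(scale_factor=scale, mode='nearest'),
        nn.Conv2d(ni, nf, 3, padding=1),
    )

x = torch.randn(1, 16, 8, 8)
y = upsample_conv(16, 8)(x)
print(y.shape)  # torch.Size([1, 8, 16, 16])
```

Because every output pixel is computed from an evenly overlapping window, this avoids the uneven-overlap pattern that produces checkerboard artifacts with stride-2 transposed convs.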


I noticed that your networks that use residual connections just add the input in the forward function. I just finished reading the fastai paper before I watched your video and learnt that fastai provides an nn.SequentialEx and MergeLayer for these kinds of cases. It might be useful to use those when defining ResBlocks or other related blocks in your code :slight_smile:


@ilovescience great idea! Could you show an example of a refactored ResBlock like in the lesson using these? :slight_smile: This would also be applicable (I think) to the layers we built for style transfer.
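Not the fastai implementation itself, but here is a minimal PyTorch sketch of the pattern those classes enable, with tiny stand-ins for both SequentialEx and MergeLayer so the example is self-contained (the real fastai versions live in fastai.layers):

```python
import torch
import torch.nn as nn

class MergeLayer(nn.Module):
    "Minimal stand-in for fastai's MergeLayer: adds the stashed block input back in."
    def forward(self, x):
        return x + x.orig

class SequentialEx(nn.Module):
    "Minimal stand-in for fastai's nn.SequentialEx: exposes the block input as x.orig."
    def __init__(self, *layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)
    def forward(self, x):
        res = x
        for l in self.layers:
            res.orig = x       # let inner layers (e.g. MergeLayer) see the input
            nres = l(res)
            res.orig = None
            res = nres
        return res

def conv_block(ni, nf):
    return nn.Sequential(nn.Conv2d(ni, nf, 3, padding=1),
                         nn.BatchNorm2d(nf), nn.ReLU())

def res_block(nf):
    "A ResBlock written declaratively: the residual add lives in MergeLayer."
    return SequentialEx(conv_block(nf, nf), conv_block(nf, nf), MergeLayer())

x = torch.randn(2, 16, 8, 8)
y = res_block(16)(x)
print(y.shape)  # torch.Size([2, 16, 8, 8])
```

The nice part is that the skip connection becomes a declarative layer in the sequence rather than an `x + ...` buried in a custom forward, so the same trick drops into the style-transfer blocks too.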
