A walk with fastai2 - Study Group and Online Lectures Megathread

GradCAM and layer visualization are pretty much it for the most part, focusing on where the attention is. You could also isolate each point in the output, since we’d assume that predicted y1 corresponds to y1 in our ground truth, etc., so we could see which point has the highest difficulty
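For context, a Grad-CAM-style heatmap for one output coordinate can be sketched in plain PyTorch. This is an illustrative, untested stand-in, not code from the lecture: the `GradCAM` class and the toy point-regression net below are made up for the example; the idea is just to hook a conv layer and weight its activations by the pooled gradients of a single output number (e.g. one predicted keypoint value).

```python
import torch
import torch.nn as nn

class GradCAM:
    """Sketch: hook a conv layer; weight its activations by the pooled
    gradients of one output coordinate (e.g. a single predicted point)."""
    def __init__(self, model, target_layer):
        self.model = model
        self.acts, self.grads = None, None
        target_layer.register_forward_hook(self._save_acts)
        target_layer.register_full_backward_hook(self._save_grads)
    def _save_acts(self, module, inp, out): self.acts = out.detach()
    def _save_grads(self, module, gin, gout): self.grads = gout[0].detach()
    def __call__(self, x, out_idx):
        out = self.model(x)                  # forward pass fills self.acts
        self.model.zero_grad()
        out[0, out_idx].backward()           # backward pass fills self.grads
        w = self.grads.mean(dim=(2, 3), keepdim=True)   # per-channel weights
        return (w * self.acts).sum(dim=1).clamp(min=0)  # ReLU'd heatmap

# Toy point-regression net: 4 output numbers (two x,y points)
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
cam = GradCAM(net, net[0])(torch.randn(1, 3, 32, 32), out_idx=0)
```

Running `GradCAM` once per output index would give one heatmap per predicted coordinate, which is the per-point comparison described above.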


Attention is also a good one … By the way, I was looking for techniques not only for point regression but more generally for numeric outputs…


Here is a paper that presents another technique for comparison of dl models and interpretation.
Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability


So, heatmaps were not where I wanted them to be by this point, so I’m subbing in a different lecture that week while we discuss GANs


Thanks !

Awesome @foobar8675 !! I will take a look at it later today. Big bounty!! :moneybag:

How should I deal with multimodal data? I have one image block and two text parameters; is this doable using DataBlock, or should I look at Pipeline?

Is anybody else having trouble watching the last recording (lesson 5) on YouTube? It may be because I am on a public network (namely Starbucks); I just wanted to double-check. :wink:

splits = (self.splitter or RandomSplitter())(items)
so all our splitters return a function _inner which is applied to items and outputs the splits. We need to be subsetting these splits.
So we need a function (subsetSplitter) that wraps these predefined splitter functions (which return a function) and performs some operation on the splits.

The idea of wrapping makes sense to me, but I have no clue how to do it :slight_smile:
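One possible shape for such a wrapper, as an untested sketch: `subset_splitter` and `toy_splitter` are made-up names, and I’m assuming (as in the line quoted above) that a splitter maps `items` to a tuple of index lists. The toy splitter stands in for `RandomSplitter()` so the example runs without fastai.

```python
def subset_splitter(splitter, pct=0.5):
    "Hypothetical wrapper: call `splitter`, then keep only the first `pct` of each split."
    def _inner(items):
        splits = splitter(items)  # e.g. (train_idxs, valid_idxs)
        return tuple(idxs[:max(1, int(len(idxs) * pct))] for idxs in splits)
    return _inner

# Toy stand-in for RandomSplitter()(items): a fixed 80/20 index split
def toy_splitter(items):
    idxs = list(range(len(items)))
    cut = int(0.8 * len(idxs))
    return idxs[:cut], idxs[cut:]

train, valid = subset_splitter(toy_splitter, pct=0.5)(list(range(100)))
# train keeps half of the 80 train indices, valid keeps half of the 20
```

Since `subset_splitter(...)` itself returns a function over `items`, it should slot in wherever `self.splitter` is expected.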

Haven’t done it yet, sorry :sweat_smile: the week turned out busier than anticipated. I’ll stream it today; I’ll have more time.


It is not yet uploaded, I think, @mgloria

@muellerzr if possible, could you post here what time you plan on live streaming? I will try to join :slight_smile:

Sure. Let’s rock with 3pm CST. I’ll post the streaming link around 2


Link should still be the same:


We’re live :slight_smile:


Thanks to everyone who came and watched :slight_smile: I hope that some of you more familiar with the technique can point out (@lgvaz) where I may have misspoken or gotten something wrong. Otherwise, here is a list of the resources I mentioned today:

Lucas’s Repository for style transfer: https://github.com/lgvaz/projects/tree/master/vision/style

Residual Blocks: https://arxiv.org/abs/1512.03385
Upsample ConvLayer: http://distill.pub/2016/deconv-checkerboard/

The new fastai2 paper:

Jeremy’s lecture on style transfer: https://www.youtube.com/watch?v=xXXiC4YRGrQ (starts ~59:14)


I noticed that your networks that use residual connections just add the input in the forward function. I just finished reading the fastai paper before I watched your video and learned that fastai provides a SequentialEx and MergeLayer for these kinds of cases. It might be useful to use those when defining ResBlocks or other related blocks in your code :slight_smile:


@ilovescience great idea! Could you show an example of a refactored ResBlock, like the one in the lesson, using these? :slight_smile: This would also be applicable (I think) to the layers we built for the style transfer too


Hey, so I haven’t played too much with fastai2’s layers API, but I think it should be possible to replace this:

class ResBlock(Module):  # fastai's Module: no super().__init__() needed
    def __init__(self, nf):
        self.conv1 = ConvLayer(nf, nf)
        self.conv2 = ConvLayer(nf, nf)
    def forward(self, x): return x + self.conv2(self.conv1(x))

with this:

def ResBlock(nf):
    return SequentialEx(ConvLayer(nf, nf),
                        ConvLayer(nf, nf),
                        MergeLayer(dense=False))


or, keeping it as a class:

class ResBlock(nn.Module):
    def __init__(self, nf):
        super().__init__()  # required for plain nn.Module subclasses
        self.convpath = SequentialEx(ConvLayer(nf, nf),
                                     ConvLayer(nf, nf),
                                     MergeLayer(dense=False))
    def forward(self, x):
        return self.convpath(x)

If dense=False then it’s a residual connection, but if dense=True then it’s a dense connection. IIRC, SequentialEx takes care of passing the original input through to MergeLayer.
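For intuition, that merge step can be sketched in plain PyTorch. This `merge` function is a hand-rolled stand-in for the behavior described above, not the fastai class itself:

```python
import torch

def merge(x, orig, dense=False):
    # MergeLayer-style semantics: concat on channels if dense, else residual add
    return torch.cat([x, orig], dim=1) if dense else x + orig

x, orig = torch.randn(1, 8, 4, 4), torch.randn(1, 8, 4, 4)
res = merge(x, orig)               # residual: shape stays (1, 8, 4, 4)
dns = merge(x, orig, dense=True)   # dense: channels double to (1, 16, 4, 4)
```

The shape difference is the practical consequence: a residual merge preserves the channel count, while a dense merge grows it, so subsequent layers must expect wider inputs.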

Note I haven’t tried this out yet, just wrote this up based on the fastai codebase. So it is very much possible that I got this wrong and this doesn’t work.

But the two examples of usage in the codebase are here and here. Surprisingly, these are the only two places where they are used, which I find slightly odd, because I would expect them to be used for the actual ResBlocks in the codebase.

Nevertheless, I hope this helps!