Lesson 7 in-class chat ✅

Ah, so in this case,

data.normalize() is equivalent to data.normalize(data.batch_stats())?

I believe it is.
Just read through the code; that's exactly what it's doing.
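
This is easy to verify numerically. Below is a numpy sketch of the underlying arithmetic, not the fastai API: a hypothetical `batch_stats`/`normalize` pair standing in for what the library computes, showing that normalizing with no stats given is the same as passing in stats computed from the batch itself.

```python
import numpy as np

def batch_stats(batch):
    """Per-channel mean and std over a batch shaped (n, channels, h, w)."""
    mean = batch.mean(axis=(0, 2, 3))
    std = batch.std(axis=(0, 2, 3))
    return mean, std

def normalize(batch, stats=None):
    """Subtract the per-channel mean and divide by the per-channel std."""
    if stats is None:                     # no stats given: compute from the batch
        stats = batch_stats(batch)
    mean, std = stats
    return (batch - mean[None, :, None, None]) / std[None, :, None, None]

rng = np.random.default_rng(0)
batch = rng.normal(5.0, 2.0, size=(8, 3, 4, 4))

# The two calls give identical results, mirroring the question above.
a = normalize(batch)
b = normalize(batch, batch_stats(batch))
print(np.allclose(a, b))  # True
```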

I didn’t get what Flatten() is for

But you could still implement that randomness by shuffling, so long as the shuffling adheres to the rule that a batch must have the same image sizes. I struggle with the idea of throwing away data by cropping or resizing.

You take a tensor and make it a simple vector. If the tensor is of size 10x1x1, it just transforms it into a vector of size 10.
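
A minimal numpy stand-in for what a Flatten layer does (not the PyTorch/fastai module itself): collapse every dimension except the batch dimension into a single vector per sample.

```python
import numpy as np

def flatten(x):
    """Reshape (batch, d1, d2, ...) -> (batch, d1*d2*...)."""
    return x.reshape(x.shape[0], -1)

x = np.zeros((2, 10, 1, 1))    # e.g. output of a final pooling layer
print(flatten(x).shape)        # (2, 10): each 10x1x1 tensor becomes a vector of 10
```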

Is there any workaround to this issue? I'm stuck. I want to use my own trn/valid lists to overcome class imbalance.

Stay tuned for the second part of the course :wink:

I keep getting "Plot_multi is not defined". Do we have to import something special for this? (I don't think so, but even after updating fastai it's not working.)

I would have expected this network to have a softmax at the end to classify between the 10 classes. Why is there none?

You should just need to run from fastai.vision import *

Newb question…
I have version 1.0.34 of the fastai library on my local server. I started from the beginning with 1.0.0-ish when I built it. Every week before class I run

conda update -c fastai fastai

Now I notice that the latest dev version on GitHub is 1.0.38, which means I am a few releases behind.

The question is: what command do I run to get a specific version like 1.0.35, the latest non-dev release like 1.0.37, or the very latest like 1.0.38, using conda or otherwise?

It's inside the loss function in PyTorch.
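
Concretely, PyTorch's cross-entropy loss applies (log-)softmax to the raw logits internally, which is why the network ends without an explicit softmax layer. A pure-Python sketch of that arithmetic (not the PyTorch implementation):

```python
import math

def softmax(logits):
    m = max(logits)                            # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    """Negative log-probability of the target class, softmax included."""
    return -math.log(softmax(logits)[target])

logits = [2.0, 0.5, -1.0]    # raw network outputs for 3 classes
print(round(cross_entropy(logits, 0), 4))  # ≈ 0.2413
```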


Is there no weight associated with the skip connection?

conda install -c fastai fastai==1.0.35

Is the 56-layer network suffering from vanishing gradients?

That’s more recent, but yes, we put some weight on the result of the conv layers (0.2 I believe). The identity connection has a weight of 1.
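
A toy sketch of the weighted skip connection described above: the identity path has weight 1, and the conv path is scaled by a small constant (0.2 here, per the answer above). The `conv` below is a placeholder function, not real conv layers; only the combination matters.

```python
import numpy as np

def residual_block(x, conv, scale=0.2):
    """out = 1 * identity + scale * conv(x)."""
    return x + scale * conv(x)

conv = lambda x: x ** 2          # stand-in for the conv layers
x = np.array([1.0, 2.0, 3.0])
print(residual_block(x, conv))   # [1.2 2.8 4.8]
```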

Does it mean that now with ResNets (skip-connections) we can train models of arbitrary depths? (Having enough data, of course).

Is that a parameter of the model optimised by the optimiser, or a hyperparameter?

No, it's a hyperparameter.

And enough compute. The benefit of going deeper isn’t always obvious.