Lesson 3 In-Class Discussion ✅

You can check the loss function used by a learner by simply running learn.loss_func??

2 Likes
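
The ?? syntax only works inside IPython/Jupyter. Outside a notebook, the standard-library inspect module does the same job. A minimal sketch using a hypothetical stand-in function (not the real fastai loss):

```python
import inspect

def flattened_cross_entropy(pred, targ):
    """Hypothetical stand-in for a learner's loss function."""
    pass

# In Jupyter, learn.loss_func?? shows the source of the loss function;
# in plain Python, inspect.getsource is the equivalent:
print(inspect.getsource(flattened_cross_entropy))
```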

Thank you bluesky314 (Rahul). That makes a good case for segmentation.

Can one specify colors for different classes such as:

R    G    B    Class
64   128  64   Animal
192  0    128  Archway
0    128  192  Bicyclist

The label images will also use these colors.

The reason I am asking is that I may want to know how many different classes exist in an image. I can simply count the number of unique colors that belong to my color-to-class map.

1 Like
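
The counting idea above can be sketched in pure Python. The color values follow the table in the question; the pixel data and function name are illustrative, not fastai API:

```python
# Hypothetical color-to-class map, matching the table above.
COLOR_TO_CLASS = {
    (64, 128, 64): "Animal",
    (192, 0, 128): "Archway",
    (0, 128, 192): "Bicyclist",
}

def classes_in_label_image(pixels):
    """Return the class names present in a label image, given its
    pixels as (R, G, B) tuples; unmapped colors are ignored."""
    return {COLOR_TO_CLASS[p] for p in set(pixels) if p in COLOR_TO_CLASS}

# Toy 2x2 label "image" flattened to a pixel list:
pixels = [(64, 128, 64), (64, 128, 64), (0, 128, 192), (255, 255, 255)]
print(sorted(classes_in_label_image(pixels)))  # ['Animal', 'Bicyclist']
```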

Is the lesson3-imdb.ipynb notebook pointing to the wrong URL for IMDB (https://s3.amazonaws.com/fast-ai-nlp/imdb.tgz)?

The gzipped file does not contain anything like:

[PosixPath('/home/jhoward/.fastai/data/imdb/imdb.vocab'),
PosixPath('/home/jhoward/.fastai/data/imdb/models'),
PosixPath('/home/jhoward/.fastai/data/imdb/tmp_lm'),
PosixPath('/home/jhoward/.fastai/data/imdb/train'),
PosixPath('/home/jhoward/.fastai/data/imdb/test'),
PosixPath('/home/jhoward/.fastai/data/imdb/README')]

As a result

(path/'train').ls()

results in an error

I created a symlink to my data folder in place of .fastai/data, so I can avoid having to open that .fastai directory often.

1 Like
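
The symlink trick can be done from Python as well as the shell. A sketch using a temporary directory as a stand-in; the real paths (~/.fastai/data and your own data folder) are up to you, and this assumes a platform where os.symlink is permitted:

```python
import os
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
real_data = root / "my_data"          # stand-in for your data folder
real_data.mkdir()

link = root / "fastai_data"           # stand-in for ~/.fastai/data
os.symlink(real_data, link)

# Files are now reachable through either name:
(real_data / "hello.txt").write_text("hi")
print((link / "hello.txt").read_text())  # hi
```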

I analyzed the lesson videos to get Jeremy’s facial expressions to find when he’s most happy, sad, surprised etc. during the lessons. Gist linked in the post over on “Share your work”: https://forums.fast.ai/t/share-your-work-here/27676/366?u=jerbly

2 Likes

Creating a symlink to the data folder has worked great for me.

How to pass in your own loss function?

Yes, I don’t know why dice loss was not used.
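
For reference, the dice coefficient mentioned here measures the overlap between a predicted segmentation mask and the ground truth; a dice *loss* is typically 1 minus this value, computed on tensors. A pure-Python sketch on flat binary masks (illustrative, not the fastai implementation):

```python
def dice_coefficient(pred, targ):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks given as
    flat lists of 0/1 values; defined as 1.0 when both are empty."""
    intersection = sum(p * t for p, t in zip(pred, targ))
    total = sum(pred) + sum(targ)
    return 2.0 * intersection / total if total else 1.0

pred = [1, 1, 0, 0]
targ = [1, 0, 0, 0]
print(dice_coefficient(pred, targ))  # 2*1 / (2 + 1) = 0.666...
```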

When @jeremy changed his model from 128 to 256 image sizes, but kept the weights from the previous model, I can’t get my head round how the weights learned were still useful. Everything has got 4 times bigger and surely your filters won’t work anymore, in particular for satellite images where everything is at the same scale. The only way I can possibly see this working is if somehow the augmentation had done a lot of zooming in and out so the learned filters were able to adapt. Can anyone shed any light on this please?

EDIT: I couldn’t watch all the lesson live, so I need to go back and watch the end, so apologies if this was covered.

1 Like

Hi, I made this notebook to understand the transforms by showing them:
https://github.com/kasparlund/fastaiNotebooks/blob/master/show_transforms/show_transforms.ipynb

I have not found out how to append the “skew” transform to a list of transforms, so if you know how, I would like to see it.

19 Likes
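
On appending to the transform list: fastai v1's get_transforms() returns a (train_tfms, valid_tfms) tuple of plain Python lists, so an extra transform can be added with ordinary list operations. A stand-in sketch using strings in place of real transform objects (the names below are illustrative, not fastai calls):

```python
def get_transforms_stub():
    """Stand-in for fastai's get_transforms(): returns a
    (train_tfms, valid_tfms) tuple of lists."""
    return (["flip_lr", "rotate", "zoom"], ["crop_pad"])

train_tfms, valid_tfms = get_transforms_stub()

# Append the extra transform to the training list only:
tfms = (train_tfms + ["skew"], valid_tfms)

print(tfms[0])  # ['flip_lr', 'rotate', 'zoom', 'skew']
```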

Assign your loss function to learn.loss_func.
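
A minimal stand-in showing the idea: the learner calls self.loss_func(predictions, targets) during training, so replacing the attribute swaps the loss. The TinyLearner class here is hypothetical, not the real fastai Learner:

```python
class TinyLearner:
    """Hypothetical stand-in for fastai's Learner."""
    def __init__(self, loss_func=None):
        self.loss_func = loss_func

    def training_step(self, pred, targ):
        return self.loss_func(pred, targ)

def my_l1_loss(pred, targ):
    # Custom loss: mean absolute error over flat lists of floats.
    return sum(abs(p - t) for p, t in zip(pred, targ)) / len(pred)

learn = TinyLearner()
learn.loss_func = my_l1_loss  # assign your own loss function
print(learn.training_step([1.0, 2.0], [0.0, 2.0]))  # 0.5
```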

Thanks for showing that! 🙂 Note that you can also see examples of all transforms in the docs:

https://docs.fast.ai/vision.transform.html#List-of-transforms

2 Likes

@safekidda I think that, initially, training was done on 128-sized images, and later, when the size is increased and the weights from the previous model are reused, it helps the new model learn quickly on top of the previous one. But I would like more knowledgeable people to comment and clear up the doubts.

1 Like

@joshfp I’d point out that the new channels would likely retain some of their spatial information, so at least some of the weights of the early layers will transfer.

If you are still underfitting, try a different LR, and reduce the dropout.

Note that you get 0.5 dropout by default.

1 Like

How does cyclical learning rate (CLR) compare to one-cycle? Previously they were denoted by use_clr and use_clr_beta.

1 Like

How do we create our own Path object, given I have a path as a string?
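
The standard-library pathlib (which fastai uses for its path handling) builds a Path directly from a string; the example path below is arbitrary:

```python
from pathlib import Path

# A Path object can be built directly from a string:
path = Path("/tmp/my_data")

print(path.name)       # my_data
print(path / "train")  # the / operator joins path components
```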

Thanks. Yes, it used weights from the previous model, but I’m questioning how they would be useful given the dimensions of the image have changed. If you think about it, the filters that were learned worked on small satellite images, so if everything suddenly got 4 times bigger, how good would, say, an edge detector be? The only way I can see it working is if the original model had augmentation applied to work with zoomed images.
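
One piece of the puzzle: convolutional weights are not tied to the image size. The same small kernel slides over any input, so a filter learned at 128×128 applies unchanged at 256×256; whether its learned scale is still appropriate is the separate question raised above. A pure-Python sketch of that size-independence (toy sizes, not a real network):

```python
def conv2d_valid(img, kernel):
    """Apply a kernel to a 2D list-of-lists image, 'valid' padding."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(kernel[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

edge = [[-1, 0, 1]] * 3  # a 3x3 vertical-edge kernel

small = [[0] * 8 for _ in range(8)]    # stand-in 8x8 image
big = [[0] * 16 for _ in range(16)]    # stand-in 16x16 image

# The identical kernel runs on both sizes; only the output size changes.
print(len(conv2d_valid(small, edge)), len(conv2d_valid(big, edge)))  # 6 14
```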

You are talking about frontier research. Be content with the LR finder for now; it is a gigantic step forward compared with any previous method.
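
For intuition on the difference: classic CLR cycles the learning rate up and down repeatedly (a triangular wave), while one-cycle makes a single rise to a maximum followed by a longer fall. A simplified linear sketch of the one-cycle shape (fastai's actual implementation also anneals momentum and uses cosine-shaped segments; the parameter names here are illustrative):

```python
def one_cycle_lr(step, total_steps, lr_max, pct_start=0.3, div=10.0):
    """Simplified linear one-cycle schedule: warm up from lr_max/div
    to lr_max over pct_start of training, then decay back down."""
    warmup = int(total_steps * pct_start)
    lr_min = lr_max / div
    if step < warmup:
        frac = step / warmup
        return lr_min + frac * (lr_max - lr_min)
    frac = (step - warmup) / (total_steps - warmup)
    return lr_max - frac * (lr_max - lr_min)

total = 100
print(one_cycle_lr(0, total, 1e-2))    # starts low (lr_max / div)
print(one_cycle_lr(30, total, 1e-2))   # peaks at lr_max
print(one_cycle_lr(100, total, 1e-2))  # decays back down by the end
```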