Lesson 3 In-Class Discussion ✅

Docs are not down for me at: https://docs.fast.ai/vision.models.unet.html
Maybe he is seeing a cached response?

1 Like

Ah, if you refresh the doc page several times, eventually it succeeds.

That would work very well, yes.

2 Likes

Why would you pass a resnet when creating a unet?

6 Likes

Sometimes, if the data is really different from ImageNet, my loss figures start at a really high number (hundreds) and only get down to ~100 after training. Should I divide the loss by a weighting factor to ‘normalise’ it, or does the average loss magnitude not matter?

1 Like

Olaf Ronneberger’s U-Net with over 3k citations:

9 Likes

What is the final output image size produced by create_unet?
E.g. 128, 512, 368, etc.

The first part of the unet architecture uses a standard classification CNN (originally VGG). In our case we’re using a resnet as the base.

6 Likes
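To tie the two questions above together, here is a minimal sketch assuming fastai v1 as used in the course notebooks (the function the earlier question calls `create_unet` is exposed there as `unet_learner`); `data` is assumed to be a segmentation DataBunch you have already built, and a GPU is assumed:

```python
from fastai.vision import *   # course-style import; brings in unet_learner, models, etc.

# Build a U-Net whose encoder (the downsampling half) is a pretrained resnet34.
learn = unet_learner(data, models.resnet34)

# The decoder upsamples back up, so the predicted mask has the same
# height/width as the input images in `data`. Checking it directly:
xb, yb = data.one_batch()            # one normalised batch (CPU tensors)
preds = learn.model(xb.cuda())       # forward pass on the GPU
print(xb.shape, preds.shape)         # [bs, 3, H, W] -> [bs, n_classes, H, W], same H and W
```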

Is gradient descent the only good way to find the best weights?
(I can imagine going brute force through all options, as an example of an alternative).

1 Like
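For intuition (not an answer from the lecture), here is a toy comparison in plain Python on a one-parameter loss; the quadratic loss and all the numbers are made up for illustration:

```python
# Minimise loss(w) = (w - 3)^2 two ways.
loss = lambda w: (w - 3) ** 2
grad = lambda w: 2 * (w - 3)

# Brute force: evaluate every candidate on a grid. Fine for one parameter,
# but the grid size grows exponentially with the number of parameters, so it
# is hopeless for the millions of weights in a CNN.
candidates = [i / 100 for i in range(-1000, 1001)]
w_brute = min(candidates, key=loss)

# Gradient descent: follow the slope downhill. The cost per step grows only
# linearly with the number of parameters, which is why it (and variants like
# Adam) is what we actually use.
w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)

print(w_brute, round(w, 4))   # both ≈ 3.0
```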

Planet notebook: to make the Kaggle download (as per the notebook) work you have to do the following (a download sketch follows below):

  • go to the Kaggle competition page, click “Late submission”, and confirm via an SMS code
  • go to the data page and accept the rules (the dialog box)
1 Like
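Once those two steps are done, the download itself can be scripted. A minimal sketch, assuming the kaggle package is installed and your API token is at ~/.kaggle/kaggle.json; the destination folder is a made-up example, and the competition slug is the planet one used in the notebook:

```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()   # reads ~/.kaggle/kaggle.json

# This returns a 403 until the competition rules have been accepted
# via the two steps listed above.
api.competition_download_files(
    'planet-understanding-the-amazon-from-space',
    path='data/planet'      # hypothetical destination folder
)
```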

Where and when is the meetup next week?

5 Likes

Will part 2 be live?

1 Like

What should I do when my model gets stuck in a local minimum instead of the global minimum?

1 Like

Here is the info on the meetup with Leslie Smith and Jeremy

6 Likes

To add a little bit more to what Jeremy is saying: you can’t begin training at the highest learning rate because, at the start of training, there is a high chance that a high learning rate will make training diverge. That’s why we have this ‘warm-up’ that progressively increases the learning rate to its maximum value.

5 Likes
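A toy version of that schedule in plain Python (a sketch only, not fastai’s exact implementation; fastai’s fit_one_cycle uses its own curve and also varies momentum):

```python
import math

def one_cycle_lr(step, total_steps, max_lr, pct_warmup=0.3, min_lr=1e-5):
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        # warm-up: ramp the learning rate up towards its maximum
        return min_lr + (max_lr - min_lr) * step / warmup_steps
    # annealing: cosine decay from max_lr back down to min_lr
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

lrs = [one_cycle_lr(s, 1000, max_lr=1e-3) for s in range(1000)]
print(lrs[0], max(lrs), lrs[-1])   # tiny at the start, peaks at 1e-3, tiny again at the end
```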

Has anyone else got a CUDA out-of-memory error when trying to create the unet? Decreasing the batch size didn’t work.

4 Likes
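A sketch of the usual workarounds, assuming fastai v1 and the camvid notebook’s `src` pipeline (with `from fastai.vision import *`); none of these is guaranteed, but smaller images plus mixed precision usually gets the U-Net to fit:

```python
import gc
import torch

size = src_size // 2    # train at half resolution first, go full size later
bs = 4                  # keep the batch size small too

data = (src.transform(get_transforms(), size=size, tfm_y=True)
           .databunch(bs=bs)
           .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34).to_fp16()   # mixed precision roughly halves activation memory

# If an earlier learner in the same notebook session is still holding GPU memory:
gc.collect()
torch.cuda.empty_cache()
```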

I was looking at this earlier this week. I found this to be helpful.

1 Like

My understanding is:
Try increasing the learning rate and training for a bit, then train with a decreased learning rate; that can help it get out of the local minimum.

1 Like
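In fastai v1 terms that could look like the following sketch (hypothetical learner and learning-rate values); each fit_one_cycle call raises the learning rate again before annealing it, which is the “bump it up, then come back down” idea described above:

```python
learn.fit_one_cycle(4, max_lr=1e-3)   # initial training
learn.fit_one_cycle(4, max_lr=3e-3)   # higher peak LR can kick the weights out of a sharp local minimum
learn.fit_one_cycle(4, max_lr=1e-4)   # then settle back down with a lower LR
```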

Many optimizers learn with “momentum”. It gives the impression that they want to keep moving in a certain direction with a smaller learning rate and not jump back and forth very much. Do they get messed up by large learning rates?

No, we use Adam by default, which has momentum. So this doesn’t get messed up.

3 Likes
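To make that “direction memory” concrete, here is a toy momentum update in plain Python (illustrative only; the constants are typical defaults, not fastai’s exact settings):

```python
def sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9):
    # v is an exponentially weighted memory of past gradients,
    # so one noisy gradient doesn't swing the weights back and forth.
    v = beta * v + grad
    w = w - lr * v
    return w, v

# Adam (the fastai default) keeps a momentum-like term and additionally
# scales each step by a running estimate of the gradient's magnitude,
# which is why a larger learning rate doesn't wreck the direction memory.
```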