How are “the last layers” defined in fast.ai? How do we know which are considered last?
What about max_lr?
Rule of thumb.
Last layers are what you put on top of your pretrained model. We also call that the head of the model.
But I only have two thumbs, not three
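To make that rule of thumb concrete: max_lr is usually passed as a slice after unfreezing, so the earlier layers get smaller learning rates than the head. A minimal sketch, assuming a fastai v1 learner called learn already exists; the numbers are placeholders you would read off the lr_find plot, not fixed values.

```python
learn.lr_find()                  # plot loss vs. learning rate to pick the bounds
learn.recorder.plot()

learn.unfreeze()                 # train the whole model, not just the head
# Rule of thumb: lower bound well before the loss blows up on the lr_find plot,
# upper bound roughly 10x smaller than the rate used before unfreezing.
learn.fit_one_cycle(2, max_lr=slice(1e-6, 1e-4))
```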
You could use class activation maps to help you do that. A class activation map is a 2D grid of scores associated with a specific output class, computed for every location in any input image, indicating how important each location is with respect to the class under consideration. [Deep learning with Python by François Chollet]
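For reference, a minimal sketch of the classic class activation map (Zhou et al., 2016), written against a plain torchvision resnet34 rather than the fastai learner from this chat; the random tensor is just a stand-in for a preprocessed image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Plain torchvision resnet34 (not the fastai learner discussed above).
model = models.resnet34(pretrained=True).eval()

features = {}
def hook(module, inp, out):
    features["maps"] = out.detach()          # (1, 512, H, W) feature maps

# layer4 is the last conv block of resnet34; grab its output with a forward hook.
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()            # class under consideration

# Classic CAM: weight each feature map by the final linear layer's weight for
# the chosen class, then sum over channels.
w = model.fc.weight[cls]                     # (512,)
cam = (w[:, None, None] * features["maps"][0]).sum(dim=0)
cam = F.relu(cam)
cam = cam / (cam.max() + 1e-8)               # normalize to [0, 1] for display
print(cam.shape)                             # 2D grid of scores, e.g. (7, 7)
```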
So when you train a basic resnet34 model, for example, does it have a head? In other words, does fast.ai provide a “standard head” we can change?
Can we have an explanation of what the first argument in fit_one_cycle actually represents? Is it equivalent to an epoch?
If you called create_cnn, we removed the last layers of the resnet that were specific to ImageNet and replaced them with a new head composed of randomly initialized layers.
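A minimal sketch of what that looks like in fastai v1, assuming an ImageDataBunch called data has already been built:

```python
from fastai.vision import *        # fastai v1, the API used in this chat

learn = create_cnn(data, models.resnet34, metrics=accuracy)

# The model is a Sequential of two parts: the pretrained body (with the
# ImageNet-specific final layers cut off) and a new, randomly initialized
# head sized for data.c classes. Inspect the head fastai added:
print(learn.model[1])
```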
That was a really helpful explanation of fine tuning. Thanks @jeremy
It is the number of epochs you want to train for.
And is it the same as just .fit(…)? Why not just use fit()?
Ok. Thanks for confirming this @sgugger.
fit_one_cycle does extra nice things for you, but Jeremy will explain that later tonight.
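Side by side, with the same hypothetical learn as above; the first argument is the number of epochs in both cases:

```python
learn.fit(4)             # 4 epochs at an essentially constant learning rate
learn.fit_one_cycle(4)   # 4 epochs with the 1cycle policy: the learning rate
                         # ramps up and then anneals down while momentum does
                         # the opposite (Leslie Smith's super-convergence schedule)
```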
“affine” function?
Is that the number of epochs in the one cycle? (if fit_one_cycle)
See the docs on how to customize your model: https://docs.fast.ai/vision.learner.html#Customize-your-model
When you use precompute=True, it is the same behaviour as when you freeze the pretrained part of the model, right?
Yes, there is a predefined default head, and you can also create a custom head and pass it as an argument. For more information, see create_cnn.
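A sketch of what passing your own head could look like in fastai v1, assuming the custom_head argument of create_cnn and a resnet34 body (whose pooled output is 1024 features after AdaptiveConcatPool2d); the sizes are illustrative only:

```python
# Hypothetical custom head; data.c is the number of target classes.
head = nn.Sequential(
    AdaptiveConcatPool2d(),        # concat of adaptive avg + max pool -> 1024 features
    Flatten(),
    nn.BatchNorm1d(1024),
    nn.Dropout(0.5),
    nn.Linear(1024, data.c),
)
learn = create_cnn(data, models.resnet34, custom_head=head)
```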
Yes indeed.
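fastai v1 has no precompute flag, but freezing gives the equivalent training behaviour (only the head updates). A tiny sketch with the same hypothetical learn:

```python
learn.freeze()           # body (pretrained) parameters are not updated
learn.fit_one_cycle(1)   # only the new head trains, as with precompute=True in old fastai
learn.unfreeze()         # later: fine-tune the whole model
```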