I just ran the following code in the 01_intro.ipynb file -
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
I can see 2 sets of training output, but both of them say epoch 0.
I want to understand how the code trained the model for 2 cycles when that was not explicitly mentioned in the code.
Thanks in advance.
I think this has to do with Python indexing, which starts counting from 0. Calling learn.fine_tune(2)
means running first epoch 0 and then epoch 1.
For example, running learn.fine_tune(8) runs epochs 0 to 7.
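A quick way to see that zero-based numbering (plain Python, nothing fastai-specific):

```python
# Epoch labels are zero-based, just like Python's range():
epochs = list(range(8))  # the epoch numbers fine_tune(8)'s main phase counts through
print(epochs)  # [0, 1, 2, 3, 4, 5, 6, 7]
```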
You mention that you can see 2 sets of training output; I guess these are the ones you refer to as epoch 0 (twice). It is not clear to me why this separation is there.
Thank you so much for replying. It makes a lot of sense now that you mentioned:
learn.fine_tune(2) means 2 epochs (0 and 1).
However, when I run my code sample, it says
Epoch 0 -----
Epoch 0 -----
Instead of saying Epoch 0 and Epoch 1.
Might be a bug.
But thanks again for clearing my doubt. Really appreciate it.
When you use .fine_tune, you actually do the following:
- one epoch of training where the pretrained model is frozen and only the last layers of the model are trained, i.e. only the weights and biases of the last layers are modified
- n epochs of training on the full model, i.e. all parameters (weights and biases) are updated.
This is why you see two sets of epochs, each numbered from 0.
If you check the docs for .fine_tune you will see that you can also set how many epochs to use for the first step (the freeze_epochs argument).
These are the basic principles of transfer learning.
Hope it helps.
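To make the two counters concrete, here is a toy sketch (plain Python, not actual fastai code, and the function name is made up) of the epoch labels that .fine_tune prints: each phase keeps its own counter, and each counter restarts at 0.

```python
def fine_tune_epoch_labels(epochs, freeze_epochs=1):
    """Toy model of the epoch labels printed by learn.fine_tune().

    Phase 1: freeze_epochs epochs with the pretrained body frozen (head only).
    Phase 2: `epochs` epochs with the whole model unfrozen.
    Each phase restarts its epoch counter at 0.
    """
    frozen = list(range(freeze_epochs))
    unfrozen = list(range(epochs))
    return frozen, unfrozen

# learn.fine_tune(1): both phases print a single "epoch 0"
print(fine_tune_epoch_labels(1))  # ([0], [0])
```

So the output you saw (epoch 0 twice) is exactly what fine_tune(1) produces: one frozen epoch, then one unfrozen epoch, each labelled 0.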
Thanks for sharing this information. It was useful.