Does learn.load(previous_learn) overwrite learn = cnn_learner(new data set)

In Lesson 2 there is code like this:

Previous model, based on the original 'data':

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')

Then the data was cleaned, forming a new dataset, 'db':

db = (ImageList.from_folder(path)
.split_none()
.label_from_folder()
.transform(get_transforms(), size=224)
.databunch()
)

Then there are these two lines of code:

learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2')

My understanding is that 'learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)' generates a new model, learn_cln, based on the 'db' data. However, 'learn_cln.load('stage-2')' loads the previously saved model 'learn' and overwrites 'learn_cln'.

Am I correct? Thank you!


I have been wanting to know the answer to this question too.

So when we do a .save(), the only thing that happens is that the model's weights (i.e. the parameters we're training) get saved away. So when we generate our new learner, in your case learn_cln, it's still the same model, but we want to load in our old weights, and so we do learn_cln.load('stage-2'). Now we're using the same model that we originally trained before cleaning the data, and so we don't have to do as much training. Does this make sense? :slight_smile:
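To make that concrete, here is a minimal sketch built on the code from the original post (it assumes the cleaned 'db' still has the same classes as 'data', as it does in the lesson): learn_cln starts from the generic pretrained resnet34 weights, and .load('stage-2') overwrites them with the weights we saved after training.

import torch

learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)  # fresh pretrained weights
learn_cln.load('stage-2')  # same architecture, so the saved tensors slot straight in

# after the load, the parameters match the old learner exactly
for p_old, p_new in zip(learn.model.parameters(), learn_cln.model.parameters()):
    assert torch.equal(p_old.cpu(), p_new.cpu())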

For those familiar with PyTorch, learn_cln.save() does the same thing as calling torch.save() on the model's state dictionary, plus optionally the state of the optimizer too.
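Roughly the plain-PyTorch equivalent would be something like this sketch (not fastai's exact implementation; the file name is made up):

import torch

torch.save(learn.model.state_dict(), 'stage-2-manual.pth')  # the trained weights only
# (fastai can also stash the optimizer state in the same file when asked to)

state_dict = torch.load('stage-2-manual.pth')
learn_cln.model.load_state_dict(state_dict)  # same architecture, so every key lines up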


@muellerzr So mathematically, we have saved our most recent coefficients, which were found by fitting the model on the 'un-clean' data.

But when we load them into the 'new' model, it is actually mathematically identical to the old one. Going forward, any fitting is done on the 'clean' data, starting from where we left off with the old parameters?

Am I understanding correctly?

If by old parameters you mean the coefficients we saved away, then yes.


@muellerzr In a tabular setting, will the embedding matrices be mapped properly to a new data set you train on if it has the same unique values? Also, what if a new categorical variable shows up?

You should generally use either .add_test or, if using v2, test_dl, in which case yes, everything will be mapped accordingly. If a new value shows up, IIRC the embedding matrix should be one row larger to account for this new unknown.
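For example, in v2 something like this sketch (assuming learn is a trained tabular learner and new_df is a DataFrame with the same columns as the training data):

dl = learn.dls.test_dl(new_df)  # reuses the training procs, so each category maps to the same embedding row as before
preds, _ = learn.get_preds(dl=dl)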


@muellerzr Is the new book the best resource for v2 tabular?

That, or my walkthrough/study group/course, yes. Search for 'Walk with fastai2' and you will find it.

@muellerzr

I watched your video and looked at the test_dl part. I guess I might not have explained myself well. In v2, say every day I get 500 MB of new training data, but due to some constraints I can't store all of the past data, and each day I want to use the weights from the previous day as the starting point for training on that day's data.

Day 1: train a tabular learner on 500 MB of data for a regression problem, save the model, then the 500 MB of data is deleted.
Day 2: load the old model and train on today's new 500 MB, using day 1's weights and embeddings as the starting point.

I realize it would be best if I could just train on everything at once; however, when you create the learner, the prepared data set is a parameter. How would I retrain on unseen data while keeping the old weights? I see how to get predictions on a new data set; I just am not finding an example of what I am describing above.
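Roughly, this is the kind of loop I mean (fastai v2 tabular; all the names here are placeholders I made up for illustration, and whether the final load works is exactly what I am asking):

from fastai.tabular.all import *

# Day 1: build dataloaders from that day's data, train, save, then delete the data
to = TabularPandas(df_day1, procs=[Categorify, Normalize],
                   cat_names=cat_names, cont_names=cont_names, y_names='target',
                   splits=RandomSplitter()(range_of(df_day1)))
learn = tabular_learner(to.dataloaders(bs=64), metrics=rmse)
learn.fit_one_cycle(5)
learn.save('day-1')

# Day 2: new dataloaders built from the new 500 MB, then start from yesterday's weights
to2 = TabularPandas(df_day2, procs=[Categorify, Normalize],
                    cat_names=cat_names, cont_names=cont_names, y_names='target',
                    splits=RandomSplitter()(range_of(df_day2)))
learn2 = tabular_learner(to2.dataloaders(bs=64), metrics=rmse)
learn2.load('day-1')  # would the weights and embedding mappings line up here?
learn2.fit_one_cycle(5)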