Doing Transfer Learning Twice

Hello everybody. I am working on a model that classifies spectrograms as indicating whether a person is healthy or sick, based on some data I have. ResNet gives me about 84% accuracy, but I think it can be improved with this approach:

  1. Train ResNet on GBs of spectrograms so it learns the basic patterns in spectrograms

  2. Transfer these weights and fine-tune them on my specific spectrogram data

Unfortunately, I’m not sure how to carry the weights from step 1 over into step 2. Does anyone have an idea of how to do this?

If the model does not change between the two datasets, then you can use the learner's save and load functionality.
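For reference, a minimal sketch of that route, assuming fastai v2 with a cnn_learner on resnet34; the folder paths, variable names and 'spectrogram_pretrained' are placeholders, not your actual files:

from fastai.vision.all import *

# Stage 1: pretrain on the large spectrogram corpus
dls_big = ImageDataLoaders.from_folder('path/to/big_spectrograms', valid_pct=0.2)
learn = cnn_learner(dls_big, resnet34, metrics=accuracy)
learn.fit_one_cycle(5)
learn.save('spectrogram_pretrained')   # writes models/spectrogram_pretrained.pth under learn.path

# Stage 2: fine-tune on your specific data, same architecture and same classes
dls_small = ImageDataLoaders.from_folder('path/to/my_spectrograms', valid_pct=0.2)
learn2 = cnn_learner(dls_small, resnet34, metrics=accuracy)
learn2.load('spectrogram_pretrained')  # weight shapes must match exactly; looks under learn2.path/models
learn2.fine_tune(3)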

If the number of categories is different, then you only need to transfer the body of the model. There are a few ways to do that. One simple way is to access the body with learn.model[0] and use load_state_dict to load the body weights.
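A minimal sketch of that body-transfer route, again assuming fastai v2 and a cnn_learner (so learn.model is a Sequential of [body, head] and model[0] is the body); the variable names and paths below are placeholders:

from fastai.vision.all import *

# Source learner trained on the large corpus (possibly with a different set of classes)
dls_big = ImageDataLoaders.from_folder('path/to/big_spectrograms', valid_pct=0.2)
src = cnn_learner(dls_big, resnet34)
# ... train src here ...

# Target learner built on the new data (e.g. 2 classes: healthy / sick)
dls_small = ImageDataLoaders.from_folder('path/to/my_spectrograms', valid_pct=0.2)
tgt = cnn_learner(dls_small, resnet34)

# Copy only the body weights; the head stays freshly initialised for the new classes
tgt.model[0].load_state_dict(src.model[0].state_dict())

# Train the new head with the body frozen, then unfreeze and fine-tune everything
tgt.freeze()
tgt.fit_one_cycle(3)
tgt.unfreeze()
tgt.fit_one_cycle(3, lr_max=slice(1e-5, 1e-3))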


Appreciate the reply.

So when I load the learner object, I now have to swap the DataLoaders associated with it for my new dataset, since the learner object I load is still tied to the old dataset?

Does that make sense? In essence, I am trying to figure out how to point the old learner object's DataLoaders at the new data.

Edit:

This worked. I replaced the dls attribute with the new DataLoaders I created (dls2). Does this look like the right way to go about things?

from fastai.vision.all import *

path = './7)/core'
# Build DataLoaders for the new dataset
dls2 = ImageDataLoaders.from_folder(path, valid_pct=0.2, bs=4)
dls2.show_batch(nrows=5, ncols=4)
# Load the exported learner and swap in the new DataLoaders
learn = load_learner('./7)/transferWeights.pkl')
learn.dls = dls2
learn.unfreeze()
learn.fit_one_cycle(2000, 1e-1)