Predicting with a pre-trained model gives worse accuracy without an additional fit_one_cycle

I have pre-trained a CNN model to a ~0.05 error rate. If I load the pre-trained model in a new session and predict on the validation set, I get a huge drop in accuracy compared to what was reported in the last fit_one_cycle run. However, if I run fit_one_cycle(1), the accuracy comes back.

I am not sure what I am doing wrong. I should be able to use a pre-trained model on new data without running fit_one_cycle().
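For reference, here is a minimal sketch of the workflow I mean, assuming fastai v1 and illustrative paths and checkpoint names (my actual notebook differs in the details):

```python
from fastai.vision import *

# Session 1: train and save (illustrative data/paths)
data = ImageDataBunch.from_folder(path, valid_pct=0.2, ds_tfms=get_transforms(),
                                  size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)      # finishes with ~0.05 error_rate
learn.save('stage-9')       # save the trained weights

# Session 2 (fresh kernel): rebuild the learner, load the weights, evaluate
data = ImageDataBunch.from_folder(path, valid_pct=0.2, ds_tfms=get_transforms(),
                                  size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.load('stage-9')
print(learn.validate())     # much worse than ~0.05 until I run fit_one_cycle(1) again
```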

Are you 100% sure that stage-9 has the weights you want? If you save a stage-10 after the fit_one_cycle, then restart the notebook and load that, does it match the results from In [7]?
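Something like this rough sketch (fastai v1 API; 'stage-10' is just the name I'm suggesting, adapt it to your notebook):

```python
learn.fit_one_cycle(1)
learn.save('stage-10')      # snapshot taken right after the run that restored accuracy

# ...restart the notebook, rebuild `data` and `learn` exactly as before, then:
learn.load('stage-10')
print(learn.validate())     # does this match the metrics you saw in In [7]?
```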

I tried to reproduce this with the lesson1-pets notebook: loading saved weights and then calling get_preds worked for me on a fresh run of the notebook.
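Roughly what I ran (fastai v1; 'stage-1' is the checkpoint name used in lesson1-pets):

```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.load('stage-1')                                  # weights saved earlier in the notebook
preds, targets = learn.get_preds(ds_type=DatasetType.Valid)
print(accuracy(preds, targets))                        # matched the training-time metric for me
```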


Brad,
Thanks for taking the time to try to recreate this problem. It looks like there was something wrong with my saved model. I re-trained the model from scratch and it works fine now.

Thanks

Farhan
