Training same model at different epochs

While keeping all the hyperparameters the same, I am getting different per-epoch statistics each time I train the same model. Is there something I am missing here?

That is to be expected. The training starts with different random weights and biases each time you run it from scratch.
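As a quick illustration (not from the thread), two identically configured PyTorch layers created back to back start with different random parameters:

```python
import torch.nn as nn

# Two identical layers, two different random initializations.
a = nn.Linear(10, 2)
b = nn.Linear(10, 2)

print((a.weight == b.weight).all())  # tensor(False), almost surely
```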

We are using pre-trained weights here, right? Shouldn’t they be the same?

Not sure what model you’re using, but when you add additional layers to a pretrained model, they are initialized with random weights. So if you take an ImageNet-trained model like ResNet34 and add “bear classifier” layers to it to narrow down the 1,000 ImageNet classes to just 3 (teddy, grizzly, black), those NEW layers have random weights. And yes, even running a completely pretrained model on the same data twice will give slightly different results. Also, the datasets used in your training are usually shuffled and split randomly, so you get a different training set and validation set with each new run.
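Here is a minimal sketch of that idea using torchvision directly (fastai’s `cnn_learner` does the equivalent under the hood); the `make_bear_classifier` name and the 3-class head are just for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

def make_bear_classifier():
    # The body arrives with fixed, downloaded ImageNet weights...
    model = models.resnet34(pretrained=True)
    # ...but the replacement 3-class head (teddy/grizzly/black) is randomly initialized.
    model.fc = nn.Linear(model.fc.in_features, 3)
    return model

m1, m2 = make_bear_classifier(), make_bear_classifier()
print(torch.equal(m1.conv1.weight, m2.conv1.weight))  # True: pretrained body weights match
print(torch.equal(m1.fc.weight, m2.fc.weight))        # False: each new head differs
```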


Got it! Thank you @Interogativ

Set a seed to improve reproducibility.

```python
from fastai.torch_core import set_seed

# Seeds random, numpy, and torch; reproducible=True also asks cuDNN for deterministic kernels.
set_seed(42, reproducible=True)
```

If I want it fully repeatable, I set the seed, build the dataloaders, and run the model inside a single function.
Also read this for some additional background: fastai2 reproducibility · Issue #2832 · fastai/fastai (github.com)
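Putting those pieces together, a fully repeatable run might look like the sketch below. The folder layout (one subfolder per class), the seed value, and the single fine-tuning epoch are assumptions; the `seed` argument to `ImageDataLoaders.from_folder` pins the random validation split.

```python
from fastai.vision.all import *

def train_reproducibly(path, seed=42):
    # Seed everything first, so all later random choices are fixed.
    set_seed(seed, reproducible=True)
    # Build the DataLoaders *after* seeding, with a pinned train/valid split.
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=seed)
    learn = cnn_learner(dls, resnet34, metrics=accuracy)
    learn.fine_tune(1)
    return learn
```

Two calls to this function should then produce matching per-epoch statistics, though GPU data loading and non-deterministic kernels can still introduce drift; the linked issue covers those caveats.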