Why Do We Get Different Results on Different Runs?

I’d suggest looking here: [Solved] Reproducibility: Where is the randomness coming in? - #28 by harikrishnanrajeev

But all I found I needed to reproduce my results was the line

set_seed(42)

before creating the DataLoaders.
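
For reference, here's roughly what I mean, as a minimal sketch of a fastai run (MNIST_SAMPLE is just a stand-in for your own data; I believe set_seed also takes a reproducible flag that toggles cuDNN determinism, but treat that as my assumption):

    from fastai.vision.all import *

    # Seed the Python, NumPy, and PyTorch RNGs *before* the DataLoaders are built,
    # so the training-set shuffle draws from a known RNG state.
    set_seed(42, reproducible=True)

    path = untar_data(URLs.MNIST_SAMPLE)
    dls = ImageDataLoaders.from_folder(path)

    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fit_one_cycle(1)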

Here is my theory as to why you need this. If anyone can confirm or correct it, please do:

  • Before each epoch the training set is randomly shuffled.
  • The weights are updated after every mini-batch.
  • If you do not set a seed, the weights will be updated in a different order on each run, which gives you different final weights and therefore a model that performs differently on the validation set.
  • Using set_seed(42) seeds this random shuffling of the training set, so it shuffles the same way each time, giving you identical runs when you rerun the whole thing (see the sketch below).
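
To make the last two bullets concrete, here's a toy PyTorch sketch (the dataset and function are made up for illustration) showing that the shuffle order a DataLoader produces depends only on the RNG state it starts from:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy dataset holding the items 0..9, so the shuffle order is easy to see.
    ds = TensorDataset(torch.arange(10))

    def first_epoch_order(seed):
        torch.manual_seed(seed)  # same seed -> same RNG state -> same shuffle
        dl = DataLoader(ds, batch_size=10, shuffle=True)
        return next(iter(dl))[0].tolist()

    print(first_epoch_order(42))  # identical every time you run the script
    print(first_epoch_order(42))  # same seed, same order
    print(first_epoch_order(7))   # different seed, (almost certainly) different order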

Is this right?

If so, are we handicapping the model training because it's shuffling the training set the same way for each epoch?
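
One way to check that premise: seed once at the top of the run and compare the first batch of two successive epochs. A quick sketch along the same lines as above (again a toy dataset, not fastai code):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    ds = TensorDataset(torch.arange(10))

    torch.manual_seed(42)  # seed once, at the start of the "run"
    dl = DataLoader(ds, batch_size=10, shuffle=True)

    epoch1 = next(iter(dl))[0].tolist()  # each fresh iterator over dl is a new "epoch"
    epoch2 = next(iter(dl))[0].tolist()
    print(epoch1)  # the two orders differ from each other within the run...
    print(epoch2)  # ...but this pair is identical every time you rerun the script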