Each time I construct the same learner and fit it with the same lr, I get different values for my loss function. However, I am looking for a stable model that lets me reproduce the exact loss value. Am I doing something wrong? Or is this not possible at all?
To reproduce the exact same values across two different model runs, your batches would have to be un-shuffled, your weights would have to be initialized to the exact same values, and you would need the exact same dropout mask (or dropout would need to be removed altogether) for each batch in each epoch.
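All of those sources of randomness ultimately draw from a handful of RNGs, so one way to try this is to seed them all before building the learner. Here is a minimal sketch; `seed_everything` is just a name I'm using for illustration, but the calls inside it are the standard Python/NumPy/PyTorch seeding APIs (results may still differ across hardware or library versions):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    # python RNG (e.g. shuffling done in plain Python)
    random.seed(seed)
    # numpy RNG (e.g. data augmentation)
    np.random.seed(seed)
    # torch CPU RNG (weight init, dropout masks)
    torch.manual_seed(seed)
    # torch GPU RNGs (no-op when CUDA is unavailable)
    torch.cuda.manual_seed_all(seed)
    # ask cuDNN for deterministic kernels (can be slower)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# two identically seeded runs produce identical "random" weights
seed_everything(0)
w1 = torch.nn.Linear(4, 2).weight.detach().clone()
seed_everything(0)
w2 = torch.nn.Linear(4, 2).weight.detach().clone()
print(torch.equal(w1, w2))
```

If the weights, batch order, and dropout masks all come from these seeded generators, two runs should produce the same loss curve.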
I’m not sure if there is something you could do with a seed. Maybe try experimenting with something like this and see if it works: https://discuss.pytorch.org/t/what-is-manual-seed/5939/3
That might give you repeatable results, but I haven't used it so I can't say for sure, and the answers online seem pretty inconclusive: a lot of people recommend it and a lot of people say it doesn't work.
Indeed, it does the trick of giving identical results (at least I tried it with fit()), thank you!
And potentially one for Python core, so together:

# torch RNG
torch.manual_seed(seed)
# python RNG
random.seed(seed)