When running the fastai notebooks I think we often compare our results regarding loss and accuracy to the results shown in the lectures in order to get an idea of how well we are able to reproduce the same performances.
As an example, after running the Lesson 1 notebook (unchanged) for the first time on the official fastai AWS AMI, I got a final TTA accuracy of 0.99299999999999999 (https://gyazo.com/1a780541eefe368f1602ba3f83752909), which is slightly lower than @Jeremy’s result in the lecture.
From past experience with Keras, I noticed that even when setting the NumPy and Keras random seeds, there are situations in which results are not completely reproducible.
Is there a suggested way to get reproducible results using PyTorch and the fastai lib?
I think if you’re using Python 3 you need to set the seed of Python’s built-in `random` module as well.
Edit: It’s been a while since this issue affected me, but my problem with Python 3 was not setting a seed for the hash function, which affected results across restarts.
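To pull the suggestions above together, here is a minimal sketch of seeding all the RNGs that typically affect a PyTorch training run. The helper name `seed_everything` is my own; the individual calls are the standard seeding APIs of each library. Note that fully deterministic GPU results may still require the cuDNN flags shown, at some cost in speed.

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    # Hash randomization: note this only takes effect if set before the
    # interpreter starts (e.g. PYTHONHASHSEED=42 python train.py); setting
    # it here documents the intent for child processes.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op without CUDA)
    # cuDNN may otherwise select non-deterministic kernels; these flags
    # trade speed for determinism.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```

Even with all of this, data-loader worker processes and some GPU ops can remain sources of nondeterminism, so small run-to-run differences in loss/accuracy are still possible.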
Hello! I am quite interested in the general question of reproducibility. While in many examples the differences are slight, there are cases where the difference is quite a bit bigger. @jeremy For example:
I have been struggling to get lesson4-imdb to converge to the stated accuracy for about two days now.
In the lectures (and in the notebook) the learner achieves 4.165, but when I re-run the script (even if I add an additional fit run with a lower LR and a longer cycle) I get stuck around 4.20, which ultimately results in ~90.5% accuracy.
I am currently running the vanilla setup on Paperspace to see if that works, and will report back later. It would be good to find the cause of this (significant) drop in final accuracy. Is it my GPU? CUDA version? PyTorch version?
Also: Paperspace is cool, but it seems to be training 4x slower than my local machine, so you can calculate the number of hours that would justify investing in a GPU.
Thanks for the reply! I’m not sure I follow. What’s the hypothesis here? I am running the notebook as is, I haven’t changed anything, yet I see this discrepancy.