Hi dan,
There is some randomness to these networks. The first big area is that the initial weights are randomized. You'll also see some randomness whenever computation is distributed across threads, and that happens on both the PyTorch and fast.ai side.
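To see the first point concretely, here's a quick sketch in plain PyTorch (nothing fast.ai-specific): two layers built back to back get different initial weights unless you re-seed before each one.

import torch

a = torch.nn.Linear(4, 4)
b = torch.nn.Linear(4, 4)
print(torch.equal(a.weight, b.weight))   # False - different random init

torch.manual_seed(555)
c = torch.nn.Linear(4, 4)
torch.manual_seed(555)
d = torch.nn.Linear(4, 4)
print(torch.equal(c.weight, d.weight))   # True - same seed, same init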
Here are some things I did to try to get it to run deterministically:
import random
import numpy as np
import torch

manual_seed = 555
random.seed(manual_seed)                    # Python's built-in RNG
np.random.seed(manual_seed)                 # NumPy RNG
torch.manual_seed(manual_seed)              # PyTorch CPU RNG
torch.cuda.manual_seed_all(manual_seed)     # PyTorch GPU RNGs, all devices
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
This should cover most of it from the PyTorch side. However, I think the thread use in fast.ai was still introducing some amount of variability. You would have to make everything run single-threaded, or do something to ensure things run in the same order each time.
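If you want to experiment with that, one place to start is the data loading. This is a plain-PyTorch sketch rather than the fast.ai API (the exact hook depends on which fast.ai version you're on): num_workers=0 keeps loading in the main process, and a seeded generator fixes the shuffle order, so batches come back the same way each run.

import torch
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(555)
ds = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# num_workers=0 avoids worker threads/processes entirely;
# the seeded generator makes the shuffle order repeatable.
dl = DataLoader(ds, batch_size=16, shuffle=True, num_workers=0,
                generator=torch.Generator().manual_seed(555))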