Different results with the same model


I have a model that was trained on an older version of fastai, with torch 0.4 and CUDA 8.0.x.
I’ve set up another machine that has the standard fastai library, set up from the Paperspace script. For some reason, I am getting completely different results.

I’m running:

  • the same model,
  • with the same data,
  • from the same file.

The only difference is that it’s a different computer. I’m sure there are differences in the underlying libraries - fastai is a different version (although I did try reverting the git repo) - and so on. But I can’t understand why I’m getting completely different results. Can anyone help?

Kind regards,


edit - the previous revision of fastai was e8841f79eb399ac641dac1fe3ba05fb5f6b8b93f

For anyone to help, you’ll need to provide more detail. What model are you running and what do you mean by “completely different results”?

I am guessing the filesystem orderings might be different. Have a look at data.test_ds.fnames (or data.trn_ds.fnames or data.val_ds.fnames) on both computers and check whether they are the same. The order mainly differs when using ImageClassifierData.from_paths.
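To make the check concrete, here is a minimal sketch of comparing the file orderings from the two machines. The filename lists below are made-up placeholders; in practice you would dump `data.test_ds.fnames` (or `trn_ds`/`val_ds`) to a file on each machine and load those dumps instead.

```python
# Stand-ins for data.test_ds.fnames dumped on each machine.
# Real lists would come from the two fastai setups being compared.
fnames_machine_a = ["cats/1.jpg", "cats/2.jpg", "dogs/1.jpg"]
fnames_machine_b = ["dogs/1.jpg", "cats/1.jpg", "cats/2.jpg"]

# Same files in the same order: predictions line up across machines.
same_order = fnames_machine_a == fnames_machine_b

# Same set of files but a different traversal order: predictions
# indexed by position get paired with the wrong filenames, so the
# results look uncorrelated even though the model is identical.
same_files = sorted(fnames_machine_a) == sorted(fnames_machine_b)

if same_files and not same_order:
    print("file order differs between machines - results will not line up")
```

If the sorted lists match but the raw lists don’t, the filesystem is handing files back in a different order on each machine, which would explain uncorrelated results.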

Though it would be better to clarify what “completely different results” means.


Hi Arka,

Thanks very much - that fixed it - or rather identified it. Now, at least, I know what was wrong.

For the record, “completely different” meant that the results on one system were completely uncorrelated with the results on the other. It was mystifying - I would have expected it to fail completely rather than simply be wrong.

Thanks once again for the help!