I recently tried upgrading my fastai libraries.
I was running fastai v1.0.55, PyTorch 1.1.0, and torchvision 0.3.0,
and upgraded to fastai v1.0.60, PyTorch 1.4.0, and torchvision 0.5.0.
I noticed that performance on the validation metrics got significantly
worse in the new environment when training a number of models within
the same notebook (so all the hyperparameters should be the same).
Are there any known issues with fastai or pytorch that could cause this?
Different defaults/optimization settings perhaps?
The task is regression on images, and the metric I'm comparing is MSE on the validation set.
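For what it's worth, one thing I've been doing to rule out plain run-to-run randomness before blaming the upgrade is pinning every RNG at the top of the notebook. This is just a sketch of a hypothetical `seed_everything` helper (not something from fastai itself), assuming a CUDA-capable PyTorch install:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    # Hypothetical helper: pin Python, NumPy, and PyTorch RNGs so that
    # training runs are comparable across library versions.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only installs
    # Trade some speed for deterministic cuDNN kernels; benchmark mode
    # can pick different algorithms per run, which changes results.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```

With this in place, two runs in the same environment should produce identical losses, so any remaining gap between the old and new environments points at the libraries rather than at noise.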