About 3 months ago, I trained a DenseNet169 model on a dermatology dataset until it reached an error_rate of 0.43.
Recently I recreated the DataBunch to re-check the accuracy, and the error_rate had dropped to 0.17 while the training loss had increased.
This drastic decrease in error_rate (i.e., increase in accuracy) was very unexpected.
My first guess was that recreating the DataBunch produced a different train/validation split: images that were previously in the validation set may now be in the training set, and vice versa. If the model had already trained on images that are now being used for validation, that would inflate the measured accuracy.
But the random.seed() value was the same in both cases, which made me question that explanation.
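One possibility worth checking (an assumption on my part, not confirmed from the post): Python's stdlib `random` module and NumPy's global RNG are completely independent, so calling `random.seed()` does not make a NumPy-based split reproducible. If the validation split was drawn via NumPy (for example, fastai v1's `split_by_rand_pct`, which only fixes the split when you pass its own `seed` argument), the split can still change between runs even with `random.seed()` set. A minimal sketch of the two independent RNG streams:

```python
import random
import numpy as np

# Seeding Python's stdlib RNG does NOT seed NumPy's RNG.
random.seed(42)
perm_1 = np.random.permutation(10)

random.seed(42)  # re-seed stdlib RNG with the same value...
perm_2 = np.random.permutation(10)
# ...but NumPy's stream was never reset, so perm_1 and perm_2
# will (almost certainly) differ.

# To make a NumPy-based split reproducible, seed NumPy itself:
np.random.seed(42)
split_a = np.random.permutation(10)
np.random.seed(42)
split_b = np.random.permutation(10)
assert (split_a == split_b).all()  # identical split every run
```

If the split function takes its own `seed` parameter (as `split_by_rand_pct` does), passing it explicitly is the safest way to keep the same validation set across sessions; otherwise, a moved split would mean the new error_rate is measured partly on images the model has already seen.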
Can anyone comment on what might have happened here, please?