Is it known that the mnist stats are wrong?
([0.15, 0.15, 0.15], [0.15, 0.15, 0.15])
The mean is simply repeated for the std. I calculated the stats across the
URLs.MNIST_SAMPLE training set and got different values,
close to those in the PyTorch MNIST example, which uses a mean of 0.1307 and a std of 0.3081.
I'm not sure about fixing them, since it could throw off models trained with the old values, but maybe it's worth doing at least for v2.
The code I used to collect the stats is in the thread Calculating our own image stats (imagenet_stats, cifar_stats, etc.), since people were asking about how to compute them.
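For reference, here is a minimal sketch of how per-channel stats can be computed. This is not the exact code from that thread; it assumes the training images are already loaded as a `(N, C, H, W)` array with pixel values scaled to `[0, 1]`, and the `channel_stats` helper name is made up for illustration:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of images.

    images: array of shape (N, C, H, W), values in [0, 1].
    Reduces over the batch and spatial dims, leaving one value per channel.
    """
    mean = images.mean(axis=(0, 2, 3))
    std = images.std(axis=(0, 2, 3))
    return mean, std

# Example with synthetic data shaped like MNIST batches (1 channel, 28x28).
# On the real dataset you would stack (or stream) the full training set instead.
imgs = np.random.rand(64, 1, 28, 28)
mean, std = channel_stats(imgs)
```

For a dataset too large to hold in memory, the same reduction can be done incrementally by accumulating per-channel sums and squared sums across batches.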
I don’t remember when they were computed and added, so it’s possible they are wrong. Note that MNIST_SAMPLE only has two classes, so it’s not the right dataset for computing the mean and std; that should be done on the whole training set of the real MNIST dataset.
Looking at the history, the values come from a commit by Jeremy that added the dataset and the stats together, and they haven’t been edited since.
Yeah, quite true about the sample; it's just what I happened to be working on. Should I calculate the correct values on the full dataset and submit a PR? I wasn't sure, given the possible issues for people using the new stats with a model trained on the old ones.
Yes, please. Models on MNIST train quickly anyway.