Training CIFAR-10

As a first step toward replicating the FixMatch and FROST papers, I wanted to establish a good baseline on CIFAR-10.

Given fastai's success on the DAWNBench benchmark and the maturity of the library, I was under the (wrong) impression that I could get to 90-94% accuracy relatively easily with just a bit of patience (I don't need to limit the number of epochs; I just want to reach that accuracy).

I have been bashing my head against this and cannot seem to beat 88-89% no matter what I try: different learning-rate schedules, several architectures, optimizers, and batch sizes, but I remain frustratingly stuck. I am hesitant to take the code from the DAWNBench submission as-is and try to adapt it to fastai2, because I would still not understand what I am doing wrong.

So my question is: do you have a "simple" recipe to train CIFAR-10 to 90-94% accuracy using out-of-the-box fastai2? Bonus points if you use a variant of resnet18, but if a larger architecture is needed, so be it.

Fun fact: with a bit of care I managed to use a learning rate of 7.6 (!) with plain SGD, but it only got me to 85% accuracy. Still fun, though.

Try resizing your input images to at least 128x128 when using resnet18 (or any resnet?). I found it helps.
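For reference, in fastai2 that resizing is just an `item_tfms=Resize(128)` passed to the DataLoaders. Outside fastai, the same idea with plain PIL looks like this (a toy illustration only — the blank image stands in for a real 32x32 CIFAR-10 sample):

```python
from PIL import Image

# Stand-in for a CIFAR-10 image, which is natively 32x32
img = Image.new('RGB', (32, 32))

# Upscale to 128x128 before feeding a resnet built for larger inputs
upscaled = img.resize((128, 128), Image.BILINEAR)
print(upscaled.size)  # (128, 128)
```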

Thanks for the suggestion. I had tried different sizes, but without much success. The only thing that worked (and worked well) for me was changing the network structure. I have not worked on this for quite a while, but I might pick it up again at some point.