Has anyone tried the Stanford Dogs Dataset, at http://vision.stanford.edu/aditya86/ImageNetDogs/? I tried it with resnet50 and 8 epochs of training, and the final error rate I got is 0.104. I'm not sure how this compares to the results at the link above, though, since they report something called mean accuracy. Can somebody explain the relation between mean accuracy and error rate?
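For what it's worth: fastai's error_rate is just 1 − overall accuracy, while "mean accuracy" usually means accuracy computed per class and then averaged over classes, so every breed counts equally regardless of how many images it has. On a roughly balanced dataset the two numbers end up close. A toy sketch with made-up labels (not Stanford Dogs data):

```python
# Made-up labels/predictions to illustrate the two metrics.
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 0, 1, 1, 0, 2, 2, 2, 1]

# Overall accuracy: fraction of all samples predicted correctly.
overall_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
error_rate = 1 - overall_acc  # this is what fastai's error_rate reports

# Mean (per-class) accuracy: accuracy within each class, then averaged,
# so small classes weigh as much as large ones.
classes = sorted(set(y_true))
per_class = []
for c in classes:
    idx = [i for i, t in enumerate(y_true) if t == c]
    per_class.append(sum(y_pred[i] == c for i in idx) / len(idx))
mean_acc = sum(per_class) / len(per_class)

print(error_rate, mean_acc)
```

Here overall accuracy is 0.7 (error rate 0.3) but mean accuracy is 2/3, because the small class 1 is predicted worse and gets equal weight in the average.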
Some other questions from my learning process:
- After the 8 epochs of training mentioned above, I used lr_find to find the learning rate and plotted it. If I then do learn.save and learn.load, is everything restored? I ran lr_find and plotted again after learn.load, and the graph is totally different from the first one.
- After running lr_find, I ran learn.unfreeze() and then learn.fit_one_cycle(4, max_lr=slice(1e-6,1e-3)). Yet, as listed below, the result is not improving at all. This is even true for the dogs-and-cats dataset in lesson 1. Can anyone explain that?
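As background on that slice: max_lr=slice(1e-6, 1e-3) asks fastai to spread learning rates across the layer groups, smallest for the earliest (most generic) layers and largest for the head. A rough sketch of that idea, assuming geometric (log-even) spacing over three layer groups (the exact spacing is a fastai internal detail, not guaranteed here):

```python
# Hypothetical sketch: spread a learning-rate slice across n layer groups
# geometrically, so early layers get the smallest LR and the head the largest.
def lr_spread(lr_min, lr_max, n_groups):
    if n_groups == 1:
        return [lr_max]
    ratio = (lr_max / lr_min) ** (1 / (n_groups - 1))
    return [lr_min * ratio ** i for i in range(n_groups)]

lrs = lr_spread(1e-6, 1e-3, 3)
print(lrs)  # smallest LR first, largest last
```

The point is that after unfreezing, the pretrained body is updated much more gently than the freshly trained head, which is why the low end of the slice matters so much.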
Also, what should I do if I want to improve the result further?
Thanks in advance for your time/support.