Just got started with FastAI and getting addicted pretty quickly
I’m doing the homework assignment, and I’m getting scores for the Kaggle competition above 15 (not good). I’ve spent some time debugging this. At first, I realized that the ids in my submission file were incorrect because of the order in which get_batches iterates through the directory of test images, but even after correcting for that, I’m still in the 15s.
As I dug around, I noticed that there were lots of 1.0 probabilities in my results file. I thought to myself “that doesn’t make sense – the chances of getting a 1.0 on the validation data should be low, let alone on the test data”
But sure enough, when I run a prediction on a small set of both the test data and the validation data, I get tons of 1.0 probabilities.
Jeremy mentioned in one of the lessons that the classifier net used for that example tends to produce overconfident results. One way to avoid this is to increase the temperature of the softmax, if you’re using one (cf. https://www.cs.toronto.edu/~hinton/absps/distillation.pdf). But more easily, just np.clip() the probabilities between 0.05 and 0.95. The cross-entropy loss function hits you hard if you are maximally confident but happen to get the prediction wrong.
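Concretely, the clipping is a one-liner. A minimal sketch (the preds array here is made up for illustration; in practice it’s the probability matrix you get back from vgg.predict):

```python
import numpy as np

# Stand-in for the N x 2 probability matrix from vgg.predict on the test set
preds = np.array([[1.00, 0.00],
                  [0.02, 0.98],
                  [0.60, 0.40]])

# Bound every probability away from 0 and 1 so a single confidently wrong
# prediction can't blow up the log-loss (log(0) is -inf)
clipped = np.clip(preds, 0.05, 0.95)
```

Values already inside [0.05, 0.95] pass through untouched; only the extremes get pulled in.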
I’m running into errors when trying to run get_batches on my test directory containing unlabeled dog and cat images. I’m working on step 11 of the homework for lesson 1. The structure of /test1 is simple: it doesn’t contain any subdirectories and only contains the set of unlabeled images.
Looking at the error messages, it seems like the error is occurring in image.pyc, in the _flow_index method, which has the function signature:
_flow_index(self, N, batch_size, shuffle, seed)
I’m having a hard time seeing why N is getting the value 0. Has anyone else run into this problem?
Edit: this issue doesn’t happen for /train, which does have two subdirectories. My suspicion is that the issue has to do with the directory structure, but I’m not sure.
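Your suspicion sounds right: Keras’s directory iterator (which get_batches wraps) only counts images inside class subdirectories, so a flat /test1 yields N = 0. A sketch of the usual workaround, moving everything into a single dummy class folder (the 'unknown' name and paths are my own choice, not anything the library requires):

```python
import os
import shutil

def move_into_subdir(test_dir, class_name='unknown'):
    """Move every file in test_dir into test_dir/<class_name>/ so that
    the Keras directory iterator behind get_batches can find them."""
    sub_dir = os.path.join(test_dir, class_name)
    os.makedirs(sub_dir)
    for fname in os.listdir(test_dir):
        src = os.path.join(test_dir, fname)
        if os.path.isfile(src):  # skip the subdirectory we just created
            shutil.move(src, os.path.join(sub_dir, fname))
    return sub_dir
```

After that, calling get_batches on /test1 with shuffle=False (and class_mode=None, since the labels are meaningless) should give you the batches in a predictable order for your submission file.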
After a while with no feedback, I stopped the kernel and ran it again, and it gave a different error:
AttributeError                            Traceback (most recent call last)
----> 1 import utils; reload(utils)
      2 from utils import plots

/home/yousry/nbs/utils.py in <module>()
     26 from IPython.lib.display import FileLink
---> 28 import theano
     29 from theano import shared, tensor as T
     30 from theano.tensor.nnet import conv2d, nnet

/home/yousry/anaconda2/lib/python2.7/site-packages/theano/__init__.pyc in <module>()
     98 # needed during that phase.
     99 import theano.tests
--> 100 if hasattr(theano.tests, "TheanoNoseTester"):
    101     test = theano.tests.TheanoNoseTester().test

AttributeError: 'module' object has no attribute 'tests'
I tried updating Theano to the latest version (0.9.0rc4) but still got the same error.
Thank you very much for that, Jose! I want to add some color here so anyone else seeing similar scores can learn from my mistakes. While it’s true that bounding the probabilities does improve performance, my real problem was a typo. Notice in my screenshot above how none of the probabilities were close to 0? That should have been my first clue. I had simply made a typo in the Python code that took the output from vgg.predict.
With the typo, my score was 15.41851. With the typo fixed, my score dropped to 0.20384. Using Jose’s clipping technique, I got all the way down to 0.10064!
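A rough sketch of why clipping alone buys that last chunk of score. Kaggle’s metric is mean log-loss over the predicted probability of the true class, and the numbers below are invented just to show the shape of the effect:

```python
import numpy as np

def log_loss(p_true):
    """Mean log-loss given the predicted probabilities of the true class."""
    return -np.mean(np.log(p_true))

# 99 good predictions plus one confidently wrong one (probability ~0 for
# the true class, floored at 1e-15 as Kaggle does) dominates the average...
unclipped = log_loss(np.array([0.95] * 99 + [1e-15]))

# ...while clipping caps the worst-case penalty at -log(0.05)
clipped = log_loss(np.array([0.95] * 99 + [0.05]))
```

One confident miss costs -log(1e-15) ≈ 34.5 on its own, versus -log(0.05) ≈ 3.0 once clipped, which is why a handful of them moves the leaderboard score so much.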
In the container (following install_gpu.sh): apt-get update, install Anaconda, set PATH, install cuDNN (CUDA comes with the container image), install and configure Keras/Theano, and configure Jupyter without a password.
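Those steps sketch out to something like the following. The installer filename, config values, and paths are assumptions from the course-era setup, so adjust for your image:

```shell
# Inside the container, after running install_gpu.sh
apt-get update

# Install Anaconda (Python 2, as used in the course) and put it on PATH
bash Anaconda2-4.2.0-Linux-x86_64.sh -b
export PATH="$HOME/anaconda2/bin:$PATH"

# cuDNN is installed separately (CUDA ships with the container image):
# copy the cuDNN headers and libraries into the CUDA directories.

# Install and configure Keras with the Theano backend
pip install theano keras
mkdir -p ~/.keras
echo '{"backend": "theano", "image_dim_ordering": "th"}' > ~/.keras/keras.json

# Configure Jupyter with no password/token
jupyter notebook --generate-config
echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
```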