Just got started with FastAI and getting addicted pretty quickly
I'm doing the homework assignment, and my scores on the Kaggle competition are above 15 (not good). I've spent some time debugging this. At first I realized that the ids in my submission file were incorrect because of the order in which get_batches iterates through the directory of test images, but even after correcting for that, I'm still in the 15s.
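For anyone hitting the same id problem: the directory iterator tends to yield files in lexicographic order (1.jpg, 10.jpg, 100.jpg, 2.jpg, ...), not numeric order. A minimal sketch of how I paired predictions back up with the right ids — the filenames here are made up for illustration:

```python
import re

# Hypothetical filenames in the order a directory iterator might
# yield them (lexicographic, not numeric):
filenames = ['unknown/1.jpg', 'unknown/10.jpg', 'unknown/100.jpg', 'unknown/2.jpg']

# Pull the numeric id out of each filename, so row i of the
# predictions can be written out next to the id it actually
# belongs to, regardless of iteration order.
ids = [int(re.search(r'(\d+)\.jpg$', f).group(1)) for f in filenames]
print(ids)  # ids follow the iterator's order: [1, 10, 100, 2]
```

Writing the (id, probability) pairs out together means the submission no longer depends on the iteration order matching the ids.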
As I dug around, I noticed that there were lots of 1.0 probabilities in my results file. I thought to myself, "that doesn't make sense -- the chances of getting a 1.0 on the validation data should be low, let alone on the test data." But sure enough, when I run a prediction on a small set of both the test data and the validation data, I get tons of 1.0s.
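A quick sanity check (plain NumPy, not the course code) of why those 1.0s would explain a score in the 15s: the competition metric is log loss, and a single 1.0 on a wrong answer contributes -log(~0), which dominates everything else. The numbers below are made up just to show the effect:

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    # Binary log loss; clip so log(0) can't be evaluated directly
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1., 0., 1., 0.])
confident = np.array([1., 1., 1., 0.])  # one confident wrong answer (2nd entry)
print(log_loss(y_true, confident))      # huge: the 1.0-on-wrong dominates

softened = np.clip(confident, 0.05, 0.95)  # pull extremes off 0 and 1
print(log_loss(y_true, softened))          # far smaller, same mistakes
```

So even a small fraction of confidently wrong 1.0s is enough to blow up the score, which would be consistent with what I'm seeing.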
So my question is twofold:
- Am I correct that 1.0 probabilities shouldn't show up here?
- If I am correct on #1, has anyone run into this before, and could they point me in the right direction?