Hi team,
Just got started with FastAI, and I'm getting addicted pretty quickly.
I'm doing the homework assignment, and I'm getting scores for the Kaggle competition above 15 (not good). I've spent some time debugging this. At first, I realized that the ids in the file were incorrect due to how `get_batches` iterates through the directory of test images, but even after correcting for that, I'm still in the 15s.
As I dug around, I noticed that there were lots of 1.0 probabilities in my results file. I thought to myself, "that doesn't make sense; the chances of getting a 1.0 on the validation data should be low, let alone on the test data." But sure enough, when I run a prediction on a small set of both the test data and the validation data, I get tons of 1.0 probabilities.
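For context on why those 1.0s worry me: if the competition is scored with log loss (my assumption about the metric here), a single confidently wrong prediction is penalized enormously, which would explain a score in the teens. A quick sketch of the arithmetic (`log_loss` is just an illustrative helper I wrote, not a library function):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary log loss; predictions are clipped to avoid log(0)."""
    losses = []
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # keep p strictly inside (0, 1)
        losses.append(-(y * math.log(p) + (1 - y) * math.log(1 - p)))
    return sum(losses) / len(losses)

# Four maximally confident predictions of "1", but the last true label is 0:
preds = [1.0, 1.0, 1.0, 1.0]
labels = [1, 1, 1, 0]
print(log_loss(labels, preds))  # one wrong 1.0 pushes the mean loss to ~8.6
```

So even a small fraction of wrong answers at probability 1.0 would be enough to wreck the score, which is why the 1.0s stood out to me.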
So my question is twofold:
- Am I correct that 1.0 probabilities shouldn't show up here?
- If I am correct on #1, has anyone run into this before, and could they point me in the right direction?
Thanks!