Lesson 3 - Strange results for Planet dataset

Hello

I have been working with the planet dataset and I get good results from learn.fit (around 0.93).
However, when I check the results on the validation set with test-time augmentation, using learn.TTA(), I get much worse results (around 0.48).

Am I using learn.TTA() correctly? How can the results be so much worse on the validation set with TTA?
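
To be concrete, my evaluation is roughly along these lines (a sketch assuming fastai 0.7, where learn.TTA() returns per-augmentation log-probabilities and the validation labels; the sklearn F2 call here is a stand-in for the notebook's own f2 metric):

```python
import numpy as np
from sklearn.metrics import fbeta_score

# learn is the ConvLearner from the lesson notebook.
# In fastai 0.7, learn.TTA() returns log-probabilities with shape
# (n_augmentations, n_samples, n_classes) plus the validation labels.
log_preds, y = learn.TTA()

# Average the probabilities over the augmented copies of each image.
probs = np.mean(np.exp(log_preds), 0)

# Planet is multi-label, so score with F2 at a fixed threshold (0.2 here).
print(fbeta_score(y, probs > 0.2, beta=2, average='samples'))
```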

That’s weird…

Take a look at preds and see what they look like.

Hello

I have the same problem with the unmodified version of lesson2-image_models from the fast.ai GitHub.
Here is what I see in preds:

Many thanks for replicating this. Looks like something we need to look into!

Have you tried without np.exp()?
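
If learn.TTA() in your version already returns plain probabilities rather than log-probabilities, applying np.exp() on top of them skews every value and the thresholded F2 score collapses. A quick check, as a sketch (variable names are just illustrative):

```python
import numpy as np

preds, y = learn.TTA()
preds = np.asarray(preds)

# Log-probabilities are all <= 0; plain probabilities live in [0, 1].
print(preds.min(), preds.max())

# Only exponentiate if the values really are log-probabilities.
probs = np.exp(preds) if preds.max() <= 0 else preds

# Average over the augmentation axis if it is present.
if probs.ndim == 3:
    probs = probs.mean(0)
```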