Does learn.TTA() return actual probabilities instead of log probs now?

In lesson 2 Jeremy said that TTA() returns log probabilities, and that to obtain the actual probabilities we need to take the exponential (np.exp) of the returned values.

While making predictions on the Planet Amazon dataset, I got a 48% F2 score when I applied np.exp to the values returned by TTA(). When I used the returned values directly, the score jumped to 93% (which also matches the accuracy shown while running the epochs).

There are also no negative values in the probabilities returned by TTA(), which there would be if they were log probabilities.
[Screenshot of the values returned by TTA()]

Link to my notebook: https://github.com/irshadqemu/fasai_dl1/blob/master/Planet_amazon_resnet34.ipynb
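
For reference, here's a minimal sketch of the check I mean: np.exp only makes sense if the returned values are log probabilities, which are never positive. The threshold and the sklearn metric here are just my assumptions for a multi-label setup like Planet, not part of the library:

```python
import numpy as np
from sklearn.metrics import fbeta_score

def to_probs(preds):
    """Apply np.exp only when the values look like log probabilities (all <= 0)."""
    return np.exp(preds) if (preds <= 0).all() else preds

def f2(y_true, preds, threshold=0.2):
    """Planet-style F2 score for multi-label predictions; 0.2 is an arbitrary cutoff."""
    probs = to_probs(preds)
    return fbeta_score(y_true, probs > threshold, beta=2, average='samples')
```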

@irshaduetian
Actually, there is a modification to TTA…
Check out other threads in the forum…
There was an awesome PR and analysis made by @alessa.

Here’s the link…
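
If I remember the change correctly, TTA() now returns one set of predictions per augmentation instead of a pre-averaged array, so you need to average them yourself. Treat the call and shapes below as my assumption about the modified behaviour, not the exact API:

```python
import numpy as np

# Assumed shape after the modification: preds is (n_augmentations, n_samples, n_classes)
preds, y = learn.TTA()           # learn is the ConvLearner from the lesson notebook
probs = np.mean(preds, axis=0)   # average over the test-time augmentations
```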


I experienced something similar to what Irshad reported and also came across other threads mentioning changes to TTA.

I presume that at some point the notebook in question will get modified appropriately to reflect the changes to TTA. In the meantime, an unexpected side benefit was all of the extra info I learned from investigating :wink:


Searching the forum will always help…

A lot of queries were answered there.

If we keep on creating similar threads, it will become difficult to find the correct one…

The choice is yours…

Thanks…

Does there happen to be a good way to merge thread content?

I don’t know…
I am as new to this forum as others are…

A PR to fix any notebooks using the old approach would be most welcome! :slight_smile:

Here’s an attempt for the lesson 2 notebook:

https://github.com/fastai/fastai/pull/89
