Accuracy(log_preds, y), where's the y from?

Hi, I was testing the accuracy of my algo using:
accuracy(log_preds, y)

but I couldn’t figure out how it can calculate the accuracy when I don’t have the ground-truth labels for the test set. Where are the y’s coming from?

I’m using TTA.

For the test set, you cannot calculate accuracy if you don’t have any labels.

When looking at the source code, you can see that when you call learn.TTA(), it returns two things:

return np.stack(preds1+preds2), targs

It returns your TTA predictions but also your targets (i.e. y). So you can simply do preds, y = learn.TTA() and then pass those to your accuracy function.
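
For example, a minimal sketch, assuming a trained fastai Learner named learn and the library’s accuracy metric (the exact call can differ slightly between fastai versions):

preds, y = learn.TTA()      # TTA predictions and the matching targets (the y’s)
acc = accuracy(preds, y)    # the metric now has both pieces it needs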

That’s the odd thing: nowhere in my code do I say, “here are the test set labels.”

However, TTA still runs and I’m able to get an accuracy percentage.

When we add a test set in fastai, fastai creates an EmptyLabelList for it:

if label is None:
    labels = EmptyLabelList([0] * len(items))

Now, as you can see, by default it labels everything with index 0, which is why you can still get accuracy values.

So if you run it on CIFAR10, all the labels in the test set would be airplane (class 0).
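
You can verify this yourself. A quick sketch, assuming a fastai v1 DataBunch named data that was built with a test set:

data.test_ds.y     # EmptyLabelList: every test item carries class index 0
data.classes[0]    # for CIFAR10 this is 'airplane', hence the uniform dummy labels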

How can I set my own y’s?

Create a new databunch with your own labels and predict on it.
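
For example, a minimal sketch assuming fastai v1, an already-trained learn, and test images arranged in class folders (so the folder names provide the true labels); path is a placeholder:

from fastai.vision import *

data_test = (ImageList.from_folder(path)
             .split_by_folder(train='train', valid='test')  # real labels come from the folder names
             .label_from_folder()
             .transform(size=224)                 # resize only; add augmentations if you need them
             .databunch()
             .normalize(imagenet_stats))          # match whatever normalization you trained with

learn.data = data_test      # point the learner at the new databunch
preds, y = learn.TTA()      # TTA evaluates the valid set by default, now your labelled test data
print(accuracy(preds, y))

Since learn.TTA() runs on DatasetType.Valid by default, putting the labelled test images in the valid split is the easiest way to get real y’s back.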