Dimension error in Learn.TTA output

I am following the instructions here DeepLearning-LecNotes2 to run resnet34 on the dog breeds dataset. When running these lines
log_preds,y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds,y), metrics.log_loss(y, probs)
I get a dimension error when the last two functions are called. Should this be happening? I could reshape my probs data to the same dimensions as the y variable, but how could I call the TTA function so that I wouldn't have to do this? Can anyone give an explanation of what exactly learn.TTA() does?
The error I get is this:
TypeError Traceback (most recent call last)
in ()
----> 1 accuracy(log_preds,y), metrics.log_loss(y, probs)
2

~/fastai/courses/dl1/fastai/metrics.py in accuracy(preds, targs)
7
8 def accuracy(preds, targs):
----> 9 preds = torch.max(preds, dim=1)[1]
10 return (preds==targs).float().mean()
11

TypeError: torch.max received an invalid combination of arguments - got (numpy.ndarray, dim=int), but expected one of:

  • (torch.FloatTensor source)
  • (torch.FloatTensor source, torch.FloatTensor other)
    didn’t match because some of the keywords were incorrect: dim
  • (torch.FloatTensor source, int dim)
  • (torch.FloatTensor source, int dim, bool keepdim)

Your error is because the accuracy function expects torch tensors and your preds are a numpy array. You'll want to call torch.from_numpy(preds).cuda(); that should fix it.
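
For example, something like this is what's meant (a sketch: the .cuda() calls assume a GPU is available, y also has to be a tensor for the element-wise comparison inside accuracy to work, and log_preds is assumed to be a plain samples x classes array):

preds_t = torch.from_numpy(log_preds).cuda()  # numpy log-probabilities -> torch tensor on the GPU
targs_t = torch.from_numpy(y).cuda()          # targets also as a tensor, for the preds == targs comparison
accuracy(preds_t, targs_t)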

That doesn't seem to be working for me; I get the following error.

RuntimeError Traceback (most recent call last)
in ()
----> 1 accuracy(torch.from_numpy(log_preds).cuda(), y), metrics.log_loss(torch.from_numpy(probs).cuda(), y)
2

RuntimeError: from_numpy expects an np.ndarray but got torch.cuda.FloatTensor

You don't need to call torch.from_numpy on probs; metrics.log_loss comes from sklearn and works directly on numpy arrays. Just convert log_preds.
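
In other words, the suggestion amounts to something like this (a sketch; sklearn's log_loss takes numpy arrays, so only the first call needs a tensor):

accuracy(torch.from_numpy(log_preds), y)  # convert only the predictions to a tensor
metrics.log_loss(y, probs)                # probs and y stay as numpy arrays for sklearn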

I’m having the same issue as the OP but haven’t managed to find a working solution yet from following the help here. I had the same original error and have now tried changing to

log_preds,y = learn.TTA()
probs = np.exp(log_preds)
accuracy(torch.from_numpy(log_preds),y), metrics.log_loss(y, probs)

but this gives the error

TypeError: eq received an invalid combination of arguments - got (numpy.ndarray), but expected one of:
* (int value)
didn't match because some of the arguments have invalid types: (numpy.ndarray)
* (torch.LongTensor other)
didn't match because some of the arguments have invalid types: (numpy.ndarray)

I too had this problem, so I searched through the forums. TTA() was changed in November 2017 because it had been averaging the log predictions rather than the probabilities. Now it returns the individual TTA predictions, which then have to be averaged. I think accuracy() has also changed. This gives the accuracy:

log_probs_tta,y = learn.TTA()
probs = np.mean(np.exp(log_probs_tta), axis=0)
preds = np.argmax(probs, axis=1)
(preds==y).mean()
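
If you also want the log loss, the same averaged probabilities can be passed straight to sklearn (a sketch; this assumes log_probs_tta has shape (n_augmentations, n_samples, n_classes), so axis=0 averages over the augmentations):

from sklearn import metrics
probs = np.mean(np.exp(log_probs_tta), axis=0)  # average the class probabilities over the TTA crops
metrics.log_loss(y, probs)                      # y: true class indices, probs: (n_samples, n_classes)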


Using accuracy_np instead of accuracy did the trick for me:

log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds), 0)
accuracy_np(probs, y), metrics.log_loss(y, probs)
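
If you're wondering why accuracy_np works where accuracy doesn't: the _np variant operates on numpy arrays instead of torch tensors, roughly like this (a sketch, not necessarily the library's exact source):

def accuracy_np(preds, targs):
    preds = np.argmax(preds, axis=1)   # predicted class = index of the largest probability
    return (preds == targs).mean()     # fraction of correct predictions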


This just helped me out, thanks so much.

How did you go about finding this? I’m curious to see how I can hunt down the next similar problem I encounter. But your solution worked perfectly. Cheers!