Difference between accuracy and accuracy_np

I am using the fastai library for multiclass text classification with its RNN learner, and I am a bit confused by the metrics used in the notebooks. I noticed a large difference in accuracy depending on how I calculate it. During training I use the "normal" accuracy metric:

learn.metrics = [accuracy]

This shows the accuracy on the validation set next to the losses during training. Once training is done, I use the following code to get predictions for the validation set (it is nearly the same as what the accuracy_np function does):

import numpy as np

log_preds = learn.predict()           # predictions for the validation set
preds = np.argmax(log_preds, axis=1)  # most likely class per example
Once I have the predictions, I use sklearn's accuracy method to calculate the accuracy.
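Concretely, that step looks roughly like this (y_valid is just my placeholder name for the ground-truth validation labels, not something coming from the library):

from sklearn.metrics import accuracy_score

# y_valid: ground-truth labels of the validation set (placeholder name)
val_acc = accuracy_score(y_valid, preds)
print(val_acc)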

The problem is that I am getting different results from these two approaches. I also tried using accuracy_np during training instead of accuracy, but I got the following error:

~/miniconda3/envs/fastai/lib/python3.6/site-packages/torch/tensor.py in __eq__(self, other)
    358
    359     def __eq__(self, other):
--> 360         return self.eq(other)
    361
    362     def __ne__(self, other):

TypeError: eq received an invalid combination of arguments - got (torch.cuda.LongTensor), but expected one of:

  • (int value)
    didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor)
  • (torch.LongTensor other)
    didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor)
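For what it's worth, my understanding of the two metrics is roughly the following (paraphrased from what I think fastai's metrics.py does, so the exact code may differ): accuracy expects torch tensors, while accuracy_np expects numpy arrays.

import numpy as np
import torch

def accuracy(preds, targs):
    # torch version: preds/targs are tensors (on the GPU during training)
    preds = torch.max(preds, dim=1)[1]
    return (preds == targs).float().mean()

def accuracy_np(preds, targs):
    # numpy version: preds/targs are numpy arrays (e.g. the output of learn.predict())
    preds = np.argmax(preds, axis=1)
    return (preds == targs).mean()

That would explain the type error above, but not why the two accuracy values differ on the same validation set.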

Has anyone encountered the same problem? What is the difference between these two approaches to calculating the accuracy? Can anyone see why I get different results?