FastaiV2 RuntimeError on tta: "softmax_lastdim_kernel_impl" not implemented for 'Half'

Hello, I am trying to do TTA on a model trained with fp16. Running the following
code

dl = dls.valid
a1, target = learn.tta(dl=dl, n=16)

gives me the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-50-964f94913b4a> in <module>
      1 dl = dls.valid
----> 2 a1, target = learn.tta(dl=dl, n=16)

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in tta(self, ds_idx, dl, n, item_tfms, batch_tfms, beta, use_max)
    565             for i in self.progress.mbar if hasattr(self,'progress') else range(n):
    566                 self.epoch = i #To keep track of progress on mbar since the progress callback will use self.epoch
--> 567                 aug_preds.append(self.get_preds(dl=dl, inner=True)[0][None])
    568         aug_preds = torch.cat(aug_preds)
    569         aug_preds = aug_preds.max(0)[0] if use_max else aug_preds.mean(0)

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
    238             pred_i = 1 if with_input else 0
    239             if res[pred_i] is not None:
--> 240                 res[pred_i] = act(res[pred_i])
    241                 if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i]))
    242             if reorder and hasattr(dl, 'get_idxs'): res = nested_reorder(res, tensor(idxs).argsort())

/opt/conda/lib/python3.7/site-packages/fastai/losses.py in activation(self, out)
     97         return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), reduction=self.reduction)
     98 
---> 99     def activation(self, out): return F.softmax(out, dim=-1)
    100     def decodes(self, out):    return out.argmax(dim=-1)
    101 

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in softmax(input, dim, _stacklevel, dtype)
   1496         dim = _get_softmax_dim('softmax', input.dim(), _stacklevel)
   1497     if dtype is None:
-> 1498         ret = input.softmax(dim)
   1499     else:
   1500         ret = input.softmax(dim, dtype=dtype)

RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'

I tried learn = learn.to_fp32() as suggested in this discussion, but I am getting the same error.

Thanks in advance for your time. If I've left anything out, or over- or under-emphasized a specific point, let me know in the comments.

Did you use native_fp16 or regular fp16?


@ilovescience, I think the native one causes that; regular fp16 works fine.

learn.to_native_fp32() solved my similar problem. I hope this helps.


Yes, the problem exists with to_native_fp16(). An alternative would be to use to_fp16().
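For context, the underlying issue is not fastai-specific: older PyTorch builds had no CPU softmax kernel for float16, and to_native_fp16() could leave the predictions in half precision when get_preds() applied the loss function's softmax activation. Casting the tensor up to float32 before the softmax (which is effectively what switching back with to_native_fp32() achieves) sidesteps the missing kernel. A minimal sketch of the pattern, using plain PyTorch and independent of fastai:

```python
import torch

# Older PyTorch versions raised
#   RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'
# when softmax ran on a float16 tensor on the CPU.
logits = torch.randn(4, 10).half()       # fp16 predictions, as left by mixed precision

# Casting to float32 first avoids the missing half-precision kernel:
probs = logits.float().softmax(dim=-1)   # cast up, then softmax

print(probs.dtype)                       # torch.float32
```

This is only an illustration of the failure mode; in the thread's actual workflow the fix is to call learn.to_native_fp32() (or use to_fp16() instead of to_native_fp16()) before learn.tta().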


This is no longer needed as of v2.2.0: https://github.com/fastai/fastai/releases/tag/2.2.0

Oops, I was using native_fp16. Anyway, thank you @ilovescience!