How to apply TTA with fp16

My model was trained in fp16 mode:
learn = Learner(data,
                md_ef,
                metrics=[qk],
                model_dir="models").to_fp16()

Then I wanted to use TTA, but it failed:

preds, y = learn.TTA(ds_type=DatasetType.Test)

This gives the error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-7-f2756ab5830c> in <module>
      6     test_df.to_csv('submission.csv',index=False)
      7     print ('done')
----> 8 run_subm()

<ipython-input-7-f2756ab5830c> in run_subm(learn, coefficients)
      1 def run_subm(learn=learn, coefficients=[0.5, 1.5, 2.5, 3.5]):
      2     opt = OptimizedRounder()
----> 3     preds,y = learn.TTA(DatasetType.Test)
      4     tst_pred = opt.predict(preds, coefficients)
      5     test_df.diagnosis = tst_pred.astype(int)

/opt/conda/lib/python3.6/site-packages/fastai/vision/tta.py in _TTA(learn, beta, scale, ds_type, activ, with_loss)
     35     preds,y = learn.get_preds(ds_type, activ=activ)
     36     all_preds = list(learn.tta_only(ds_type=ds_type, activ=activ, scale=scale))
---> 37     avg_preds = torch.stack(all_preds).mean(0)
     38     if beta is None: return preds,avg_preds,y
     39     else:

RuntimeError: "sum_cpu" not implemented for 'Half'

Is there a way to apply it? I tried saving and loading the model without fp16, but that failed with a different problem.

You can't use TTA in FP16 because some of the underlying functions aren't implemented for half precision in PyTorch. You should put your learner back in FP32 with learn = learn.to_fp32() before calling TTA.
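As a minimal sketch of that fix applied to the call from the original post (learn is the fp16 Learner defined above):

learn = learn.to_fp32()  # convert the model back to full precision
preds, y = learn.TTA(ds_type=DatasetType.Test)  # TTA now runs without the 'Half' error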


@sgugger I get the same error when loading a pre-trained model. Is it impossible to load it if it was put in FP16 when being saved?

You can't load a pretrained model in FP16; you should put it back in FP32 before saving with learn = learn.to_fp32().
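A sketch of that save/load round trip (the file name 'my-model' is just a placeholder, not from the original post):

learn = learn.to_fp32()   # back to full precision before saving
learn.save('my-model')    # weights are now stored in FP32

# later, after recreating the Learner:
learn.load('my-model')    # loads fine because the checkpoint is FP32
learn = learn.to_fp16()   # optionally switch back to mixed precision for further training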


Thank you!

A bit late here, but PyTorch doesn't implement many operations for torch.float16 on the CPU. For example, mean() fails in the same way as sum().
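You can reproduce this directly (behavior depends on your PyTorch version; releases from around that time raise the RuntimeError below, while newer ones implement more half-precision reductions on CPU):

import torch

t = torch.randn(4).half()  # a float16 tensor on the CPU
try:
    print(t.mean())
except RuntimeError as e:
    print(e)  # e.g. "mean_cpu" not implemented for 'Half' on older PyTorch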