Just check that the dataloader you use (the validation one, I'm guessing) is still converting its inputs to FP16, as it looks like it's feeding the model inputs in full precision.
You can add the transform that converts the tensors to half precision with:
learn.data.valid_dl.add_tfm(to_half)
Another workaround is to put your model back in full precision with learn.model.float().
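Here is a minimal sketch of both options, assuming fastai v1 (to_fp16, to_half, DeviceDataLoader.add_tfm); the dataset and architecture are just placeholders for illustration:

from fastai.vision import *
from fastai.torch_core import to_half

# Illustrative setup: a small dataset and an FP16 learner.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()

# Option 1: keep the FP16 model and make the validation dataloader
# yield half-precision inputs, so they match the model's weights.
learn.data.valid_dl.add_tfm(to_half)

# Option 2: put the model back in full precision instead,
# so the FP32 inputs from the dataloader no longer mismatch the weights.
# learn.model.float()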