Mixed precision training example using fastai

Has anyone tried mixed precision training with fastai, or come across an example somewhere? I'm not sure how to go about it. I believe calling to_fp16() on my learner will convert it to half precision, not mixed precision. Is that correct?
Notebooks that use mixed precision would be really helpful.
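For reference, here is roughly what I have been trying (a minimal sketch using the fastai v1 vision API; the MNIST sample data and resnet34 are just stand-ins for my actual dataset and model):

```python
from fastai.vision import *

# Stand-in dataset -- substitute your own
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, bs=64)

# to_fp16() is the call in question: does this give half precision or mixed precision?
learn = create_cnn(data, models.resnet34, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)
```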

I tried half precision on a dataset and the training time did not drop much: it only went from about 30 minutes to about 25 minutes. Also, for plotting top losses I have to convert the learner back to 32-bit floating point precision. Any ideas what's going on here? I'll share the link to the notebook soon.
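In case it helps others, this is how I work around the top-losses issue (a sketch continuing from the learner above; to_fp32() and ClassificationInterpretation are fastai v1 calls):

```python
# Switch the learner back to full precision after fp16 training,
# otherwise the interpretation tools fail for me.
learn = learn.to_fp32()
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(10, 10))
```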

For vision I've noticed the time hasn't reduced that much. I trained a language model and that seemed to cut the training time a lot more than it did for vision.

I'm not positive, but I think when you add to_fp16() it is doing mixed precision, because I believe there are times it switches back to 32-bit during training. I've also noticed needing to switch back to 32-bit for certain things after training; I'm not sure why. It sounds like you've already used it, but here is a notebook I have.
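That matches my understanding of the general recipe. This is not fastai's actual internals, just a sketch of what "mixed" usually means (fp16 forward/backward, an fp32 master copy of the weights, and loss scaling), written in plain PyTorch with hypothetical names:

```python
import torch

# Hypothetical setup: model_fp16 = model.half(), and
# master_params_fp32 = [p.detach().clone().float() for p in model_fp16.parameters()],
# with the optimizer built over master_params_fp32.
def mixed_precision_step(model_fp16, master_params_fp32, opt, xb, yb, loss_scale=512.0):
    # Forward and backward run in half precision
    loss = torch.nn.functional.cross_entropy(model_fp16(xb.half()), yb)
    (loss * loss_scale).backward()  # scale the loss so tiny gradients survive fp16

    # Copy fp16 gradients into the fp32 master weights (undoing the scale)
    # and take the optimizer step in full 32-bit precision
    for p16, p32 in zip(model_fp16.parameters(), master_params_fp32):
        p32.grad = p16.grad.float() / loss_scale
    opt.step()
    opt.zero_grad()
    model_fp16.zero_grad()

    # Copy the updated fp32 weights back down to fp16 for the next batch
    for p16, p32 in zip(model_fp16.parameters(), master_params_fp32):
        p16.data.copy_(p32.data)
```

So the training is genuinely mixed: the expensive matrix math happens in fp16, but the weight updates happen in fp32, which is presumably why you see it switch back to 32-bit at points.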

You need a GPU that is capable of accelerated mixed precision training (AFAIK, for example, the new RTX 20xx series with Tensor Cores).
If it is working, you should see that the same training procedure uses less GPU RAM (check with nvidia-smi), and you should be able to increase the batch size significantly.
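One way to check besides watching nvidia-smi from another terminal (a small sketch using PyTorch's CUDA memory counters; learn is whatever fastai learner you are training):

```python
import torch

# Run the same training once in fp32 and once in fp16, and compare peak memory use
torch.cuda.reset_max_memory_allocated()
learn.fit_one_cycle(1)
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```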

I recently tried it with image data and could almost double the batch size.