Using `to_fp16()` as default

Mixed precision reduces GPU memory usage, and on newer graphics cards (those with Tensor Cores) it also speeds up both training and inference.
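For reference, the memory saving is easy to see with a small sketch (plain NumPy, array shape chosen arbitrarily): fp16 values take 2 bytes each versus 4 for fp32, so the same tensor needs half the memory.

```python
import numpy as np

# A hypothetical batch of activations stored in fp32 vs fp16.
x32 = np.ones((1024, 1024), dtype=np.float32)
x16 = x32.astype(np.float16)

print(x32.nbytes)  # 4194304 bytes (4 MiB)
print(x16.nbytes)  # 2097152 bytes (2 MiB)
```

In fastai, enabling this is just `learn = learn.to_fp16()` on an existing `Learner`.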
So why doesn't fastai use mixed precision by default?
Is there any reason not to use mixed precision in all cases?

Thank you.