16-bit precision for cnn_learner?

Does it make sense to have a half-precision mode for a CNN?

The docs here
https://docs.fast.ai/callbacks.fp16.html
mention using learner.to.fp_16 to roughly double training speed.

How can this be applied to cnn_learner, which has no attribute ‘.to.fp_16’?

Hey, I remember using it with a cnn_learner. Link to notebook

It's Learner(data, model, metrics=[accuracy]).to_fp16(), not to.fp16().
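
Something like this, as a minimal sketch assuming fastai v1, with data standing in for an existing DataBunch and model for an existing PyTorch module (both placeholders here):

```python
from fastai.vision import *  # fastai v1: brings in Learner, accuracy, and the to_fp16 patch

# `data` is assumed to be an existing DataBunch and `model` a torch.nn.Module
learn = Learner(data, model, metrics=[accuracy]).to_fp16()
learn.fit_one_cycle(1)  # training now runs in mixed precision
```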

I realize there is a Learner.to_fp16. I am asking about cnn_learner. These docs
https://docs.fast.ai/vision.learner.html

do not mention that attribute. How could we modify cnn_learner to use half precision?

cnn_learner is just a function that returns a Learner object, so you can call to_fp16() on whatever it returns, as in the sketch below.
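
A minimal sketch, assuming fastai v1 and an existing ImageDataBunch named data (a placeholder here):

```python
from fastai.vision import *  # fastai v1: cnn_learner, models, accuracy

# `data` is assumed to be an existing ImageDataBunch, e.g.:
# data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=224)

# cnn_learner returns a Learner, so to_fp16() can be chained directly onto it
learn = cnn_learner(data, models.resnet34, metrics=[accuracy]).to_fp16()
learn.fit_one_cycle(1)  # trains in mixed precision
```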
