Mixed precision training

Yes, overriding the Adam opt_func with a partial of itself that raises the eps worked for me:

from functools import partial

learn.opt_func = partial(learn.opt_func, eps=1e-4)
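
Presumably the larger eps matters because Adam's default eps=1e-8 falls below what half precision can represent, so it effectively rounds to zero (that is my explanation, not something stated above); a quick check in PyTorch:

import torch

# 1e-8 is below fp16's smallest subnormal (~6e-8), so it rounds to zero;
# 1e-4 sits comfortably in fp16's normal range.
print(torch.tensor(1e-8, dtype=torch.float16).item())  # 0.0
print(torch.tensor(1e-4, dtype=torch.float16).item())  # ~1e-4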

Is anyone else having problems getting the model to converge?
I always find that with fp16 the model does not converge and the metrics get worse…
I just use the piece of code below:

learn1 = Learner(data,
                 md_ef,
                 loss_func=loss_func,
                 metrics=[qk, r2_score, exp_rmspe],
                 path='.').to_fp16()
learn1.model.half()

Am I making a mistake somewhere?
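
For comparison, here is a minimal sketch of an fp16 setup that relies on to_fp16() alone (data, md_ef, loss_func, qk, r2_score and exp_rmspe are the names from the snippet above; the star import and the fit call are assumptions, and dropping the extra model.half() call is only a guess, not a confirmed fix):

from fastai.basics import *  # fastai v1-style import; the post does not show its imports

# to_fp16() installs the MixedPrecision callback, which casts the model to half
# precision on its own while keeping fp32 master weights and loss scaling.
learn1 = Learner(data,
                 md_ef,
                 loss_func=loss_func,
                 metrics=[qk, r2_score, exp_rmspe],
                 path='.').to_fp16()

# No explicit learn1.model.half() here: the callback already handles the cast.
learn1.fit_one_cycle(1)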