Mixed precision training

#3

Do you just restart your kernel and start over when you get this, or do you do something else? I wonder if this is due to some parameter setting I am missing. I am keeping all the defaults for fp16 as of now.

0 Likes

(RobG) #4

Not sure what the issue is. I have reported it before, but it would be good to hear about others' experience. Do you have any transforms in your training? It could be something in a transform going awry.

0 Likes

#5

Yeah, I do. What kind of benefit are you observing with to_fp16()? Is it close to the ~25% that I reported?

0 Likes

(RobG) #6

Yes, about 25% on average.

2 Likes

#7

Looks like we are on the right path. It will be good to know what others are experiencing with mixed precision training.

0 Likes

(Karl) #8

I don’t have much experience with fp16 training, but one known issue is that gradients and other small values can become zero due to the lower numerical precision. One way around this is to scale up the loss value (i.e., by a factor of 1000), which by the chain rule also scales the gradients. This might be a solution to the problem you’re seeing.
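The underflow-and-rescale idea can be shown numerically with NumPy's float16 type. This is just an illustrative sketch; the factor of 1000 and the gradient value are made up for the demo, not taken from any particular model:

```python
import numpy as np

# A small gradient that is perfectly representable in float32...
grad = np.float32(1e-8)

# ...underflows to zero when cast to float16 (the smallest
# positive float16 subnormal is about 6e-8).
assert np.float16(grad) == 0.0

# Scaling the loss by 1000 scales every gradient by 1000
# (chain rule), so the same gradient now survives the cast.
scaled = np.float16(grad * 1000)
assert scaled != 0.0

# Before the optimizer step, the gradients are divided back
# down in float32 to recover (approximately) the true value.
unscaled = np.float32(scaled) / 1000
```

The unscaling step is why this trick changes nothing mathematically: the optimizer still sees gradients of the original magnitude, just ones that survived the fp16 round-trip.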

4 Likes

#9

Thanks, Karl. I might have to try that.

0 Likes

(Andrea de Luca) #10

The point of mixed precision training is to address exactly such shortcomings of pure FP16. The library should take care of this: the parts that are sensitive to truncation and/or rounding are handled in FP32.
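One of those FP32-handled parts is the master copy of the weights. A tiny NumPy sketch (values chosen purely to show the effect) of why the weight update cannot be accumulated in fp16:

```python
import numpy as np

w, update = 1.0, 1e-4  # a weight and a small optimizer update

# In pure fp16 the update is smaller than half the spacing between
# adjacent float16 values near 1.0 (~1e-3), so it rounds away and
# the weight never moves, no matter how many steps you take.
w16 = np.float16(w) + np.float16(update)
assert w16 == np.float16(1.0)

# With an fp32 master copy of the weights, the same update is
# accumulated correctly; the fp16 copy is only used for the
# forward/backward pass.
w32 = np.float32(w) + np.float32(update)
assert w32 > np.float32(1.0)
```

This is why a proper mixed precision setup keeps weights (and typically batchnorm statistics) in FP32 while running the bulk of the math in FP16.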

2 Likes

(Andrea de Luca) #11

I’d like to see the memory footprint for the same training cycle with FP32 vs. mixed precision. Thanks.

0 Likes

(Michael) #12

I was also playing with mixed precision training and was now observing some NaN losses as well.

There also seems to be a callback to stop the training when the loss becomes NaN: https://github.com/fastai/fastai_docs/blob/master/dev_nb/new_callbacks.ipynb (However, I haven’t tested it so far.)

Here is the super explanation from Sylvain: Mixed precision training

PS: Maybe also of interest for you: learn.TTA(is_test=True) not supporting half precision models?

1 Like

(Keyur Paralkar) #13

I am experiencing the same issue with mixed precision training. The validation loss reaches NaN at about 25-30% of the training process. I will try increasing the loss_scale factor to 1000 and let this thread know the result.

1 Like

(Keyur Paralkar) #14

Changing the loss_scale factor in to_fp16() to 1000.0 actually makes it worse: the loss now goes to NaN even earlier, at around 3% of training. Is this problem specific to a particular kind of data?
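One possible explanation for NaN appearing *earlier* after raising the scale (this is an assumption, not something confirmed in the thread): a fixed loss_scale that is too large for your gradients causes the opposite failure, overflow to infinity, which then turns into NaN in subsequent arithmetic. float16 tops out at 65504:

```python
import numpy as np

# A gradient of moderate size overflows float16 once scaled by 1000,
# because the float16 maximum is 65504.
big_grad = np.float32(100.0)
assert np.isinf(np.float16(big_grad * 1000))

# inf - inf (or 0 * inf) then produces NaN in later arithmetic,
# which is how an over-aggressive loss scale shows up as a NaN loss.
assert np.isnan(np.float16(np.inf) - np.float16(np.inf))
```

So the loss scale has to be tuned from both sides: large enough to keep small gradients from underflowing, small enough that large ones don't overflow. If your fastai version supports dynamic loss scaling in to_fp16(), that adjusts the scale automatically and may be worth trying instead of a fixed value.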

1 Like

(Keyur Paralkar) #15

Can this be related to the vanishing gradient problem?

0 Likes

#16

So has anyone actually managed to make to_fp16() work, in the sense that it shortened training?

0 Likes

(Jeff Hale) #17

I have successfully used mixed precision to speed up training on Colab for an image classification task with a fastai CNN and a batch size of 16. With a batch size of 256, I didn’t see any speed improvement. Not sure why.

For the same task on Kaggle, using mixed precision made training slower than the default fp32 training! That was very surprising. The CUDA and cuDNN versions were older on Kaggle. I’m not sure if that is the cause.

Colab is using CUDA 10 with a K80 and Kaggle is using CUDA 9 with a P100.

Overall, Kaggle was faster than Colab.

Anyone have any ideas why training is faster sometimes but not others?

0 Likes

(Andrea de Luca) #18

AFAIK, you need CUDA 10 and drivers >= 410 in order to use fp16 effectively.

May I ask how much of a speedup you registered with fp16 on Colab with CUDA 10?

No idea about the slowdowns with higher batch sizes. I can say I noticed slower convergence when training in fp16. For me, the best thing about fp16 is that it effectively doubles your VRAM.

1 Like

(Jeff Hale) #19

Thank you @balnazzar. That makes sense re CUDA 10. I thought I saw in Nvidia’s specs that fp16 should work with lower versions, but my findings agree with what you’ve said.

Here’s the article where I discuss the speeds: https://towardsdatascience.com/kaggle-vs-colab-faceoff-which-free-gpu-provider-is-tops-d4f0cd625029 — training time improved from 19:54 to 16:37 on Colab with fp16 vs fp32.

Agree regarding VRAM.

2 Likes

(Kerem Turgutlu) #20

I am facing a similar issue with mixed precision training, where my loss goes to NaN after 5-6 epochs. What is the best way to debug such an issue?

Possible fix: setting Adam eps=1e-4 and using clip in to_fp16() seems to work.
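The eps fix makes sense numerically. Adam divides by sqrt(v) + eps, and the default eps=1e-8 is below float16's smallest subnormal, so it underflows to zero; when the second-moment estimate v is also tiny, the step blows up. A NumPy sketch (the gradient value is illustrative):

```python
import numpy as np

v = np.float16(0.0)     # a second-moment estimate that has underflowed
grad = np.float16(1e-3)

# The default Adam eps underflows to exactly 0 in float16...
eps_small = np.float16(1e-8)
assert eps_small == 0.0

# ...so the denominator is zero and the update is infinite.
step = grad / (np.sqrt(v) + eps_small)
assert np.isinf(step)

# eps=1e-4 is representable in float16 and keeps the step finite.
eps_big = np.float16(1e-4)
assert np.isfinite(grad / (np.sqrt(v) + eps_big))
```

This only applies if the optimizer state is actually held in fp16; with fp32 optimizer state the default eps is fine, which may explain why only some setups hit this.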

0 Likes

(Michael) #21

Yes, overriding the Adam optimizer function with a partial that sets the eps worked for me:

from functools import partial
learn.opt_func = partial(learn.opt_func, eps=1e-4)
0 Likes

(jaideep v) #22

Any problem seeing the model converge?
I always find that with fp16 the model does not converge and the metrics get worse…
I just use the piece of code below:

learn1 = Learner(data,
                md_ef,
                loss_func=loss_func,
                metrics=[qk, r2_score, exp_rmspe],
                path='.').to_fp16()
learn1.model.half()  # note: to_fp16() already handles the fp16 conversion, so this extra call is redundant

Am I making a mistake somewhere?

0 Likes