Automatic mixed precision native in PyTorch nightly

This should be interesting once it comes out in a stable version. Not sure whether it would be a good idea to refactor by deletion, replacing what fastai does for fp16 with the native PyTorch version: https://pytorch.org/docs/stable/notes/amp_examples.html
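
For reference, the core pattern on that page looks roughly like this (a sketch of the `torch.cuda.amp` recipe; `model`, `optimizer`, `loss_fn`, and `data_loader` are placeholders assumed to exist):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Sketch of the native AMP recipe from the linked examples page;
# model, optimizer, loss_fn, and data_loader are assumed to exist.
scaler = GradScaler()

for inputs, targets in data_loader:
    optimizer.zero_grad()
    # The forward pass runs in mixed precision inside autocast.
    with autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
    # Scale the loss to avoid fp16 gradient underflow, then backprop.
    scaler.scale(loss).backward()
    # step() unscales gradients first and skips the update on inf/NaN.
    scaler.step(optimizer)
    # Adjust the scale factor for the next iteration.
    scaler.update()
```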

We will see how we implement it. What they did would require a new training loop, so we will probably do what we did with Apex: use the lower-level helper functions they presumably expose behind their high-level abstraction.
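
For context, the helper-function route with the native API would presumably mean calling the individual `GradScaler` steps rather than the all-in-one recipe. The gradient-clipping variant from the linked AMP notes shows those pieces separately (a sketch continuing the loop above, not fastai code):

```python
# Variant of the loop body above with a manual unscale step, as in the
# gradient-clipping example from the AMP notes; not fastai's actual code.
scaler.scale(loss).backward()
# Unscale in place so gradients can be inspected/clipped in fp32.
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
# step() detects that gradients were already unscaled and won't redo it.
scaler.step(optimizer)
scaler.update()
```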

I am going to experiment with it this weekend, but I had a crazy idea to make this work natively with v2. Stay tuned!

Go for it, Sylvain. That’s the fastai spirit :wink:

Good luck!

So I had some rework to do on our internal optimizer to make it work with native mixed precision (and look more like a PyTorch optimizer, so that’s good), but I have something that works.
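
(For readers wondering why the optimizer had to change: `GradScaler` duck-types the object it is given — `unscale_` walks `optimizer.param_groups`, and `scaler.step()` ends up calling `optimizer.step()` — so whatever is passed in has to expose the standard PyTorch optimizer surface. A minimal illustration of that surface, not fastai’s actual `Optimizer`:)

```python
import torch

class MinimalSGD:
    "Illustrative-only optimizer exposing the surface GradScaler relies on."
    def __init__(self, params, lr=1e-3):
        # GradScaler.unscale_ iterates over param_groups[i]['params'].
        self.param_groups = [{'params': list(params), 'lr': lr}]

    def step(self):
        # Invoked by scaler.step() once gradients are confirmed finite.
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    p.data.add_(p.grad, alpha=-group['lr'])

    def zero_grad(self):
        for group in self.param_groups:
            for p in group['params']:
                p.grad = None
```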

On the nightlies, you can check it by using to_native_fp16/to_native_fp32 instead of to_fp16/to_fp32. I haven’t tested it with any other callback or checked whether it actually trains as well yet; I just focused on making it pass basic tests. Let us know what you find when using it.
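
If anyone wants to try it, usage should look something like this (an untested sketch against fastai2 master; the dataset and learner setup are just an example):

```python
from fastai2.vision.all import *

# Standard pets example, switched to native mixed precision.
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/'images'),
    pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224), bs=64)
learn = cnn_learner(dls, resnet34, metrics=error_rate).to_native_fp16()
learn.fit_one_cycle(1)
# Convert back to full precision afterwards if needed.
learn = learn.to_native_fp32()
```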

Merging discussion from https://github.com/fastai/fastai2/issues/241 into this thread.

Everything before mixed_precision_one_batch is the current implementation of mixed precision, which we won’t remove until there has been a stable PyTorch release and we have confirmed native mixed precision works. So the implementation of native mixed precision is just the altered one_batch function and the callback; you can ignore the rest of the callback.fp16 module.
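
To make that split concrete, here is the rough shape an AMP-aware batch step takes — a generic standalone sketch with hypothetical names, not the actual fastai2 one_batch, which threads these steps through callback events instead:

```python
from torch.cuda.amp import autocast

def one_batch_native_fp16(model, loss_fn, opt, scaler, xb, yb):
    # Hypothetical standalone equivalent of an AMP-aware batch step.
    with autocast():                 # forward pass in mixed precision
        pred = model(xb)
        loss = loss_fn(pred, yb)
    scaler.scale(loss).backward()    # backprop on the scaled loss
    scaler.step(opt)                 # unscale, then step unless inf/NaN
    scaler.update()                  # adapt the scale factor
    opt.zero_grad()
    return loss.detach()
```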

I see. My comments were meant to make sure you don’t do more work than necessary to integrate the native API.