NVIDIA Apex Amp comparison

I recently saw this article on NVIDIA's automatic mixed precision (Amp) in PyTorch, and as a novice in these lower-level hardware details I'm curious about a couple of things.

  • How similar are Amp and fastai's approaches to enabling mixed-precision training? For example, do they differ in which operations are and aren't executed in FP16?
  • Does Amp provide additional speed or accuracy improvements, either in general or in specific scenarios, or is it mainly focused on ease of use?

Very interested in hearing any other points of comparison or opinions on this topic! :upside_down_face:


They’re very similar. We’re working with Nvidia to try to move more of our backend code into Apex. I think the fastai API is more flexible and usable, which is why we’re trying to get the best of both.
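Neither library is spelled out here, but one mechanism both Apex (at the opt levels that keep master weights) and fastai's mixed-precision support share is maintaining an FP32 "master" copy of the weights and applying optimizer updates there, casting to FP16 only for the forward/backward passes. A minimal NumPy sketch, with a made-up update value, of why that matters:

```python
import numpy as np

lr_times_grad = np.float32(1e-4)   # a small weight update (hypothetical value)

# Updating an FP16 weight directly: the update is below half the
# FP16 spacing near 1.0 (~9.8e-4), so rounding discards it entirely.
w16 = np.float16(1.0)
w16 = np.float16(w16 + lr_times_grad)
print(w16)            # 1.0 -- the update was lost

# Keeping an FP32 master weight and casting down only for the
# forward pass preserves accumulated small updates.
w_master = np.float32(1.0)
w_master = w_master + lr_times_grad
print(w_master)       # 1.0001 -- the update survives
w_for_forward = np.float16(w_master)
```

This is why purely-FP16 training tends to stall: many per-step updates are smaller than the representable gap between adjacent FP16 weights.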


Awesome, thanks for the response!

Any updates on this? I'm guessing a lot has happened in the meantime.

I'm considering a change of GPU, and suddenly FP16 training becomes very interesting, especially for the applications I'm aiming at.

Mixed precision in fastai does more or less the same as Apex: FP16 forward and backward passes, with FP32 master weights and loss scaling to keep training stable.
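One concrete piece of the shared recipe is loss scaling: gradients computed in FP16 can underflow to zero, so the loss is multiplied by a scale factor before the backward pass and the resulting gradients are divided by it afterwards. A toy NumPy illustration (the gradient value and the scale factor are made up for the example):

```python
import numpy as np

grad = np.float32(1e-8)      # a tiny gradient (hypothetical value)

# Cast straight to FP16: below the smallest FP16 subnormal (~6e-8),
# so it underflows to zero and the gradient signal is lost.
naive = np.float16(grad)
print(naive)                 # 0.0

# Loss scaling: scale up before the FP16 computation, unscale after.
scale = np.float32(1024.0)
scaled = np.float16(grad * scale)    # now representable in FP16
recovered = np.float32(scaled) / scale
print(recovered)             # ~1e-8, close to the original gradient
```

In practice both libraries pick the scale dynamically (backing off when gradients overflow), but the underlying idea is this shift into FP16's representable range.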
