Inference on fastai v2 much faster than fastai v1 for the same model?

EDIT: I made a mistake measuring the time taken; the actual inference speeds are about the same.

I’ve just tested inference with the unet_learner on fastai v2 and it seems way faster than the unet_learner on fastai v1. For a single 512x512 image with a resnet34 backbone, v1 takes 0.2s and v2 takes 0.004s!
Forgive me as I haven’t gone through the source code, but I didn’t see any specific mention of an inference speed-up in the changelog. Has there been a big change to how inference happens in v2? Or is there another reason it’s faster?
Just noticed the models are slightly different: v1 uses weight norm. I’ll test this, but I doubt that alone would cause a 50x speed-up.

Did you make sure both were on the same device?

Pipeline-wise, transforms can now be done on the GPU (batch transforms specifically).

Can we get a bit more than this? Like maybe a gist of your results?

Apologies, I made a very silly mistake and was measuring two different things. I’ll make a note not to post very late at night :sweat:
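For anyone who lands on this thread later: a common way this kind of mistake happens is that CUDA kernels launch asynchronously, so if you stop the clock before synchronizing, you measure only the kernel-launch time rather than the actual forward pass. Below is a minimal timing-helper sketch; the name `time_inference` and the `sync` parameter are just illustrative, and on a GPU you would pass `torch.cuda.synchronize` as `sync`:

```python
import time

def time_inference(fn, *args, n_runs=10, warmup=2, sync=None):
    """Return the average wall-clock time per call of fn(*args).

    Warm-up calls absorb one-off costs (e.g. cudnn autotuning).
    For CUDA models, pass sync=torch.cuda.synchronize so the clock
    only stops once all queued GPU work has actually finished.
    """
    for _ in range(warmup):
        fn(*args)
    if sync is not None:
        sync()  # drain any pending GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(n_runs):
        fn(*args)
    if sync is not None:
        sync()  # without this, async kernels may still be running here
    return (time.perf_counter() - start) / n_runs

# Usage with a stand-in workload (each call sleeps ~10 ms,
# so the average comes out to at least 0.01 s per run):
avg = time_inference(lambda: time.sleep(0.01), n_runs=5, warmup=1)
print(f"{avg:.4f} s per run")
```

With a real learner you would time something like `lambda: learn.model(x)` for both v1 and v2 with the same batch on the same device, synchronizing before and after.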