EDIT: I made a mistake when measuring the time taken. The actual inference speeds are about the same.
I’ve just tested inference with `unet_learner` on fastai v2, and it seems way faster than `unet_learner` on fastai v1. For a single 512x512 image with a resnet34 backbone, v1 takes ~0.2s and v2 takes ~0.004s!
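For anyone comparing similar numbers, here’s a minimal sketch of how one might time GPU inference carefully. CUDA kernels launch asynchronously, so the clock has to wait for them to actually finish (my guess is that missing a sync like this is the kind of mistake noted in the EDIT above). `learn` and `x` are placeholders for the learner and a prepared input batch:

```python
import torch

# `learn` is assumed to be a fastai unet_learner and `x` a (1, 3, 512, 512)
# input tensor already on the GPU -- both are placeholders here.
model = learn.model.eval()

with torch.no_grad():
    # Warm-up pass so one-off CUDA initialisation isn't counted.
    model(x)
    torch.cuda.synchronize()  # kernels run asynchronously; sync before starting the clock

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    model(x)
    end.record()
    torch.cuda.synchronize()  # wait for the forward pass to actually finish
    print(f"inference: {start.elapsed_time(end):.1f} ms")
```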
Forgive me, as I haven’t gone through the source code, but I didn’t see any mention of an inference speed-up in the changelog. Has there been a big change to how inference happens in v2, or is there perhaps another reason why it’s faster?
Just noticed the models are slightly different: v1 uses weight norm. I’ll test this, but I don’t think that alone would cause a 50x speed-up.
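As a quick sanity check for that difference: `torch.nn.utils.weight_norm` replaces a module’s `weight` parameter with `weight_g`/`weight_v` pairs, so scanning parameter names is one way to spot it. A rough sketch, where `learn_v1` is a placeholder for the v1 learner:

```python
import torch.nn as nn

def uses_weight_norm(model: nn.Module) -> bool:
    # weight_norm registers `weight_g` (magnitude) and `weight_v` (direction)
    # parameters in place of the original `weight`.
    return any(name.endswith("weight_g") for name, _ in model.named_parameters())

print(uses_weight_norm(learn_v1.model))
```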