Inference for super-resolution

Hi,

I trained the super-resolution model (with feature loss, on the pets dataset) from lesson 7, and I am now trying to apply it in inference mode. I saw in the fastai documentation that there is now a learn.export() method that is supposed to export the model including the DataBunch structure (really nice feature, by the way!). But this means that the output shape of the network is fixed, right?

In the case of super-resolution (and maybe also semantic segmentation?), I would generally like the output image size to scale with the input size (e.g. 1.5x or 2x).

Is there a way to do it?

Thanks,
Sebastien

You need to create a new dataloader for each input if you want the outputs to be of different sizes. This also has implications for compute efficiency, because you can't batch together images with different target sizes.

I have a basic working example here:

For each input image, I create a new dataloader based on the dimensions of that image, targeting a 2x increase in height/width up to 1000x1000.
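In case it helps, here is a minimal sketch of that idea for fastai v1. The helper name `predict_superres`, the 2x scale factor, and the 1000px cap are all illustrative; it assumes a learner set up like the lesson 7 feature-loss notebook and an input path given as a pathlib `Path`:

```python
from fastai.vision import *

def predict_superres(learn, img_path, max_size=1000):
    # Illustrative helper, not part of fastai.
    # fastai's Image.shape is (channels, height, width)
    img = open_image(img_path)
    _, h, w = img.shape
    # Target a 2x upscale, capped at max_size in each dimension
    size = (min(h * 2, max_size), min(w * 2, max_size))
    # Rebuild a bs=1 databunch so the transforms resize to the new target
    data = (ImageImageList.from_folder(img_path.parent)
            .split_none()
            .label_from_func(lambda x: x)
            .transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=1)
            .normalize(imagenet_stats, do_y=True))
    # Swap the learner's data so predict() applies the new sizing
    learn.data = data
    pred, _, _ = learn.predict(img)
    return pred
```

You would call it with something like `learn = load_learner(path)` followed by `pred = predict_superres(learn, Path('input.jpg'))`.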

@KarlH I am trying to build the same kind of app using uvicorn and starlette, but I am planning to perform GPU inference with the model, and I am facing a lot of challenges with that.
May I know why you opted for CPU inference?