Creating a production build of unet_learner for single image inference

If you want to deploy a fastai model with minimal dependencies, I would suggest moving everything completely to PyTorch.

Also, note that learn.save stores the optimizer state alongside the weights, so either pass with_opt=False or save with plain PyTorch, e.g. torch.save(learn.model.state_dict(), 'model.pth') (torch.save needs both the object and a file path).
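A minimal sketch of the plain-PyTorch route, using a tiny stand-in module in place of learn.model (the fastai calls are shown as comments since they need a trained Learner):

```python
import torch
import torch.nn as nn

# Stand-in for learn.model. With fastai you would do one of:
#   learn.save('unet', with_opt=False)                  # strips optimizer state
#   torch.save(learn.model.state_dict(), 'unet.pth')    # plain PyTorch
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# Saving the state_dict (weights only) is more portable than pickling the
# whole model object, which would require the fastai class definitions
# to be importable at load time.
torch.save(model.state_dict(), 'unet.pth')

# Later, in the deployment environment, rebuild the architecture and load:
restored = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
restored.load_state_dict(torch.load('unet.pth', map_location='cpu'))
restored.eval()  # disable dropout/batchnorm training behavior for inference
```

Saving the state_dict rather than the whole model is what makes the fastai-free deployment possible in the first place.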

The main issue is that you would have to reverse engineer the model definition that unet_learner builds for you, which likely means replicating some of fastai's DynamicUnet code in your own codebase.
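One way to sanity-check a hand-written replica is to compare its state_dict keys against the saved weights before loading. Here `original` stands in for learn.model and `replica` for your rewritten architecture (both are hypothetical placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: 'original' plays the role of learn.model from
# training, 'replica' the plain-PyTorch architecture you rewrote by hand.
original = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
replica = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))

saved = original.state_dict()
# If the replica matches the original layer for layer, the key sets agree
# and strict loading succeeds; any mismatch shows up here immediately.
missing = set(saved) - set(replica.state_dict())
unexpected = set(replica.state_dict()) - set(saved)
assert not missing and not unexpected, (missing, unexpected)

replica.load_state_dict(saved)  # raises if names or tensor shapes diverge
```

load_state_dict with its default strict=True is the cheapest regression test that the replica really matches DynamicUnet's structure.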

With the plain PyTorch model in hand, the last question is how the image is loaded and passed to it. Check the fastai transforms on your DataLoaders and reimplement steps such as normalization, and whatever else converts the image into the tensor the model expects.
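A sketch of that preprocessing, assuming the common case of a pretrained encoder normalized with ImageNet statistics (verify against your own DataLoaders' transforms):

```python
import torch

# ImageNet statistics, which fastai's Normalize uses by default with
# pretrained encoders; confirm these match your training transforms.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img_hwc_uint8: torch.Tensor) -> torch.Tensor:
    """HWC uint8 image -> normalized NCHW float batch of size 1."""
    x = img_hwc_uint8.permute(2, 0, 1).float() / 255.0  # to CHW, scale to [0, 1]
    x = (x - MEAN) / STD                                 # per-channel normalization
    return x.unsqueeze(0)                                # add the batch dimension

# Stand-in for a decoded image (e.g. from PIL): a fake 64x64 RGB frame.
img = torch.randint(0, 256, (64, 64, 3), dtype=torch.uint8)
batch = preprocess(img)  # shape (1, 3, 64, 64), ready for model(batch)
```

If your training pipeline also resized or cropped the image, those steps need to be replicated here too, in the same order.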

The last thing to note is to install only the CPU version of PyTorch in your deployment environment. That also saves a lot of time and space during deployment, since the CUDA builds are much larger.
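For reference, PyTorch publishes CPU-only wheels on a dedicated index (this is the command the pytorch.org install selector generates for the CPU option):

```shell
# CPU-only build: much smaller download and install footprint than CUDA builds
pip install torch --index-url https://download.pytorch.org/whl/cpu
```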

Of course, there is a lot of hassle involved here, but if you really need the most minimal dependencies this may be the easiest approach.
