Lesson 3 - Official Topic

You can also export your model to ONNX and deploy it anywhere the ONNX Runtime is available to serve predictions. Please see the related post below and its pointers to the code, which is quite generic (though I wrote it for an Azure Functions serverless deployment).

https://forums.fast.ai/t/exporting-a-model-for-local-inference-mode/66975/10?u=zenlytix
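Here is a minimal sketch of the export/inference flow. It uses a stand-in torchvision ResNet so it runs on its own; in practice you would export `learn.model` from your trained Learner, and the input shape, opset, and tensor names here are assumptions you should match to your own model.

```python
import numpy as np
import torch
import torchvision

# Stand-in model so the sketch is self-contained; in practice export
# `learn.model` (your trained fastai Learner's PyTorch model) instead.
model = torchvision.models.resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

# Export to ONNX. The "input"/"output" names and dynamic batch axis are
# choices made here, not requirements.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Serve predictions with ONNX Runtime -- no PyTorch/fastai needed at inference time.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
preds = session.run(None, {"input": x})[0]
print(preds.shape)
```

At inference time only `onnxruntime` and `numpy` are needed, which keeps the deployment package small (that is what makes it a good fit for serverless targets like Azure Functions).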