Hi,
We are the BentoML team. BentoML is an open-source tool for high-performance model serving. We wrote a guide on deploying a fastai model to a Kubernetes cluster as a production API model server. We would love to get your feedback or hear any questions you might have on the Kubernetes deployment guide.
- You can find the PR here: https://github.com/fastai/course-v3/pull/508
- You can read the guide here: https://github.com/fastai/course-v3/blob/51d689a5c707ceace80e5176f3a6ae26402cd9d8/docs/deployment_kubernetes_docker.md
Let us know here if you have any questions or suggestions! We would also love to hear how you are deploying fastai models to production and what your pain points are.
Cheers,
Bo