Deploying a fastai model as a REST API with BentoML

Hi guys,

I am really excited to share our open source project BentoML with you, now with fastai support!

BentoML is an open source framework for creating, shipping, and running ML services.

Just as fastai makes deep learning easier to use, BentoML makes ML service deployment easier for data scientists: from a model in a Jupyter notebook to an ML service in production in 5 minutes.

You can try out the fastai example notebook on Google Colab.
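Roughly, the workflow looks like this (a quick sketch, not the exact notebook code; the service name, artifact name, and handler choice are just placeholders, and handler names may differ slightly between BentoML versions, so treat the notebook as the source of truth):

```python
from bentoml import BentoService, api, env, artifacts
from bentoml.artifact import FastaiModelArtifact
from bentoml.handlers import FastaiImageHandler


# Declare the service: which artifact it packs and what its API accepts
@env(pip_dependencies=['fastai'])
@artifacts([FastaiModelArtifact('learner')])
class PetClassification(BentoService):

    @api(FastaiImageHandler)
    def predict(self, image):
        # self.artifacts.learner is the fastai Learner packed below
        result = self.artifacts.learner.predict(image)
        return str(result)


# In the notebook: pack the trained Learner and save a versioned bundle
svc = PetClassification()
svc.pack('learner', learn)   # `learn` is your trained fastai Learner
saved_path = svc.save()
```

The saved bundle can then be served locally from the CLI, e.g. `bentoml serve PetClassification:latest`, which exposes `predict` as a REST endpoint.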

Let me know what you guys think. I'd love to get your feedback!

Cheers

Bo

I used it to bring a model to production and it worked well!

That's awesome that it works well for you, I am very happy it helped. Do you mind sharing where you deployed it? Is it your own server setup (like EC2 or Kubernetes) or a vendor service like SageMaker?

We are about to do a major release soon that should improve service performance by 10x-30x and add a nice UI for management.

Feel free to reach out if you have any questions. You can ping me here or find me in the BentoML Slack group; we are pretty active there.

Any plans to build in a fastai2 handler? Great work by the way.

@danjjohns
Thank you for the encouragement!

Fastai 2 has been supported since BentoML 0.7.6 (https://github.com/bentoml/BentoML/blob/master/bentoml/artifact/fastai2_model_artifact.py). I will put up an example notebook for it in our gallery (GitHub.com/bentoml/gallery) soon.
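Until that notebook is up, here is a rough sketch of what the fastai2 path could look like. Off the top of my head, so the artifact class name (which I am inferring from the linked file), the service name, the handler, and the dependency list are all placeholders; please double-check against the linked source:

```python
from bentoml import BentoService, api, env, artifacts
from bentoml.artifact import Fastai2ModelArtifact
from bentoml.handlers import JsonHandler


# Same BentoService pattern as with fastai v1, only the artifact class changes
@env(pip_dependencies=['fastai2'])
@artifacts([Fastai2ModelArtifact('learner')])
class MyFastai2Service(BentoService):

    @api(JsonHandler)
    def predict(self, parsed_json):
        # self.artifacts.learner is the packed fastai2 Learner;
        # Learner.predict expects a single item in the form used for training
        return str(self.artifacts.learner.predict(parsed_json))
```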

Okay nice, looking forward to the example notebook. I might deploy a new fastai2 model with Bento again.