Hosting your model on AWS

I am quite sure this question is outside the course's purview, but I think it is a natural next step once our model is hitting 99%+ accuracy :slight_smile: . Say we intend to put the classifier on an AWS instance and ship dogvscats-as-a-service (DCaaS). I'm curious to know the most popular and efficient way to host the model on AWS and expose it as an endpoint. Also, hosting it on a p3 instance with the fastai AMI is probably overkill, so I'm looking to discuss how to load the model efficiently and handle REST-API-style requests.


Just putting it behind a Flask endpoint should be fine - I don't think there's any particular complexity involved. It's probably best to use CPU mode for serving, rather than CUDA.
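
A rough sketch of what that might look like, assuming you saved the whole model with `torch.save(model, 'model.pth')` (the file path, label order, input size, and normalization stats below are placeholders; match them to your own training setup):

```python
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)

# Load once at startup, on CPU (map_location avoids needing CUDA on the server).
model = torch.load("model.pth", map_location="cpu")  # assumed export path
model.eval()

classes = ["cat", "dog"]  # assumed label order

# Standard ImageNet-style preprocessing; match whatever you trained with.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    x = preprocess(img).unsqueeze(0)      # add batch dimension
    with torch.no_grad():                 # inference only, no gradients needed
        probs = torch.softmax(model(x), dim=1)[0]
    idx = int(probs.argmax())
    return jsonify({"label": classes[idx], "confidence": float(probs[idx])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Then you can hit it with something like `curl -X POST -F file=@cat.jpg http://your-host:5000/predict`. A small CPU instance (e.g. a t2/t3 class) is plenty for serving single requests like this; the GPU only really pays off during training.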
