Poll: How do you deploy your models in production?


0 voters

If you have a minute, add a comment below for why you picked your platform of choice. Thanks!

Google Kubernetes Engine

1 Like

I’d like to deploy my model as part of an executable for Linux and Windows. I know this is old-fashioned, but it needs to work in hospitals…

I’ve run into a small problem, though. It seems like fastai has a hard dependency on tkinter, so I’m having trouble deploying my models in a Python environment without it. I get “No module named tkinter” when trying to import fastai.vision. I even get this error if I do “import fastai.basic_train” and then call load_learner(path).
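One thing that may help (a sketch, assuming the tkinter import is triggered by matplotlib, which fastai pulls in): selecting matplotlib’s non-interactive Agg backend before anything imports it.

```python
import os

# Hedged workaround: fastai imports matplotlib, and matplotlib's default
# interactive backend needs tkinter. Forcing the non-interactive "Agg"
# backend may avoid the tkinter import entirely. This must run before
# the first `import fastai` (or anything else that imports matplotlib).
os.environ["MPLBACKEND"] = "Agg"

# import fastai.vision  # hedged: with Agg selected, tkinter may not be needed
```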

Is it possible to get around this? I tried loading the weights with plain torch (torch.load(src), then model.load_state_dict(state_dict, strict=False)) and running it as a pure torch model. But this way my input data doesn’t get transformed, so my model doesn’t perform as it should. I’ve also tried to replicate what goes on in basic_train.load_learner, but I’ve had no luck so far.
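For reference, a rough sketch of what the missing preprocessing might look like in pure torch. Everything here is an assumption, not taken from the original training code: a 224x224 input size and ImageNet normalisation stats (fastai’s imagenet_stats); the real transforms depend on how the original DataBunch was built.

```python
import torch

# Hypothetical sketch: re-create fastai-style eval-time preprocessing for a
# pure-torch model. Assumes (not confirmed by the original training code)
# 224x224 inputs and ImageNet normalisation stats.
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img_uint8: torch.Tensor) -> torch.Tensor:
    """img_uint8: a (3, H, W) uint8 tensor, already resized to 224x224."""
    x = img_uint8.float() / 255.0            # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD   # normalise per channel
    return x.unsqueeze(0)                    # add batch dim -> (1, 3, 224, 224)

# usage (model in eval mode): pred = model(preprocess(img)).argmax(dim=1)
```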

Can you please point me in the right direction?

Kind regards,

Hosting on my own dedicated server.

Deploy a fastai model with GKE on GCP
GKE = Google Kubernetes Engine
GCP = Google Cloud Platform

Starlette application (the web framework used by the fastai-v3 example)
web server code -> https://github.com/render-examples/fastai-v3
save the model to Google Drive -> https://course.fast.ai/deployment_render.html
move server.py (app.py) into the same directory as the Dockerfile
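As a stand-in for that server, a minimal stdlib sketch of a service listening on port 8080, which is the --target-port used later when exposing the app. The real server.py in fastai-v3 is a Starlette app, so this only illustrates the shape of the endpoint.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hedged stand-in: the real fastai-v3 server.py is a Starlette app; this
# stdlib sketch only shows a service listening on port 8080, the
# --target-port used when exposing the application on GKE.
class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"model server is up")

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def serve(port: int = 8080) -> HTTPServer:
    server = HTTPServer(("0.0.0.0", port), PingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```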

Dockerfile -> image is about 1.8 GB
I used the Dockerfile from https://github.com/render-examples/fastai-v3
but changed app/server.py to server.py because I moved the file into the Dockerfile’s directory

build the Docker image with Google Cloud Build
Google deployment tools -> https://www.jhanley.com/google-cloud-run-getting-started-with-python-and-flask/
John Hanley’s page works, except I had to use GKE instead of Cloud Run to deploy the model (because of memory limits?)
Cloud Build can build the Docker image much faster than Docker Desktop, which is great for big images (fastai)
Cloud Build builds the image and publishes it to Google Container Registry
I didn’t use Docker Desktop to build the Docker image

deploy the image to GKE
Cloud Run didn’t work (memory?), so I used GKE
you can use the Google Cloud Console or the command line to deploy the image to GKE
ignore the warning messages about PyTorch version incompatibility, for example: SourceChangeWarning: source code of class ‘fastai.text.models.awd_lstm.LinearDecoder’ has changed.
expose the application to the internet with --port 80 --target-port 8080 (you have to set both port and target-port to get a public web page)
view the web page: GKE -> Services & Ingress (menu on the left) -> Overview (menu on top) -> External endpoints

I see that 52% of the people who voted use Google App Engine. Why is that, even though Jeremy uses Render (13% of voters), if I’m not mistaken?

I deploy everything on SeeMe.ai but I might be biased since I created it… :slight_smile:

It’s easy, you can share models and we have native iOS (with Core ML support) and Android apps…

Heroku/Voila, because it was the first one I came across that was free, easy, and worked properly.