Poll: How do you deploy your models in production?

(building render.com) #1

If you have a minute, add a comment below explaining why you picked your platform of choice. Thanks!

(Pietro La Torre) #2

Google Kubernetes Engine

(Andreas Sand) #3

I’d like to deploy my model as part of an executable for Linux and Windows. I know this is old-fashioned, but it needs to work in hospitals…

I’ve run into a small problem, though. fastai seems to have a hard dependency on tk, so I’m having trouble deploying my models in a Python environment without tk installed. I get “No module named tkinter” when trying to import fastai.vision, and I get the same error if I only do “import fastai.basic_train” and then call load_learner(path).

Is it possible to get around this? I tried loading the model with plain PyTorch, torch.load(src, strict=False), and running it as a pure torch model, but that way my input data doesn’t get transformed, so the model doesn’t perform as it should. I’ve also tried to replicate what basic_train.load_learner does, but I’ve had no luck so far.
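
For reference, this is the kind of workaround I’ve been sketching, assuming the tkinter import actually comes in indirectly through matplotlib’s default TkAgg backend rather than from fastai itself (the paths below are just placeholders):

```python
# Sketch: force a headless matplotlib backend before fastai is imported,
# assuming the "No module named tkinter" error comes from matplotlib's
# default TkAgg backend and not from fastai itself.
import os
os.environ["MPLBACKEND"] = "Agg"   # must be set before matplotlib is first imported

import matplotlib
matplotlib.use("Agg")              # Agg is headless and does not need tkinter

from fastai.vision import load_learner, open_image  # fastai v1 API

# "export_dir" and "example.jpg" are placeholders: export_dir should contain
# the export.pkl produced by learn.export(), which also stores the transforms.
learn = load_learner("export_dir")
img = open_image("example.jpg")
pred_class, pred_idx, probs = learn.predict(img)
```

If load_learner works this way in a tk-free environment, the exported transforms should be applied automatically, which is exactly what I’m missing in the pure torch approach.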

Can you please point me in the right direction?

Kind regards,
Andreas

#4

Hosting on my own dedicated server.
