Deployment Platform: Render ✅

My first effort on Render. I used the base code, changing only the export_file_url and the classes. Any suggestions?

Edit: @anurag - I’d welcome any ideas. Thanks!

Hi @anurag! Awesome work with Render so far! We are trying to deploy a fastai model in the form of an API (it receives a POST request, processes the data using the model, and sends the results back). My question is: what dependencies does the sample app contain in order for the Render deploy to work properly?

The reason I ask is: if I add my API code to server.py in the sample app without removing anything, it deploys properly and I can see it running online. But if I deploy the API code alone (without the app-specific lines and associated files), it works locally, yet on Render the deployment perpetually says "in progress", even though the server starts successfully in the console. I've attached an image of what I'm seeing in the Render dashboard below.

I hope this was sufficiently clear, and if there is any other info you need in order to solve the problem, feel free to ask.

Thanks very much! I’m looking forward to making good use of Render!

Nick S
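One thing worth ruling out (an assumption on my part, not something confirmed in this thread): Render only marks a deploy as live once your server binds the expected port on 0.0.0.0, so a server that starts on 127.0.0.1, or on a port other than the one the platform expects, can look healthy in the console while the dashboard stays "in progress". A minimal stdlib sketch of the binding logic (the 5000 default mirrors the sample app, but treat it as a guess):

```python
import os
from http.server import HTTPServer, BaseHTTPRequestHandler

class Ping(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal health-check response so the platform sees a live server.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

def make_server():
    # Bind 0.0.0.0 (all interfaces), not 127.0.0.1, and honour the PORT
    # environment variable if the platform injects one (5000 is a guess).
    port = int(os.environ.get("PORT", "5000"))
    return HTTPServer(("0.0.0.0", port), Ping)

if __name__ == "__main__":
    make_server().serve_forever()
```

If you're on uvicorn/starlette like the sample app, the equivalent is passing `host="0.0.0.0"` and the expected port to `uvicorn.run`.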


All the dependencies are in requirements.txt and the Dockerfile. We haven’t seen these issues with anyone else, so it seems specific to your code.


@anurag Thank you very much! I deployed the first deep learning classifier of my life! Thank you, fast.ai :blush:


Congratulations!

Hi man,

I took a look at your Render app on GitHub, but I don't get why you need the export.pkl file to be a separate one, instead of just downloading the weights contained in export.pkl into the model folder as the Render example suggests.

Is there any specific reason?

@Preka Oh? Are you referring to lines 12-14 in server.py? That’s from the example repo.

SOLVED - JUST USE THE LINK GENERATOR FOR GOOGLE DRIVE

Well, not any more. Let me explain. The problem you have shown here has to do with the updated version of fastai. In particular, it seems to use a different tokenizer than previous versions, which raises an error. Did you solve that first? I think the solution would be to retrain the model with the upgraded tokenizer so that it is consistent with the latest version of fastai.

Secondly, I am facing a different problem related to Render, I guess, as it cannot open and/or download the pickle file. It works with my local packages but not on Render. Any idea how to solve that?
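For what it's worth, one way to rule out path and URL problems like this is to download the pickle exactly once at startup and always load the model from the resulting local path afterwards, which is roughly what the sample app's async download helper does. A minimal stdlib sketch (the function name and paths are my own, not from the sample repo):

```python
import urllib.request
from pathlib import Path

def fetch_once(url: str, dest: Path) -> Path:
    """Download url to dest unless dest already exists.

    Model loading can then always use the local path, so any
    'works locally but not on Render' difference narrows down
    to this one download step.
    """
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            out.write(resp.read())
    return dest
```

If the download itself fails on Render, logging the URL and response here usually surfaces the cause (for example a share link that returns an HTML page instead of the raw file).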

Were you able to figure this out?

So I haven't done much web development myself, though I want to learn the basics needed to customize and create web apps using uvicorn/starlette with fastai models. Where would be a good place to start?

Any ETA on when higher memory tiers will be coming? It's been a few months since this post, so I was just curious as to where things were at.

Are you running into memory issues? We are going to release higher tiers this summer.

Yeah, I am. I'm using a Unet model for some GAN stuff, and just minutes after my server finishes deploying it crashes due to a memory problem.

Got it. We will prioritize tiers accordingly.


Yeah, the issue I was facing has nothing to do with Render. Rather, it has to do with the Google Drive link. The best way to get the exact HTTPS address of the Google Drive file where you saved the model weights is through the link generator mentioned elsewhere in this forum.
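For anyone who lands here later: as far as I can tell, the "link generator" trick simply extracts the file id from the Drive share URL and rewrites it into a direct-download URL, so the server receives the raw file rather than Drive's HTML preview page. A small sketch of that rewrite (my own helper, not part of the sample repo):

```python
import re

def gdrive_direct_url(share_url: str) -> str:
    """Convert a Google Drive share link into a direct-download URL.

    Handles links of the form
    https://drive.google.com/file/d/<FILE_ID>/view?usp=sharing
    """
    m = re.search(r"/d/([\w-]+)", share_url)
    if not m:
        raise ValueError("no file id found in link")
    return f"https://drive.google.com/uc?export=download&id={m.group(1)}"
```

Note that very large files may still hit Drive's virus-scan interstitial, in which case a dedicated generator or a different host is the safer bet.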

Hi man,
no, I was referring to the pickle file containing the weights. Shouldn't it be downloaded into the app/models folder?

Hi @anurag,

I have been using Render for vision applications, which is great! Are there any Render examples which are compatible with ULMFiT for natural language processing?

In ULMFiT, for inference, the model requires three files: the data pickle file, the fine-tuned language model file, and the classifier pickle file. In the CV examples we usually use "export" for inference, but for ULMFiT we save and load these serially. I am a bit confused about how to modify your Render example to work in this case. Any pointers?

Thanks very much,
Nick
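One option, purely an assumption on my part (I haven't tried it with ULMFiT specifically): keep the Render example's single-file download flow by packing the three files' raw bytes into one pickle, then unpacking them next to the server at startup and loading them serially exactly as in training. A stdlib sketch with hypothetical names:

```python
import pickle
from pathlib import Path

def bundle_files(paths: dict, dest: Path) -> None:
    """Pack several files' raw bytes into one pickle, keyed by filename."""
    blob = {name: Path(p).read_bytes() for name, p in paths.items()}
    with open(dest, "wb") as f:
        pickle.dump(blob, f)

def unbundle_files(src: Path, out_dir: Path) -> None:
    """Restore the original files so they can be loaded as usual."""
    with open(src, "rb") as f:
        blob = pickle.load(f)
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, data in blob.items():
        (out_dir / name).write_bytes(data)
```

Only one URL then needs downloading in server.py, which is the part of the sample app that assumes a single export file.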

@anurag
@jeremy
@rachel
Thanks to you all for this course. I am a student of fastai from Abuja, Nigeria, and I have been able to use Render to deploy my first web app, which will help me classify sick from healthy pigs. I conceived of a third class I call an "outlier" for any image that is not a pig. I might work on something better, but it feels just great that I am able to deploy this with your help. Thanks!

https://swinehealth.onrender.com/


Unfortunately I haven’t come across any. But I would recommend tinkering with the sample code to make it run locally, and then deploying on Render through GitHub.


We've managed to export the model into a single .pkl, which resolves the previous issue. Now the problem is that when loading the model with load_learner from the Dropbox link, it ends up as an image model rather than a text model (loading from a local file works properly). As a result, when trying to process text, I'm getting the error:
AttributeError: 'str' object has no attribute 'apply_tfms'
which I believe means it is treating the input as image data rather than text, as it's supposed to. I could not find any way to specify "text" when creating the learner. Any ideas on how to resolve this?