Deployment Platform: Render ✅

Dude, I can’t thank you enough. I finally got it live! I’m a bit miffed at myself that I didn’t realize the model file should’ve been in the app folder (and it’s sort of obvious when I think about it), but at least now I know going forward. Once I refine my model (and maybe add some CSS styles) I’ll be sharing this for sure.

Hip Hip Hooray! Glad you got your model working sarahgood!
It’s always easy with hindsight. But having these troublesome points of interest while building your model means that you have learned way more than if it had gone perfectly. Every cloud has a silver lining!

Well done!

mrfabulous1 :smiley: :smiley:

ps. thanks for the acknowledgement :smiley: :smiley:

@mrfabulous1 Thank you! I got it working. There were two hurdles in case it helps someone.

1. requirements.txt
I noticed it did not like the lines below, which are what my pip list shows in the Colab environment.

torch=1.5.1+cu101
torchvision=0.6.1+cu101

I replaced them with the lines below, and it works.

torch==1.5.1
torchvision==0.6.1

2. The download link for the export file hosted on Google Drive
It was restricted. The sharing setting needs to be “Anyone with the link”. What a gotcha.
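
In case it helps anyone hitting the same thing, a quick local sanity check I could have done before deploying (a rough sketch; the URL is a placeholder and the requests package is assumed to be installed) is to confirm the export link actually returns binary pickle data rather than a Google Drive permission page:

import requests

# Placeholder: put your own export.pkl download link here.
export_file_url = 'https://example.com/path/to/export.pkl'

resp = requests.get(export_file_url, allow_redirects=True)
head = resp.content[:15]
if head.startswith(b'<!DOCTYPE') or head.startswith(b'<html'):
    # A restricted or preview link returns an HTML page instead of the model file.
    print('Got an HTML page back; check the sharing settings on the link.')
else:
    print(f'Looks like a binary file ({len(resp.content)} bytes).')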

Now it’s running and I can test my weed detector with photos I took.

Let me celebrate for a second. :laughing:

And then there is one more problem. Once in a while, it runs out of memory. My export file is 80 MB. Does it really use that much memory – over 512 MB? :cry:

2 Likes

Happy days Cesar_Zeus! It’s always great to see people get their first fastai model deployed.

Thanks for posting your success; every post helps others.

I believe your server issue may be related to the fact that the basic account/tier comes with 512 MB of shared RAM (see the $7.00 option). I’m not quite sure what “shared” means, but it could mean that your virtual environment and Docker container use some of that RAM.

So you may need to investigate where RAM is being used when the service is running.
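
A rough way to see how much the model itself costs (just a sketch; it assumes the psutil package is installed and that export.pkl lives in the app folder as in the template) is to print the process memory before and after loading the learner:

import os
import psutil
from fastai.vision import *   # fastai v1 star import, as used in the template

proc = psutil.Process(os.getpid())

def rss_mb():
    # Resident memory of this Python process, in megabytes.
    return proc.memory_info().rss / (1024 * 1024)

print(f'Before loading learner: {rss_mb():.0f} MB')
learn = load_learner('app', 'export.pkl')   # fastai v1 signature: folder, file name
print(f'After loading learner:  {rss_mb():.0f} MB')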

This could be something that render.com support could explain.

Let us know on the forum if you investigate any further, as it was sarahgood’s determination to get their model working that I was able to use as an example for you.

Once again congrats on getting your model working.

Cheers mrfabulous1 :smiley: :smiley:

Hi Everyone,

I have never made a web application and don’t know how one works. Can you suggest a tutorial or something similar that would help me deploy the classifier I made as a web app?

Thank you all in advance.

So after 2 sleepless nights, I figured it out. I am broke so I can’t use Render. I used Heroku instead; here’s what I made: https://pokemonclassifierapp.herokuapp.com/

1 Like

Hi sachin93, I hope you’re having a wonderful day!
Well done getting your model working on heroku.com.

Cheers mrfabulous1 :smiley: :smiley: :smiley:

1 Like

@mrfabulous1 thank you!!!

Just been having this problem again (strangely, using the same code that worked for me last time!). The issue is that, because the Google Drive virus-scan page blocks the direct download, data = await response.read() returns the HTML of that page instead of the pkl file… hence the error message shows the start of the returned HTML: <!DOCTYPE html><html>......

This time, after trying pretty much all the Google Drive direct link versions I could find, I gave up and have used Google Cloud Storage. It’s easy to create a bucket, then make the files directly public and use that public URL instead, which works smoothly:

https://cloud.google.com/storage/docs/creating-buckets
https://cloud.google.com/storage/docs/access-control/making-data-public
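
For reference, the download code from the template then just needs to point at the public object URL; roughly like this sketch (the bucket and object names are made up, and aiohttp is already a dependency of the template):

import asyncio
from pathlib import Path
import aiohttp

# Made-up example of a public Google Cloud Storage object URL.
export_file_url = 'https://storage.googleapis.com/my-bucket/export.pkl'
export_file_name = 'export.pkl'

async def download_file(url, dest):
    if dest.exists():
        return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            # If this looks like HTML, the object is not actually public.
            if data[:9] == b'<!DOCTYPE':
                raise RuntimeError('Got an HTML page instead of the model file.')
            with open(dest, 'wb') as f:
                f.write(data)

asyncio.get_event_loop().run_until_complete(
    download_file(export_file_url, Path('app') / export_file_name))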

I would recommend for anyone having problems to deploy locally first, as it’s much easier to debug.

1 Like

This might sound like a basic question, but I encountered this error when trying to run the app locally on my laptop.

name 'path' is not defined

It refers to line 16 of server.py (same as the original fastai-v3 code), where
path = Path(__file__).parent
is defined.

I’ve tried adding from pathlib import Path, but it still throws that error. I didn’t change anything in the original code (besides the export file URL and name), yet the error keeps showing. What should I do to make it work?
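
For reference, the relevant top of my server.py looks roughly like this (with the pathlib import I added; the export URL is a placeholder), and as I understand it __file__ only exists when the file is run as a script rather than pasted into a notebook:

from pathlib import Path
from fastai.vision import *   # fastai v1 star import, as in the template

# Placeholders for the real download link and file name.
export_file_url = 'https://example.com/export.pkl'
export_file_name = 'export.pkl'

path = Path(__file__).parent   # the app/ folder that contains server.py
print(path, (path / export_file_name).exists())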

I tried running it locally because it couldn’t run on Render either. When the build reaches the step where it executes [6/6] RUN python app/server.py:, it gives me this error:

error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c python app/server.py]: buildkit-runc did not terminate successfully
error: exit status 1

What happened and what should I do?

Hi arahpanah hope all is well!

If you haven’t done so already, please search this thread for ‘pip list’ and follow the instructions there for amending requirements.txt etc.

Cheers mrfabulous1 :grinning::grinning:

1 Like

Thanks! It works! :smiley:

It runs well on Render as well!

But somehow I can’t run it locally. I tried running server.py; it didn’t give me any error message, but it also didn’t give me a localhost URL that I can use to view the web app.

This is what I got (well, I actually didn’t get anything). I’m pretty sure it should give me a localhost URL.


Hi arahpanah glad to hear it is all working.

It looks like you may not have entered the full command to run the app locally. I believe it should have the option serve at the end of the command.

python app/server.py serve
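
For context, server.py in the fastai-v3 template only starts the web server when ‘serve’ is passed on the command line; the block at the bottom of the file looks roughly like this:

import sys
import uvicorn
# app = Starlette(...) is defined earlier in server.py.

if __name__ == '__main__':
    if 'serve' in sys.argv:
        # Without the 'serve' argument this never runs, so the script simply
        # exits after the setup above, printing nothing.
        uvicorn.run(app=app, host='0.0.0.0', port=5000, log_level='info')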

Hope this helps

mrfabulous1 :smiley: :smiley:

1 Like

Based on what I know, in order to make this work, the FileList from the uploaded images is converted to FormData and then sent to this route, where it arrives via the request parameter.

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})

But the problem is that a FileList only exists for uploaded image files (File objects). A plain image URL doesn’t come with that FileList array.

And from what I know, learn.predict() doesn’t accept an image URL.

Say I don’t want to do the prediction on uploaded images, and the only input I have is an image URL with no FileList array. For example, what if I want to create a website where the user can paste an image URL and the image gets classified? I’ve tried various ways to feed in the URL, but none works.
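
Roughly what I have in mind is a second route like the sketch below (the route name and form field are my own invention; it assumes the same imports as the template, i.e. aiohttp, BytesIO, open_image, learn and JSONResponse), where the server downloads the image bytes itself and then predicts exactly as /analyze does:

@app.route('/analyze-url', methods=['POST'])
async def analyze_url(request):
    form = await request.form()
    img_url = form['url']                       # the pasted image URL
    async with aiohttp.ClientSession() as session:
        async with session.get(img_url) as response:
            img_bytes = await response.read()   # raw image bytes fetched server-side
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})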

Hi everyone, this is mine. I tried to modify it so it works on a mobile browser:
https://coffee-classifier.onrender.com

Well, you can visit the repository,

1 Like

Hello, does anyone know how to perform live-stream detection on Render, where the user can grant access to their webcam? The input stream would then be processed by the classifier.

1 Like

Hello, I just deployed my first “painter classifier” on Render and wanted to share how I got it to work in case it helps someone (it took me several hours to figure out). I followed the instructions from https://course19.fast.ai/deployment_render.html#deploy, and copied my model file ‘export.pkl’ to Dropbox instead of Google Drive. With the link provided for Google Drive, the deploy on Render always ended up failing. I would like to know why it is not working for me when using Google Drive, but for now I am happy to use Dropbox since it works smoothly.
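
One small detail in case it trips anyone up: as far as I know, the normal Dropbox share link ends in ?dl=0, which serves a preview page, and changing it to ?dl=1 returns the raw file. A tiny helper along those lines (purely illustrative):

def dropbox_direct_link(share_url: str) -> str:
    # Turn a normal Dropbox share link (?dl=0, preview page) into a
    # direct-download link (?dl=1).
    if share_url.endswith('?dl=0'):
        return share_url[:-len('?dl=0')] + '?dl=1'
    return share_url

print(dropbox_direct_link('https://www.dropbox.com/s/abc123/export.pkl?dl=0'))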

It is now my turn to celebrate my first classifier application on Render; so happy after so many hours of troubleshooting!

Here is the painter classifier link, https://painter-finder.onrender.com, which classifies paintings by Van Gogh, Matisse, and Monet!

1 Like

Hi all!

This is my first time reporting an error, so please feel free to ask me for any information pertaining to my error and I will get to you ASAP.

I am trying to deploy my model on Render.com but am running into some trouble.

Here are my failed logs from deploying to Render:

Here are my requirements.txt and server.py file:


I’ve read through this thread but didn’t find any solutions or tips for my particular error. Any suggestions, ideas, or learning experiences from @mrfabulous1, @anurag or anyone else will be greatly appreciated!

Hi faceyacc hope all is well!
Unfortunately errors on Render.com are slightly convoluted and often mask the real error:

Run the application standalone on a local machine first; this can be done with or without Docker.

Running the app locally without Docker helps avoid many errors down the line. The errors that do show up are also much easier to resolve, because you are not looking at an error that has been filtered through Docker and then the render.com console.
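
Concretely, from the root of the forked repo that means something along these lines (the image name is just an example, and I believe the template’s Dockerfile exposes port 5000):

python app/server.py serve                                                       # without Docker
docker build -t my-fastai-app . && docker run --rm -p 5000:5000 my-fastai-app    # with Docker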

Cheers mrfabulous1 :smiley: :smiley:

Hi @mrfabulous1!
I am ecstatic that you replied to my post. You seem to be able to help a lot of people on this thread!

After I forked @anurag’s repo I did a git clone to run my model locally using VS Code (I am not sure if this is the problem).

I am getting a NameError for Path. So I did a pip install and added from fastai.imports import * to “work around” this, but that led to a NameError for load_learner.

I am currently using Paperspace with fastai 2.1.5 (based on the results of running ! pip list) in a Jupyter notebook.
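
For reference, my current understanding is that the fastai-v3 template’s server.py was written for fastai v1 (where from fastai.vision import * provides Path, load_learner and open_image), while my model was trained with fastai 2.1.5, where the equivalents look roughly like this sketch (paths are placeholders and I may be missing something):

from pathlib import Path
from fastai.vision.all import load_learner, PILImage   # fastai 2.x imports

path = Path(__file__).parent
learn = load_learner(path / 'export.pkl')   # v2: a single path to the export file

def predict_bytes(img_bytes: bytes) -> str:
    img = PILImage.create(img_bytes)        # v2 replacement for open_image
    pred, pred_idx, probs = learn.predict(img)
    return str(pred)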

Here is my requirements.txt

Here is how my situation looks when I clone my repo into VS Code:

Any tips, tricks, or learning experience would be greatly appreciated.

Thank You