Deployment Platform: Render ✅

OK, so I retrained my model on Floydhub and attempted to deploy the app. Here’s my repo, including the model.pkl file. These were the model packages from Floydhub’s pip list:

fastai==1.0.61
torch==1.5.0
torchvision==0.6.1

So this is the list for my requirements.txt file:

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.61
numpy==1.18.4
pillow==5.4.1
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.11.5
torch==1.5.0
torchvision==0.6.1

I’ve tried testing the app locally, but using ‘python app/server.py serve’ I get this error:

Traceback (most recent call last):
  File "app/server.py", line 48, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "C:\Users\[me]\anaconda3\lib\asyncio\base_events.py", line 583, in run_until_complete
    return future.result()
  File "app/server.py", line 33, in setup_learner
    await download_file(export_file_url, path / export_file_name)
  File "app/server.py", line 26, in download_file
    async with session.get(url) as response:
  File "C:\Users\[me]\anaconda3\lib\site-packages\aiohttp\client.py", line 1005, in __aenter__
    self._resp = await self._coro
  File "C:\Users\[me]\anaconda3\lib\site-packages\aiohttp\client.py", line 466, in _request
    ssl=ssl, proxy_headers=proxy_headers, traces=traces)
  File "C:\Users\[me]\anaconda3\lib\site-packages\aiohttp\client_reqrep.py", line 286, in __init__
    self.update_host(url)
  File "C:\Users\[me]\anaconda3\lib\site-packages\aiohttp\client_reqrep.py", line 340, in update_host
    raise InvalidURL(url)
aiohttp.client_exceptions.InvalidURL
(base)
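From what I can tell, aiohttp raises InvalidURL when the URL string it is handed has no scheme or host, so my guess is that the export_file_url in server.py is empty or still a placeholder. Here’s the quick sanity check I put together (just a sketch using the stdlib; looks_like_valid_url is my own helper and the example URLs are made up):

```python
# Sketch: check that export_file_url is a fully-qualified URL before
# handing it to aiohttp, which raises InvalidURL for empty/relative ones.
from urllib.parse import urlparse

def looks_like_valid_url(url: str) -> bool:
    """Return True only if the string has both an http(s) scheme and a host."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_valid_url("https://example.com/export.pkl"))  # True
print(looks_like_valid_url(""))                                # False
print(looks_like_valid_url("export.pkl"))                      # False
```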

I’ve managed to install Docker through pip install docker, but when I run the docker line from the app readme, my environment doesn’t seem to recognize docker. I don’t know whether I need to change the command to run it through conda or not.

I probably just don’t know enough about web dev to deploy this without going back and learning enough Django/React/some other framework to build it from scratch. :confused:

Hi sarahgood I hope you are having a wonderful day!

Having looked at your repo, I found the following points of interest:

  1. Floydhub or something else had locked server.py, so when run on the desktop you don’t have permission to run it. Fix: changed permissions on the app directory.

  2. Floydhub appears to be using an unsupported combination of torchvision 0.6.1 and torch 1.5.0, which was stopping your app from starting. Fix: changed the requirements.txt line to torch==1.5.1.

  3. The model.pkl file should be in the app directory. Fix: put model.pkl in the app directory.

I tested your model by running the same command as you from the terminal on my MacBook Pro.

Also, don’t forget to edit your index.html file; it looks like a bear classifier at the moment, and we don’t want to confuse your users. :smiley:

As you can see, your medieval classifier is working just fine. There must be a fine line between medieval and post-medieval, as my Google search was for a post-medieval item. :smiley: :smiley:

You do not need to run docker to test your app at all. See one of my previous posts.

The gentleman who donated this repo was actually the creator of Django. Having played with Django myself, I think building this from scratch in it would be a little more tricky than using this repo.

The points of interest above mean you have learned an invaluable lesson: how much work it can take to build even a simple app, which is why @jeremy says build something. This little app uses GitHub, Floydhub, CSS, HTML, fastai, AI concepts, aiohttp, asyncio, uvicorn, online Jupyter notebooks, Starlette, JavaScript, Unix commands, Python, and Docker, which many people don’t realise when they first start.

Since starting on fastai about a year ago, I have played with at least a couple of hundred libraries.

Hope this helps, don’t forget to put it on Share your work here ✅ when you get it working. :smiley:

Cheers mrfabulous1 :grinning: :grinning:


Hi Cesar_Zeus hope you are having a wonderful day also!

You just need the packages listed in your requirements.txt file, like in the post here: Deployment Platform: Render ✅. Your version numbers may be different, though.

Cheers mrfabulous1 :smiley: :smiley:

Dude, I can’t thank you enough. I finally got it live! I’m a bit miffed at myself that I didn’t realize the model file should’ve been in the app folder (it’s sort of obvious when I think about it), but at least now I know going forward. Once I refine my model (and maybe add some CSS styles) I’ll be sharing this for sure.

Hip Hip Hooray! Glad you got your model working sarahgood!
It’s always easy with hindsight. But having these troublesome points of interest while building your model means that you have learned way more than if it had gone perfectly. Every cloud has a silver lining!

Well done!

mrfabulous1 :smiley: :smiley:

ps. thanks for the acknowledgement :smiley: :smiley:

@mrfabulous1 Thank you! I got it working. There were two hurdles in case it helps someone.

1. requirements.txt
I noticed it does not like the lines below, which are in my pip list in the Colab environment.

torch==1.5.1+cu101
torchvision==0.6.1+cu101

I replaced them with the lines below, and it works.

torch==1.5.1
torchvision==0.6.1

2. The downloadable link for the export file hosted on Google Drive
It was restricted. It needs to be set to “anyone with the link”. What a gotcha.
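On the requirements.txt point, here is a little sketch of that version cleanup as code, in case it helps anyone scripting it (strip_local_version is just a name I made up, assuming pinned ==-style lines):

```python
# Sketch: drop local-version tags like "+cu101" from pinned requirements,
# since the plain wheels on PyPI don't carry those suffixes.
def strip_local_version(line: str) -> str:
    pkg, sep, version = line.partition("==")
    if not sep:
        return line  # not a pinned requirement, leave untouched
    return pkg + sep + version.split("+")[0]

print(strip_local_version("torch==1.5.1+cu101"))        # torch==1.5.1
print(strip_local_version("torchvision==0.6.1+cu101"))  # torchvision==0.6.1
print(strip_local_version("fastai==1.0.61"))            # fastai==1.0.61
```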

Now it’s running and I can test my weed detector with photos I took.

Let me celebrate for a second. :laughing:

And then there is one more problem. Once in a while it runs out of memory. My export file is 80 MB. Does it really use that much memory (over 512 MB)? :cry:


Happy days, Cesar_Zeus! It’s always great to see people get their first fastai model deployed.

Thanks for posting your success; every post helps others.

I believe your server issue may be related to the fact that the basic account/tier comes with 512 MB of shared RAM (see the $7.00 option). I’m not quite sure what shared means, but it could mean that your virtual environment and Docker container use some of that RAM.

So you may need to investigate where RAM is being used while the service is running.
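If it helps, here is a rough sketch of how one could log the app’s own peak memory from Python, for example right after the model loads in server.py (stdlib only, Unix; note that ru_maxrss is kilobytes on Linux but bytes on macOS):

```python
# Sketch: report this process's peak resident memory, to see how close
# the running app gets to the 512 MB limit on the basic tier.
import resource
import sys

def peak_memory_mb() -> float:
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss /= 1024  # macOS reports bytes; Linux reports kilobytes
    return rss / 1024  # kilobytes -> megabytes

print(f"peak RSS so far: {peak_memory_mb():.1f} MB")
```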

This could be something that render.com support could explain.

Let us know on the forum if you investigate any further; it was sarahgood’s determination to get their model working that gave me an example to use for you.

Once again congrats on getting your model working.

Cheers mrfabulous1 :smiley: :smiley:

Hi Everyone,

I have never made a web application and don’t know how one works. Can you suggest a tutorial or something similar that would help me deploy the classifier I made as a web app?

Thank you all in advance.

So after two sleepless nights, I figured it out. I am broke, so I can’t use Render. I used Heroku instead; here’s what I made: https://pokemonclassifierapp.herokuapp.com/


Hi sachin93, I hope you’re having a wonderful day!
Well done getting your model working on heroku.com.

Cheers mrfabulous1 :smiley: :smiley: :smiley:


@mrfabulous1 thank you!!!

Just been having this problem again (strangely, using the same code that worked for me last time!). The issue is that, due to the virus scan blocking things, data = await response.read() returns the HTML of the web page instead of the pkl file… hence the error shows the start of the returned HTML: <!DOCTYPE html><html>......
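A check along these lines would catch that case early (just a sketch; looks_like_html is my own helper, not something in the repo), since a real export.pkl starts with the pickle magic byte \x80 rather than markup:

```python
# Sketch: fail fast if the "downloaded model" is actually an HTML page
# (e.g. Google Drive's virus-scan interstitial) rather than a pickle file.
def looks_like_html(data: bytes) -> bool:
    head = data[:32].lstrip().lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")

print(looks_like_html(b"<!DOCTYPE html><html>..."))  # True
print(looks_like_html(b"\x80\x04\x95..."))           # False
```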

This time, after trying pretty much all the Google Drive direct-link variants I could find, I gave up and used Google Cloud Storage instead. It’s easy to create a bucket, make the files public, and use that public URL, which works smoothly:

https://cloud.google.com/storage/docs/creating-buckets
https://cloud.google.com/storage/docs/access-control/making-data-public

I would recommend that anyone having problems deploy locally first, as it’s much easier to debug.


This might sound like a basic question, but I encountered this error when I tried to run the app locally on my laptop:

name ‘path’ is not defined

It refers to line 16 of server.py (same as the original FastAI-v3 code), where
path = Path(__file__).parent
is defined.

I’ve tried doing from pathlib import Path, but it still throws that error. I didn’t change anything from the original code (besides the export file URL and name), but the error keeps showing. What should I do to make it work?

I tried running it locally because it couldn’t run on Render either. When the build reaches [6/6] RUN python app/server.py:, it gives me this error:

error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c python app/server.py]: buildkit-runc did not terminate successfully
error: exit status 1

What happened and what should I do?

Hi arahpanah hope all is well!

If you haven’t done so already, please search this thread for ‘pip list’ and follow the instructions there for amending requirements.txt etc.

Cheers mrfabulous1 :grinning::grinning:


Thanks! It works! :smiley:

It also runs well on Render!

But somehow I can’t run it locally. I tried running server.py; it didn’t give me any error message, but it also didn’t give me a localhost URL that I could use to view the web app.

This is what I got (well, I actually didn’t get anything). I’m pretty sure it should give me a localhost URL.


Hi arahpanah glad to hear it is all working.

It looks like you may not have entered the full command to run the app locally. I believe it should have the option serve at the end:

python app/server.py serve

Hope this helps

mrfabulous1 :smiley: :smiley:


Based on what I know, in order to make this work, the FileList from the uploaded images is converted to FormData and then passed to this route as the request parameter.

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})

But the problem is that FileList only exists for uploaded image files of the File type. A plain image URL doesn’t have that FileList array.

And from what I know, learn.predict() doesn’t accept an image URL.

Say I don’t want to do the prediction with uploaded images, and the only input I have is an image URL without a FileList array. What if I want to create a website where the user can paste an image URL and the image gets predicted? How do I do that? I’ve tried various methods to pass in the URL, but none works.
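For context, here is roughly what I expected to work (a sketch with made-up names: image_bytes_from_url isn’t from the repo, and the data: URL at the bottom is just a self-contained smoke test):

```python
# Sketch: fetch bytes from a URL so they can be wrapped the same way the
# upload path wraps them, i.e. open_image(BytesIO(img_bytes)).
import urllib.request
from io import BytesIO

def image_bytes_from_url(url: str) -> BytesIO:
    with urllib.request.urlopen(url) as resp:
        return BytesIO(resp.read())

# In a second route one could then do something like (hypothetical):
#   img = open_image(image_bytes_from_url(form_data['url']))
#   prediction = learn.predict(img)[0]

# Smoke test with a data: URL (urllib handles these without a network)
print(image_bytes_from_url("data:text/plain;base64,aGVsbG8=").read())  # b'hello'
```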

Hi everyone, this is mine. I tried to modify it so it would work on a mobile browser:
https://coffee-classifier.onrender.com

Well, you can visit the repository.


Hello, does anyone know how to perform live-stream detection on Render, where the user grants access to their webcam and the input stream is processed by the classifier?


Hello, I just deployed my first “painter classifier” on Render and wanted to share how I got it to work in case it helps someone (it took me several hours to figure out). I followed the instructions from https://course19.fast.ai/deployment_render.html#deploy, but copied my model file ‘export.pkl’ to Dropbox instead of Google Drive. With the Google Drive link, the deploy on Render always ended up unsuccessful. I would like to know why it doesn’t work for me with Google Drive, but for now I am happy to use Dropbox, since it works smoothly.

It is now my turn to celebrate my first classifier application on Render. I'm so happy after so many hours of troubleshooting!

Here is the Painter classifier link, https://painter-finder.onrender.com, which classifies paintings by Van Gogh, Matisse, and Monet!
