Deployment Platform: Render ✅

Hi Everyone,

I have never made a web application and don’t know how one works. Can you suggest a tutorial or something similar that would help me deploy the classifier I made as a web app?

Thank you all in advance.

So after 2 sleepless nights, I figured it out. I am broke, so I can’t use Render. I used Heroku instead; here’s what I made: https://pokemonclassifierapp.herokuapp.com/


Hi sachin93, I hope you’re having a wonderful day!
Well done getting your model working on heroku.com.

Cheers mrfabulous1 :smiley: :smiley: :smiley:


@mrfabulous1 thank you!!!

Just been having this problem again (strangely, using the same code that worked for me last time!). The issue is that, because the virus-scan page blocks the download, data = await response.read() returns the HTML of the web page instead of the pkl file… hence the error message is the start of the returned HTML: <!DOCTYPE html><html>......
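One way to catch this failure early, rather than letting load_learner choke on HTML, is to check the first bytes of the download before writing the file; a minimal sketch against the repo’s download_file helper:

async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            # Sketch: Google Drive's virus-scan page is HTML, not a pickle,
            # so fail fast with a clear message instead of a cryptic load error.
            if data.startswith(b'<!DOCTYPE') or data.startswith(b'<html'):
                raise RuntimeError(f'Got an HTML page instead of the model file from {url}')
            with open(dest, 'wb') as f:
                f.write(data)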

This time, after trying pretty much all the Google Drive direct-link variants I could find, I gave up and used Google Cloud Storage instead. It’s easy to create a bucket, make the files public, and use that public URL, which works smoothly:

https://cloud.google.com/storage/docs/creating-buckets
https://cloud.google.com/storage/docs/access-control/making-data-public
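For example, once a file in a bucket is public, the direct URL follows a fixed pattern (bucket and object names below are hypothetical):

# Public objects are served at https://storage.googleapis.com/<bucket>/<object>
export_file_url = 'https://storage.googleapis.com/my-model-bucket/export.pkl'
export_file_name = 'export.pkl'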

I would recommend that anyone having problems deploy locally first, as it’s much easier to debug.


This might sound like a basic question, but I encountered this error when trying to run the app locally on my laptop.

name ‘path’ is not defined

It refers to line 16 of server.py (same as the original fastai-v3 code), where
path = Path(__file__).parent
is defined.

I’ve tried doing from pathlib import Path, but it still throws that error. I didn’t change anything in the original code (besides the export file URL and name), but that error keeps showing. What should I do to make it work?

I tried running locally because it couldn’t run on Render either. When the build executes [6/6] RUN python app/server.py, it gives me this error:

error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c python app/server.py]: buildkit-runc did not terminate successfully
error: exit status 1

What happened and what should I do?

Hi arahpanah hope all is well!

If you haven’t done so, please search this thread for ‘pip list’ and follow the instructions there for amending requirements.txt, etc.
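For anyone new to this, the gist of those instructions is to run pip list in the environment where the model was trained and pin the same versions in requirements.txt; the version numbers below are placeholders only:

$ pip list | grep -E 'fastai|torch'
fastai        1.0.61
torch         1.4.0
torchvision   0.5.0

# requirements.txt should then pin the matching versions:
fastai==1.0.61
torch==1.4.0
torchvision==0.5.0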

Cheers mrfabulous1 :grinning::grinning:


Thanks! It works! :smiley:

It also runs well on Render!

But somehow I can’t run it locally. I tried running server.py; it didn’t give me any error message, but it also didn’t print a localhost URL that I could use to view the web app.

This is what I got (well, I actually didn’t get anything). I’m pretty sure it should give me a localhost URL.


Hi arahpanah glad to hear it is all working.

It looks like you may not have entered the full command to run the app locally. I believe it should have the option serve at the end of the command.

python app/server.py serve

Hope this helps

mrfabulous1 :smiley: :smiley:


Based on what I know, to make this work, the FileList from the uploaded images is converted to FormData and then sent to this route as the request parameter.

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()              # multipart form data from the upload
    img_bytes = await (img_data['file'].read())  # raw bytes of the uploaded file
    img = open_image(BytesIO(img_bytes))         # fastai v1: build an Image from the bytes
    prediction = learn.predict(img)[0]           # predicted class label
    return JSONResponse({'result': str(prediction)})

But the problem is that FileList only exists for uploaded image files of the File type. A plain image URL doesn’t have a FileList array.

And from what I know, learn.predict() doesn’t take an image URL.

Say I don’t want to do the prediction with uploaded images, and the only input I have is an image URL, which doesn’t have a FileList array. What if I want to create a website where the user can paste an image URL and the image then gets classified? How do I do that? I’ve tried various methods to pass in the URL, but none works.
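For reference, one way to do this is to fetch the image on the server rather than from a form upload; a minimal sketch, reusing the fastai v1 helpers and imports from the snippet above (the /analyze-url route name and JSON body shape are made up for illustration):

@app.route('/analyze-url', methods=['POST'])
async def analyze_url(request):
    body = await request.json()  # expects e.g. {"url": "https://example.com/cat.jpg"}
    async with aiohttp.ClientSession() as session:
        async with session.get(body['url']) as response:
            img_bytes = await response.read()  # download the image server-side
    img = open_image(BytesIO(img_bytes))       # same prediction path as the upload route
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})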

Hi everyone, this is mine. I tried to modify it so it could work on a mobile browser
https://coffee-classifier.onrender.com

Well, you can visit the repository.


Hello, does anyone know how to perform live-stream detection in Render, where the user can grant access to their webcam? The input stream would be processed by the classifier.


Hello, I just deployed my first “painter classifier” on Render and wanted to share how I got it to work in case it helps someone (it took me several hours to figure out). I followed the instructions from https://course19.fast.ai/deployment_render.html#deploy, but copied my model file ‘export.pkl’ to Dropbox instead of Google Drive. With the Google Drive link, the deploy on Render always ended up failing. I would like to know why it doesn’t work for me with Google Drive, but for now I am happy to use Dropbox since it works smoothly.
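If it helps anyone else: the detail that usually matters with Dropbox is using a direct-download link rather than the preview page. A normal share link ends in ?dl=0 and serves an HTML page; appending raw=1 (or dl=1) serves the file bytes themselves (the share token below is hypothetical):

export_file_url = 'https://www.dropbox.com/s/abc123/export.pkl?raw=1'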

It is now my turn to celebrate my first classifier application on Render, so happy after so many hours of troubleshooting!

Here is the Painter classifier link https://painter-finder.onrender.com that classifies paintings by Van Gogh, Matisse, and Monet!


Hi all!

This is my first time reporting an error, so please feel free to ask me for any information pertaining to my error and I will get to you ASAP.

I am trying to deploy my model on Render.com but am running into some trouble.

Here are my failed logs from deploying to Render:

Here are my requirements.txt and server.py file:


I’ve read through this thread but didn’t find any solutions or tips for my particular error. Any suggestions, ideas, or learning experiences from @mrfabulous1, @anurag, or anyone else would be greatly appreciated!

Hi faceyacc hope all is well!
Unfortunately, errors on Render.com are slightly convoluted and often mask the real error:

Run the application standalone first on a local machine; this can be done with or without Docker.

Running the app locally without Docker helps avoid many errors down the line. Errors also show up in a form that is much easier to resolve, as you’re not seeing a message that has already passed through Docker and then the render.com console.
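For anyone unsure what running locally with and without Docker looks like in practice, roughly (assuming the fastai-v3 repo layout; the image name is arbitrary):

# Without Docker:
pip install -r requirements.txt
python app/server.py serve        # then open http://localhost:5000

# With Docker (closer to what Render actually runs):
docker build -t fastai-v3 .
docker run --rm -it -p 5000:5000 fastai-v3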

Cheers mrfabulous1 :smiley: :smiley:

Hi @mrfabulous1!
I am ecstatic that you replied to my post. You seem to help a lot of people on this thread!

After I forked @anurag’s repo, I did a git clone to run my model locally using VS Code (I am not sure if this is the problem).

I am getting a NameError for Path. So I did a pip install and added from fastai.imports import * to “work around” this, but that led me to a NameError for load_learner.

I am currently using Paperspace with fastai 2.1.5 (based on my results from running pip list) in a Jupyter notebook.

Here is my requirements.txt

Here is how my situation looks when I clone my repo into VS Code:

Any tips, tricks, or learning experience would be greatly appreciated.

Thank You

Hi faceyacc, hope you are having a jolly day!

I’m not sure the repository you used is still current.

Try the following.

from fastai.vision.all import * # replace line 5 with this line
remove line 11.

Often ‘undefined’ points to incorrect modules being loaded or correct modules not being loaded.

If possible, can you paste the code rather than images of code, as this helps people test your code more quickly. Better still, if your repo is public, posting the link would help.

Cheers mrfabulous1 :smiley: :smiley:

Hi again!

I’ve tried removing line 11 and replacing line 5 with from fastai.vision.all import *, but still no cigar. I believe you may be right about incorrect modules. I am using fastai==2.1.5, and it seems that I should install fastcore along with it. I did some debugging to see where the problem is coming from; it has to do with the fastai PyPI version (2.1.5) and the torch/torchvision versions (1.7.0 and 0.8.0 respectively).

I’ve read some of your other solutions in this thread and may try to deploy with Heroku and connect that to my Flutter mobile UI.

Here is my most recent requirements.txt:

aiofiles==0.5.0
aiohttp==3.6.2
asyncio==3.4.3
fastai==2.2.5
fastcore
ipywidgets
graphviz
numpy==1.19.0
torch==1.7.0
torchvision==0.8.0
pillow==8.0.1
python-multipart==0.0.5
starlette==0.13.6
uvicorn==0.11.7

Here is my server.py file in the app directory:

import aiohttp
import asyncio
import sys  # used in the __main__ check below; explicit rather than relying on fastai's star imports
import uvicorn
from fastai import *
from fastai.vision.all import *
from io import BytesIO
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import HTMLResponse, JSONResponse
from starlette.staticfiles import StaticFiles




export_file_url = 'https://www.dropbox.com/s/yqmtjsqednhljqt/export.pkl?raw=1'
export_file_name = 'export.pkl'

classes = ['black', 'grizzly', 'teddys']
path = Path(__file__).parent

app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])
app.mount('/static', StaticFiles(directory='app/static'))


async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)


async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise


loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()


@app.route('/')
async def homepage(request):
    html_file = path / 'view' / 'index.html'
    return HTMLResponse(html_file.open().read())


@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})


if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=5000, log_level="info")

As always, any tips, tricks, or learning experience would be greatly appreciated.

Thank you
Ty

Hi all!

After hours of hard work trying to deploy my model using Render’s service, I finally was able to get my model up and running!

I built my model using the latest version of fastai (2.2.5).

Here are the problems I ran into and how I fixed them:

  1. In the original repo (https://github.com/render-examples/fastai-v3), line 35, learn = load_learner(path, export_file_name), throws an error. This is due to how the export and load_learner methods work in the new version of fastai. To fix this, I replaced line 35 with the following:

learn = load_learner(path/export_file_name)

  2. I ran into an error using load_image when deploying to Render:

This is because in the newest version of fastai, load_image returns a PIL.Image.Image object instead of a PILImage object. I suggest that if you are using the newest version of fastai and trying to deploy on Render, you replace load_image() with PILImage.create().
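Putting both fixes together, the relevant parts of server.py under fastai 2.x would look roughly like this (same structure as the repo, other details elided):

learn = load_learner(path / export_file_name)  # fastai v2: single path argument

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = PILImage.create(BytesIO(img_bytes))   # v2 replacement for open_image / load_image
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})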

This final tip really comes from @mrfabulous1, who strongly suggests running everything locally before trying anything out on Render. This helped me a million. Having Anaconda installed also helped me set up my environment, so I didn’t have to worry about environments clashing with each other.

sources: Deployment Platform: Render ✅


Hi faceyacc Great to see you got your model working.
:smiley: :smiley: