Deployment Platform: Render ✅

Hi everyone, this is mine. I tried to modify it so it could work in a mobile browser:
https://coffee-classifier.onrender.com

Well, you can visit the repository.

1 Like

Hello, does anyone know how to perform live-stream detection on Render, where the user grants access to their webcam and the input stream is processed by the classifier?

1 Like

Hello, I just deployed my first “painter classifier” on Render and wanted to share how I got it to work in case it helps someone (it took me several hours to figure out). I followed the instructions from https://course19.fast.ai/deployment_render.html#deploy, but copied my model file ‘export.pkl’ to Dropbox instead of Google Drive. With the link provided for Google Drive, the deployment on Render always ended up failing. I would like to know why it does not work for me with Google Drive, but for now I am happy to use Dropbox since it works smoothly.
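In case it helps anyone debugging the same issue, here is a minimal sketch (the URL is a placeholder, not my real link) for checking what a share link actually returns. My best guess is that Google Drive answers large-file requests with an HTML confirmation page rather than the raw file, while a Dropbox link with ?raw=1 (or ?dl=1) serves the bytes directly:

import urllib.request

# Placeholder link: substitute your own share URL
url = 'https://www.dropbox.com/s/<your-id>/export.pkl?raw=1'

with urllib.request.urlopen(url) as resp:
    head = resp.read(2)

# If the first bytes look like b'<!' or b'<h', the server sent back an
# HTML page (e.g. a confirmation screen) instead of the model file
print(head)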

It is now my turn to celebrate my first classifier application on Render. So happy after so many hours of troubleshooting!

Here is the Painter classifier link, https://painter-finder.onrender.com, which classifies paintings by Van Gogh, Matisse, and Monet!

1 Like

Hi all!

This is my first time reporting an error, so please feel free to ask me for any information pertaining to it and I will get back to you ASAP.

I am trying to deploy my model on Render.com but am running into some trouble.

Here are my failed logs from deploying to Render:

Here are my requirements.txt and server.py file:


I’ve read through this thread but didn’t find any solutions or tips for my particular error. Any suggestions, ideas, or learning experiences from @mrfabulous1, @anurag or anyone else will be greatly appreciated!

Hi faceyacc, hope all is well!
Unfortunately, errors on Render.com are slightly convoluted and often mask the real error:

Run the application standalone on a local machine first; this can be done with or without Docker.

Running the app locally without Docker helps you avoid many errors down the line. The errors that do show up are much easier to resolve, because you are not looking at a message that has been reported after the app has passed through Docker and then the render.com console.
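For example, a minimal local smoke test along these lines (assuming a fastai v2 environment, with the paths adjusted to your own app) surfaces import and version errors before Docker or Render ever get involved:

from fastai.vision.all import *

# Load the exported model exactly as the server would
learn = load_learner('app/export.pkl')  # adjust to your own path

# One end-to-end prediction confirms the model and libraries agree
print(learn.predict('test.jpg'))  # any local image file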

Cheers mrfabulous1 :smiley: :smiley:

Hi @mrfabulous1!
I am ecstatic that you replied to my post. You seem to be able to help a lot of people on this thread!

After I forked @anurag’s repo I did a git clone to run my model locally using VS Code (I am not sure if this is the problem).

I am getting a NameError for Path. So I did a pip install and added from fastai.imports import * to “work around” this, but that led me to a NameError for load_learner.

I am currently using Paperspace with fastai 2.1.5 (based on my results from running !pip list) in a Jupyter notebook.

Here is my requirements.txt

Here is how my situation looks when I clone my repo into VS Code:

Any tips, tricks, or learning experience would be greatly appreciated.

Thank You

Hi faceyacc, hope you are having a jolly day!

I’m not sure the repository you have used is still current.

Try the following.

from fastai.vision.all import * # replace line 5 with this line
Remove line 11.

Often ‘undefined’ points to incorrect modules being loaded or correct modules not being loaded.
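A quick sanity check in a Python session (assuming a fastai v2 install) is to confirm that the names the server needs actually exist after the import:

from fastai.vision.all import *

# Both of these raise NameError with the old v1-style imports,
# but are defined after the single v2 star-import above
print(Path)          # pathlib.Path, re-exported by fastai
print(load_learner)  # the v2 loader that server.py calls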

If it’s possible, please paste the code rather than images of code, as this helps people test your code quicker. Better still, if your repo is public, posting the link would help.

Cheers mrfabulous1 :smiley: :smiley:

Hi again!

I’ve tried removing line 11 and replacing line 5 with from fastai.vision.all import *, but still no cigar. I believe you may be right about incorrect modules. I am using fastai==2.1.5, and it seems that I should install fastcore along with it. I did some debugging to see where the problem is coming from; it has to do with the fastai PyPI version (2.1.5) and the torch/torchvision versions (1.7.0 and 0.8.0 respectively).

I’ve read some of your other solutions on this thread and may try to deploy with Heroku and attempt to connect that to my Flutter mobile UI.

Here is my most recent requirements.txt:

aiofiles==0.5.0
aiohttp==3.6.2
asyncio==3.4.3
fastai==2.2.5
fastcore
ipywidgets
graphviz
numpy==1.19.0
torch==1.7.0
torchvision==0.8.0
pillow==8.0.1
python-multipart==0.0.5
starlette==0.13.6
uvicorn==0.11.7

Here is my server.py file in the app directory:

import aiohttp
import asyncio
import sys  # needed for the sys.argv check at the bottom of the file
import uvicorn
from fastai import *
from fastai.vision.all import *
from io import BytesIO
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import HTMLResponse, JSONResponse
from starlette.staticfiles import StaticFiles


export_file_url = 'https://www.dropbox.com/s/yqmtjsqednhljqt/export.pkl?raw=1'
export_file_name = 'export.pkl'

classes = ['black', 'grizzly', 'teddys']
path = Path(__file__).parent

app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])
app.mount('/static', StaticFiles(directory='app/static'))


async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)


async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise


loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()


@app.route('/')
async def homepage(request):
    html_file = path / 'view' / 'index.html'
    return HTMLResponse(html_file.open().read())


@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})


if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=5000, log_level="info")

As always, any tips, tricks, or learning experience would be greatly appreciated.

Thank you
Ty

Hi all!

After hours of hard work trying to deploy my model using Render’s service, I finally was able to get my model up and running!

I built my model using the latest version of fastai (2.2.5).

Here are the problems I ran into and how I fixed them:

  1. In the original repo (https://github.com/render-examples/fastai-v3), line 35, learn = load_learner(path, export_file_name), throws an error. This is due to how the export and load_learner methods work in the new version of fastai. To fix this, I replaced line 35 with the following:

learn = load_learner(path/export_file_name)

  2. I ran into an error using load_image when deploying to Render:

This is because in the newest version of fastai, load_image returns a PIL.Image.Image object instead of a PILImage object. If you are using the newest version of fastai and deploying to Render, I suggest replacing load_image() with PILImage.create().
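Here is a minimal, self-contained sketch of that decode step (reading a local file as a stand-in for the uploaded bytes):

from io import BytesIO
from fastai.vision.all import PILImage

# Stand-in for the bytes read from the uploaded form file in analyze()
with open('test.jpg', 'rb') as f:
    img_bytes = f.read()

# fastai v2 replacement for the old open_image()/load_image() call;
# yields the PILImage type that Learner.predict expects
img = PILImage.create(BytesIO(img_bytes))
# learn.predict(img)[0] then gives the predicted label, as in server.py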

This final tip really comes from @mrfabulous1, who strongly suggests that you run everything locally before trying anything out on Render. This helped me a million. Having Anaconda installed also helped me set up my environment, so I don’t have to worry about my environments clashing with each other.

sources: Deployment Platform: Render ✅

1 Like

Hi faceyacc Great to see you got your model working.
:smiley: :smiley:

How to get Gradient working with fastai 2.0.11, torch 1.7.1, torchvision 0.8.2 (2.4.2021)

I ran into hours of debugging due to errors that arose from packages changing since the last update of the Render example. Here are the changes that allowed me to take a model trained like the bear example in lecture 2 and bring it into production on Render. For full code, see https://github.com/jocalzaretta/kombucha-mold-detection

Requirements.txt edits
There is a new, required way to pip install torch: you have to point pip at the PyTorch wheel index first (the -f find-links lines below) so the +cpu builds can be found.
Also, I needed to downgrade to torch 1.6 and torchvision 0.7 due to an error when analyzing an image on the final site.

-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
-f https://download.pytorch.org/whl/torch_stable.html
torchvision==0.7.0+cpu
fastbook==0.0.12
fastai==2.0.11

Server.py edits

Imports
from fastai import *
from fastai.vision import *
# I needed to add these imports in order to avoid the error: Path not defined
import fastbook
from fastbook import *
from fastai.vision.widgets import *

Load_learner
Change the load_learner line in the setup_learner function:
From: learn = load_learner(path , export_file_name)
To: learn = load_learner(path / export_file_name)

Open_image
Change the open_image line in the analyze function:
From: img = open_image(BytesIO(img_bytes))
To: img = PILImage.create(BytesIO(img_bytes))

Working Site: https://kombucha-mold-detection.onrender.com/ (many edits to come with design and output)

3 Likes

@anurag @jetcalz07

Hi! I wrote my model in Google Colab and have a pkl of my fine-tuned model. I have the code for the deployment on GitHub, but am having a lot of trouble deploying with Render. Every time I debug something, another thing pops up. Does anyone have an updated requirements.txt?

Current bug:
#8 27.59 ERROR: Could not find a version that satisfies the requirement uvicorn==-0.11.7
Apr 19 11:34:19 PM #8 27.59 ERROR: No matching distribution found for uvicorn==-0.11.7
Apr 19 11:34:19 PM #8 ERROR: executor failed running [/bin/sh -c pip install --upgrade -r requirements.txt]: buildkit-runc did not terminate successfully

This is how my requirements.txt currently looks:
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
-f https://download.pytorch.org/whl/torch_stable.html
torchvision==0.7.0+cpu
fastbook==0.0.12
fastai==2.0.11
numpy==1.16.4
starlette==0.12.0
python-multipart==0.0.5
uvicorn==-0.11.7
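(Note for anyone hitting the same build failure: the error above quotes the pin uvicorn==-0.11.7, which has a stray minus sign after the ==. Presumably the intended line is:)

uvicorn==0.11.7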

Hi lillyolson, hope all is well and you are having a wonderful day. If you see my previous post below, I recommend that you test it on your desktop first, before render.com.

There isn’t really a latest requirements.txt file as every library is continually changing.

The most reliable way is to make sure your requirements.txt contains the exact same library versions as the Colab versions you trained the model on. That just leaves the few libraries that Colab doesn’t use; in those cases, the latest version normally works.

I usually record the versions at the same time I create the model, as libraries like PyTorch can sometimes change daily.
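For example, a quick cell like this in the training notebook (a minimal sketch) captures the exact pins to copy into requirements.txt:

# Run in the Colab/training notebook right after training, then
# copy the output lines straight into requirements.txt
import fastai, fastcore, torch, torchvision
print(f"fastai=={fastai.__version__}")
print(f"fastcore=={fastcore.__version__}")
print(f"torch=={torch.__version__}")
print(f"torchvision=={torchvision.__version__}")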
Hope this helps

Cheers mrfabulous1 :grinning: :grinning:

When I run server.py locally on my laptop, it seems like I get a memory leak. After stopping, restarting, and uploading the local image file several times, I have no memory left at all and have to reboot the machine. Does anyone have the same problem? Thanks

By the way, I’m not very familiar with Python asyncio. Why do we need to initialize learn via learn = loop.run_until_complete(asyncio.gather(*tasks))[0] rather than simply running load_learner? Do you have some good resources where I can learn this stuff? Thanks

I found the reason myself: loop.run_until_complete is used because setup_learner is an async function. Simply calling an async function just returns a coroutine without actually executing the function body. If we save the model locally and remove await download_file, then we can drop the async keyword and just use learn = setup_learner().
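A small self-contained example (independent of fastai) shows the difference:

import asyncio

async def setup_learner():
    # stand-in for the real function, which awaits download_file
    return 'learner'

coro = setup_learner()   # returns a coroutine object; nothing has run yet
print(type(coro))        # <class 'coroutine'>

loop = asyncio.get_event_loop()
learn = loop.run_until_complete(coro)  # this actually executes the body
loop.close()
print(learn)             # 'learner'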