Deployment Platform: Render ✅

PayPal would be great. :slightly_smiling_face:


Yes, please add PayPal support ASAP! Thanks!

Hi JamesT, I successfully deployed my fastai code by following the instructions from “Deploying on Google App Engine”, but at the last step I get a “502 Bad Gateway” error from nginx. Do you know why? Thanks!


Hi, @mrfabulous1! Thanks again for your help.

Here’s my requirements.txt:

numpy==1.16.4
torchvision==0.4.0a0+6b959ee
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl
fastai==1.0.57
starlette
uvicorn==0.3.32
python-multipart
aiofiles
aiohttp

I’ve matched the values to those in my !pip list as above, but my list doesn’t contain entries for starlette, aiofiles, or aiohttp. Perhaps I need to install these?

I only trained this model on Thursday, so the libraries are likely the same.

Hi go_go_gadget, hope you had a good weekend.

If you started with the current Teddy Bear repository on GitHub (https://github.com/render-examples/fastai-v3/blob/master/requirements.txt), the latest requirements.txt is as follows.

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.57
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.16.4
starlette==0.12.0
uvicorn==0.7.1
python-multipart==0.0.5

I have amended it for your GCP versions of fastai and numpy.

I suggest you use the requirements.txt above, as yours doesn’t have versions for starlette, python-multipart, aiofiles, or aiohttp, and the asyncio entry is missing completely. You should not have to install any libraries yourself; Docker does this for you. Check against your GCP settings: if any libraries there have higher versions, you only need to change those numbers.
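For anyone following along: one way to produce a requirements.txt that matches your training environment is to read the installed versions programmatically rather than copying them from !pip list by hand. A minimal sketch (the package list here is illustrative, and `pinned_line` is my own helper name; requires Python 3.8+ for importlib.metadata):

```python
from importlib import metadata

def pinned_line(pkg):
    """Return a 'pkg==version' pin for an installed package, or None if absent."""
    try:
        return f"{pkg}=={metadata.version(pkg)}"
    except metadata.PackageNotFoundError:
        return None

# packages the Render starter app's requirements.txt pins
for pkg in ("numpy", "fastai", "starlette", "uvicorn", "python-multipart"):
    line = pinned_line(pkg)
    print(line if line is not None else f"# {pkg} not installed here")
```

Running this inside the training environment and pasting its output avoids the version mismatches discussed below.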

Once again, please send a copy of any error you get and the requirements.txt you are using on render.com when you reply. With so many inconsistencies in your requirements.txt at the moment, it’s very difficult to resolve the issue; we must get this right.

Hope this helps.

mrfabulous1 :smiley::smiley:


Thank you, @mrfabulous1! I sincerely appreciate all of your help.

The app is rendering now! It’s still displaying the text for the teddy bear model, though, even while it’s correctly running my classifier (Picasso vs. Monet).

Here’s a screenshot:

I think I need to edit the code displaying the text, but I can’t tell from the server.py file which part to edit (I’m sorry, I’m very inexperienced!).

Here’s the server.py file:

from starlette.applications import Starlette
from starlette.responses import HTMLResponse, JSONResponse
from starlette.staticfiles import StaticFiles
from starlette.middleware.cors import CORSMiddleware
import uvicorn, aiohttp, asyncio
from io import BytesIO

from fastai import *
from fastai.vision import *

export_file_url = 'https://www.googleapis.com/drive/v3/files/1dDW2hBlmM7rqEjovUepNOHg0Z23s6WIg?alt=media&key=AIzaSyCreuiBOuN4ae5cvzlh8cIB9iY8tUeSMik'
export_file_name = 'export.pkl'

classes = ['picasso', 'monet']
path = Path(__file__).parent

app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])
app.mount('/static', StaticFiles(directory='app/static'))

async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f: f.write(data)

async def setup_learner():
    await download_file(export_file_url, path/export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise

loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()

@app.route('/')
def index(request):
    html = path/'view'/'index.html'
    return HTMLResponse(html.open().read())

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    data = await request.form()
    img_bytes = await (data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})

if __name__ == '__main__':
    if 'serve' in sys.argv: uvicorn.run(app=app, host='0.0.0.0', port=5042)

Link to web app

Thanks again for your time!
g0g0gadget


Hi go_go_gadget, hope you are having a jolly day!

I am glad to hear that your model is now working!

To change the text in the html page, edit the index.html page in the view directory.


Have a wonderful evening.

mrfabulous1 :smiley::smiley:


Yay! It’s working! Thank you again, so very much!

Sincerely,
g0g0gadget


Hi go_go_gadget
You’re Welcome!
mrfabulous1 :smiley::smiley::smiley:


Hi, I copied the requirements from my Google Colab environment, but I got the following trace:

File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
File "app/server.py", line 35, in setup_learner
    learn = load_learner(path, export_file_name)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 628, in load_learner
    res.callbacks = [load_callback(c,s, res) for c,s in cb_state.items()]
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 628, in <listcomp>
    res.callbacks = [load_callback(c,s, res) for c,s in cb_state.items()]
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 612, in load_callback
    res = class_func(learn, **init_kwargs) if issubclass(class_func, LearnerCallback) else class_func(**init_kwargs)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 461, in __init__
    self.opt = self.learn.opt
AttributeError: 'Learner' object has no attribute 'opt'

Any clue? It seems to be related to the fastai version.

Hi!

I’m having the same problem as many other people here, where the classifier gets stuck in the “analysing” phase.

I have at least managed to change the original bear text to my own, so at least something is right :wink:

I have updated the requirements.txt in my forked repository with the versions shown when I run !pip list in my Jupyter notebook on Paperspace. There I get the following:

Package Version


asn1crypto 0.24.0
attrs 18.2.0
backcall 0.1.0
beautifulsoup4 4.7.1
bleach 3.1.0
Bottleneck 1.2.1
certifi 2018.11.29
cffi 1.11.5
chardet 3.0.4
cryptography 2.3.1
cycler 0.10.0
cymem 2.0.2
cytoolz 0.9.0.1
dataclasses 0.6
decorator 4.3.0
dill 0.2.8.2
entrypoints 0.3
fastai 1.0.55
fastprogress 0.1.21
idna 2.8
ipykernel 5.1.0
ipython 7.2.0
ipython-genutils 0.2.0
ipywidgets 7.4.2
jedi 0.13.2
Jinja2 2.10
jsonschema 3.0.0a3
jupyter 1.0.0
jupyter-client 5.2.4
jupyter-console 6.0.0
jupyter-core 4.4.0
kiwisolver 1.0.1
MarkupSafe 1.1.0
matplotlib 3.0.2
mistune 0.8.4
mkl-fft 1.0.10
mkl-random 1.0.2
msgpack 0.5.6
msgpack-numpy 0.4.3.2
murmurhash 1.0.0
nb-conda 2.2.1
nb-conda-kernels 2.2.0
nbconvert 5.3.1
nbformat 4.4.0
notebook 5.7.4
numexpr 2.6.9
numpy 1.15.4
nvidia-ml-py3 7.352.0
olefile 0.46
packaging 19.0
pandas 0.23.4
pandocfilters 1.4.2
parso 0.3.1
pexpect 4.6.0
pickleshare 0.7.5
Pillow 5.4.1
pip 18.1
plac 0.9.6
preshed 2.0.1
prometheus-client 0.5.0
prompt-toolkit 2.0.7
ptyprocess 0.6.0
pycparser 2.19
Pygments 2.3.1
pyOpenSSL 18.0.0
pyparsing 2.3.1
pyrsistent 0.14.9
PySocks 1.6.8
python-dateutil 2.7.5
pytz 2018.9
PyYAML 3.13
pyzmq 17.1.2
qtconsole 4.4.3
regex 2018.1.10
requests 2.21.0
scipy 1.2.0
Send2Trash 1.5.0
setuptools 40.6.3
six 1.12.0
soupsieve 1.7.1
spacy 2.0.18
terminado 0.8.1
testpath 0.4.2
thinc 6.12.1
toolz 0.9.0
torch 1.0.0
torchvision 0.2.1
tornado 5.1.1
tqdm 4.29.1
traitlets 4.3.2
typing 3.6.4
ujson 1.35
urllib3 1.24.1
wcwidth 0.1.7
webencodings 0.5.1
wheel 0.32.3
widgetsnbextension 3.4.2
wrapt 1.10.11

My requirements.txt:
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.55
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.15.4
starlette==0.12.0
uvicorn==0.7.1
python-multipart==0.0.5

When I upload an image in the classifier online I get the following text in the “log” tab in Render.

Sep 22 04:38:26 PM INFO: ('10.104.55.126', 38382) - "POST /analyze HTTP/1.1" 500
Sep 22 04:38:26 PM ERROR: Exception in ASGI application
Sep 22 04:38:26 PM Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 368, in run_asgi
    result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.7/site-packages/starlette/applications.py", line 133, in __call__
    await self.error_middleware(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 122, in __call__
    raise exc from None
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 100, in __call__
    await self.app(scope, receive, _send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/cors.py", line 140, in simple_response
    await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 73, in __call__
    raise exc from None
File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 62, in __call__
    await self.app(scope, receive, sender)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 585, in __call__
    await route(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 207, in __call__
    await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 40, in app
    response = await func(request)
File "app/server.py", line 63, in analyze
    prediction = learn.predict(img)[0]
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 366, in predict
    res = self.pred_batch(batch=batch, with_dropout=with_dropout)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 345, in pred_batch
    if not with_dropout: preds = loss_batch(self.model.eval(), xb, yb, cb_handler=cb_handler)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 26, in loss_batch
    out = model(*xb)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 331, in forward
    if self.padding_mode == 'circular':
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'Conv2d' object has no attribute 'padding_mode'

I would very much appreciate it if someone could help me out. Thanks in advance :slight_smile:
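For reference, the `padding_mode` error above typically appears when a model trained under torch 1.0.x (whose Conv2d had no `padding_mode` attribute) is served under torch 1.1, which reads that attribute in `forward`. Matching the torch versions, as suggested later in the thread, is the clean fix; a stopgap some people use is to backfill the missing attribute after loading. This is my own generic sketch of that idea, not code from this thread — with real torch you would iterate over `learn.model.modules()` and use `'zeros'`, torch's default padding mode:

```python
def backfill_attr(modules, name, default):
    """Give modules serialized by an older library version an attribute the
    newer version expects. Returns how many modules were patched."""
    patched = 0
    for m in modules:
        if not hasattr(m, name):
            setattr(m, name, default)
            patched += 1
    return patched

class OldConv2d:
    """Stand-in for a Conv2d deserialized from a torch 1.0.x checkpoint."""
    pass

mods = [OldConv2d(), OldConv2d()]
n = backfill_attr(mods, "padding_mode", "zeros")
print(n, mods[0].padding_mode)  # → 2 zeros
```

Pinning identical torch versions in training and serving remains the safer option, since other attributes may also have changed between releases.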

I solved this error (invalid load key ‘<’) by using the Google API download link, because the export.pkl was bigger than 100 MB.
However, I ended up with the following error. Any suggestions would be appreciated. Thanks.

FYI: I am building the app for the CamVid image segmentation example.
Notebook is available here

My current reqs are

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.52
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.16.3
starlette==0.12.0
uvicorn==0.7.1
python-multipart==0.0.5

Hi JanM, I hope you had a wonderful weekend!

I don’t use Paperspace so I can’t test the theory, but your versions of pytorch and torchvision in Paperspace are not the latest. Testing with a requirements.txt using the following lines may help.

https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-win_amd64.whl
torchvision==0.2.1

cheers mrfabulous1 :smiley::smiley:

Mrfabulous1,

Big thank you for the help!

I tried your suggestion first, and Render wasn’t able to deploy the model. Then I saw that the “linux” part of the original link had been changed to “win” for some reason. I just changed the version in the original link from 1.1.0 to 1.0.0, and then it worked :slight_smile:

Original link:
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl

Your link:
https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-win_amd64.whl

My new, modified link:
https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-linux_x86_64.whl

my requirements.txt now looks like this:

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.55
numpy==1.15.4
starlette==0.12.0
uvicorn==0.7.1
python-multipart==0.0.5
https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-linux_x86_64.whl
torchvision==0.2.1
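The reason the win_amd64 wheel failed on Render is visible in the filename itself: wheel names follow the PEP 427 convention `name-version-python-abi-platform.whl`, and Render’s containers run Linux, so only a `linux_x86_64` wheel can install there. A small sketch for reading those tags (my own helper, assuming no optional build tag; requires Python 3.9+ for `removesuffix`):

```python
def parse_wheel_tags(url_or_filename):
    """Split a PEP 427 wheel filename into its tags (build tags not handled)."""
    stem = url_or_filename.rsplit("/", 1)[-1].removesuffix(".whl")
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = parse_wheel_tags(
    "https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-linux_x86_64.whl")
print(tags["platform"])  # → linux_x86_64; a win_amd64 wheel only installs on Windows
```

So `cp37-cp37m-linux_x86_64` reads as: CPython 3.7, the cp37m ABI, 64-bit Linux.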

My classifier for anyone interested:

https://judo-or-bjj.onrender.com/


Ok, I solved it. It is working after I replaced acc_camvid with just error_rate in metrics. I think the new function Jeremy created, acc_camvid, is the cause of the problem. I tried to use the following trick from Stack Overflow, but it didn’t work.

import pickle
import acc_camvid
from acc_camvid import Foo

if __name__ == '__main__':
    with open('export.pkl', 'rb') as f:
        users = pickle.load(f)
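The underlying issue is that pickle stores module-level functions by qualified name, not by value, so export.pkl can only load where a function named `acc_camvid` is importable under the same module path it had at export time; the usual fix is to define the metric in server.py before calling `load_learner`. A minimal demonstration of the by-name behaviour (this `acc_camvid` is a dummy stand-in, not the real metric):

```python
import pickle

def acc_camvid(input, target):
    """Dummy stand-in for the custom metric referenced by the pickle."""
    return 0.0

# the pickle payload records only the function's module and name...
payload = pickle.dumps(acc_camvid)
restored = pickle.loads(payload)
assert restored is acc_camvid  # ...and unpickling looks it up by that name
```

This is also why swapping the metric to the built-in `error_rate` worked: fastai can always resolve its own functions at load time.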

Hi Anurag. I am able to deploy my model, but it gives the following error when I upload an image to analyze. Any suggestions? Thanks.

Hi @anurag,

Thanks for making render.com available to FastAI students with credit. Like many folks on this thread, I am trying to deploy course1-v3-lesson2 based Docker project to Render.com i.e. from https://github.com/bguan/bguan-bears deploying to https://bguan-bears.onrender.com.

It seems uploading an image to be classified works, but when I provide a URL pointing to an image somewhere on the web, e.g. https://upload.wikimedia.org/wikipedia/commons/3/33/Jasper_Dwayne_Reilander-4.jpg, the web app crashes with an OOM.

At first I thought the memory was truly consumed by the Torch model, so I switched from a ResNet34-based model (~85 MB) to a ResNet18-based model (~46 MB). However, I still get the OOM from URL classification but not from image file upload. FYI, I don’t get this error when running the same Docker image locally on my Ubuntu laptop.

Not sure where to look for the root cause.

Looks like it could be URL-specific.

Pointing to a URL like https://n.nordstrommedia.com/id/sr3/8469a1d0-a660-49df-b4e5-96deb40cdbaf.jpeg works, i.e. this link

But the URL above still fails, i.e. this link

It could be that the working URL points to a small image file of ~1 MB while the failing URL points to a big file of ~16 MB.

Yep, looks like it was my own problem. I’ve added basic checks to limit file uploads and image downloads by URL to a max of 5 MB, and the problem went away.

I’m commenting in the hope that others may find this helpful when troubleshooting their own Render deployment.
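For anyone adding the same safeguard: below is a rough sketch of two checks like those described above, assuming a 5 MB cap (the function names are my own, not from the actual app). For URL downloads you could call the header check before reading the body; for uploads, the payload check after reading the bytes:

```python
MAX_BYTES = 5 * 1024 * 1024  # assumed 5 MB cap, matching the fix described above

def size_declared_ok(headers, limit=MAX_BYTES):
    """Cheap early rejection for URL downloads, via the Content-Length header.
    A missing header passes; the payload check below still catches oversizes."""
    declared = headers.get("Content-Length")
    return declared is None or int(declared) <= limit

def check_payload(data, limit=MAX_BYTES):
    """Guard for payloads already read into memory (e.g. file uploads)."""
    if len(data) > limit:
        raise ValueError(f"payload of {len(data)} bytes exceeds {limit} byte cap")
    return data

print(size_declared_ok({"Content-Length": "1048576"}))   # → True (1 MB fits)
print(size_declared_ok({"Content-Length": "16777216"}))  # → False (16 MB rejected)
```

Since servers can lie about or omit Content-Length, a production version would also cap the bytes actually read from the response stream.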


@gbubs, as the screenshot says, you’ll need to upgrade your plan to get more memory for your model. You’re likely using a larger model than the one from the bears app.
