Deployment Platform: Render ✅

Following mrfabulous1's advice, I changed the requirements.txt file; only changing the versions for fastai and numpy was needed for me. Everything else remained the same and I was able to deploy. Prior to that I was getting the same error as you. Try it and see if that helps.

Good luck.

Thanks anandkhanna, this was what worked for me too on Render. Just change the versions of fastai and numpy and leave the torch and torchvision .whl files unchanged.

[Section of the requirements.txt file that worked for me]
fastai==1.0.59
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.17.4

Hi anandkhanna, hope all is well!
Please bear in mind that some people are using fastai 0.7, which was released 2-3 years ago. You are right that you may have only had to change one or two lines, but remember that the repository was last updated on May 26, 2019. If you come back in 3 months, 6 months, or a year and libraries like PyTorch have been changed by their creators while the repository hasn’t been updated, you may need to change every line in the requirements.txt.

I have built 50-60 models since February 2019, and over this time I have had to change every line in the requirements.txt to make some of the apps work, as it can be difficult to know which library affects another.

So other people may have to change more or fewer lines based on such things as:

  • which libraries have been deprecated
  • which libraries have not been upgraded
  • which system you built your model on and its library versions
  • whether an unpublished update has broken the dependencies of libraries that work together
  • how long ago you downloaded the repository
  • how long ago the repository was updated
  • what changes have happened in fastai
  • which version of fastai you are using

There are many other factors but the above are the most common.
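One way to get ahead of all of these factors, sketched below, is to print the installed versions of the key libraries in requirements.txt pin format from the environment you trained in (the package list here is just an example; adapt it to your project):

```python
# Sketch: print installed versions of key libraries in "name==version"
# form, ready to paste into requirements.txt. Run this in the same
# environment you trained the model in. Requires Python 3.8+.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["fastai", "torch", "torchvision", "numpy", "pip"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"# {pkg} is not installed in this environment")
```

This gives the same information as `pip list`, but already formatted as pins, which reduces copy-paste mistakes.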

Cheers, hope this helps! mrfabulous1 :smiley::smiley:

@Enignition figured out that to make it work for them they had to add
scipy==1.3.3
That made it work for me as well.

Thanks ZSW,

I’ve changed the fastai and numpy versions but am still getting the same error!

Hi Ayman,

Apologies, I am also just trying out different combinations and cannot say for sure what is wrong.

Just to make sure: have you searched the thread for “pip list” as suggested by mrfabulous1, to check the versions and change them according to your notebook? If you have not, you can try making those changes.

Hi ZSW,

I’m terribly sorry! This is my first model, so I guess it’s a steep learning curve ahead
!pip list on the cloud notebook I’m using shows:
torch 1.0.0
torchvision 0.2.1
I can’t find the links to these two old versions of torch/torchvision to change requirements.txt.
Any ideas?

Hi ayjabri, hope you are having a beautiful day!

You can remove or comment out the URLs in the requirements.txt and replace them with the following lines.

torch==1.0.0
torchvision==0.2.1

This should work also!

Have a jolly day! mrfabulous1 :smiley::smiley:

@mrfabulous1 My Render app just says “Analyzing…” and is stuck. This is the link. This is server.py:

```python
import aiohttp
import asyncio
import sys
import uvicorn
from fastai import *
from fastai.vision import *
from io import BytesIO
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import HTMLResponse, JSONResponse
from starlette.staticfiles import StaticFiles

export_file_url = 'https://www.dropbox.com/s/vznmhfiulf4z1ic/export.pkl?dl=1'
export_file_name = 'export.pkl'

classes = ['Pop Art', 'Renaissance Art', 'Chinese Art', 'Impressionism', 'Cubism']
path = Path(__file__).parent

app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])
app.mount('/static', StaticFiles(directory='app/static'))


async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)


async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise


loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()


@app.route('/')
async def homepage(request):
    html_file = path / 'view' / 'index.html'
    return HTMLResponse(html_file.open().read())


@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})


if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=5000, log_level="info")
```

Please help! Thanks!

Hi daringtrifles, hope all is well!
The first thing most people on this thread do is make sure that everything has been configured correctly.

If you didn’t build your model on render.com and instead built it on Google Colab, your local machine, or another platform, then please search this forum for ‘pip list’.

This is the first step for any model deployment: we have to make sure the library versions in the requirements.txt file on render.com match the ones you trained your model on. This is necessary because render.com uses Docker for deployment.
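As a rough illustration of that check (the helper name and example pins below are mine, not from the repository), a small script can compare the `name==version` pins in requirements.txt against what is installed in the training environment:

```python
# Rough sketch: compare "name==version" pins against the versions
# installed in the current environment. Wheel URLs, comments, and
# blank lines are skipped; only simple pins are checked.
from importlib.metadata import version, PackageNotFoundError

def check_pins(lines):
    """Yield (package, pinned, installed) for each name==version pin."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("#", "http")) or "==" not in line:
            continue
        name, _, pinned = line.partition("==")
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None  # pinned in requirements.txt but not here
        yield name, pinned, installed

# Example usage with inline lines; in practice, read requirements.txt:
for name, pinned, installed in check_pins(["fastai==1.0.60", "# comment"]):
    status = "OK" if pinned == installed else "MISMATCH"
    print(name, pinned, installed, status)
```

Any MISMATCH row is a candidate for the kind of version change being discussed in this thread.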

You will see many posts on ‘pip list’; if you carry out those steps we will get closer to solving your problem.

Kind regards mrfabulous1 :smiley::smiley:

@mrfabulous1 Thank you for your response!

Should I copy and paste all of this verbatim into requirements.txt?

asn1crypto 0.24.0
attrs 18.2.0
backcall 0.1.0
beautifulsoup4 4.7.1
bleach 3.1.0
Bottleneck 1.2.1
certifi 2018.11.29
cffi 1.11.5
chardet 3.0.4
cryptography 2.3.1
cycler 0.10.0
cymem 2.0.2
cytoolz 0.9.0.1
dataclasses 0.6
decorator 4.3.0
dill 0.2.8.2
entrypoints 0.3
fastai 1.0.60
fastprogress 0.2.1
idna 2.8
ipykernel 5.1.0
ipython 7.2.0
ipython-genutils 0.2.0
ipywidgets 7.4.2
jedi 0.13.2
Jinja2 2.10
jsonschema 3.0.0a3
jupyter 1.0.0
jupyter-client 5.2.4
jupyter-console 6.0.0
jupyter-core 4.4.0
kiwisolver 1.0.1
MarkupSafe 1.1.0
matplotlib 3.0.2
mistune 0.8.4
mkl-fft 1.0.10
mkl-random 1.0.2
msgpack 0.5.6
msgpack-numpy 0.4.3.2
murmurhash 1.0.0
nb-conda 2.2.1
nb-conda-kernels 2.2.0
nbconvert 5.3.1
nbformat 4.4.0
notebook 5.7.4
numexpr 2.6.9
numpy 1.15.4
nvidia-ml-py3 7.352.0
olefile 0.46
packaging 19.0
pandas 0.23.4
pandocfilters 1.4.2
parso 0.3.1
pexpect 4.6.0
pickleshare 0.7.5
Pillow 5.4.1
pip 18.1
plac 0.9.6
preshed 2.0.1
prometheus-client 0.5.0
prompt-toolkit 2.0.7
ptyprocess 0.6.0
pycparser 2.19
Pygments 2.3.1
pyOpenSSL 18.0.0
pyparsing 2.3.1
pyrsistent 0.14.9
PySocks 1.6.8
python-dateutil 2.7.5
pytz 2018.9
PyYAML 3.13
pyzmq 17.1.2
qtconsole 4.4.3
regex 2018.1.10
requests 2.21.0
scipy 1.2.0
Send2Trash 1.5.0
setuptools 40.6.3
six 1.12.0
soupsieve 1.7.1
spacy 2.0.18
terminado 0.8.1
testpath 0.4.2
thinc 6.12.1
toolz 0.9.0
torch 1.0.0
torchvision 0.2.1
tornado 5.1.1
tqdm 4.29.1
traitlets 4.3.2
typing 3.6.4
ujson 1.35
urllib3 1.24.1
wcwidth 0.1.7
webencodings 0.5.1
wheel 0.32.3
widgetsnbextension 3.4.2
wrapt 1.10.11

Also, just so that you can get a better understanding of my code, here is my requirements.txt file:

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.60
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.15.4
starlette==0.12.0
uvicorn==0.7.1
python-multipart==0.0.5

Hi daringtrifles

Should I copy and paste all of this verbatim into requirements.txt?

No!

I presume the long list is from the platform where you trained your model.
Your training platform uses these versions:
torch 1.0.0
torchvision 0.2.1

Your app uses these versions, which are newer than your training platform’s:

https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl

You may try commenting out those two lines and changing them to:
torch==1.0.0
torchvision==0.2.1

Do you get any console messages when you deploy the app?

Cheers mrfabulous1 :smiley::smiley:
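Put together, the suggested change gives a requirements.txt fragment along these lines (a sketch based on the pip list output above; adapt the versions to your own training environment):

fastai==1.0.60
# https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
# https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
torch==1.0.0
torchvision==0.2.1
numpy==1.15.4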

@mrfabulous1 These are the errors I get:

Failed to load resource: the server responded with a status of 500 ()
(index):1 Uncaught SyntaxError: Unexpected token I in JSON at position 0
    at JSON.parse (<anonymous>)
    at XMLHttpRequest.xhr.onload (client.js:31)
xhr.onload @ client.js:31
load (async)
analyze @ client.js:29
onclick @ (index):32
/analyze:1 Failed to load resource: the server responded with a status of 500 ()
(index):1 Uncaught SyntaxError: Unexpected token I in JSON at position 0
    at JSON.parse (<anonymous>)
    at XMLHttpRequest.xhr.onload (client.js:31)

Update: It works! Thank you sooo much for your help! People like you make the course so much more fun to do, and I (and many other students) really appreciate it.

Hi daringtrifles, well done!
mrfabulous1 :smiley::smiley:

Hi, can someone help me fix this bug, please? I’ve been stuck on it for hours. I am at the point of deploying, but it will not deploy because of this bug. Thank you!!

Jan 7 03:26:43 PM ==> Cloning from https://github.com/zhu502846/fastai-v3
Jan 7 03:26:44 PM ==> Checking out commit 9c8e1972a79bc687e9ed581e52cde191836a9eca in branch master
Jan 7 03:26:47 PM INFO[0000] Downloading base image python:3.7-slim-stretch
Jan 7 03:26:48 PM INFO[0001] Downloading base image python:3.7-slim-stretch
Jan 7 03:26:49 PM INFO[0002] Downloading base image python:3.7-slim-stretch
Jan 7 03:26:49 PM INFO[0002] Downloading base image python:3.7-slim-stretch
Jan 7 03:26:57 PM INFO[0010] RUN apt-get update && apt-get install -y git python3-dev gcc && rm -rf /var/lib/apt/lists/*
Jan 7 03:27:13 PM INFO[0026] COPY requirements.txt .
Jan 7 03:27:14 PM INFO[0026] extractedFiles: [/requirements.txt /]
Jan 7 03:27:14 PM INFO[0026] RUN pip install --upgrade -r requirements.txt
Jan 7 03:28:01 PM INFO[0074] COPY app app/
Jan 7 03:28:02 PM INFO[0074] extractedFiles: [/app/models /app/static/client.js /app/static/style.css /app/view/index.html / /app/models/models.md /app/server.py /app/static /app/view /app]
Jan 7 03:28:02 PM INFO[0074] RUN python app/server.py
Jan 7 03:28:11 PM Traceback (most recent call last):
  File "app/server.py", line 5, in <module>
    from fastai.vision import *
  File "/usr/local/lib/python3.7/site-packages/fastai/vision/__init__.py", line 3, in <module>
    from .learner import *
  File "/usr/local/lib/python3.7/site-packages/fastai/vision/learner.py", line 6, in <module>
    from . import models
  File "/usr/local/lib/python3.7/site-packages/fastai/vision/models/__init__.py", line 2, in <module>
    from torchvision.models import ResNet,resnet18,resnet34,resnet50,resnet101,resnet152
  File "/usr/local/lib/python3.7/site-packages/torchvision/__init__.py", line 2, in <module>
    from torchvision import datasets
  File "/usr/local/lib/python3.7/site-packages/torchvision/datasets/__init__.py", line 9, in <module>
    from .fakedata import FakeData
  File "/usr/local/lib/python3.7/site-packages/torchvision/datasets/fakedata.py", line 3, in <module>
    from .. import transforms
  File "/usr/local/lib/python3.7/site-packages/torchvision/transforms/__init__.py", line 1, in <module>
    from .transforms import *
  File "/usr/local/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 17, in <module>
    from . import functional as F
  File "/usr/local/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 5, in <module>
    from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' (/usr/local/lib/python3.7/site-packages/PIL/__init__.py)
Jan 7 03:28:11 PM error building image: error building stage: failed to execute command: waiting for process to exit: exit status 1
Jan 7 03:28:11 PM error: exit status 1

I suppose it may have to do with !pip list, as mrfabulous1 had mentioned earlier in the thread. However, I don’t know how to use it. I trained my model in Jupyter on Paperspace Gradient, but I cannot run !pip list in the terminal there (!pip: not found). I tried it on my PC (Windows) in Git Bash in my fastai-v3 repository but got this:

Collecting fastai
Using cached https://files.pythonhosted.org/packages/f5/e4/a7025bf28f303dbda0f862c09a7f957476fa92c9271643b4061a81bb595f/fastai-1.0.60-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement list (from versions: none)
ERROR: No matching distribution found for list

very confused >_<

OK, so I realized that in the Jupyter terminal it’s just pip list. So I changed fastai==1.0.60 and numpy==1.15.4. There were many packages in Jupyter that were not in requirements.txt, and some that were in requirements.txt but not in Jupyter; I did not change any of those.

Now, rerunning the deployment, more stuff runs, but I get the same error as before:

ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' (/usr/local/lib/python3.7/site-packages/PIL/__init__.py)

@mrfabulous1 save me pls

I had the same problem. Try adding pillow==5.4.1 to your requirements.txt.
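For context on why this works: Pillow 7.0 (released January 2020) removed the PILLOW_VERSION constant, while older torchvision releases (0.2.x/0.3.x) still import it, so an unpinned Pillow pulls in a version that breaks the import. Pinning an older release in requirements.txt avoids the ImportError:

pillow==5.4.1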

Perfect!!! Thank you so much. I added this and then it deployed, but it got stuck at “Analyzing…”. After reading the thread above, I changed the versions of torch and torchvision and it worked!!!

Thank you!!
