Deployment Platform: Render ✅

Hi arturola hope you had a wonderful day today!

  1. Can you upload your server.py file please?
  2. Where is your .pkl file located?
  3. How long ago was your model file (.pkl) created?
  4. What is the size of your .pkl file?
  5. If your .pkl file is online, you should be able to access it by pasting the link in your browser. You can also use

wget "your .pkl file web address"
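
If you prefer to check from Python instead of wget, a quick sketch like this (the URL is just a placeholder, use your own direct-download link) will confirm the file downloads and show its size:

```python
import os
import urllib.request

# Placeholder URL -- replace with the direct-download link to your own .pkl file
url = "https://example.com/export.pkl"

local_path, _ = urllib.request.urlretrieve(url, "export.pkl")
print(os.path.getsize(local_path), "bytes")  # should roughly match the size of your exported model
```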

Cheers mrfabulous1 :smiley::smiley:

Hi mrfabulous1!

I finally got everything working!
The last error was just an authorization issue with my Google Drive account :man_facepalming:

Thank you very much for all your help! :smiley::+1:


Hi arturola Well done!

I was able to get through to the link generation portion of the deployment tutorial, but I can’t figure out the customize app section. Where are server.py and the app directory, and how do I edit them?

Hi learningML89ou, you may have to learn a little Git. :grinning:

In the guide it tells you to fork the repository; if you have done this you should have your own copy. If you open your copy of the repository you will find server.py inside the app directory.

You can make your changes there. Once you have committed the changes, you can then pull your repository into Render again and rerun your app. If you search this thread you will see a good selection of the frequent problems people have when deploying their first model.

You should also search this thread for “pip list”, as you will also need to amend your requirements.txt file.

Hope this helps mrfabulous1 :grinning::grinning:


I’m trying to follow the ‘Deploying To Render’ instructions from here: https://course.fast.ai/deployment_render.html

When I try to deploy I get a segmentation fault…

Dec 17 12:41:52 PM INFO[0140] COPY app app/
Dec 17 12:41:52 PM INFO[0140] RUN python app/server.py
Dec 17 12:41:56 PM Segmentation fault (core dumped)
Dec 17 12:41:56 PM error building image: error building stage: waiting for process to exit: exit status 139
Dec 17 12:41:57 PM error: exit status 1

As far as I can tell I’ve just followed what the instructions say…

Any idea why this might be? Let me know if more info is needed to understand the problem.


Hi J.J hope you are having a fantastic day!

If you have only followed the guide at https://course.fast.ai/deployment_render.html, you will need to do a few other things to make your model work.

Basically, search this thread for “!pip list”. Run this command on the platform you created your model on, record the library versions, update the requirements.txt in your app to match, then redeploy.
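
For example, you can also read the versions straight from the libraries in your training notebook (the numbers in the comments below are just the ones quoted later in this thread; yours may well differ):

```python
# Run this in the notebook where you trained and exported the model.
# These are the versions your requirements.txt on Render must match.
import fastai, torch, torchvision, numpy

print("fastai      ", fastai.__version__)       # e.g. 1.0.59
print("torch       ", torch.__version__)        # e.g. 1.1.0
print("torchvision ", torchvision.__version__)  # e.g. 0.3.0
print("numpy       ", numpy.__version__)        # e.g. 1.17.4
```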

I use Google Colab to build my models, which is a different environment from Render; because you are using Docker to deploy the model, this step must be done.

Once this has been done, you may have some other issues, but this is the most important first step.

cheers mrfabulous1 :smiley::smiley:


@mrfabulous1 thank you!

I’m having the same issue. I get the same Segmentation fault (core dumped) error when trying the GCP deploy tutorial as well.

I keep getting this error every time I upload a photo to the model!
Can someone please help?

File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 585, in __call__
    await route(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 207, in __call__
    await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 40, in app
    response = await func(request)
File "app/server.py", line 63, in analyze
    prediction = learn.predict(img)[0]
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 365, in predict
    res = self.pred_batch(batch=batch)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 345, in pred_batch
    preds = loss_batch(self.model.eval(), xb, yb, cb_handler=cb_handler)
File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 26, in loss_batch
    out = model(*xb)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 331, in forward
    if self.padding_mode == 'circular':
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'Conv2d' object has no attribute 'padding_mode'

Following mrfabulous1’s advice, I changed the requirements.txt file, but for me only changing the versions for fastai and numpy was needed. Everything else remained the same and I was able to deploy. Prior to that I was getting the same error as you. Try it and see if that helps.

Good luck.


Thanks anandkhanna, this is what worked for me on Render too. Just change the versions of fastai & numpy and leave the torch & torchvision .whl files unchanged.

[Section of the requirements.txt file that worked for me]
fastai==1.0.59
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.17.4
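
If you want to confirm which versions actually ended up inside the Docker image on Render, a temporary line near the top of server.py will print them in the deploy logs (just a debugging aid I use, not part of the template):

```python
# Temporary debugging aid: log the library versions inside the deployed container
# so they can be compared with the versions from the training notebook.
import fastai, torch, torchvision
print(f"fastai {fastai.__version__}, torch {torch.__version__}, torchvision {torchvision.__version__}")
```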

Hi anandkhanna Hope all is well!
Please bear in mind that some people are using fastai 0.7, which was released 2-3 years ago. You are right that you may have only had to change one or two entries, but remember the repository was last updated on May 26 2019. If you come back in 3 months, 6 months or a year and libraries like PyTorch have been changed by their creators while the repository hasn’t been updated, you may need to change every entry in the requirements.txt.

I have built 50-60 models since February 2019 and, over this time, I have had to change every entry in the requirements.txt to make some of the apps work, as it can be difficult to know which library affects another.

So other people may have to change more or fewer entries based on such things as:

  • which libraries have been deprecated
  • which libraries have not been upgraded
  • which system you built your model on and its library versions
  • whether an unpublished update has broken the dependencies of libraries that work together
  • how long ago you downloaded the repository
  • how long ago the repository was updated
  • what changes have happened in fastai
  • which version of fastai you are using

There are many other factors but the above are the most common.

Cheers hope this helps mrfabulous1 :smiley::smiley:


@Enignition figured out that to make it work for them they had to add
scipy==1.3.3
That made it work for me as well.

Thanks ZSW,

I’ve changed the fastai and numpy versions but I’m still getting the same error!

Hi Ayman,

Apologies, I am also just trying out different combinations and am not able to say what is wrong.

Just to make sure: have you searched the thread for “!pip list” as suggested by mrfabulous1, checked the versions and changed them to match your notebook? If you have not, you can try making those changes.

Hi ZSW,

I’m terribly sorry! This is my first model, so I guess there’s a steep learning curve ahead.
!pip list on the cloud notebook I’m using shows:
torch 1.0.0
torchvision 0.2.1
I can’t find the links to these two old versions of torch/torchvision to put in requirements.txt.
Any ideas?

Hi ayjabri hope you are having a beautiful day!

You can remove or comment out the URLs in the requirements.txt and replace them with the following lines.

torch==1.0.0
torchvision==0.2.1

This should work also!
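
For example, the relevant part of requirements.txt would then look something like this (the commented-out URLs are the ones from the requirements.txt quoted earlier in this thread):

```
# https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
# https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
torch==1.0.0
torchvision==0.2.1
```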

have a jolly day mrfabulous1 :smiley::smiley:

@mrfabulous1 My Render app just says “analyzing…” and is stuck. This is the link. This is server.py:

```python
import aiohttp
import asyncio
import uvicorn
from fastai import *
from fastai.vision import *
from io import BytesIO
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import HTMLResponse, JSONResponse
from starlette.staticfiles import StaticFiles

export_file_url = 'https://www.dropbox.com/s/vznmhfiulf4z1ic/export.pkl?dl=1'
export_file_name = 'export.pkl'

classes = ['Pop Art', 'Renaissance Art', 'Chinese Art', 'Impressionism', 'Cubism']
path = Path(__file__).parent

app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])
app.mount('/static', StaticFiles(directory='app/static'))


async def download_file(url, dest):
    if dest.exists(): return
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            data = await response.read()
            with open(dest, 'wb') as f:
                f.write(data)


async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = load_learner(path, export_file_name)
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise


loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()


@app.route('/')
async def homepage(request):
    html_file = path / 'view' / 'index.html'
    return HTMLResponse(html_file.open().read())


@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_data = await request.form()
    img_bytes = await (img_data['file'].read())
    img = open_image(BytesIO(img_bytes))
    prediction = learn.predict(img)[0]
    return JSONResponse({'result': str(prediction)})


if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=5000, log_level="info")
```

Please help! Thanks!

Hi daringtrifles hope all is well!
The first thing most people on this thread do is make sure that everything has been configured correctly.

If you didn’t build your model on render.com but built it on Google Colab, your local machine or another platform, then please do a search on this forum for “pip list”.

This is the first step for any model deployment: we have to make sure the library versions in the requirements.txt file on render.com match the ones you trained your model with. This is a requirement because render.com uses Docker for deployment.

You will see many posts on ‘pip list’; if you carry out those steps we will get closer to solving your problem.

Kind regards mrfabulous1 :smiley::smiley: