Deployment Platform: Render ✅

Thanks @mrfabulous1. I searched but couldn’t find it. I will try to change the teddy bear example.

On the platform you trained your model on, run “pip list”. I use Google Colab, so I run “!pip list”.

Once you have your list of installed versions, you can change the version numbers used in the requirements.txt in your app repository and redeploy it.

Hi piaoya hope all is well!
The above two quotes are from my previous post. If you read this forum you will see at least 40 people have used this strategy to get their app working. If you follow a few docker.com tutorials you will understand how it works.

On your training platform, pip list or pip freeze may show something like this:

fastai==1.0.52
torch==1.0.0
torchvision==0.3.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5

Just replace these lines in your app/requirements.txt:

fastai==1.0.52
torch==1.0.0
torchvision==0.3.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5

Then redeploy it.

Once you have done this, you may or may not get another error which we can then fix.
If you don’t get the above right, your model will NEVER work.

Hi everyone,
I trained a model using fastai version 1.0.52, but the render example (https://github.com/render-examples/fastai-v3) uses version 1.0.51. Is there a way to make this work anyway? I can’t get my FloydHub kernel to downgrade to 1.0.51 to export the model again. Any recommendations?
Thank you so much in advance.

The advice above is the answer to this question.

Cheers mrfabulous1 :smiley: :smiley:

@mrfabulous1: Thank you so much, now I got it. I thought I needed !pip list in order to train the model with the specific requirements in the GitHub repo example. Thanks to you I now know it’s the other way around. :wink:

Hi piaoya hope you had a jolly day! So glad you got it, as I know it can be tricky to deploy your first app on a new system.

Yes, the steps are:

  1. Train your model on any platform and any version of fastai in a notebook.
  2. Save (export) your model.
  3. Run pip list or pip freeze and save the output to a file.
  4. Now go to any platform that runs a compatible version of Python, Docker or a virtual environment.
  5. Change your requirements.txt in your repo to match the versions in your training pip list file (see the sketch after this list).
  6. Deploy the app.
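
If you want to automate step 5, here is a minimal sketch, assuming your training pip freeze output is saved as training-versions.txt and that the package set below is what your app needs (adjust both for your own repo); it copies the training pins into requirements.txt and leaves anything it doesn’t recognise, such as the torch wheel URLs, untouched.

from pathlib import Path

# Packages whose pins we want to copy from training -- an assumption, edit for your app.
PACKAGES = {"fastai", "torch", "torchvision", "numpy", "pillow", "python-multipart"}

def pinned_versions(freeze_file):
    # Map lowercase package name -> "name==version" line from a pip freeze dump.
    pins = {}
    for line in Path(freeze_file).read_text().splitlines():
        if "==" in line:
            name = line.split("==")[0].strip().lower()
            if name in PACKAGES:
                pins[name] = line.strip()
    return pins

def sync_requirements(freeze_file, requirements_file):
    # Replace matching pins in requirements.txt with the training versions;
    # unrecognised lines (e.g. wheel URLs) are left exactly as they are.
    pins = pinned_versions(freeze_file)
    out = []
    for line in Path(requirements_file).read_text().splitlines():
        name = line.split("==")[0].split("~=")[0].strip().lower()
        out.append(pins.get(name, line))
    Path(requirements_file).write_text("\n".join(out) + "\n")

sync_requirements("training-versions.txt", "requirements.txt")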

You can now use that repository on any system that uses Docker; if it works on Render, it will work on my version of Docker Desktop or on other cloud platforms.

Cheers mrfabulous1 :smiley: :smiley:

In case anyone needs to deploy the CycleGAN from course v3 to Render, I created a sample that uses the generator of the CycleGAN; the API takes an image and returns a styled image. You can use it as a starter for any style-transfer API. I also fixed the requirements for people who train on Colab, which means you can train on Google Colab, export the pkl and use it directly on Render. The current model in my repo is trained to draw sketches of portraits (still training, though).
You can replace it with your own models; just be sure that the Dropbox link has “?dl=1” at the end so the download starts immediately, otherwise you will get an unpickle error: “_pickle.UnpicklingError: invalid load key, ‘<’.”
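
If you are not sure whether your link is right, here is a quick sanity check you can run locally; the URL below is a placeholder, and the only thing that matters is that it ends in “?dl=1” so Dropbox serves the file itself rather than an HTML preview page (the HTML page is exactly what triggers the invalid load key ‘<’ error).

import urllib.request

# Placeholder share link -- replace with your own; the important part is "?dl=1".
model_url = "https://www.dropbox.com/s/your-share-id/export.pkl?dl=1"

with urllib.request.urlopen(model_url) as resp:
    first_bytes = resp.read(16)

if first_bytes.lstrip().startswith(b"<"):
    # An HTML page came back instead of the pickle -- the link is missing "?dl=1".
    print("Got HTML back - check that the link ends in ?dl=1")
else:
    print("Looks like a binary file, first bytes:", first_bytes)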

GitHub Repo

Colab CycleGAN Sample

Demo on Render: https://sketch-cb9q.onrender.com/

Hey @mrfabulous1, I changed my requirements but am getting the following error:
AttributeError: module ‘sys’ has no attribute ‘set_coroutine_wrapper’

Any idea what to do?

Hi qq88 hope you are having a wonderful day!

What version of Python are you using in your virtual environment?

I believe that sys.set_coroutine_wrapper was removed in Python 3.8, so you may have to run or train your model on Python 3.7, where I believe it is deprecated but still included.
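
If you want the app to fail fast with a clearer message, a tiny guard like this sketch will do it (the 3.7/3.8 boundary is the assumption here, matching the error above):

import sys

# sys.set_coroutine_wrapper() is gone in Python 3.8+, which is what the older
# async server pins used in these apps trip over with this AttributeError.
if sys.version_info >= (3, 8):
    raise RuntimeError(
        "Running on Python %s; these pins expect Python 3.7." % sys.version.split()[0]
    )
print("Python", sys.version.split()[0], "- OK for these pins")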

Hope this helps mrfabulous1 :smiley: :smiley:

I have this scenario: I don’t get any errors and the deploy screen looks fine.

But I can’t see my web service on the Render domain; it says: Internal Server Error

What could it be?

Hi viritaromero hope all is well!

If you have not done so already, the first step is to search this thread for pip list and make sure you have done what those posts say.

If you still have an error and you can’t find its cause on the Render platform, I suggest you set up a virtual environment on your desktop, install the same requirements.txt into it, and run your app there. This way you will be able to see the full traceback that is causing the internal server error.
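
For example, once the same requirements.txt is installed in a local virtual environment, loading the exported model the way the app does will usually surface the real traceback; the paths and test image below are assumptions, so point them at your own files.

from fastai.vision import load_learner, open_image  # fastai v1, as pinned in this thread

# Hypothetical paths -- use the folder and file name your server.py actually downloads to.
learn = load_learner('app/models', 'export.pkl')
print(learn.predict(open_image('test.jpg')))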

Hope this helps

Cheers mrfabulous1 :smiley: :smiley:

Managed to deploy (the server is working) but there is an error (please see the following):
https://pastebin.com/NMFu0Tn8

requirements.txt

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.52
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1

Is it possible to remove the error?

Hi andrew77 hope you are having a marvelous day.

Managed to deploy (the server is working) but there is an error (please see the following)

Does this mean that your app is working and can make predictions satisfactorily?
If the model is working, we don’t want to break it.

The error in your pastebin is often seen when there is a difference between the library versions you used to train your model and the versions used in deployment.

Is the requirements.txt in your post identical to the library versions you trained your model with?

I use Google Colab to train my models, but I often deploy them on many different platforms. I normally deploy them within 30 minutes of training the model, as this is the only way I can be sure that there have been no changes in any single library since I trained the model.

In order to get rid of the error, it’s likely you will need to confirm that the libraries in the requirements.txt you use in production are identical to the versions installed on the platform you used on the date you trained your model.

The torch versions look like a good place to start.

I would back up the currently deployed version and redeploy it with the matching versions if there is a difference.

If that doesn’t work, I would retrain the model and deploy it immediately, checking that the library versions between training and the deployment requirements.txt are identical.

Hope this helps

Cheers mrfabulous1 :smiley: :smiley:

Hi @mrfabulous1

I managed to deploy successfully. I actually just trained the model today.
Maybe the versions didn’t match.

Is there a workaround for this?

Hi andrew77 I hope you are still having a wonderful day!

Can you run pip freeze on your training platform and on your deployment platform, confirm all libraries used in requirements.txt are identical versions, and paste both lists here please?

This is the normal workaround.
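
If it helps, here is a minimal sketch for comparing the two dumps; the file names colab-freeze.txt and render-freeze.txt are assumptions, so use whatever you saved the two lists as. It prints only the packages whose pinned versions differ.

from pathlib import Path

def pins(path):
    # "name==version" lines from a pip freeze dump, as a dict.
    return dict(
        line.strip().split("==", 1)
        for line in Path(path).read_text().splitlines()
        if "==" in line
    )

train, deploy = pins("colab-freeze.txt"), pins("render-freeze.txt")
for name in sorted(train.keys() & deploy.keys()):
    if train[name] != deploy[name]:
        print("%s: training=%s  deployment=%s" % (name, train[name], deploy[name]))
for name in sorted(train.keys() ^ deploy.keys()):
    print("only on one platform:", name)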

Cheers mrfabulous1 :smiley: :smiley:

Colab/training platform
https://pastebin.com/MYDkmdXT

Render
https://pastebin.com/X2TiJWtw

Hi andrew77 :smiley: :smiley:

The library versions that you trained on are different to the deployment ones.

I would suggest you start here: change your Render requirements.txt libraries to match your training library version numbers.

If you search this forum, there was an issue with Pillow; you may need version 6.x.
NB: back up your work before making changes.

Cheers mrfabulous1 :smiley: :smiley:

My solution, and I think it’ll work for everyone, is to directly edit your file on GitHub instead of downloading it. It’s easy and there’s no hassle with the versions.

@mrfabulous1,

Thanks for your prompt reply.

I tried modifying it to the following but was unsuccessful. I think it’s the mix-and-match issue between fastai, torch and torchvision. Is there a ‘fail-proof’ way to do this?

Thanks

requirements.txt
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.52
torch==1.4.0
torchvision==0.5.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1

Error message
https://pastebin.com/FNBBt7Xp

Hi andrew77 :smiley:
You need to change the version of fastai too; your requirements.txt should match your Colab versions, as those are the ones you trained your model on.

Cheers mrfabulous1 :smiley: :smiley:

Thanks, it’s working now.

Sharing my requirements.txt for Colab users.

aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.60
torch==1.4.0
torchvision==0.5.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1

Hi andrew77, hooray! :trophy:

Well done! Remember you will have to go through this process every time you make a model, as you never know if a library has had even a minor change which may break another library.

Doing these steps makes finding faults much easier.
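
One thing that makes the faults quicker to spot (just a sketch, not part of the starter repo): have your app print the versions it actually imports at startup, then compare that output with your training pip list at a glance.

import fastai, numpy, PIL, torch, torchvision

# Log the versions the deployed app really imports.
for mod in (fastai, torch, torchvision, numpy, PIL):
    print("%s==%s" % (mod.__name__, mod.__version__))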

Cheers mrfabulous1 :smiley: :smiley: