hey @mrfabulous1 I changed my requirements but am getting the following error:
AttributeError: module 'sys' has no attribute 'set_coroutine_wrapper'
Any idea what to do?
Hi qq88 hope you are having a wonderful day!
What version of Python are you using in your venv?
I believe 'set_coroutine_wrapper' was removed in Python 3.8, so you may have to run or train your model on Python 3.7, where it is deprecated but still included.
Hope this helps mrfabulous1
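A quick way to confirm which interpreter the venv is using is a snippet like this (just a sketch; the attribute check mirrors the error above):

```python
import sys

# set_coroutine_wrapper was deprecated in Python 3.7 and removed in 3.8,
# so first confirm which interpreter the venv is actually running.
print("Python", ".".join(map(str, sys.version_info[:3])))

if hasattr(sys, "set_coroutine_wrapper"):
    print("sys.set_coroutine_wrapper still exists (3.7 or earlier)")
else:
    print("sys.set_coroutine_wrapper has been removed (3.8 or later)")
```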
I have this scenario, I don’t get any error and got this screen:
But I can’t see my webservice on the Render domain, it says: Internal Server Error
what could it be??
Hi viritaromero hope all is well!
If you have not done so already, the first step is to search this thread for pip list and make sure you have followed what those posts say.
If you still have an error and you can’t find it on the Render platform, I suggest you set up a virtual environment on your desktop, deploy your app there, and test it. That way you will be able to see the error that is causing the internal server error.
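A local test could look something like this (a sketch, assuming the usual fastai Render template layout with requirements.txt and server.py at the repo root):

```shell
# Sketch: rebuild the app locally so the real traceback is visible,
# instead of the bare "Internal Server Error" page Render shows.
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
python server.py serve   # the failing request now prints a traceback in your terminal
```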
Hope this helps
Cheers mrfabulous1
Managed to deploy (the server is working) but there is an error (please see the following):
https://pastebin.com/NMFu0Tn8
requirements.txt
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.52
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1
Is it possible to remove the error?
Hi andrew77 hope you are having a marvelous day.
Managed to deploy (the server is working) but there is an error (please see the following)
Does this mean that your app is working and can make predictions satisfactorily?
As the model is working, we don’t want to break it.
The error in your pastebin is often seen when there is a difference between the library versions you used to train your model and the ones used in deployment.
Is the requirements.txt in your post identical to the library versions on the platform where you trained your model?
I use Google Colab to train my models, but I often deploy them on many different platforms. I normally deploy within 30 minutes of training the model, as this is the only way I can be sure that no single library has changed since I trained it.
To get rid of the error, it’s likely you will need to confirm that the libraries in your production requirements.txt are identical to the platform versions you used on the date you trained your model.
The torch versions look like a good place to start.
I would back up the current deployed version and redeploy it with a different version if there is a difference.
If that doesn’t work I would retrain the model and deploy it immediately checking that the library versions between training and platform deployment in requirements.txt are identical.
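One way to capture the training versions at that moment is a quick filter over pip freeze in the training environment; the package list here is just the one used in this thread’s requirements.txt:

```shell
# Run right after exporting the model; copy these pins into the
# deployment requirements.txt. The fallback message only fires when
# none of the listed packages are installed in this environment.
pip freeze | grep -Ei '^(fastai|torch|torchvision|numpy|pillow|python-multipart|starlette|uvicorn)==' \
  || echo "none of the pinned packages are installed here"
```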
Hope this helps
Cheers mrfabulous1
Hi @mrfabulous1
I managed to deploy successfully. I actually just trained the model today.
Maybe the version didn’t match.
Is there a work around for this?
Hi andrew77 I hope you are still having a wonderful day!
Can you run pip freeze on your training platform and on your deployment platform, confirm that all libraries used in requirements.txt are identical versions, and paste both lists here please?
This is the normal workaround.
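When the two pip freeze lists are long, a small helper can spot the mismatches. This is only a sketch; the example pins are illustrative, not real output:

```python
def parse_freeze(lines):
    """Parse pip freeze style lines ("name==version") into a dict."""
    pins = {}
    for line in lines:
        line = line.strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.lower()] = version
    return pins

def diff_pins(train_lines, deploy_lines):
    """Return {package: (training_version, deployment_version)} for mismatches."""
    train = parse_freeze(train_lines)
    deploy = parse_freeze(deploy_lines)
    return {
        name: (train.get(name), deploy.get(name))
        for name in sorted(set(train) | set(deploy))
        if train.get(name) != deploy.get(name)
    }

# Illustrative pins, not real output:
mismatches = diff_pins(
    ["fastai==1.0.60", "torch==1.4.0"],
    ["fastai==1.0.52", "torch==1.4.0"],
)
print(mismatches)  # only fastai differs
```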
Cheers mrfabulous1
Hi andrew77
The library versions that you trained on are different to the deployment ones.
I would suggest you start here: change your Render requirements.txt libraries to match the version numbers from the platform you trained on.
If you search this forum, there was an issue with Pillow; you may need version 6.x.
NB: back up your work before making changes.
Cheers mrfabulous1
My solution, and I think it’ll work for everyone, is to edit your file directly on GitHub instead of downloading it. It’s easy, with no hassle over versions.
Thanks for your prompt reply.
I tried modifying it to the following but was unsuccessful. I think it’s the mix-and-match issue between fastai, torch, and torchvision. Is there a ‘fail-proof’ way to do this?
Thanks
requirements.txt
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.52
torch==1.4.0
torchvision==0.5.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1
Error message
https://pastebin.com/FNBBt7Xp
Hi andrew77
You need to change the version of fastai as well; your requirements.txt should match your Colab library versions, as these are the ones you trained your model on.
Cheers mrfabulous1
Thanks, it’s working now.
Sharing my requirements.txt for Colab users:
aiofiles==0.4.0
aiohttp==3.5.4
asyncio==3.4.3
fastai==1.0.60
torch==1.4.0
torchvision==0.5.0
numpy==1.16.3
pillow~=6.0
python-multipart==0.0.5
starlette==0.12.0
uvicorn==0.7.1
Hi andrew77, hooray!
Well done! Remember you will have to go through this process every time you make a model, as you never know whether a library has had even a minor change that may break another library.
Doing these steps makes finding faults much easier.
Cheers mrfabulous1
Hey guys, render works fantastic so far! Started with GCP but gave up after an hour. Render deploy went seamlessly. Need to work on the model but considering this is my first deployment ever, I am loving it!
https://house-plant-classifier.onrender.com/
Hello folks, I would like to ask you a very silly question because I see everyone is doing a great job deploying apps.
Context: I tried to deploy mine; in fact I just changed the resulting model URL in the server.py file, but the application is the same: detect bears. I tried this because I wanted to see for myself that it really worked before coding my own application.
Problem: If you go to my site URL, the site is all white, blank, nada. Should I wait some time before it deploys? Did I miss something? I even added my credit card number because I thought that was the reason.
I temporarily suspended the site to avoid charges.
More context:
Any ideas?, thank you for your help.
Hi kuro_inu hope you are having a wonderful day!
If you look at the majority of posts on this thread they all have an error which makes it easier to resolve the issue.
My suggestions would be:
Hopefully, if you do all of the above, you will find your error.
Cheers mrfabulous1
Instead of downloading the model every time, is there a way to store it in render and access it directly?
If so how do I access the model stored in render?
Will storing it in render increase the response time?
Hi Johnyquest I hope you are having a wonderful day!
Instead of downloading the model every time, is there a way to store it in render and access it directly?
Create a copy of your current working repository as a backup.
Add the model to a directory in the copied repository.
Edit server.py to point to your model file in your repository.
The lines below are the ones in question.
export_file_url = 'https://www.dropbox.com/s/6bgq8t6yextloqp/export.pkl?raw=1'
export_file_name = 'export.pkl'
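As a sketch of what that edit might look like (the app/models/ directory name is just an example, not part of the template):

```python
from pathlib import Path

# Hypothetical layout: the exported model is committed to the repo at
# app/models/export.pkl instead of being downloaded from export_file_url.
path = Path("app")
export_file_name = "export.pkl"
model_path = path / "models" / export_file_name

# server.py would then skip the download step and load the file directly,
# e.g. with the fastai v1 call the template already uses:
#   learn = load_learner(model_path.parent, model_path.name)
print(model_path)
```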
If so how do I access the model stored in render?
Your model is accessed in the same way as it was when it was on, say, Google Drive; it’s now being accessed locally thanks to the server.py edit.
Will storing it in render increase the response time?
Not sure! As Jeremy would say, try it.
Time it.
Let us know on this forum, so others don’t have the same issues.
My guess is it should be quicker, as disk access is normally faster than web access.
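A rough way to time it, using only the standard library (the URL is a placeholder for your deployed app, not part of the original post):

```python
import time
import urllib.request

def time_request(url, repeats=3):
    """Return the mean seconds per request to url."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Example with your deployed app (hypothetical URL):
# print(time_request("https://your-app.onrender.com/"))
```

Comparing the mean before and after the server.py change would show whether the local model improves response time.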
One point to note: sometimes the code together with the model is larger than the allowable disk space of the price plan you have purchased.
If this is the case, the above will not work. Some of the posts in this thread describe this problem.
Hope this helps
Cheers mrfabulous1