Putting the Model Into Production: Web Apps

Having some issues with GCP. I was able to get Jeremy’s bear classifier set up in production, and everything worked just fine. I made some small tweaks to get my own classifier working (it runs well on my local machine under Starlette). I can deploy the app on GCP with no issues, but when I actually call the URL I get the following error:

Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.

Any idea what I may be doing wrong here? I think I got to the log files for when the call is being made, and here is what I see.

Here is a link to the GitHub repo for the code I deployed. Does anyone see any issues?

I’m not sure what the GCP side looks like, but that almost looks like you need to give your folder permission to run, something like a firewall or user-permissions issue.

I was exploring deploying an AWS Lambda function with Serverless, using Docker and virtualenv, and wrote a blog post about it that might come in handy for a few people. You can find it on Medium.

Did you find any solution to this?

I started following the instructions for deploying with Render (https://course.fast.ai/deployment_render.html), but I thought it might be a good idea to clarify in the instructions that even though you “don’t need a credit card to get started,” this isn’t a free service: you need to sign up for a $5/month plan in order to run anything, and apparently you will be billed for the services you use. It’s not really clear how the billing works either; they say they “prorate service by the second,” but they also say it’s $5/month.


I have tried using this template and built a classifier, but whenever I test locally I get this TimeoutError:

SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0x0000005AC1934D68>
transport: <_SelectorSocketTransport fd=2808 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\asyncio\sslproto.py", line 526, in data_received
    ssldata, appdata = self._sslpipe.feed_ssldata(data)
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\asyncio\sslproto.py", line 207, in feed_ssldata
    self._sslobj.unwrap()
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\ssl.py", line 767, in unwrap
    return self._sslobj.shutdown()
ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2609)
Traceback (most recent call last):
  File "server.py", line 71, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\asyncio\base_events.py", line 584, in run_until_complete
    return future.result()
  File "server.py", line 45, in setup_learner
    await download_file(export_file_url, path / export_file_name)
  File "server.py", line 38, in download_file
    model_pkl = await response.read()
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\site-packages\aiohttp\client_reqrep.py", line 969, in read
    self._body = await self.content.read()
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\site-packages\aiohttp\streams.py", line 359, in read
    block = await self.readany()
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\site-packages\aiohttp\streams.py", line 381, in readany
    await self._wait('readany')
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\site-packages\aiohttp\streams.py", line 297, in _wait
    await waiter
  File "C:\Users\Manimaran\Miniconda3\envs\pokemonapp\lib\site-packages\aiohttp\helpers.py", line 585, in __exit__
    raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError

From what I know, this is related to downloading the model from the Google Drive link.
I have rebuilt the classifier with a simpler architecture (resnet18 instead of resnet50), but to no avail.

Even when I try to deploy on Heroku, I get a boot timeout error.

I do not know what else to try to make this work, so any help would be appreciated.

My web application

Hello, I’m facing the same issue when testing locally. Did you find a solution or a workaround that doesn’t use Render or Heroku?

Umm… you are not using fastai’s library?

Thank you Nikhil. I forked your code and was able to deploy my model with minimal fuss. My modifications are here:

My app to identify if a picture is a peach or nectarine is here:
https://peach-or-nectarine.herokuapp.com


Looks good to me :)

Do you know if there is an easy way to update a model in a production environment, instead of hard-coding the path to the model in the web server code (Flask, or whatever it might be)? Thanks!
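One common pattern (sketched here with made-up names — MODEL_PATH and the loader argument are illustrative, not part of any framework) is to read the model path from an environment variable and cache the loaded learner, so you can swap in a new export file and trigger a reload without touching the server code:

```python
# Sketch: configurable model path plus a reload hook, framework-agnostic.
# The loader callable stands in for whatever actually loads your model
# (e.g. fastai's load_learner).
import os
from pathlib import Path

_learner = None  # module-level cache of the loaded model

def get_model_path() -> Path:
    """Model location comes from the environment, with a fallback default."""
    return Path(os.environ.get("MODEL_PATH", "models/export.pkl"))

def get_learner(loader):
    """Return the cached learner, loading it on first use."""
    global _learner
    if _learner is None:
        _learner = loader(get_model_path())
    return _learner

def reload_learner(loader):
    """Drop the cache so the next call picks up the latest export file."""
    global _learner
    _learner = None
    return get_learner(loader)
```

You could then expose reload_learner behind an admin-only endpoint, so pushing a new export file and hitting that endpoint updates the model with zero downtime.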

Can you point to the forked repo? I can’t see your repo (no permission). Thanks!