Putting the Model Into Production: Web Apps

No, Anurag, I will try it over the weekend and will let you know.

Hi @anurag.
Thanks for your Render web service for deploying (fastai) web apps, and for your tutorial.
I just followed it, but I got an error (see the last lines from the terminal in the dashboard of my web service on the Render site). How can I solve it? Thanks.

Jan 14 08:30:55 PM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 14 08:30:56 PM  INFO[0251] COPY app app/
Jan 14 08:30:56 PM  INFO[0251] RUN python app/server.py
Jan 14 08:30:56 PM  INFO[0251] cmd: /bin/sh
Jan 14 08:30:56 PM  INFO[0251] args: [-c python app/server.py]
Jan 14 08:31:07 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 14 08:31:07 PM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 14 08:31:07 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 14 08:31:07 PM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 14 08:31:07 PM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 14 08:31:07 PM  Traceback (most recent call last):
  File "app/server.py", line 38, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "app/server.py", line 33, in setup_learner
    learn.load(model_file_name)
  File "/usr/local/lib/python3.6/site-packages/fastai/basic_train.py", line 217, in load
    state = torch.load(self.path/self.model_dir/f'{name}.pth', map_location=device)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 367, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 528, in _load
    magic_number = pickle_module.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
Jan 14 08:31:07 PM  error building image: error building stage: waiting for process to exit: exit status 1
Jan 14 08:31:07 PM  error: exit status 1

I believe your model_file_url is incorrect. Try this one:

https://doc-0g-24-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/1l73g1ajuadcpt6um9g7dhjc8ocen9up/1547503200000/07445456612858217027/*/1rJL2DRZjOlJPCnm2rualdpwvtGnmsbPz?e=download

Thanks for your answer, @anurag.

I used the web service from your tutorial (my model.pth is on Google Drive). As I used resnet50, my model file is about 300 MB.

I did (in server.py on GitHub), but the Render web service failed (see the last lines from the terminal below). What did I do wrong?

Jan 14 11:07:44 PM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 14 11:07:46 PM  INFO[0276] COPY app app/
Jan 14 11:07:46 PM  INFO[0276] RUN python app/server.py
Jan 14 11:07:46 PM  INFO[0276] cmd: /bin/sh
Jan 14 11:07:46 PM  INFO[0276] args: [-c python app/server.py]
Jan 14 11:08:01 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 14 11:08:01 PM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 14 11:08:01 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 14 11:08:01 PM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 14 11:08:01 PM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 14 11:08:01 PM  Traceback (most recent call last):
  File "app/server.py", line 38, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "app/server.py", line 33, in setup_learner
    learn.load(model_file_name)
  File "/usr/local/lib/python3.6/site-packages/fastai/basic_train.py", line 219, in load
    get_model(self.model).load_state_dict(state['model'], strict=strict)
  File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Sequential:
Jan 14 11:08:01 PM      Unexpected key(s) in state_dict: "0.4.0.conv3.weight", "0.4.0.bn3.weight", "0.4.0.bn3.bias", "0.4.0.bn3.running_mean", "0.4.0.bn3.running_var", "0.4.0.bn3.num_batches_tracked", "0.4.0.downsample.0.weight", "0.4.0.downsample.1.weight", "0.4.0.downsample.1.bias", "0.4.0.downsample.1.running_mean", "0.4.0.downsample.1.running_var", "0.4.0.downsample.1.num_batches_tracked", "0.4.1.conv3.weight", "0.4.1.bn3.weight", "0.4.1.bn3.bias", "0.4.1.bn3.running_mean", "0.4.1.bn3.running_var", "0.4.1.bn3.num_batches_tracked", "0.4.2.conv3.weight", "0.4.2.bn3.weight", "0.4.2.bn3.bias", "0.4.2.bn3.running_mean", "0.4.2.bn3.running_var", "0.4.2.bn3.num_batches_tracked", "0.5.0.conv3.weight", "0.5.0.bn3.weight", "0.5.0.bn3.bias", "0.5.0.bn3.running_mean", "0.5.0.bn3.running_var", "0.5.0.bn3.num_batches_tracked", "0.5.1.conv3.weight", "0.5.1.bn3.weight", "0.5.1.bn3.bias", "0.5.1.bn3.running_mean", "0.5.1.bn3.running_var", "0.5.1.bn3.num_batches_tracked", "0.5.2.conv3.weight", "0.5.2.bn3.weight", "0.5.2.bn3.bias", "0.5.2.bn3.running_mean", "0.5.2.bn3.running_var", "0.5.2.bn3.num_batches_tracked", "0.5.3.conv3.weight", "0.5.3.bn3.weight", "0.5.3.bn3.bias", "0.5.3.bn3.running_mean", "0.5.3.bn3.running_var", "0.5.3.bn3.num_batches_tracked", "0.6.0.conv3.weight", "0.6.0.bn3.weight", "0.6.0.bn3.bias", "0.6.0.bn3.running_mean", "0.6.0.bn3.running_var", "0.6.0.bn3.num_batches_tracked", "0.6.1.conv3.weight", "0.6.1.bn3.weight", "0.6.1.bn3.bias", "0.6.1.bn3.running_mean", "0.6.1.bn3.running_var", "0.6.1.bn3.num_batches_tracked", "0.6.2.conv3.weight", "0.6.2.bn3.weight", "0.6.2.bn3.bias", "0.6.2.bn3.running_mean", "0.6.2.bn3.running_var", "0.6.2.bn3.num_batches_tracked", "0.6.3.conv3.weight", "0.6.3.bn3.weight", "0.6.3.bn3.bias", "0.6.3.bn3.running_mean", "0.6.3.bn3.running_var", "0.6.3.bn3.num_batches_tracked", "0.6.4.conv3.weight", "0.6.4.bn3.weight", "0.6.4.bn3.bias", "0.6.4.bn3.running_mean", "0.6.4.bn3.running_var", "0.6.4.bn3.num_batches_tracked", "0.6.5.conv3.weight", "0.6.5.bn3.weight", "0.6.5.bn3.bias", "0.6.5.bn3.running_mean", "0.6.5.bn3.running_var", "0.6.5.bn3.num_batches_tracked", "0.7.0.conv3.weight", "0.7.0.bn3.weight", "0.7.0.bn3.bias", "0.7.0.bn3.running_mean", "0.7.0.bn3.running_var", "0.7.0.bn3.num_batches_tracked", "0.7.1.conv3.weight", "0.7.1.bn3.weight", "0.7.1.bn3.bias", "0.7.1.bn3.running_mean", "0.7.1.bn3.running_var", "0.7.1.bn3.num_batches_tracked", "0.7.2.conv3.weight", "0.7.2.bn3.weight", "0.7.2.bn3.bias", "0.7.2.bn3.running_mean", "0.7.2.bn3.running_var", "0.7.2.bn3.num_batches_tracked".
Jan 14 11:08:01 PM      size mismatch for 0.4.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.4.1.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.4.2.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.conv1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.1.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.2.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.3.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.conv1.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.1.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.2.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.3.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.4.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.5.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.conv1.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.1.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.2.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 1.2.weight: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.running_mean: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.running_var: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.4.weight: copying a param with shape torch.Size([512, 4096]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
Jan 14 11:08:01 PM  error building image: error building stage: waiting for process to exit: exit status 1
Jan 14 11:08:01 PM  error: exit status 1

Hard for me to debug without knowing more about your models, but it looks like the following lines are to blame:

Jan 14 11:08:01 PM      Unexpected key(s) in state_dict: "0.4.0.conv3.weight", "0.4.0.bn3.weight", "0.4.0.bn3.bias", "0.4.0.bn3.running_mean", "0.4.0.bn3.running_var", "0.4.0.bn3.num_batches_tracked", "0.4.0.downsample.0.weight", "0.4.0.downsample.1.weight", "0.4.0.downsample.1.bias", "0.4.0.downsample.1.running_mean", "0.4.0.downsample.1.running_var", "0.4.0.downsample.1.num_batches_tracked", "0.4.1.conv3.weight", "0.4.1.bn3.weight", "0.4.1.bn3.bias", "0.4.1.bn3.running_mean", "0.4.1.bn3.running_var", "0.4.1.bn3.num_batches_tracked", "0.4.2.conv3.weight", "0.4.2.bn3.weight", "0.4.2.bn3.bias", "0.4.2.bn3.running_mean", "0.4.2.bn3.running_var", "0.4.2.bn3.num_batches_tracked", "0.5.0.conv3.weight", "0.5.0.bn3.weight", "0.5.0.bn3.bias", "0.5.0.bn3.running_mean", "0.5.0.bn3.running_var", "0.5.0.bn3.num_batches_tracked", "0.5.1.conv3.weight", "0.5.1.bn3.weight", "0.5.1.bn3.bias", "0.5.1.bn3.running_mean", "0.5.1.bn3.running_var", "0.5.1.bn3.num_batches_tracked", "0.5.2.conv3.weight", "0.5.2.bn3.weight", "0.5.2.bn3.bias", "0.5.2.bn3.running_mean", "0.5.2.bn3.running_var", "0.5.2.bn3.num_batches_tracked", "0.5.3.conv3.weight", "0.5.3.bn3.weight", "0.5.3.bn3.bias", "0.5.3.bn3.running_mean", "0.5.3.bn3.running_var", "0.5.3.bn3.num_batches_tracked", "0.6.0.conv3.weight", "0.6.0.bn3.weight", "0.6.0.bn3.bias", "0.6.0.bn3.running_mean", "0.6.0.bn3.running_var", "0.6.0.bn3.num_batches_tracked", "0.6.1.conv3.weight", "0.6.1.bn3.weight", "0.6.1.bn3.bias", "0.6.1.bn3.running_mean", "0.6.1.bn3.running_var", "0.6.1.bn3.num_batches_tracked", "0.6.2.conv3.weight", "0.6.2.bn3.weight", "0.6.2.bn3.bias", "0.6.2.bn3.running_mean", "0.6.2.bn3.running_var", "0.6.2.bn3.num_batches_tracked", "0.6.3.conv3.weight", "0.6.3.bn3.weight", "0.6.3.bn3.bias", "0.6.3.bn3.running_mean", "0.6.3.bn3.running_var", "0.6.3.bn3.num_batches_tracked", "0.6.4.conv3.weight", "0.6.4.bn3.weight", "0.6.4.bn3.bias", "0.6.4.bn3.running_mean", "0.6.4.bn3.running_var", "0.6.4.bn3.num_batches_tracked", "0.6.5.conv3.weight", "0.6.5.bn3.weight", "0.6.5.bn3.bias", "0.6.5.bn3.running_mean", "0.6.5.bn3.running_var", "0.6.5.bn3.num_batches_tracked", "0.7.0.conv3.weight", "0.7.0.bn3.weight", "0.7.0.bn3.bias", "0.7.0.bn3.running_mean", "0.7.0.bn3.running_var", "0.7.0.bn3.num_batches_tracked", "0.7.1.conv3.weight", "0.7.1.bn3.weight", "0.7.1.bn3.bias", "0.7.1.bn3.running_mean", "0.7.1.bn3.running_var", "0.7.1.bn3.num_batches_tracked", "0.7.2.conv3.weight", "0.7.2.bn3.weight", "0.7.2.bn3.bias", "0.7.2.bn3.running_mean", "0.7.2.bn3.running_var", "0.7.2.bn3.num_batches_tracked".
Jan 14 11:08:01 PM      size mismatch for 0.4.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
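
To illustrate the class of failure: weights saved from one architecture cannot be loaded into a different one, and loading resnet50 weights into a resnet34 produces exactly this mix of unexpected keys and size mismatches. A hypothetical standalone repro (plain torchvision, not your app):

from torchvision import models

# resnet50 uses Bottleneck blocks (extra conv3/bn3 layers and wider channels),
# so its state_dict does not fit a resnet34.
state = models.resnet50().state_dict()
models.resnet34().load_state_dict(state)  # raises RuntimeError: unexpected keys / size mismatches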

Hello @anurag. My model works perfectly (using resnet50 and fastai v1) in my Jupyter notebook. Since I didn’t know what to change in my model, I switched to a resnet34, trained it just once in my Jupyter notebook, saved it to a file resnet34-1.pth with learn.save(), uploaded it to my Google Drive, got the shared link, turned it into a download link with the Direct Link Generator, edited /app/server.py in my GitHub repo with the new download link, committed the change, went to my web service dashboard on the Render site and waited for the installation to update. And? … It worked! :slight_smile:
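
In code terms, the change boiled down to something like this (a sketch; the variable names follow the example repo’s server.py, and the Drive link is just a placeholder):

# In the training notebook:
learn.save('resnet34-1')   # writes resnet34-1.pth under the Learner's models/ directory

# In app/server.py:
model_file_url = '<direct download link generated from the Google Drive share link>'
model_file_name = 'resnet34-1'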

Below are the last lines from the Render terminal. We can see that, once again, the Render system printed “UserWarning: Your training set is empty.”, but this time the build carried on and finally passed.

(I will try again with the link to my resnet50 model file and will keep you informed)

Jan 15 10:27:39 AM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 15 10:27:40 AM  INFO[0230] COPY app app/
Jan 15 10:27:40 AM  INFO[0230] RUN python app/server.py
Jan 15 10:27:40 AM  INFO[0230] cmd: /bin/sh
Jan 15 10:27:40 AM  INFO[0230] args: [-c python app/server.py]
Jan 15 10:27:55 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:27:55 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:27:55 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:27:55 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:27:55 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:27:56 AM  INFO[0245] EXPOSE 5042
Jan 15 10:27:56 AM  INFO[0245] cmd: EXPOSE
Jan 15 10:27:56 AM  INFO[0245] Adding exposed port: 5042/tcp
Jan 15 10:27:56 AM  INFO[0245] CMD ["python", "app/server.py", "serve"]
Jan 15 10:27:56 AM  INFO[0245] Taking snapshot of full filesystem...
Jan 15 10:30:01 AM   ______________________________
Jan 15 10:30:01 AM  < Pushing image to registry... >
Jan 15 10:30:01 AM   ------------------------------
Jan 15 10:30:01 AM          \   ^__^
Jan 15 10:30:01 AM           \  (oo)\_______
Jan 15 10:30:01 AM              (__)\       )\/\
Jan 15 10:30:01 AM                  ||----w |
Jan 15 10:30:01 AM                  ||     ||
Jan 15 10:30:37 AM   ______
Jan 15 10:30:37 AM  < Done >
Jan 15 10:30:37 AM   ------
Jan 15 10:30:37 AM          \   ^__^
Jan 15 10:30:37 AM           \  (oo)\_______
Jan 15 10:30:37 AM              (__)\       )\/\
Jan 15 10:30:37 AM                  ||----w |
Jan 15 10:30:37 AM                  ||     ||
Jan 15 10:32:11 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:32:11 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:32:11 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:32:11 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:32:11 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:32:11 AM  INFO: Started server process [1]
Jan 15 10:32:11 AM  INFO: Waiting for application startup.
Jan 15 10:32:11 AM  INFO: Uvicorn running on http://0.0.0.0:5042 (Press CTRL+C to quit)
Jan 15 10:32:26 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:32:26 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:32:26 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:32:26 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:32:26 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:32:26 AM  INFO: Started server process [1]
Jan 15 10:32:26 AM  INFO: Waiting for application startup.
Jan 15 10:32:26 AM  INFO: Uvicorn running on http://0.0.0.0:5042 (Press CTRL+C to quit)
Jan 15 10:32:35 AM  INFO: ('10.104.12.26', 36626) - "GET / HTTP/1.1" 200
Jan 15 10:32:36 AM  INFO: ('10.104.12.26', 36626) - "GET /style.css HTTP/1.1" 200
Jan 15 10:32:36 AM  INFO: ('10.104.12.26', 36628) - "GET /client.js HTTP/1.1" 200

Hello @anurag. About “UserWarning: Your training set is empty.”, I found the issue.

In your server.py file, you use the single_from_classes method in the following code:

async def setup_learner():
    await download_file(model_file_url, path/'models'/f'{model_file_name}.pth')
    data_bunch = ImageDataBunch.single_from_classes(path, classes,
        tfms=get_transforms(), size=224).normalize(imagenet_stats)
    learn = create_cnn(data_bunch, models.resnet34, pretrained=False)
    learn.load(model_file_name)
    return learn

But single_from_classes is deprecated, as noted in the fastai docs (read this warning as well).

What would be the corrected code for the async def setup_learner() function? Thanks.

Thanks for tracking this down and glad things are working!

Perhaps someone who’s worked with fastai more can chime in on the single_from_classes replacement. Otherwise I’ll dig into it in the next day or so.

For resnet50, you’d also need to change the following line to replace models.resnet34:

    learn = create_cnn(data_bunch, models.resnet34, pretrained=False)
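
That is, something like the line below instead, with the rest of setup_learner unchanged (a sketch):

    learn = create_cnn(data_bunch, models.resnet50, pretrained=False)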

I’m getting this error when using Zeit. What changes should I make to upgrade to v2?

Error! You tried to create a Now 1.0 deployment. Please use Now 2.0 instead: https://zeit.co/upgrade

I saw the same thing. We’ll probably need to remove the Zeit guide until they fix this.

Hello @anurag. I saw that you changed the content of the file server.py to take into account the new model export/import workflow of fastai v1 (learn.export() / load_learner()). Great :slight_smile:
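
For reference, here is roughly what the new setup_learner looks like with the export/import flow (a sketch; export_file_url is my assumption by analogy with the old model_file_url, while load_learner(path, export_file_name) matches the traceback below):

async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    # load_learner rebuilds the Learner (model, classes, transforms) from export.pkl
    learn = load_learner(path, export_file_name)
    return learn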

I just tried it, but I got the following error in the Render terminal of my web service. Do you know how to solve it? Thanks.

Jan 25 02:14:31 PM  Successfully installed aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 beautifulsoup4-4.7.1 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dill-0.2.9 fastai-1.0.42 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0 nvidia-ml-py3-7.352.0 packaging-19.0 pandas-0.23.4 pillow-5.4.1 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 soupsieve-1.7.3 spacy-2.0.18 starlette-0.9.11 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 ujson-1.35 urllib3-1.24.1 uvicorn-0.4.0 uvloop-0.12.0 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 25 02:14:32 PM  INFO[0203] COPY app app/
Jan 25 02:14:32 PM  INFO[0203] RUN python app/server.py
Jan 25 02:14:32 PM  INFO[0203] cmd: /bin/sh
Jan 25 02:14:32 PM  INFO[0203] args: [-c python app/server.py]
Jan 25 02:14:41 PM  Traceback (most recent call last):
  File "app/server.py", line 35, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
    return future.result()
  File "app/server.py", line 30, in setup_learner
    learn = load_learner(path, export_file_name)
  File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 469, in load_learner
    state = torch.load(open(Path(path)/fname, 'rb'))
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 367, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 538, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 504, in persistent_load
    data_type(size), location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 113, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 94, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 78, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
Jan 25 02:14:41 PM  error building image: error building stage: waiting for process to exit: exit status 1
Jan 25 02:14:41 PM  error: exit status 1
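
For what it’s worth, the error message itself points at a CPU workaround. A minimal standalone sketch (hypothetical path, not the repo’s code):

import torch

# Map tensors that were saved from a CUDA device onto the CPU at load time.
state = torch.load('app/models/export.pkl', map_location='cpu')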

@pierreguillou you’ll need to get the latest version of fastai and export your model again. LMK if that doesn’t work!
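
Roughly, in the training notebook (a sketch, assuming learn is your trained Learner):

# After upgrading fastai, re-export so the pickle is produced by the same fastai
# generation that the server image installs.
learn.export()  # writes export.pkl into learn.path by default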

Hello @anurag. After updating to the current fastai version and running learn.export() on my model again, I got one step further in the installation of my web app on Render, but it finally failed.

Below is the output from the Render terminal:

Jan 25 08:38:45 PM  Successfully installed aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 beautifulsoup4-4.7.1 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dill-0.2.9 fastai-1.0.42 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0 nvidia-ml-py3-7.352.0 packaging-19.0 pandas-0.23.4 pillow-5.4.1 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 soupsieve-1.7.3 spacy-2.0.18 starlette-0.9.11 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 ujson-1.35 urllib3-1.24.1 uvicorn-0.4.0 uvloop-0.12.0 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 25 08:38:48 PM  INFO[0357] COPY app app/
Jan 25 08:38:48 PM  INFO[0357] RUN python app/server.py
Jan 25 08:38:48 PM  INFO[0357] cmd: /bin/sh
Jan 25 08:38:48 PM  INFO[0357] args: [-c python app/server.py]
Jan 25 08:39:01 PM  INFO[0371] EXPOSE 5042
Jan 25 08:39:01 PM  INFO[0371] cmd: EXPOSE
Jan 25 08:39:01 PM  INFO[0371] Adding exposed port: 5042/tcp
Jan 25 08:39:01 PM  INFO[0371] CMD ["python", "app/server.py", "serve"]
Jan 25 08:39:01 PM  INFO[0371] Taking snapshot of full filesystem...
Jan 25 08:41:17 PM   ______________________________
Jan 25 08:41:17 PM  < Pushing image to registry... >
Jan 25 08:41:17 PM   ------------------------------
Jan 25 08:41:17 PM          \   ^__^
Jan 25 08:41:17 PM           \  (oo)\_______
Jan 25 08:41:17 PM              (__)\       )\/\
Jan 25 08:41:17 PM                  ||----w |
Jan 25 08:41:17 PM                  ||     ||
Jan 25 08:41:47 PM   ______
Jan 25 08:41:47 PM  < Done >
Jan 25 08:41:47 PM   ------
Jan 25 08:41:47 PM          \   ^__^
Jan 25 08:41:47 PM           \  (oo)\_______
Jan 25 08:41:47 PM              (__)\       )\/\
Jan 25 08:41:47 PM                  ||----w |
Jan 25 08:41:47 PM                  ||     ||
Jan 25 08:42:56 PM  Traceback (most recent call last):
  File "app/server.py", line 52, in <module>
    if 'serve' in sys.argv: uvicorn.run(app=app, host='0.0.0.0', port=5042)
  File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 184, in run
    server.run()
  File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 213, in run
    config.load()
  File "/usr/local/lib/python3.7/site-packages/uvicorn/config.py", line 119, in load
    self.lifespan_class = import_from_string(LIFESPAN[self.lifespan])
  File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 23, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 20, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'uvicorn.lifespan'
Jan 25 08:43:04 PM  Traceback (most recent call last):
  File "app/server.py", line 52, in <module>
    if 'serve' in sys.argv: uvicorn.run(app=app, host='0.0.0.0', port=5042)
  File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 184, in run
    server.run()
  File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 213, in run
    config.load()
  File "/usr/local/lib/python3.7/site-packages/uvicorn/config.py", line 119, in load
    self.lifespan_class = import_from_string(LIFESPAN[self.lifespan])
  File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 23, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 20, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'uvicorn.lifespan'

I just fixed it upstream: you need to pin the version of uvicorn in your requirements.txt: https://github.com/render-examples/fastai-v3/commit/740564decf1704b1e30b1364cc7327245659c7e9
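
In practice that means replacing the unpinned uvicorn entry in requirements.txt with an exact version pin; the exact version is in the linked commit, so the lines below only illustrate the syntax:

# requirements.txt: instead of an unpinned entry such as
#   uvicorn
# pin it to the version used in the linked commit, e.g.
#   uvicorn==X.Y.Z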

Anyone know how to get the example Zeit web app working in v2? I think all you need to do is reconfigure the now.json file, but I’m not sure what needs to be changed.

@waydegg if you figure it out, let us know. I’ve removed the Zeit guide for now since it doesn’t work.

The problem is that v2 of Zeit’s Now service does not provide Docker-based deploys.
They have a builder that should be able to build Python apps, but the newest version of Python it supports is 3.4, and fastai/PyTorch need a newer one. So that is where I failed.

Many thanks @anurag. My web app works quite well now :slight_smile:

Note: this time, after installing starlette in my fastai v1 environment, I tested my web app locally on my computer instead of deploying it straight to Render.com. It saved me a lot of time (local debugging)!

One question: do you know how to run the Starlette web app from a Jupyter notebook (e.g. displaying http://localhost:5042/ inside a notebook)?
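
(One thing I plan to try, with the server already running locally on port 5042: embedding the local URL with IPython’s IFrame helper, something like the sketch below.)

from IPython.display import IFrame

# Display the locally running Starlette app inside a notebook output cell.
IFrame('http://localhost:5042/', width=900, height=600)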

I’m trying to adapt the HTML and CSS files of the “fastai v1 Web App on Render” example to make them mobile-friendly, with the help of online guides like “How to Make a Mobile-Friendly Website: Responsive Design in CSS”.

It does help, but in the end there is still a problem: the web app (Jeremy’s as well, at least on my devices) works on Windows desktops and Android smartphones, but not on Apple devices (MacBook laptops, iPads and iPhones). Does the problem come from the Safari web browser? Does anyone know how to solve this issue? Thanks.