Putting the Model Into Production: Web Apps

It appears that the Zeit tarball linked in the deployment tutorial on the website isn’t compatible with the most recent version of fastai. I tried modifying it to use ImageDataBunch.load_empty() but ran into some trouble.

It did deploy, but it doesn’t actually return the prediction. I hit Analyze, the button switches to Analyzing..., and that’s it. Here’s my code; I’d be super grateful if someone could look through it and try to pinpoint why this is happening. See lines 14, 15, 33, 34, and 35 for the changes.

P.S. I’ve never actually created a web app before so I might just be missing something silly.

I’m having problems too with the new fastai code using ImageDataBunch.load_empty(). I’m getting a “TypeError: expected str, bytes or os.PathLike object, not list”

You have to use ImageDataBunch.single_from_classes

It doesn’t work anymore. It’s been replaced by ImageDataBunch.load_empty

I still see single_from_classes in the source code… is it broken?

Many apologies - we accidentally removed single_from_classes. It’s back now. Please update to the latest package.

We do plan to replace it with load_empty - so if you try that and have problems, please let @Sylvain know.


Thanks Jeremy for bringing it back. I was actually working with it, and seeing it go away all of a sudden was a nightmare… :slight_smile:

I guess you meant @sgugger ?

I did, thanks :slight_smile:

How can I retrain my model, after creating it from one dataset, on a small amount of new data and new classes?
For example, say I created a model with 5 classes and 50 data instances per class. Now I want to retrain it with new data and a new class (say, 5 data points of a 6th class) without training it from scratch, i.e. without building a whole new DataBunch with all the data/classes.

A lot of people are struggling to deploy their Flask app on Heroku because of the size and installation of the libraries, so I have written a guide on GitHub in case anyone needs it.

I am also writing a blog post, which is coming soon :slight_smile:


If any of you are struggling with Heroku or other providers, I’d love for you to try Render. The guide for fastai-v3 is here: https://course-v3.fast.ai/deployment_render.html

We don’t have any size restrictions on Docker images.


Shankar, were you able to deploy your classifier on Render?

No Anurag, I will try it over the weekend and will let you know.


Hi @anurag.
Thanks for your Render web service for deploying (fastai) web apps, and for your tutorial.
I just followed it, but I got an error (see the last lines from the terminal in my Web service’s dashboard on the Render site). How can I solve it? Thanks.

Jan 14 08:30:55 PM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 14 08:30:56 PM  INFO[0251] COPY app app/
Jan 14 08:30:56 PM  INFO[0251] RUN python app/server.py
Jan 14 08:30:56 PM  INFO[0251] cmd: /bin/sh
Jan 14 08:30:56 PM  INFO[0251] args: [-c python app/server.py]
Jan 14 08:31:07 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 14 08:31:07 PM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 14 08:31:07 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 14 08:31:07 PM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 14 08:31:07 PM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 14 08:31:07 PM  Traceback (most recent call last):
  File "app/server.py", line 38, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "app/server.py", line 33, in setup_learner
    learn.load(model_file_name)
  File "/usr/local/lib/python3.6/site-packages/fastai/basic_train.py", line 217, in load
    state = torch.load(self.path/self.model_dir/f'{name}.pth', map_location=device)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 367, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 528, in _load
    magic_number = pickle_module.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
Jan 14 08:31:07 PM  error building image: error building stage: waiting for process to exit: exit status 1
Jan 14 08:31:07 PM  error: exit status 1

I believe your model_file_url is incorrect. Try this one:

https://doc-0g-24-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/1l73g1ajuadcpt6um9g7dhjc8ocen9up/1547503200000/07445456612858217027/*/1rJL2DRZjOlJPCnm2rualdpwvtGnmsbPz?e=download
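A quick way to confirm this diagnosis: `_pickle.UnpicklingError: invalid load key, '<'` almost always means the downloaded “model” file is actually an HTML page (`<` is the first byte of `<html>`), e.g. a Google Drive warning or confirmation page, rather than the .pth weights. A minimal sanity check you could run against the downloaded file (a sketch; the file name here is just an example, not the one from server.py):

```python
# Sanity check: a valid PyTorch .pth checkpoint is a zip or pickle file,
# never text starting with '<'. If the first byte is '<', the download
# almost certainly returned an HTML page (e.g. a Google Drive warning
# page) instead of the model weights.
from pathlib import Path

def looks_like_html(path):
    """Return True if the file starts like an HTML document."""
    with open(path, 'rb') as f:
        head = f.read(64).lstrip()
    return head.startswith(b'<')

# Example with a fake "download" that is actually an HTML error page:
bad = Path('model.pth')
bad.write_bytes(b'<html><body>Too many requests</body></html>')
print(looks_like_html(bad))  # True -> re-check your model_file_url
bad.unlink()
```

If this returns True for your downloaded file, the fix is the download URL, not the model.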

Thanks for your answer @anurag.

I used the web service from your tutorial (my model .pth file is on Google Drive). As I used resnet50, my model file is about 300 MB.

I did (in server.py on GitHub), but the Render Web service failed (see below the last lines from the terminal). What did I do wrong?

Jan 14 11:07:44 PM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 14 11:07:46 PM  INFO[0276] COPY app app/
Jan 14 11:07:46 PM  INFO[0276] RUN python app/server.py
Jan 14 11:07:46 PM  INFO[0276] cmd: /bin/sh
Jan 14 11:07:46 PM  INFO[0276] args: [-c python app/server.py]
Jan 14 11:08:01 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 14 11:08:01 PM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 14 11:08:01 PM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 14 11:08:01 PM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 14 11:08:01 PM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 14 11:08:01 PM  Traceback (most recent call last):
  File "app/server.py", line 38, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "app/server.py", line 33, in setup_learner
    learn.load(model_file_name)
  File "/usr/local/lib/python3.6/site-packages/fastai/basic_train.py", line 219, in load
    get_model(self.model).load_state_dict(state['model'], strict=strict)
  File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Sequential:
Jan 14 11:08:01 PM      Unexpected key(s) in state_dict: "0.4.0.conv3.weight", "0.4.0.bn3.weight", "0.4.0.bn3.bias", "0.4.0.bn3.running_mean", "0.4.0.bn3.running_var", "0.4.0.bn3.num_batches_tracked", "0.4.0.downsample.0.weight", "0.4.0.downsample.1.weight", "0.4.0.downsample.1.bias", "0.4.0.downsample.1.running_mean", "0.4.0.downsample.1.running_var", "0.4.0.downsample.1.num_batches_tracked", "0.4.1.conv3.weight", "0.4.1.bn3.weight", "0.4.1.bn3.bias", "0.4.1.bn3.running_mean", "0.4.1.bn3.running_var", "0.4.1.bn3.num_batches_tracked", "0.4.2.conv3.weight", "0.4.2.bn3.weight", "0.4.2.bn3.bias", "0.4.2.bn3.running_mean", "0.4.2.bn3.running_var", "0.4.2.bn3.num_batches_tracked", "0.5.0.conv3.weight", "0.5.0.bn3.weight", "0.5.0.bn3.bias", "0.5.0.bn3.running_mean", "0.5.0.bn3.running_var", "0.5.0.bn3.num_batches_tracked", "0.5.1.conv3.weight", "0.5.1.bn3.weight", "0.5.1.bn3.bias", "0.5.1.bn3.running_mean", "0.5.1.bn3.running_var", "0.5.1.bn3.num_batches_tracked", "0.5.2.conv3.weight", "0.5.2.bn3.weight", "0.5.2.bn3.bias", "0.5.2.bn3.running_mean", "0.5.2.bn3.running_var", "0.5.2.bn3.num_batches_tracked", "0.5.3.conv3.weight", "0.5.3.bn3.weight", "0.5.3.bn3.bias", "0.5.3.bn3.running_mean", "0.5.3.bn3.running_var", "0.5.3.bn3.num_batches_tracked", "0.6.0.conv3.weight", "0.6.0.bn3.weight", "0.6.0.bn3.bias", "0.6.0.bn3.running_mean", "0.6.0.bn3.running_var", "0.6.0.bn3.num_batches_tracked", "0.6.1.conv3.weight", "0.6.1.bn3.weight", "0.6.1.bn3.bias", "0.6.1.bn3.running_mean", "0.6.1.bn3.running_var", "0.6.1.bn3.num_batches_tracked", "0.6.2.conv3.weight", "0.6.2.bn3.weight", "0.6.2.bn3.bias", "0.6.2.bn3.running_mean", "0.6.2.bn3.running_var", "0.6.2.bn3.num_batches_tracked", "0.6.3.conv3.weight", "0.6.3.bn3.weight", "0.6.3.bn3.bias", "0.6.3.bn3.running_mean", "0.6.3.bn3.running_var", "0.6.3.bn3.num_batches_tracked", "0.6.4.conv3.weight", "0.6.4.bn3.weight", "0.6.4.bn3.bias", "0.6.4.bn3.running_mean", "0.6.4.bn3.running_var", "0.6.4.bn3.num_batches_tracked", 
"0.6.5.conv3.weight", "0.6.5.bn3.weight", "0.6.5.bn3.bias", "0.6.5.bn3.running_mean", "0.6.5.bn3.running_var", "0.6.5.bn3.num_batches_tracked", "0.7.0.conv3.weight", "0.7.0.bn3.weight", "0.7.0.bn3.bias", "0.7.0.bn3.running_mean", "0.7.0.bn3.running_var", "0.7.0.bn3.num_batches_tracked", "0.7.1.conv3.weight", "0.7.1.bn3.weight", "0.7.1.bn3.bias", "0.7.1.bn3.running_mean", "0.7.1.bn3.running_var", "0.7.1.bn3.num_batches_tracked", "0.7.2.conv3.weight", "0.7.2.bn3.weight", "0.7.2.bn3.bias", "0.7.2.bn3.running_mean", "0.7.2.bn3.running_var", "0.7.2.bn3.num_batches_tracked".
Jan 14 11:08:01 PM      size mismatch for 0.4.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.4.1.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.4.2.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.conv1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.0.downsample.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
Jan 14 11:08:01 PM      size mismatch for 0.5.1.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.2.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.5.3.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.conv1.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.0.downsample.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
Jan 14 11:08:01 PM      size mismatch for 0.6.1.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.2.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.3.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.4.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.6.5.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.conv1.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 1, 1]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.0.downsample.1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
Jan 14 11:08:01 PM      size mismatch for 0.7.1.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 0.7.2.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
Jan 14 11:08:01 PM      size mismatch for 1.2.weight: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.running_mean: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.2.running_var: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
Jan 14 11:08:01 PM      size mismatch for 1.4.weight: copying a param with shape torch.Size([512, 4096]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
Jan 14 11:08:01 PM  error building image: error building stage: waiting for process to exit: exit status 1
Jan 14 11:08:01 PM  error: exit status 1

Hard for me to debug without knowing more about your models, but it looks like the following line is to blame:

Jan 14 11:08:01 PM      Unexpected key(s) in state_dict: "0.4.0.conv3.weight", "0.4.0.bn3.weight", "0.4.0.bn3.bias", "0.4.0.bn3.running_mean", "0.4.0.bn3.running_var", "0.4.0.bn3.num_batches_tracked", "0.4.0.downsample.0.weight", "0.4.0.downsample.1.weight", "0.4.0.downsample.1.bias", "0.4.0.downsample.1.running_mean", "0.4.0.downsample.1.running_var", "0.4.0.downsample.1.num_batches_tracked", "0.4.1.conv3.weight", "0.4.1.bn3.weight", "0.4.1.bn3.bias", "0.4.1.bn3.running_mean", "0.4.1.bn3.running_var", "0.4.1.bn3.num_batches_tracked", "0.4.2.conv3.weight", "0.4.2.bn3.weight", "0.4.2.bn3.bias", "0.4.2.bn3.running_mean", "0.4.2.bn3.running_var", "0.4.2.bn3.num_batches_tracked", "0.5.0.conv3.weight", "0.5.0.bn3.weight", "0.5.0.bn3.bias", "0.5.0.bn3.running_mean", "0.5.0.bn3.running_var", "0.5.0.bn3.num_batches_tracked", "0.5.1.conv3.weight", "0.5.1.bn3.weight", "0.5.1.bn3.bias", "0.5.1.bn3.running_mean", "0.5.1.bn3.running_var", "0.5.1.bn3.num_batches_tracked", "0.5.2.conv3.weight", "0.5.2.bn3.weight", "0.5.2.bn3.bias", "0.5.2.bn3.running_mean", "0.5.2.bn3.running_var", "0.5.2.bn3.num_batches_tracked", "0.5.3.conv3.weight", "0.5.3.bn3.weight", "0.5.3.bn3.bias", "0.5.3.bn3.running_mean", "0.5.3.bn3.running_var", "0.5.3.bn3.num_batches_tracked", "0.6.0.conv3.weight", "0.6.0.bn3.weight", "0.6.0.bn3.bias", "0.6.0.bn3.running_mean", "0.6.0.bn3.running_var", "0.6.0.bn3.num_batches_tracked", "0.6.1.conv3.weight", "0.6.1.bn3.weight", "0.6.1.bn3.bias", "0.6.1.bn3.running_mean", "0.6.1.bn3.running_var", "0.6.1.bn3.num_batches_tracked", "0.6.2.conv3.weight", "0.6.2.bn3.weight", "0.6.2.bn3.bias", "0.6.2.bn3.running_mean", "0.6.2.bn3.running_var", "0.6.2.bn3.num_batches_tracked", "0.6.3.conv3.weight", "0.6.3.bn3.weight", "0.6.3.bn3.bias", "0.6.3.bn3.running_mean", "0.6.3.bn3.running_var", "0.6.3.bn3.num_batches_tracked", "0.6.4.conv3.weight", "0.6.4.bn3.weight", "0.6.4.bn3.bias", "0.6.4.bn3.running_mean", "0.6.4.bn3.running_var", "0.6.4.bn3.num_batches_tracked", 
"0.6.5.conv3.weight", "0.6.5.bn3.weight", "0.6.5.bn3.bias", "0.6.5.bn3.running_mean", "0.6.5.bn3.running_var", "0.6.5.bn3.num_batches_tracked", "0.7.0.conv3.weight", "0.7.0.bn3.weight", "0.7.0.bn3.bias", "0.7.0.bn3.running_mean", "0.7.0.bn3.running_var", "0.7.0.bn3.num_batches_tracked", "0.7.1.conv3.weight", "0.7.1.bn3.weight", "0.7.1.bn3.bias", "0.7.1.bn3.running_mean", "0.7.1.bn3.running_var", "0.7.1.bn3.num_batches_tracked", "0.7.2.conv3.weight", "0.7.2.bn3.weight", "0.7.2.bn3.bias", "0.7.2.bn3.running_mean", "0.7.2.bn3.running_var", "0.7.2.bn3.num_batches_tracked".
Jan 14 11:08:01 PM      size mismatch for 0.4.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
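The unexpected `conv3`/`bn3`/`downsample` keys are characteristic of ResNet-50’s bottleneck blocks; ResNet-34 uses basic blocks with only `conv1`/`conv2`. So this looks like a resnet50 checkpoint being loaded into a resnet34 architecture. A toy illustration of the mismatch, using hand-written, simplified key lists (not the real state dicts):

```python
# Toy illustration (hypothetical, simplified key names): a ResNet-50
# checkpoint contains conv3/bn3 keys from its bottleneck blocks, while
# a ResNet-34 model (basic blocks) has only conv1/conv2. Loading one
# into the other reports exactly those extra keys as "unexpected".
resnet50_checkpoint_keys = {
    '0.4.0.conv1.weight', '0.4.0.conv2.weight', '0.4.0.conv3.weight',
    '0.4.0.bn3.weight', '0.4.0.downsample.0.weight',
}
resnet34_model_keys = {
    '0.4.0.conv1.weight', '0.4.0.conv2.weight',
}

# Keys present in the checkpoint but absent from the model:
unexpected = resnet50_checkpoint_keys - resnet34_model_keys
print(sorted(unexpected))
```

If the checkpoint really was saved from a resnet50 learner, building the serving model with models.resnet50 instead of models.resnet34 in create_cnn should make the keys and shapes line up (assuming the same head).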

Hello @anurag. My model works perfectly (using resnet50 and fastai v1) in my Jupyter notebook. Since I didn’t know what to change in my model, I switched to a resnet34: I trained it once in my notebook, saved it to a file resnet34-1.pth with learn.save(), uploaded it to my Google Drive, got the shared link, converted it to a download link with the Direct Link Generator, edited /app/server.py on my GitHub with the new download link, committed the change, went to my Web service dashboard on the Render site, and waited for the installation to update. And? … It worked! :slight_smile:
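For anyone repeating the Google Drive step above: “Direct Link Generator” style tools just rewrite the share URL into a direct-download URL. A sketch of that conversion, assuming the common share-link format (the file ID below is made up):

```python
# Sketch: convert a Google Drive share link of the form
#   https://drive.google.com/file/d/<FILE_ID>/view?usp=sharing
# into a direct-download URL. This is the rewrite that "Direct Link
# Generator" style tools perform; the URL format is assumed from
# standard Drive share links.
import re

def drive_direct_link(share_url):
    m = re.search(r'/file/d/([^/]+)', share_url)
    if not m:
        raise ValueError('Unrecognized Google Drive share link')
    return f'https://drive.google.com/uc?export=download&id={m.group(1)}'

print(drive_direct_link(
    'https://drive.google.com/file/d/1aBcDeFgHiJ/view?usp=sharing'))
# https://drive.google.com/uc?export=download&id=1aBcDeFgHiJ
```

Note that for large files (roughly over 100 MB) Drive may serve a virus-scan confirmation page instead of the file itself, which is one way to end up downloading HTML instead of the .pth weights, as happened earlier in this thread.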

Below are the last lines from the Render terminal. We can see that the Render system again wrote “UserWarning: Your training set is empty.”, but this time the build continued and finally passed.

(I will try again with the link to my resnet50 model file and will keep you informed)

Jan 15 10:27:39 AM  Successfully installed Pillow-5.4.1 aiofiles-0.4.0 aiohttp-3.5.4 async-timeout-3.0.1 attrs-18.2.0 bottleneck-1.2.1 certifi-2018.11.29 chardet-3.0.4 click-7.0 cycler-0.10.0 cymem-2.0.2 cytoolz-0.9.0.1 dataclasses-0.6 dill-0.2.8.2 fastai-1.0.39 fastprogress-0.1.18 h11-0.8.1 httptools-0.0.11 idna-2.8 idna-ssl-1.1.0 kiwisolver-1.0.1 matplotlib-3.0.2 msgpack-0.5.6 msgpack-numpy-0.4.3.2 multidict-4.5.2 murmurhash-1.0.1 numexpr-2.6.9 numpy-1.16.0rc1 nvidia-ml-py3-7.352.0 packaging-18.0 pandas-0.23.4 plac-0.9.6 preshed-2.0.1 pyparsing-2.3.1 python-dateutil-2.7.5 python-multipart-0.0.5 pytz-2018.9 pyyaml-3.13 regex-2018.1.10 requests-2.21.0 scipy-1.2.0 six-1.12.0 spacy-2.0.18 starlette-0.9.9 thinc-6.12.1 toolz-0.9.0 torch-1.0.0 torch-nightly-1.0.0.dev20190114 torchvision-0.2.1 tqdm-4.29.1 typing-3.6.6 typing-extensions-3.7.2 ujson-1.35 urllib3-1.24.1 uvicorn-0.3.24 uvloop-0.11.3 websockets-7.0 wrapt-1.10.11 yarl-1.3.0
Jan 15 10:27:40 AM  INFO[0230] COPY app app/
Jan 15 10:27:40 AM  INFO[0230] RUN python app/server.py
Jan 15 10:27:40 AM  INFO[0230] cmd: /bin/sh
Jan 15 10:27:40 AM  INFO[0230] args: [-c python app/server.py]
Jan 15 10:27:55 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:27:55 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:27:55 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:27:55 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:27:55 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:27:56 AM  INFO[0245] EXPOSE 5042
Jan 15 10:27:56 AM  INFO[0245] cmd: EXPOSE
Jan 15 10:27:56 AM  INFO[0245] Adding exposed port: 5042/tcp
Jan 15 10:27:56 AM  INFO[0245] CMD ["python", "app/server.py", "serve"]
Jan 15 10:27:56 AM  INFO[0245] Taking snapshot of full filesystem...
Jan 15 10:30:01 AM   ______________________________
Jan 15 10:30:01 AM  < Pushing image to registry... >
Jan 15 10:30:01 AM   ------------------------------
Jan 15 10:30:01 AM          \   ^__^
Jan 15 10:30:01 AM           \  (oo)\_______
Jan 15 10:30:01 AM              (__)\       )\/\
Jan 15 10:30:01 AM                  ||----w |
Jan 15 10:30:01 AM                  ||     ||
Jan 15 10:30:37 AM   ______
Jan 15 10:30:37 AM  < Done >
Jan 15 10:30:37 AM   ------
Jan 15 10:30:37 AM          \   ^__^
Jan 15 10:30:37 AM           \  (oo)\_______
Jan 15 10:30:37 AM              (__)\       )\/\
Jan 15 10:30:37 AM                  ||----w |
Jan 15 10:30:37 AM                  ||     ||
Jan 15 10:32:11 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:32:11 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:32:11 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:32:11 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:32:11 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:32:11 AM  INFO: Started server process [1]
Jan 15 10:32:11 AM  INFO: Waiting for application startup.
Jan 15 10:32:11 AM  INFO: Uvicorn running on http://0.0.0.0:5042 (Press CTRL+C to quit)
Jan 15 10:32:26 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:388: UserWarning: Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.
Jan 15 10:32:26 AM    warn("Your training set is empty. Is this is by design, pass `ignore_empty=True` to remove this warning.")
Jan 15 10:32:26 AM  /usr/local/lib/python3.6/site-packages/fastai/data_block.py:391: UserWarning: Your validation set is empty. Is this is by design, use `no_split()`
Jan 15 10:32:26 AM                   or pass `ignore_empty=True` when labelling to remove this warning.
Jan 15 10:32:26 AM    or pass `ignore_empty=True` when labelling to remove this warning.""")
Jan 15 10:32:26 AM  INFO: Started server process [1]
Jan 15 10:32:26 AM  INFO: Waiting for application startup.
Jan 15 10:32:26 AM  INFO: Uvicorn running on http://0.0.0.0:5042 (Press CTRL+C to quit)
Jan 15 10:32:35 AM  INFO: ('10.104.12.26', 36626) - "GET / HTTP/1.1" 200
Jan 15 10:32:36 AM  INFO: ('10.104.12.26', 36626) - "GET /style.css HTTP/1.1" 200
Jan 15 10:32:36 AM  INFO: ('10.104.12.26', 36628) - "GET /client.js HTTP/1.1" 200

Hello @anurag. About “UserWarning: Your training set is empty.”, I found the issue.

In your server.py file, you use the single_from_classes method in the following code:

async def setup_learner():
    await download_file(model_file_url, path/'models'/f'{model_file_name}.pth')
    data_bunch = ImageDataBunch.single_from_classes(path, classes,
        tfms=get_transforms(), size=224).normalize(imagenet_stats)
    learn = create_cnn(data_bunch, models.resnet34, pretrained=False)
    learn.load(model_file_name)
    return learn

But single_from_classes is deprecated, as noted in the fastai docs (see also this warning).

What would be the corrected code for the async def setup_learner() function? Thanks.