Productionizing models thread

Thanks Anurag, appreciate the help and look forward to leveraging Render!

I am trying to change the model_dir of cnn_learner, because if I use the default path I get a warning about a read-only file system on Kaggle, and I want to export it to tmp/models/ instead. How can I change the path, or am I doing something wrong?

You can pass model_dir=Path(bla) in the arguments of cnn_learner, with bla pointing to a directory you can write to.
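For instance, a minimal sketch for the Kaggle case, assuming `data` is an ImageDataBunch you have already built (the /tmp/models location is just an illustration of a writable directory):

from fastai.vision import *

# Point model_dir at a writable location so learn.save()/learn.load()
# work despite Kaggle's read-only input file system.
learn = cnn_learner(data, models.resnet34, metrics=accuracy,
                    model_dir=Path('/tmp/models'))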

Thanks for the help. I did it like that and it worked. But now how can I change the path of learn.export, with fname or Path()? Should I read the documentation?

In learn.export, you specify the location you want with learn.export(file=Path(bla)), where bla is a file name (with full location) ending in .pkl.
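Continuing the sketch above (the exact path is illustrative):

# An absolute Path takes precedence over learn.path when the two are joined,
# so this writes to /tmp/models/export.pkl regardless of where learn.path points.
learn.export(file=Path('/tmp/models/export.pkl'))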


I’m having problems using the following code after creating an export.pkl from learn.export():

from os.path import abspath
from fastai.vision import *

learn = load_learner(Path(abspath('./models')))
res = learn.predict(Image(img))

Error

SourceChangeWarning: source code of class 'torchvision.models.resnet.Bottleneck' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
Traceback (most recent call last):
  File "/home/h/.vscode/extensions/ms-python.python-2019.3.6558/pythonFiles/ptvsd_launcher.py", line 45, in <module>
    main(ptvsdArgs)
  File "/home/h/.vscode/extensions/ms-python.python-2019.3.6558/pythonFiles/lib/python/ptvsd/__main__.py", line 391, in main
    run()
  File "/home/h/.vscode/extensions/ms-python.python-2019.3.6558/pythonFiles/lib/python/ptvsd/__main__.py", line 272, in run_file
    runpy.run_path(target, run_name='__main__')
  File "/home/h/miniconda3/envs/drone/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/h/miniconda3/envs/drone/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/h/miniconda3/envs/drone/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/h/work/drone/play.py", line 16, in <module>
    res = learn.predict(Image(img))
  File "/home/h/miniconda3/envs/drone/lib/python3.6/site-packages/fastai/basic_train.py", line 370, in predict
    out = ds.y.reconstruct(pred, ds.x.reconstruct(x[0])) if has_arg(ds.y.reconstruct, 'x') else ds.y.reconstruct(pred)
  File "/home/h/miniconda3/envs/drone/lib/python3.6/site-packages/fastai/data_block.py", line 91, in reconstruct
    return self[0].reconstruct(t,x) if has_arg(self[0].reconstruct, 'x') else self[0].reconstruct(t)
  File "/home/h/miniconda3/envs/drone/lib/python3.6/site-packages/fastai/data_block.py", line 109, in __getitem__
    if isinstance(idxs, Integral): return self.get(idxs)
  File "/home/h/miniconda3/envs/drone/lib/python3.6/site-packages/fastai/data_block.py", line 66, in get
    return self.items[i]
IndexError: index 0 is out of bounds for axis 0 with size 0

Note: the training of this model was done with fp16 precision.

Two basic questions that I’m not sure I understand from the fastai documentation:

  1. When using images in a production model, do I have to resize them (to size=299, as I did when training), or does the export.pkl file do that for me? (I think the fastai documentation says it will, but I do not entirely understand.)
  2. When using learn.get_preds(ds_type=DatasetType.Test), how do you pair the results with your test set? My learner was defined as follows:
    learn = load_learner('./export.pkl', test=ImageList.from_folder(path))
    I know the predictions are accurate; I just can’t seem to determine how to pair them up.
  1. The learner will do that for you. You might lose some of the image if it isn’t square, so make sure that whatever you’re interested in is in the centre.
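A minimal sketch of what that looks like at single-image inference time (the filename is illustrative; learn is the loaded learner):

img = open_image('part.jpg')  # can be any size on disk
# predict() applies the transforms stored in export.pkl (resize, normalize, etc.)
pred_class, pred_idx, probs = learn.predict(img)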

Awesome. Thanks much for the clarification and tip!

For your second question, predictions are in the same order as your filenames. You can find them in learn.data.test_ds.x.items.
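A minimal sketch of the pairing, assuming a classification learner loaded as above (DatasetType comes in with from fastai.vision import *):

preds, _ = learn.get_preds(ds_type=DatasetType.Test)
fnames = learn.data.test_ds.x.items  # filenames, in the same order as preds
for fname, pred in zip(fnames, preds):
    print(fname, learn.data.classes[pred.argmax().item()])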


Thanks much! That certainly worked like a charm!

This is quite a useful piece of information.

:smile:

Hi, everyone.

I made a tutorial for deploying on Render, Heroku and Google Cloud Run: https://github.com/weltonrodrigo/fastai-v3/blob/master/tutorial/deploy-do-classificador-fastai.md

It is in Portuguese right now, but I can translate it.

How could I get it included in the https://course.fast.ai documentation?


I hope this is the right place for my question. I am having great success using fastai to inspect specular parts. My issue is getting PyInstaller to create an executable file so I can easily load my program onto other machines.

import statements:

from fastai.vision import load_learner
from fastai.vision import Image
from torch import from_numpy
from fastai.callbacks.hooks import hook_output

When I try to run the program, I get the below error:

pkg_resources.DistributionNotFound: The 'fastprogress>=0.1.19' distribution was not found and is required by the application

Does anyone know how I can help PyInstaller locate fastprogress?

Thank you so much.
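One common cause of a pkg_resources.DistributionNotFound error under PyInstaller is that the package's dist-info metadata (which pkg_resources checks at runtime) is not bundled by default. A hedged sketch of a possible workaround using a custom hook file; the hook-fastai.py name and the use of --additional-hooks-dir are assumptions, not a verified fix for this exact setup:

# hook-fastai.py -- place in a directory passed via --additional-hooks-dir.
# copy_metadata bundles fastprogress's dist-info so the
# 'fastprogress>=0.1.19' requirement can be resolved at runtime.
from PyInstaller.utils.hooks import copy_metadata

datas = copy_metadata('fastprogress')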


Hey Learners,
I have put together a GitHub repo of cloud GPU providers for deploying and productionizing our projects; check out the repo.
Repo link - https://github.com/zszazi/Deep-learning-in-cloud
PS: I am a newbie to fastai, so please pardon me if I posted this in the wrong thread.


Hi!
I join the club.
I cannot wrap my head around the idea of using deep copies instead of weight sharing in RNNs, as suggested in the referenced GitHub issue. It’s fundamental to backprop over the same parameters and not a copy, no? Am I understanding something fundamentally wrong?

If you find anything besides the GitHub issue, please share.
Thanks!

Anyone tried deploying on Google Cloud Functions yet? It looks like AWS Lambda is limited to 500 MB of /tmp storage, which might be tough for my model (Cloud Functions get 2 GB to share between memory and /tmpfs).

I just spent some time getting fastai to work with Google Cloud Functions, and was able to do so. It’s really pretty easy, with one caveat: in the requirements.txt file, if you just specify fastai, it will attempt to download the CUDA version of pytorch, which is unsupported on Cloud Functions. Thus, you need to manually add the download URL for the Linux Python 3.7 CPU pytorch wheel, which you can get on the pytorch website.
Sample requirements.txt:

google-cloud-storage
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl
fastai

I include google-cloud-storage so I can upload the model to a bucket and the function will grab and download it when it initializes. Here is a sample function that downloads the model to /tmp and loads a language model:

from fastai.text import *
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket("your_bucket")
blob = bucket.blob("export_lm.pkl")
blob.download_to_filename("/tmp/export_lm.pkl")  # /tmp is the writable path on Cloud Functions

learn = load_learner("/tmp", "export_lm.pkl")

def beam_search(request):
    # Cloud Functions pass in a Flask request object.
    request_json = request.get_json(silent=True)
    request_args = request.args
    text = request_args["query"]
    # Predict the next 5 words with beam search and return them.
    return learn.beam_search(text, 5)
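For completeness, a hedged sketch of calling the deployed function from Python; the URL is a placeholder for your function’s HTTPS trigger, and the query text is illustrative:

import requests

# Substitute your region, project, and function name in the URL.
url = "https://REGION-PROJECT.cloudfunctions.net/beam_search"
resp = requests.get(url, params={"query": "The weather today"})
print(resp.text)  # the beam-search continuation returned by the function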

Nice, I may be trying that tomorrow; I’ve been trying to create a Lambda layer with fastai and pytorch, but the zipped file needs to be under 50 MB, which is proving difficult. I’ve removed quite a bit (e.g. spaCy languages), but it’s still 107 MB zipped.

Hat tip to you :tophat: for this!

I got my model deployed as a Google Cloud Function now, so theoretically I have infinite scalability but won’t have to pay anything during low-traffic times. Should be great for launching on Hacker News / Product Hunt in a couple of weeks!
