Deploying using Binder (2021) (Working)


Follow the steps:

  1. Upload 3 files to your repo:
    a. bear_classifier.ipynb (your own notebook, or you can download it from here)
    b. export.pkl (this will be a large file; if it is >25 MB, GitHub won't let you upload it through the web interface, so use the method shown in this video)
    c. requirements.txt (download from here)
    After this, your repo should look like this:

  2. Open mybinder.org. Paste your repo URL in ‘GitHub repository name or URL’, enter ‘/voila/render/bear_classifier.ipynb’ in ‘URL to open (optional)’, and change the ‘File’ option to ‘URL’.
    After this, it should look like this:

  3. Press ‘Launch’. The first launch can take a while; it launches faster next time.
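For anyone who wants a shareable link directly: the form in Step 2 just assembles a mybinder.org launch URL using Binder's `/v2/gh/<owner>/<repo>/<ref>` scheme, with the Voila render path URL-encoded into the `urlpath` query parameter. A small sketch (the repo name below is a placeholder):

```python
from urllib.parse import quote

def binder_launch_url(owner, repo, ref="HEAD",
                      notebook="bear_classifier.ipynb"):
    """Build the mybinder.org link that the Step 2 form produces."""
    # Binder expects the Voila render path, URL-encoded, in `urlpath`.
    urlpath = quote(f"voila/render/{notebook}", safe="")
    return f"https://mybinder.org/v2/gh/{owner}/{repo}/{ref}?urlpath={urlpath}"

print(binder_launch_url("someuser", "bear-classifier"))
# https://mybinder.org/v2/gh/someuser/bear-classifier/HEAD?urlpath=voila%2Frender%2Fbear_classifier.ipynb
```

You can bookmark or share the resulting link instead of filling in the form every time.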

If at Step 2, you are getting errors like ‘404: Not found’ or ‘There was an error when executing cell [4]. Please run Voilà with --debug to see the error message.’ then do the following:


  1. Same as Step 1 above

  2. Open mybinder.org. Paste your repo URL in ‘GitHub repository name or URL’, leave the other fields empty, and press ‘Launch’.

  3. This opens the Jupyter Notebook interface:

    Open bear_classifier.ipynb and press the Voila button at the top:
    This will open the notebook with Voila in another tab. If it runs properly, go back to the Binder home page and follow from Step 2 of the method above.

This should work!
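As an aside, the requirements.txt from Step 1 is usually just a short list of packages for Binder to install. The entries below are only illustrative; check the linked file for the exact contents and versions:

```
# Illustrative only - use the requirements.txt linked in Step 1.
fastai    # ideally pinned to the version that created export.pkl
voila
ipywidgets
```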


This really helped me out, thank you @abhinavnayak11

Thank you! This really helped too! I cloned my own repo with the terminal to include the big file.

However… now I get an error from Voila when I try to access the deployed version, and it also seems that the bear-classifier GitHub repo is buggy. Have you looked into it? :frowning:

This is the error I get and cannot seem to debug.
New to Python :sweat_smile:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-2-87ace48b76b0> in <module>
      1 path = Path()
----> 2 learn_inf = load_learner(path/'export.pkl', cpu=True)
      3 btn_upload = widgets.FileUpload()
      4 out_pl = widgets.Output()
      5 lbl_pred = widgets.Label()

/srv/conda/envs/notebook/lib/python3.7/site-packages/fastai/ in load_learner(fname, cpu, pickle_module)
    372     "Load a `Learner` object in `fname`, optionally putting it on the `cpu`"
    373     distrib_barrier()
--> 374     res = torch.load(fname, map_location='cpu' if cpu else None, pickle_module=pickle_module)
    375     if hasattr(res, 'to_fp32'): res = res.to_fp32()
    376     if cpu: res.dls.cpu()

/srv/conda/envs/notebook/lib/python3.7/site-packages/torch/ in load(f, map_location, pickle_module, **pickle_load_args)
    592
    593                     return torch.jit.load(opened_file)
--> 594                 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    595         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    596

/srv/conda/envs/notebook/lib/python3.7/site-packages/torch/ in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
    851     unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
    852     unpickler.persistent_load = persistent_load
--> 853     result = unpickler.load()
    854
    855     torch._utils._validate_loaded_sparse_tensors()

AttributeError: Can't get attribute 'CrossEntropyLossFlat' on <module 'fastai.layers' from '/srv/conda/envs/notebook/lib/python3.7/site-packages/fastai/'>
```

I get the same error as ruta.zem.

Has anyone gotten this working recently? Going through the book with a couple of classmates.

Thanks for the post @abhinavnayak11! It was super helpful.

However, I was still stuck for a while because my Binder launch ‘failed’. It is solved now, so I’d like to post the solution here for others.

In short: when using your own repo, make sure it is public!

In both methods, after trying to launch on mybinder, it gave the following error message:

Error: Could not resolve ref for XXX/HEAD. Double check your URL. GitHub recently changed default branches from "master" to "main".

This was solved for me when I set the repo to public.

Thanks @abhinavnayak11!

I just deployed mine. I cloned the main repo and replaced the export.pkl with my own, since I had changed the main notebook and did penguins instead of bears. All other changes were cosmetic. Worked like a charm!


Thanks for this tutorial. I have tried both methods, but my deployment still isn’t working. I tried the second method after the first one, and both times it gives me this error.

Method 1

Method 2

My repo is a public one and here is a link to the same. Can anyone help me with where I might be going wrong?


Solved this. The issue was that my repo didn’t have the structure mentioned above: all the code was inside a folder. I moved it out of the folder into the repo root and it worked.
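For anyone hitting the same problem: Binder’s install from requirements.txt and the ‘/voila/render/bear_classifier.ipynb’ path both assume the files sit at the repo root, not inside a subfolder. The working layout looks like this:

```
your-repo/
├── bear_classifier.ipynb    <- matched by /voila/render/bear_classifier.ipynb
├── export.pkl
└── requirements.txt         <- must be at the root for Binder to pick it up
```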

Thanks! :slight_smile:


Hello all,

I have put in a pull request to the main fastai/bear_voila repository in the hope that people like @ruta.zem and @Algorant can get past the issues they faced. The pull request shows the changes needed in requirements.txt to use the old (released) export.pkl file:

To see it working, I also put it on Binder.
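For context on why pinning requirements.txt matters: the AttributeError earlier in the thread is pickle failing to find a class (‘CrossEntropyLossFlat’) at the module path that was recorded when export.pkl was saved, because the fastai version Binder installed no longer has it there. The same failure mode can be reproduced with just the standard library (the ‘fake_layers’ module below is a made-up stand-in, not real fastai code):

```python
import pickle
import sys
import types

# Throwaway module with one class, standing in for fastai.layers
# at the time the model was exported.
mod = types.ModuleType("fake_layers")

class CrossEntropyLossFlat:
    pass

# Pickle stores classes by module path + name, so register the class
# under our fake module before serializing an instance of it.
CrossEntropyLossFlat.__module__ = "fake_layers"
mod.CrossEntropyLossFlat = CrossEntropyLossFlat
sys.modules["fake_layers"] = mod

blob = pickle.dumps(CrossEntropyLossFlat())  # works: the class is findable

# Simulate upgrading to a library version where the class moved away.
del mod.CrossEntropyLossFlat

try:
    pickle.loads(blob)
except AttributeError as e:
    print(e)  # Can't get attribute 'CrossEntropyLossFlat' on <module 'fake_layers'>
```

So either pin fastai to the version that produced your export.pkl, or re-export the model with the same version Binder installs.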