Docker image for fastaiv2+ubuntu18.04+sudo+default user setup in a conda environment (compressed size=2.17GB)


Dockerhub link, Dockerfile link

Summary of Dockerfile contents

  1. Builds on nvidia/cuda:10.2-base-ubuntu18.04. Configuration of the NVIDIA driver is done in the image. On the host machine, follow the instructions of NVIDIA/nvidia-docker to set up the nvidia-docker2 packages.

  2. sudo + default user setup.
    username = default
    password = default
    userid = 1001

    The /home/default directory is set up in the same way as in a regular Ubuntu install.

  3. Miniconda setup

    • python=3.8.3
    • jupyter notebook = 6.0.3
    • pytorch = latest release
    • torchvision = latest release
    • nbdev = master commit
    • fastcore = master commit
    • fastaiv2 = master commit
  4. jupyter notebook. To use the notebook inside the docker container, the command is

    jupyter notebook --ip= --port=8889

    Any port can be used; --ip= makes the notebook reachable through the published port. To avoid typing this long command, I have added an alias in .bashrc as follows:

    alias note='jupyter notebook --ip= --port=8889'

    This assumes you started your container with -p {host_port}:8889. I generally make these ports the same (-p 8889:8889). Now the jupyter notebook can be started by typing note in the terminal.

  5. pytorch install. When I built the docker image, the latest release was 1.5.1. I will update the fastai:latest image as new releases of pytorch come out.

  6. fastai packages. In the home directory there is a script which will update nbdev, fastcore, and fastaiv2 to the latest git commit.

  7. test_container.ipynb. This notebook can be used to

    • check that PyTorch is using the GPU
    • check that fastai works
    • print the versions of all the main packages in the container
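The version-printing part of test_container.ipynb is not reproduced here; as a rough sketch (the function name, error handling, and exact package list are my own, not the notebook's code), it could look like:

```python
import importlib

def report_versions(packages):
    """Import each package and report its version, or note that it is missing."""
    lines = []
    for name in packages:
        try:
            mod = importlib.import_module(name)
            lines.append(f"{name}: {getattr(mod, '__version__', 'unknown')}")
        except ImportError:
            lines.append(f"{name}: not installed")
    return lines

# Package names taken from the list above; trim or extend as needed.
for line in report_versions(["numpy", "pandas", "torch", "torchvision", "fastai2"]):
    print(line)
```

Running this inside the container should reproduce the version table below; outside it, missing packages are reported instead of raising.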

Package versions

The following versions are printed by the code in test_container.ipynb:

    matplotlib: 3.2.2
    notebook: 6.0.3
    numpy: 1.18.5
    pandas: 1.0.5
    pillow: 7.1.2
    pip: 20.1.1
    python: 3.8.3
    scikit-learn: 0.23.1
    scipy: 1.5.0
    spacy: 2.3.0
    pytorch: 1.5.1
    torchvision: 0.6.0a0+35d732a
        Hash = bf455de9bc37c76f7f92b3c43227ef9d4779b614
        Time = 2020-06-17 20:23:42 -0400
        Hash = 4a2d5ea702d0dc4a6c34c4acefafd9b494d9e222
        Time = 2020-05-20 05:51:34 -0700
        Hash = 465597eedfb52ad5cd7cd6c378b8da6c851b4796
        Time = 2020-06-22 12:47:11 -0400

PyTorch+GPU example

In [1]: import torch                                                                                                                                                         

In [2]: torch.cuda.is_available()                                                                                                                                            
Out[2]: True

In [3]: !nvidia-smi                                                                                                                                                          
Fri Jun 26 11:20:22 2020       
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  Quadro M1200        Off  | 00000000:01:00.0  On |                  N/A |
| N/A   48C    P5    N/A /  N/A |    657MiB /  4043MiB |      0%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |

In [4]: a = torch.zeros(100,100, device=torch.device('cuda'))                                                                                                                

In [5]: a.shape, a.device                                                                                                                                                    
Out[5]: (torch.Size([100, 100]), device(type='cuda', index=0))

fastai example

In [1]: from fastai2.vision.all import *

In [2]: path = untar_data(URLs.CAMVID_TINY) 
   ...: codes = np.loadtxt(path/'codes.txt', dtype=str) 
   ...: fnames = get_image_files(path/"images") 
   ...: def label_func(fn): return path/"labels"/f"{fn.stem}_P{fn.suffix}" 
   ...: dls = SegmentationDataLoaders.from_label_func( 
   ...:     path, bs=8, fnames=fnames, label_func=label_func, codes=codes 
   ...: )                                                                                                                                                                    
In [3]: dls.show_batch(max_n=2)                                                                                                                                              

In [4]: learn = unet_learner(dls, resnet18, pretrained=False) 
   ...: learn.fine_tune(1)                                                                                                                                                   
epoch     train_loss  valid_loss  time    
0         3.365898    3.249032    00:10                                                                                                              
epoch     train_loss  valid_loss  time    
0         2.496570    3.055262    00:04                                                                                                              

In [5]: !rm -r /home/default/.fastai/data/camvid_tiny 

Docker commands

  1. docker run -it --gpus device=0 --name temp -p 8889:8889 fastai:latest

    • -it - open a terminal connected to the container
    • --gpus device=0 - which GPUs to use in the container (all will use all GPUs)
    • --name temp - name of the container
    • -p 8889:8889 - the format is {host_port}:{container_port}
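The flags above compose mechanically, so if you script container startup, a small helper can assemble the command. This is a hypothetical sketch (function name and defaults are mine); its output matches the exact command in step 1:

```python
def docker_run_cmd(image, host_port, container_port, gpus="device=0", name="temp"):
    # Assemble the `docker run` command from the flags described above.
    return (f"docker run -it --gpus {gpus} --name {name} "
            f"-p {host_port}:{container_port} {image}")

print(docker_run_cmd("fastai:latest", 8889, 8889))
# docker run -it --gpus device=0 --name temp -p 8889:8889 fastai:latest
```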
  2. To attach another terminal to a running container the commands are

    docker ps
    docker exec -it {id} bash

    Get the container id using docker ps and then use that id in the second command, e.g. docker exec -it 2a0dd52f02e8 bash.
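If you script this step, the id is just the first column of the first non-header row of docker ps output. A hypothetical parser (the sample output below is illustrative, not captured from a real run):

```python
def first_container_id(docker_ps_output):
    """Return the id column of the first container row, or None if none are running."""
    lines = docker_ps_output.strip().splitlines()
    if len(lines) < 2:   # header only -> no running containers
        return None
    return lines[1].split()[0]

sample = '''CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS          PORTS   NAMES
2a0dd52f02e8   fastai:latest   "bash"    10 minutes ago   Up 10 minutes           temp'''
print(first_container_id(sample))  # 2a0dd52f02e8
```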

  3. Log in to Docker Hub with docker login --username {docker_hub_username}.

  4. To start a stopped container the commands are

    docker start {container_name}
    docker container attach {container_name}

Add docker to nbdev_template

I think this process can be automated, but for now the steps are

  1. Create docker/Dockerfile file and add these contents to it, depending on your preference.

    FROM kushaj/fastai:latest
    # (optionally) add some meta-information
    LABEL author="..." \
          email="..." \
          website="..."
    WORKDIR /home/default
    # To install library with git
    RUN git clone {git_url}  && \
        pip install -e {lib_name}
    # To install using pip
    RUN pip install {lib_name}
  2. Build docker image.

    docker build -f docker/Dockerfile -t {image_name}:{image_tag} docker/
    • -f - location of the Dockerfile
    • docker/ - location of the build context (same directory as the Dockerfile in this case)
  3. Push to Dockerhub

    docker tag {image_name}:{image_tag} {dockerhub_username}/{image_name}:{image_tag}
    docker push {dockerhub_username}/{image_name}:{image_tag}
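The tag/push pair follows directly from the naming scheme {dockerhub_username}/{image_name}:{image_tag}. A hypothetical helper (mine, not part of the repo) that produces both commands, shown here for kushaj/fastai:latest:

```python
def dockerhub_push_cmds(image, tag, username):
    # Build the qualified name {username}/{image}:{tag}, then the two commands from step 3.
    full = f"{username}/{image}:{tag}"
    return [f"docker tag {image}:{tag} {full}",
            f"docker push {full}"]

for cmd in dockerhub_push_cmds("fastai", "latest", "kushaj"):
    print(cmd)
# docker tag fastai:latest kushaj/fastai:latest
# docker push kushaj/fastai:latest
```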

Help reduce image size

If the image size of this Dockerfile can be reduced, please let me know.


I am curious what your intent for your docker image is: is it meant to be used to go through the notebooks for a particular course version?

I was looking at maybe having multiple images:

  • Base FastAI – used as base for a FastAI release
  • Class v3 FastAI – used to go through notebooks for a particular course version. Will be built on base
  • Flask FastAI – with Flask. Will be built on base

I was looking myself at building something like the above. Though I have a much larger image right now, I was mainly concerned with making all the notebooks in v3 work first (I think I have done this). Then I was going to work on removing bloat.

If you are looking at doing something like that, please let me know so I can contribute/share.

When I made this image, I wanted something that I could use to quickly set up a docker image for a paper that I implemented (or anything that I did with nbdev_template). This is the reason why I added the section Add docker to nbdev_template. Using this base image, I only need to do pip install {lib} and the paper that I implemented is available as a docker image.

I also wanted something that was reproducible in terms of package version numbers. Basically, whenever I make a new docker image, I show the versions of all the packages inside it, so that people can see which packages it was built on (done using test_container.ipynb). In the future, if I update the docker image, I can use nbdev_test_notebooks to see if the project works on the newer package versions; if yes, I update those package versions in the README.

I initially thought about this but did not do it, as fastai2 is still evolving, so it did not make much sense to make an image for an old pip release. For this reason I added these version numbers:

        Hash = bf455de9bc37c76f7f92b3c43227ef9d4779b614
        Time = 2020-06-17 20:23:42 -0400

If we want to make a docker image for a particular fastai release, the following lines need to be changed (for example, by checking out the release tag after cloning):

    git clone && \
    cd fastcore                                  && \
    pip install -e ".[dev]"                      && \
    cd ..                                        && \
    git clone  && \
    cd fastai2                                   && \
    pip install -e ".[dev]"                     


Alternatively, a pinned release can be installed from pip:

    pip install fastai2=={version}

This is an extension of the above. After installing a particular fastai version using pip, we just need to do git clone {course-repo} (and then maybe pip install -r requirements.txt).