Docker image with RTX compatibility

Does anyone have a docker image made for fast.ai 2018 course with compatibility for RTX cards?

It isn’t difficult to build one. Do you know how?

I believe you already know this, but just to recap:

How to set up a Docker image ready for an RTX GPU and Fastai 2018 with PyTorch 1.0 nightly

Part 1 - Setting up the NVIDIA drivers, nvidia-docker2 and nvidia-container-runtime

If you are on Linux, you only need to install the GPU drivers on the host machine; the full CUDA toolkit is not required.

Note: If you already have this part set up, you can jump straight to Part 2.

1- Install the drivers:

sudo add-apt-repository ppa:graphics-drivers/ppa -y
sudo apt update
sudo apt install --no-install-recommends \
      libcuda1-410 \
      libxnvctrl0 \
      nvidia-410 \
      nvidia-410-dev \
      nvidia-libopencl1-410 \
      nvidia-opencl-icd-410 \
      nvidia-settings
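If you are unsure whether an already-installed driver is recent enough, a small hypothetical helper (not part of the guide) can compare version strings; RTX (Turing) cards need the 410 series or newer, which is why the 410 packages are installed above:

```shell
# Hypothetical helper: compare an NVIDIA driver version against the minimum
# needed for RTX cards. sort -V sorts version strings numerically, so if the
# required version sorts first (or equal), the installed driver is new enough.
check_driver() {
    local version="$1" required="410"
    if [ "$(printf '%s\n' "$required" "$version" | sort -V | head -n1)" = "$required" ]; then
        echo "driver $version is new enough for RTX"
    else
        echo "driver $version is too old for RTX"
    fi
}
check_driver 410.78
check_driver 396.54
```

On a live system you would feed it the output of `nvidia-smi --query-gpu=driver_version --format=csv,noheader`.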

2- Remove any old Docker and nvidia-docker 1.0 installation, then install nvidia-docker2 and nvidia-container-runtime:

# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
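As a sanity check, you can print the `distribution` string the snippet above derives from `/etc/os-release`; on Ubuntu 16.04 it should read `ubuntu16.04`:

```shell
# Prints e.g. "ubuntu16.04"; the nvidia-docker.list URL above is built from it.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
echo "distribution=$distribution"
```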

3- This file registers the NVIDIA runtime with Docker so your GPU is available by default.
Note: Preferably restart your machine after this. Also note that "experimental": true is required in order to build an image with flattened layers: --squash is an experimental API and is unavailable without it.

cat <<EOF  | sudo tee /etc/docker/daemon.json
{
    "experimental": true,
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
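A malformed /etc/docker/daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON first. A sketch using `python3 -m json.tool` (assuming python3 is available, and writing to a temp file so nothing is touched until it passes):

```shell
# Validate the daemon.json contents before installing them;
# json.tool exits non-zero on any syntax error.
cat > /tmp/daemon.json <<'EOF'
{
    "experimental": true,
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid"
```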

Next, decide which base image to use and pull it; here I am using Ubuntu 16.04:

nvidia-docker pull nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

4- Test GPU access with:

nvidia-docker run --rm nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 nvidia-smi

Part 2 - Building the Docker image

5- Create a simple Dockerfile to build your image ready for RTX

Note: Save this code as Dockerfile.

FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

ARG user=root
ENV USER=${user}

ARG uid=0
ENV UID=${uid}

ARG PYTHON_VERSION=3.7
RUN apt-get update && apt-get install -y --no-install-recommends \
         sudo \
         build-essential \
         cmake \
         git \
         curl \
         vim \
         ca-certificates \
         libjpeg-dev \
         libpng-dev &&\
     rm -rf /var/lib/apt/lists/*

# useradd fails if the user already exists (the default build arg is root), so tolerate it
RUN useradd -Um ${USER} || true

# Append rather than overwrite: ">" would wipe the existing sudoers file
RUN echo "${USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

USER $USER

WORKDIR /home/${USER}

RUN curl -o ${HOME}/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
     chmod +x ${HOME}/miniconda.sh && \
     ${HOME}/miniconda.sh -b -p ${HOME}/conda && \
     rm ${HOME}/miniconda.sh

# ${HOME} is not visible to ENV substitution at build time, so spell the path out
ENV PATH=/home/${USER}/conda/bin:${PATH}

RUN  ${HOME}/conda/bin/conda install -y python=${PYTHON_VERSION} numpy mkl mkl-include setuptools cmake cffi typing pyyaml scipy cython pip jupyter notebook && \
     ${HOME}/conda/bin/conda install -y -c pytorch pytorch-nightly && \
     ${HOME}/conda/bin/conda install -y -c mingfeima mkldnn

RUN echo "export PATH=${HOME}/conda/bin:${PATH}" | tee -a ${HOME}/.bashrc

RUN echo "source activate" | tee -a ${HOME}/.bashrc

CMD ["/bin/bash"]

6- Build the image from the folder that contains the Dockerfile (note the trailing dot: docker build expects the build context directory, not the Dockerfile path):

nvidia-docker build --squash \
                    --build-arg user=${USER} \
                    --build-arg uid=${UID} \
                    -t fastai:2018 .
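The --build-arg values simply forward your host user name and uid so that files created in the container match your ownership on mounted volumes. You can check what will be passed in:

```shell
# Shows the values that --build-arg user/uid will receive;
# USER may be unset in some shells, so fall back to id.
echo "user=${USER:-$(id -un)} uid=$(id -u)"
```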

7- When the build finishes, start a container:

nvidia-docker run -it fastai:2018 /bin/bash

Now install Fastai however you prefer; this is just an example that you can adapt as you like.

Part Extra - Build the nvidia/cuda image locally

If the image nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 is not available on Docker Hub, anybody can build it locally too:

1- Clone the dockerfiles from nvidia:

git clone https://gitlab.com/nvidia/cuda.git
cd cuda && git checkout ubuntu16.04 && cd ..

2- Build the nvidia images locally:

nvidia-docker build --squash \
                    -t nvidia/cuda:10.0-base-ubuntu16.04 \
                    ./cuda/10.0/base;

nvidia-docker build --squash \
                    --build-arg repository=nvidia/cuda \
                    -t nvidia/cuda:10.0-runtime-ubuntu16.04 \
                    ./cuda/10.0/runtime;

nvidia-docker build --squash \
                    --build-arg repository=nvidia/cuda \
                    -t nvidia/cuda:10.0-devel-ubuntu16.04 \
                    ./cuda/10.0/devel;

export CUDNN_VERSION=7
export MODE=devel

nvidia-docker build --squash \
                    --build-arg repository=nvidia/cuda \
                    -t nvidia/cuda:10.0-cudnn${CUDNN_VERSION}-${MODE}-ubuntu16.04 \
                    ./cuda/10.0/${MODE}/cudnn${CUDNN_VERSION};
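The image builds above all follow the same pattern, so they can be scripted; here is an echo-only dry-run sketch (it omits the --squash and --build-arg flags, and assumes the ./cuda tree cloned in step 1):

```shell
# Dry run: print the build that would run for each target, in dependency order
# (base must exist before runtime, runtime before devel).
for target in base runtime devel; do
    echo "nvidia-docker build -t nvidia/cuda:10.0-${target}-ubuntu16.04 ./cuda/10.0/${target}"
done
```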

3- Test that the nvidia image is working:

nvidia-docker run --rm nvidia/cuda:10.0-cudnn${CUDNN_VERSION}-${MODE}-ubuntu16.04 nvidia-smi