RTX 3090 Docker

Are there Docker containers with fastai that I can use on a machine with an RTX 3090?

Download the official NVIDIA PyTorch container, install fastai into it, and you're done.
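A minimal sketch of that route. The NGC image tag below is an assumption (tags change monthly); pick a current one from the NGC catalog:

```shell
# Pull an NVIDIA NGC PyTorch image; the tag is an example only,
# check the NGC catalog for current releases.
IMAGE="nvcr.io/nvidia/pytorch:20.12-py3"
docker pull "$IMAGE"

# Run it with GPU access and install fastai on top of the bundled PyTorch.
docker run --rm -it --gpus all --ipc=host "$IMAGE" \
  bash -c "pip install fastai && python -c 'import fastai; print(fastai.__version__)'"
```

Installing fastai with pip on top of the container's PyTorch avoids the downgrade problem described below, since pip will reuse the already-installed torch if it satisfies fastai's requirements.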

I tried this with:
git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"
It downgraded PyTorch to 1.7 and did not work.


Strange. Why doesn't it work? Any debug messages?

I’m having a similar issue maybe

My setup is as follows:
OS - Ubuntu 20.04.1 (Deleted my WSL2 Windows setup just for this… no regrets 🙂)
GPU - RTX3090

I installed the Nvidia-Container-Runtime and my GPU shows up inside docker when running the following:

docker run --rm --gpus all nvidia/cuda:11.1-base nvidia-smi

I’m also able to run the fastai jupyter notebook in Docker. I added a volume for fastbook

git clone https://github.com/fastai/fastbook.git

docker run -dit --name fastai -v $PWD/fastbook:/fastbook --gpus all -p 8888:8888 fastdotai/fastai ./run_jupyter.sh

But when I try to run 01_intro.ipynb in Jupyter Notebook, the kernel dies.

Any suggestions?

Okay I got it working…

Operating System: Ubuntu 20.04
GPU: RTX 3090

Setting up the Environment

  1. Install the latest Docker
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo usermod -aG docker ${USER}
  2. Install Nvidia-Container-Runtime
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime
  3. Run a test to make sure the GPU is visible in the container
    docker run --rm --gpus all nvidia/cuda:11.1-base nvidia-smi
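A couple of quick sanity checks after the steps above (a sketch; the group check just mirrors the `usermod -aG docker` step):

```shell
# Is the docker CLI on PATH?
docker --version

# Did the `usermod -aG docker` change take effect yet?
# Group membership only updates after logging out and back in.
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "log out and back in for the docker group to apply"
fi
```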

Downloading & Running the fastai Tutorials

  1. Make a folder to place the project files
    mkdir -p ~/source/fastai && cd ~/source/fastai

  2. Grab the project from GitHub
    git clone https://github.com/fastai/fastbook.git

  3. Run the Jupyter Notebook environment in Docker
    docker run -dit --name fastai -v $PWD/fastbook:/workspace/fastbook --gpus all --ipc=host -p 8888:8888 fastdotai/fastai:latest ./run_jupyter.sh
    (Note: --ipc=host raises the container's shared-memory limit, which was my issue)

  4. Get the Jupyter Notebook token in order to authenticate
    docker logs fastai | grep token | cut -d '=' -f2 | head -n 1

  5. Log into Jupyter Notebook and enter the token
    http://localhost:8888/

  6. Go to fastbook/clean to get started
    01_intro.ipynb
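The token pipeline in step 4 just pulls everything after the `=` from the logged login URL. Here it is run against a sample log line (the token value is made up):

```shell
# A made-up line of the kind `docker logs fastai` prints:
sample='    http://127.0.0.1:8888/?token=abc123def456'

# Same pipeline as step 4: keep lines mentioning "token",
# take the part after '=', stop at the first match.
token=$(printf '%s\n' "$sample" | grep token | cut -d '=' -f2 | head -n 1)
echo "$token"
```

Note this simple split breaks if the URL ever contains a second `=`; for the default Jupyter log format it is fine.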


Nice, this approach works for me with an RTX 3090 and Ubuntu 20.04.

Using these commands my container reverted to PyTorch 1.7.0 and CUDA 11.0. Performance would be better with CUDA 11.1 or 11.2, but hopefully PyTorch conda builds are coming soon.


Turns out I was using a deprecated image. I updated my answer to use the image that's being updated daily, and it now shows I'm running CUDA 11.2.

To update just run the following from your ~/source/fastai folder

docker rm -f fastai
docker run -dit --name fastai -v $PWD/fastbook:/workspace/fastbook --gpus all --ipc=host  -p 8888:8888 fastdotai/fastai:latest ./run_jupyter.sh
# grabbing the latest token again
docker logs fastai | grep token | cut -d '=' -f2 | head -n 1

FYI, it shaved 1 second off the initial 15-second training run. Thanks for pointing that out.

Using this method, I still get the following error. Any suggestions?

GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75.
If you want to use the GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
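That error means the installed PyTorch build ships no sm_86 (Ampere, RTX 3090) kernels. Inside the container you can list the compiled architectures with `torch.cuda.get_arch_list()`; the snippet below is a sketch that illustrates the check using the arch list quoted in the error message, not a fix:

```shell
# Inside the container, the real check would be:
#   python -c "import torch; print(torch.cuda.get_arch_list())"
# The error message already tells us what this build was compiled for:
archs="sm_37 sm_50 sm_60 sm_70 sm_75"

if echo "$archs" | grep -qw sm_86; then
  supported=yes
else
  supported=no
fi
echo "RTX 3090 (sm_86) supported by this build: $supported"
```

The fix is a PyTorch build compiled with CUDA 11+ and sm_86 support, e.g. the daily-updated `fastdotai/fastai:latest` image mentioned above or an NGC container, rather than the older CUDA 10.x wheels.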