Setting up fastaiv2 - Locally

Installing FastAi V2 Locally

This guide will help you set up fastaiv2 on a laptop or desktop running Ubuntu LTS.

If you are not running an LTS version of Ubuntu, you should not proceed.

IMPORTANT

  • In this guide, we will not be using conda.

  • We will only be using a combination of virtualenv, virtualenvwrapper and pip. This way you will have full control over exactly what is being installed.

Check your Ubuntu Version:

lsb_release -a

You should see something like:

Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
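If you are scripting the setup, the LTS check above can be automated. Here is a hedged sketch: the function name is my own invention, and it simply pattern-matches the Description line from `lsb_release -d`.

```shell
# Guard for setup scripts: abort early if this isn't an Ubuntu LTS release.
# (Function name is an assumption, not from the guide.)
is_ubuntu_lts() {
    # $1: the Description line from `lsb_release -d`
    case "$1" in
        *Ubuntu*LTS*) return 0 ;;
        *) return 1 ;;
    esac
}

is_ubuntu_lts "Description: Ubuntu 18.04.3 LTS" && echo "LTS - OK to proceed"
```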

Check Nvidia CUDA Drivers

nvidia-smi

Note: You MUST have driver version >= 418.39.

If you don’t have the latest drivers, proceed to the next step and include the driver in the CUDA installation.
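If you want to check the driver requirement programmatically, `nvidia-smi --query-gpu=driver_version --format=csv,noheader` prints just the version string. The small helper below (name is my own) compares it against the 418.39 minimum using `sort -V`, which understands version ordering.

```shell
# Check that the installed Nvidia driver meets the 418.39 minimum for CUDA 10.1.
# Pass the version string, e.g. from:
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
driver_ok() {
    min=418.39
    # sort -V sorts versions numerically; if the minimum sorts first (or they
    # are equal), the installed driver is new enough.
    [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

driver_ok 440.33.01 && echo "driver OK"
```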

Download CUDA 10.1 (for PyTorch 1.4)

Download the local runfile installer for your Ubuntu version.

mkdir -p ~/Downloads && cd ~/Downloads
wget -O cuda_10.1.105_418.39_linux.run https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.105_418.39_linux.run
chmod +x cuda_10.1.105_418.39_linux.run
sudo sh cuda_10.1.105_418.39_linux.run

IMPORTANT:

This run file installer will ask you to select options.
If you have your Nvidia Driver installed already, make sure you uncheck that part of the installation.

If you have another version of CUDA installed, make sure you don’t clobber your existing installation by manually selecting the directory where CUDA 10.1 should be installed.

In the Installer’s UI, select Options -> Root Install Path, enter /usr/local/cuda-10.1, and then select Done.

Now select Install.

If you have a pre-existing version of CUDA, you will be asked:

A symlink already exists at /usr/local/cuda. Update this installation ?
Select `No`.

Now let the installer complete installation.
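Once the installer finishes, `nvcc --version` (from the new install's bin directory) reports the toolkit version. As a sketch, this helper (the name is my own) pulls the release number out of that output; with this guide's setup it should print 10.1.

```shell
# Extract the "release X.Y" number from `nvcc --version` output.
cuda_release() {
    # $1: the full output of `nvcc --version`
    echo "$1" | sed -n 's/.*release \([0-9.]*\),.*/\1/p'
}

cuda_release "$(nvcc --version 2>/dev/null)"   # prints e.g. 10.1 when nvcc is on PATH
```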

Install virtualenv and virtualenvwrapper

To setup virtualenv and virtualenvwrapper, follow the instructions at:

https://www.pyimagesearch.com/2018/05/28/ubuntu-18-04-how-to-install-opencv/ [Look at Step #3]

Update your ~/.bashrc with the following:

# CUDA
export CUDA_10=/usr/local/cuda-10.0 # Only if you have multiple installations of CUDA
export CUDA_10_1=/usr/local/cuda-10.1
export CUDA=$CUDA_10
export USR_LOCAL=/home/$USER/.local
export PATH=$CUDA/bin:$USR_LOCAL/bin:$PATH
export CUDA_PATH=$CUDA
export LD_LIBRARY_PATH=$CUDA/lib64

# Virtualenv
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3

# Virtualenv Wrapper
source /home/$USER/.local/bin/virtualenvwrapper.sh

# Helpers to switch CUDA Versions
# For side by side installs only. Ignore if you only have one version of CUDA

set_cuda_10() {
    echo "Setting CUDA to v10.0"
    export CUDA=$CUDA_10
    export PATH=$CUDA/bin:$USR_LOCAL/bin:$PATH
    export CUDA_PATH=$CUDA
    export LD_LIBRARY_PATH=$CUDA/lib64
}

set_cuda_10_1() {
    echo "Setting CUDA to v10.1"
    export CUDA=$CUDA_10_1
    export PATH=$CUDA/bin:$USR_LOCAL/bin:$PATH
    export CUDA_PATH=$CUDA
    export LD_LIBRARY_PATH=$CUDA/lib64
}

Note:

  • When using fastaiv2 you need to be using CUDA 10.1.
  • If you only have a single CUDA installation, you can make that version the canonical $CUDA.
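One caveat with the set_cuda_* helpers above: they prepend to $PATH on every call, so switching back and forth grows $PATH indefinitely within a session. Here is a hedged alternative (the function name is my own) that first strips old CUDA bin entries before adding the new one:

```shell
# Variant of the set_cuda_* helpers that rebuilds PATH instead of prepending,
# so repeated calls don't accumulate stale CUDA entries.
switch_cuda() {
    local target="$1"   # e.g. "$CUDA_10_1"
    # Remove any existing /usr/local/cuda* bin entries from PATH
    PATH=$(echo "$PATH" | tr ':' '\n' | grep -v '^/usr/local/cuda' | paste -sd: -)
    export CUDA="$target"
    export PATH="$CUDA/bin:$PATH"
    export CUDA_PATH="$CUDA"
    export LD_LIBRARY_PATH="$CUDA/lib64"
}
# usage (after sourcing ~/.bashrc): switch_cuda "$CUDA_10_1"
```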

Install cuDNN

You will need to create an Nvidia Developer account for the next step.
Login, and visit https://developer.nvidia.com/rdp/cudnn-download

IMPORTANT: Make sure you select the TAR file install for CUDA 10.1. This is labelled as cuDNN Library for Linux.

The URL should look something like: https://developer.nvidia.com/compute/machine-learning/cudnn/secure/7.6.5.32/Production/10.1_20191031/cudnn-10.1-linux-x64-v7.6.5.32.tgz.

This URL is actually an HTTP redirect. If you are installing headless, open the Chrome Network tab and copy the final redirected URL.

wget -O cudnn.tar.gz 'https://developer.download.nvidia.com/compute/machine-learning/cudnn/secure/7.6.5.32/Production/10.1_20191031/cudnn-10.1-linux-x64-v7.6.5.32.tgz?a_really_long_encrypted_token'

To install cuDNN from the TAR File you can find instructions at: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar

tar -xzvf cudnn.tar.gz
sudo cp cuda/include/cudnn.h /usr/local/cuda-10.1/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda-10.1/lib64
sudo chmod a+r /usr/local/cuda-10.1/include/cudnn.h /usr/local/cuda-10.1/lib64/libcudnn*
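After copying, you can confirm which cuDNN version landed in place by reading the version defines out of the header. This is a sketch: the helper name and the header path are assumptions based on the steps above.

```shell
# Print the cuDNN version (MAJOR.MINOR.PATCHLEVEL) from a cudnn.h header.
cudnn_version() {
    awk '/#define CUDNN_MAJOR/      {maj=$3}
         /#define CUDNN_MINOR/      {min=$3}
         /#define CUDNN_PATCHLEVEL/ {pat=$3}
         END {print maj "." min "." pat}' "$1"
}
# usage: cudnn_version /usr/local/cuda-10.1/include/cudnn.h   # e.g. 7.6.5
```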

Create a Virtual Environment

mkvirtualenv torch
workon torch
set_cuda_10_1
echo $CUDA # Should point to the right $CUDA directory

IMPORTANT: Any time you run pip install or python, ALWAYS activate your virtual environment and set the right CUDA version first.

Install PyTorch

More instructions at https://pytorch.org/get-started/locally/

pip install torch torchvision

Verify PyTorch installation

From inside the virtual environment, run python:

>>> import torch
>>> torch.cuda.is_available()
True

Install fastaiv2


mkdir FastAi && cd FastAi
git clone https://github.com/fastai/fastcore
cd fastcore
pip install -e ".[dev]"
cd ..

git clone https://github.com/fastai/fastai2
cd fastai2
pip install -e ".[dev]"
cd ..

Install Course Materials

git clone https://github.com/fastai/course-v4.git
cd course-v4
# Dependencies from `requirements.txt`
pip install graphviz ipywidgets matplotlib "nbdev>=0.2.12" pandas scikit_learn azure-cognitiveservices-search-imagesearch sentencepiece

cd ..

Launch Jupyter

From the course-v4 directory, run

jupyter notebook

Thanks for this guide! A couple of questions:

  1. If I install Anaconda first, will that create problems with this non-conda install?
  2. I notice Anaconda has now moved to python 3.8. Will this cause any issues? Hard-way experience has taught me a simple formula that seems to be true more often than not: “new version = bad.” (With the exception of fastai, of course)

I will definitely heed the warning to keep CUDA at 10.1.

I don’t like mixing Anaconda and non-Anaconda installs.

Also, I prefer setting up virtualenvs on my own and having full control on the environments. :slight_smile: Anaconda is a bit too magical for my taste.

I am planning to use the Ubuntu installation for inference only, without CUDA. What changes are required to your installation process? Have the installation commands changed since the release of fastai2?

I have a thin client without a GPU, running Ubuntu 18.04. I want to use it for inference only. What are the steps for installing fastai2 and the course material? I plan to train the models on Colab / Gradient.

Regards

I think the only difference is that you don’t have to do the Nvidia driver + CUDA setup.

Also, fastai can now be installed with pip install fastai, without having to clone fastcore and fastai2 yourself.