Platform: Local Server - Ubuntu

My apologies in advance for the crudeness of this post, but I wanted to provide some setup instructions for those who intend to run the fastai code without using the cloud options. I know Jeremy has recommended that new users focus on learning the deep learning methodologies over troubleshooting a local installation, but the instructions below should be easy enough to follow. We are coders, right?

They will show how to get fastai versions 1 and 2 up and running independently, with access to the notebooks for both course v3 and course v4. My intent is to create a blog post using fastpages sometime in the future, but with the course having just started last night, I wanted to get the information out there as soon as possible. If you do not want or need the fastai version 1 code or the course v3 notebooks, you can skip steps 3 and 4, and run step 5 later in conjunction with step 8.

The steps below were done using a clean Ubuntu 18.04.4 LTS server install. This install is actually running inside of a virtual machine within UNRAID with GPU passthrough; I will post a blog/topic later on how to set that up as well. My GPU is an Nvidia 1080 Ti. These instructions apply whether you run bare-metal Ubuntu or a virtual machine with GPU passthrough.

1. Install the Nvidia drivers for your GPU:

The first thing Ubuntu needs is the driver for the video card. You can easily check whether drivers are installed by executing the nvidia-smi command at the command line. As I had a clean install, there were no drivers installed. For this setup, I chose the version 440 drivers. Version 440.64 is what was installed using these commands:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-440

After a reboot, running nvidia-smi should show a summary table listing the driver version and your GPU.

Do not proceed until this output appears.

2. Install Anaconda

With the GPU recognized by Ubuntu, we can now install Anaconda. Get Anaconda using this command

wget https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh

Run the installer with

bash Anaconda3-2019.10-Linux-x86_64.sh

You will have to press ENTER and the space bar a few times to accept the license agreement. The install needs a fresh shell to take effect, so exit out and open a new terminal.

3. Create an environment for fastaiv1

Execute the following commands to create the fastaiv1 environment.

conda create --name fastaiv1
conda activate fastaiv1

Run these commands to install fastai v1 and its dependencies, answering y/yes as needed when prompted (taken from the fastai GitHub README):

conda install -c pytorch -c fastai fastai
conda uninstall --force jpeg libtiff -y
conda install -c conda-forge libjpeg-turbo pillow==6.0.0
CC="cc -mavx2" pip install --no-cache-dir -U --force-reinstall --no-binary :all: --compile pillow-simd
conda install jupyter notebook
conda install -c conda-forge jupyter_contrib_nbextensions
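
Once those commands finish, a quick sanity check (my own sketch, not part of the original instructions) is to confirm that the environment imports fastai v1 and can see the GPU:

# run inside python with the fastaiv1 environment active
import fastai, torch
print(fastai.__version__)             # should report a 1.x release
print(torch.cuda.is_available())      # True means the driver from step 1 is working
print(torch.cuda.get_device_name(0))  # e.g. the 1080 Ti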

4. Get the coursev3 data

I like to keep all my cloned repositories in a subdirectory called “repos”. The instructions will reflect this.

mkdir repos
cd repos
git clone https://github.com/fastai/course-v3.git

5. Test the version 1 installation

For my setup, running Ubuntu Server, I do not use a browser on the Ubuntu machine, so I do not start Jupyter Notebook with one. I also forward port 8889 on the Ubuntu machine to port 8888 on my local machine. The following command takes that into account:

jupyter notebook --no-browser --port 8889

Once the server is running, I can copy the link it prints and paste it into my browser, changing the port number in the URL to match the forwarded port (8888 in my case). I will then be presented with the Jupyter Notebook home screen.

With the notebook server running, you can open a terminal by clicking New > Terminal.


Enter the following command if you would like to watch how the GPU is used while training our models:

watch -n 1 nvidia-smi

I suggest now opening the course-v3/nbs/dl1/lesson1-pets.ipynb notebook and running a few cells to verify the code is working.
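
If you prefer a quicker smoke test than the full notebook, the cells below are a condensed sketch of the lesson1-pets training code (fastai v1 API); run them in a new notebook from the fastaiv1 environment:

# condensed lesson1-pets smoke test (fastai v1); a sketch, not a substitute for the notebook
from fastai.vision import *
path = untar_data(URLs.PETS)/'images'
fnames = get_image_files(path)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path, fnames, pat, ds_tfms=get_transforms(),
                                   size=224, bs=64).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1)   # one epoch is enough to confirm the GPU is being used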


Once you get to the training part, you can switch over to the terminal tab and confirm the GPU is being used by watching its memory consumption.

With fastai v1 working with the existing notebooks, we can shut down the server and work on the fastai2 portion. Deactivate the fastaiv1 environment using

conda deactivate

6. Create the fastai2 environment

While still in the “repos” directory (create it if you did not do step 4), execute the following commands (from the fastai2 GitHub page):

git clone https://github.com/fastai/fastai2
cd fastai2
conda env create -f environment.yml
conda activate fastai2

7. Install fastai2 and its dependencies (from the github page)

Run the following commands:

pip install fastai2
pip install nbdev
nbdev_install_git_hooks
conda install pyarrow
pip install pydicom kornia opencv-python scikit-image
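
As with the v1 environment, a short import check (my own addition, not from the GitHub page) confirms the install before moving on:

# run inside python with the fastai2 environment active
import fastai2, fastcore, torch
print(fastai2.__version__, fastcore.__version__)
print(torch.cuda.is_available())   # should still report True in this environment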

8. Test the fastai2 installation

Start the Jupyter notebook server again. Since this is a different environment, you will have to enter the token/password like before. You should see the new fastai2 folder in the file listing.

Open the equivalent notebook from the fastai2 folders; it is similar to the one tested before.

The key difference is that some of the learn commands use the .to_fp16() option. My 1080 Ti and, I believe, the 20-series cards can use this method, but other 10-series cards cannot, so be careful just running these notebooks as-is.
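
If your card falls into that category, one workaround is to simply omit the half-precision call, or convert an existing learner back to full precision. Here is a rough sketch of my own against the pets images (not the notebook's exact code):

from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_re(path, get_image_files(path), r'(.+)_\d+.jpg$',
                                    item_tfms=Resize(224))
# some notebooks create the learner in half precision:
#   learn = cnn_learner(dls, resnet34, metrics=error_rate).to_fp16()
learn = cnn_learner(dls, resnet34, metrics=error_rate)   # omit .to_fp16() on older cards
# or convert an already-created half-precision learner back:
# learn = learn.to_fp32()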

9. Get the course v4 notebooks

With the fastai2 installation validated, you can now get the notebooks being used in version 4 of the course. Stop the server and change back to the “repos” directory. Run the following command:

git clone https://github.com/fastai/course-v4

Trying to run the first notebook in the repo will result in some errors. You will need to install a few more dependencies (credit to @zerotosingularity ):

pip install graphviz
pip install azure
pip install azure-cognitiveservices-vision-computervision
pip install azure-cognitiveservices-search-websearch
pip install azure-cognitiveservices-search-imagesearch
pip install "ipywidgets>=7.5.1"
pip install sentencepiece
pip install scikit_learn

You should be able to restart the notebook server, and see the new directory. The first notebook should now run without errors.
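
To double-check that those extra dependencies landed in the fastai2 environment, a quick import check (my own sketch, covering only the packages installed above) is:

# run inside python with the fastai2 environment active
import graphviz, sentencepiece, sklearn, ipywidgets
print(ipywidgets.__version__)   # should be >= 7.5.1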

I hope this helps people running a local installation.


Thanks for sharing, I’m doing something similar on Ubuntu 18.04 but using virtualenv instead of conda. Note that I tried first with Debian 10 but ran into dependency issues installing CUDA 10.2. Some of these steps might not be necessary since I was playing around running older versions of tensorflow and fastai. Anyway, here are my notes…

Install cuda, cudnn & tensorrt

sudo ubuntu-drivers autoinstall
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda
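# download the cuDNN runtime, cuDNN dev and TensorRT repo .deb packages referenced below from the NVIDIA developer site before running dpkg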
sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
sudo dpkg -i libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.0.0.11-ga-20191216_1-1_amd64.deb 

sudo bash -c 'echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/machine-learning.list'
sudo bash -c 'echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" > /etc/apt/sources.list.d/cuda.list'

sudo apt install libnvinfer-plugin6=6.0.1-1+cuda10.2
sudo apt install nvidia-cuda-dev
sudo apt-get install python3.7 git virtualenv unzip python3-dev

Setup virtualenv

mkdir ~/tools
virtualenv --python python3.7 ~/tools/python3.7_venv

Clone the repo’s

git clone https://github.com/fastai/fastai2
git clone https://github.com/fastai/course-v4
git clone https://github.com/fastai/fastbook

Setup fastai2

source ~/tools/python3.7_venv/bin/activate
cd fastai2
pip install -e ".[dev]"
pip install jupyter notebook
jupyter notebook
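
Before going further, it is worth confirming from inside the virtualenv that the CUDA stack installed above is what PyTorch actually sees (a quick check of my own, not part of the original notes):

# run inside python with the virtualenv activated
import torch, fastai2
print(fastai2.__version__)
print(torch.version.cuda)               # the CUDA version PyTorch was built against
print(torch.backends.cudnn.version())   # the cuDNN version PyTorch picked up
print(torch.cuda.is_available())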

optional - increase swap from 2G to 8G

sudo swapoff -a
sudo dd if=/dev/zero of=/swapfile bs=1G count=8
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

why increase swap?

In my case, I was running out of RAM for some NLP models…

I am having issues with this but I can’t pinpoint the problem with my 1080 Ti when running 10_nlp.ipynb. First, all of its memory was grabbed and the next call to it failed because no memory was available. Then Sylvain mentioned the .to_fp16() call, which I removed, and now none of the notebooks seem to activate the GPU; it’s as if it’s not there, even after many redeployments of the latest fastcore and fastai2 each day with the pip dev install. While writing this I thought I should go back to basics, so I shall reboot and see if that changes anything.

OK, after a reboot the 1080 Ti GPU is being activated, but now I am back to CUDA out of memory at the learn.fit_one_cycle(1, 2e-2) cell in the “Fine tuning the language model” section of 10_nlp.ipynb again.

I’ll try reducing the batch size to bs=6 and run again after stopping the running process to clear GPU memory.

OK, that works so far, using 1691 MiB as I watch nvidia-smi every second.

Changing to bs=32 uses around 3400 MiB; bs=64 uses 7141 MiB on the 1080 Ti with roughly a 30-minute cycle, whereas bs=6 took 1 h 20 m per cycle.
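
For reference, here is a hedged sketch of lowering the batch size for the language-model DataLoaders in 10_nlp.ipynb (using the fastai2 text API rather than the notebook's exact DataBlock code; the bs value is just an example to tune for your GPU):

from fastai2.text.all import *
path = untar_data(URLs.IMDB)
# lower bs until the model fits in GPU memory
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1, bs=32)
learn = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)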


Hi… I am unable to open a Jupyter notebook as per step 8 (with the fastai2 env activated). I already have course-v3, so when I set up fastai2, steps 6 & 7 go fine, but at step 8 I get an error.

I updated conda and tried again, but the same thing happens.
When this env is deactivated it works. Anything else that I should set up?

I had to use the following to resolve a gv() error (“Make sure the Graphviz executables are on your system’s PATH”). I’m guessing it can be used instead of the pip install graphviz documented above.

conda install python-graphviz
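
Either way, here is a quick check of my own that both pieces (the dot executable and the python binding) are visible from the notebook environment:

import graphviz
# renders a trivial graph; raises ExecutableNotFound if the dot binary is missing
graphviz.Source('digraph { a -> b }').pipe(format='svg')
print('graphviz OK')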

FYI, I’m using the AnyDesk remote software (free) to remote from my Windows notebook into the Ubuntu DL system over the public internet. AnyDesk has an easy-to-use port-forwarding feature; use it to forward Jupyter’s port to your notebook’s browser. This solves three main problems: it enables remote access, secures access to the local server over the public internet, and gives easy and (so far) robust port forwarding for Jupyter. Alternatively, there’s also a port-forwarding feature in VS Code, which I can confirm works for Jupyter.

For those on a mac, as of right now you may get an error with pip install sentencepiece. For example, I got

ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Fortunately, there is a fix:

pip install https://github.com/google/sentencepiece/releases/download/v0.1.85/tf_sentencepiece-0.1.85-py2.py3-none-macosx_10_10_x86_64.whl

Note: the install was successful, but I haven’t tested the package (yet).

For mac users:

In addition to the other problem I had (with sentencepiece), I had issues with graphviz. In fact, there are two things:

  1. Graphviz, the graph visualisation software, installed via

brew install graphviz

  2. graphviz, the python package, installed via

pip install graphviz

I’m not sure how the python package knows where to look for the Graphviz software, but it worked for me.

Thanks for the guide, @FourMoBro. A few updates for the Ubuntu installation. I’m running 18.04 LTS.

pip install azure - no longer works, but doesn’t seem necessary to run the nbs
pip install "ipywidgets >= 7.5.1" - requirement already satisfied
pip install scikit_learn - requirement already satisfied

Otherwise, I haven’t had any issues thus far.


I installed fastai2 according to your guide in the top post. Thanks for the clear instructions!

How do I keep the fastai2 that my notebooks use up-to-date? I have been using pip install fastai2 fastcore --upgrade.

The reason I may be confused is that the above command replies
Requirement already satisfied: fastai2 in /home/malcolm/anaconda3/envs/fastai2/lib/python3.7/site-packages (0.0.16)

Yet when in the fastai2 directory, git pull updated many files. Also, I saw in another user’s notebook that they are using fastai2 0.0.17.

To be clear, I do not want to contribute yet to fastai2. I only want to be using the latest released version.

Thanks for your help!

@Pomo, When you run:

pip install fastai2 fastcore --upgrade

You are getting your fastai2 from PyPI. That version is 0.0.16; it was released on March 30th, 2020.

The reason why you are seeing the following message:

is that you are trying to install the latest fastai2 version found on PyPI, and you already have it on your local machine; hence the Requirement already satisfied.

As for the other notebook:

In that notebook, they most likely installed the latest version of fastai2 (0.0.17 as of today) using one of the two following options:

Option 1: From GitHub - non-editable version
You have to install directly from the fastai2 master branch like this:

pip install git+https://github.com/fastai/fastai2.git

As good practice, you should install fastcore at the same time, like this:

pip install git+https://github.com/fastai/fastcore.git

Option 2: From GitHub - editable version

  • Installing the fastai2 editable version
git clone https://github.com/fastai/fastai2
cd fastai2
pip install -e .
  • Installing the fastcore editable version
git clone https://github.com/fastai/fastcore
cd fastcore
pip install -e .

Every time you want to upgrade to the latest version of both fastai2 and fastcore, you run (from the corresponding folder):

git pull

This is already a long answer, but if you would like to learn more about this subject, I wrote a blog post, “3 ways to pip install a package: fastai2 use-case”, where you will find a more detailed answer.


Thanks so much! Your blog post explained exactly how to proceed. Now updated to v 0.0.17.

What was confusing was the need to clone the fastai2 repo before creating the new conda environment, while the fastai2 in actual use is coming from pypi. Is that fastai2 clone now irrelevant?

Again, thanks for your willing response. It saved me a lot of frustration.


@Pomo, You are very welcome!

The cloning copies the environment.yml file to your local machine. It is then used when creating the conda virtual environment like this:

conda env create -f environment.yml

If you are not interested in cloning the fastai2 repo, you can create an empty environment.yml text file on your local machine, copy the content found in the repo, paste it into your newly created file, and run the command above. Afterwards, you pip install fastai2.

Even better, I think you can just add fastai2 as a dependency in the environment.yml file and run the same command (without needing the pip install fastai2 step above):

conda env create -f environment.yml
I don't remember trying this last option, but I think it should work. I will try it whenever I have time.

I just tried the option mentioned above (now struck through), and it doesn’t work because fastai doesn’t yet have a fastai2 package in the Anaconda repository. Until then, one has to create a virtual environment using the environment.yml file found in the fastai2 repo, and then pip install fastai2.

Since it has been more than a month, and the post is not a wiki, I cannot edit the first post, but here is a 4/24/20 Update:

With the new release of Ubuntu 20.04 LTS in the past 24 hrs, I decided to rebuild my VM using this new LTS image. I then proceeded to follow my original instructions to see if they still work. Everything works, for the most part. Here is what I found, similar to what others have since posted, for step 9:

  1. Replace pip install graphviz with conda install graphviz or, as @bsalita suggested, conda install python-graphviz

  2. DO NOT run pip install azure. You will get errors, and it appears it may not be needed (credit @neuradai).

Otherwise, the install steps still work as originally written, even if some of the last few commands can be skipped or changed slightly. Happy learning!!!


Any particular reason for going to 20.04? I purposely went with 18.04, hoping all the bugs were ironed out.


20.04 is an LTS just like 18.04. It just came out, so others may be interested in trying it out. For me it is in a virtual machine running as a headless server that I ssh into, and this VM only runs fastai. If it breaks, so what? I have the 18.04 server in a VM to fall back on. So no real reason other than “why not?”. I will try out the desktop build in another VM, maybe tomorrow.

For those with an older CUDA toolkit or Nvidia driver (in my case CUDA 10.1 and driver 418.67), a workaround is to force pytorch 1.4 and CUDA 10.1.

You can do this by editing a copy of environment.yml (https://github.com/fastai/fastai2/blob/master/environment.yml) to replace the line “pytorch>=1.3” with the lines:
cudatoolkit=10.1
pytorch=1.4

Otherwise, follow the instructions by FourMoBro above.
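
After creating the environment with those pinned versions, a quick check of my own to confirm the resolved combination matches your driver:

import torch
print(torch.__version__)         # expect 1.4.x
print(torch.version.cuda)        # expect '10.1'
print(torch.cuda.is_available())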

Just a heads up for part 2 and Swift for TensorFlow: much of the early releases were on 18.04 LTS.

I’m not sure if that’s included in v4 part 2 this year. The latest from the toolchain suggests 18.04, but 20.04 would probably need a local compile from source.

S4TF Toolchain