FastAI package requirements


#1

Hi!

I wonder whether I need the latest CUDA and cuDNN libraries installed to follow the fast.ai notebooks. I am working on part 1 (lesson 3) now, and I installed the required packages on my desktop machine a couple of months ago. I'm using CUDA 8.0 with cuDNN 5.3.

Therefore, my question is: do I need to update to the most recent versions of these tools, or can I continue to follow the lessons by updating only the fast.ai package itself?

Also, probably a bit off-topic: is it possible to keep several CUDA versions installed side by side?


(Ramesh Sampath) #2

CUDA 8.0 is fine for PyTorch / FastAI.

AFAIK, if you want to keep multiple CUDA versions, you may need to do that via Docker containers on your machine.


#3

Understood, thank you for the response.

Agreed, Docker is a good idea. I guess there should be a few pre-built Docker containers for the PyTorch and TensorFlow libraries, or just a minimal one with only CUDA installed. By the way, could you please share a link to a good container to start with? (If you use any, of course.)

Now I am using this setup on my machine, but I would say that having a few "clean" containers with only CUDA 8/9 installed would be great.


#4

Though it seems there is a special version of the Docker runtime, modified by NVIDIA, that supports GPU access from containers:

As the diagram shows, one still needs to install the NVIDIA driver on the host to make the GPU accessible from a container.
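
With that runtime in place, each project can pin its own CUDA/cuDNN inside an image while the host only carries the driver. A minimal sketch of such an image, assuming the official `nvidia/cuda` base images on Docker Hub (the exact tag below is illustrative; check which tags are available):

```dockerfile
# Sketch only: pin CUDA 8.0 + cuDNN 5 inside the image; the host
# needs just the NVIDIA driver and the NVIDIA Docker runtime.
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04

# Illustrative: add whatever Python stack the notebooks need.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    rm -rf /var/lib/apt/lists/*
```

A second project could build from a `9.0`-based tag the same way, so both CUDA versions coexist without touching the host install.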

Probably the best way to have several CUDA versions installed is to edit the activate script of a specific conda environment so that it overrides the LD_LIBRARY_PATH variable (at least, this should work for TensorFlow).
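
A minimal sketch of that idea using conda's `activate.d`/`deactivate.d` hook directories. `ENV_DIR` would normally be your environment prefix (`$CONDA_PREFIX`); a scratch directory stands in here, and `/usr/local/cuda-8.0` is an illustrative install path:

```shell
# Per-environment CUDA paths via conda activation hooks (sketch).
ENV_DIR="${CONDA_PREFIX:-$(mktemp -d)}"
mkdir -p "$ENV_DIR/etc/conda/activate.d" "$ENV_DIR/etc/conda/deactivate.d"

# On activation: remember the old LD_LIBRARY_PATH, then prepend CUDA 8.0.
cat > "$ENV_DIR/etc/conda/activate.d/cuda.sh" <<'EOF'
export _OLD_LD_LIBRARY_PATH="$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH"
EOF

# On deactivation: restore the previous value.
cat > "$ENV_DIR/etc/conda/deactivate.d/cuda.sh" <<'EOF'
export LD_LIBRARY_PATH="$_OLD_LD_LIBRARY_PATH"
unset _OLD_LD_LIBRARY_PATH
EOF
```

Each environment can then point at a different `cuda-X.Y` directory, and conda swaps the paths on `conda activate`/`conda deactivate`.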


(Adriano Bottaio) #5

I guess the simplest approach to having several CUDA versions is to tweak the CUDA symlink (e.g. by swapping the destination from /usr/local/cuda -> /usr/local/cuda-8.0 to /usr/local/cuda -> /usr/local/cuda-9.0).

The activate-script approach is just a cleaner way of doing essentially the same thing.
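
The symlink swap can be sketched like this; a scratch directory stands in for `/usr/local`, where the real commands would need sudo:

```shell
# Sketch of switching the CUDA symlink between two toolkit installs.
ROOT="$(mktemp -d)"                     # stand-in for /usr/local
mkdir -p "$ROOT/cuda-8.0" "$ROOT/cuda-9.0"

# Initial state: cuda -> cuda-8.0
ln -s "$ROOT/cuda-8.0" "$ROOT/cuda"

# Switch to 9.0: -f replaces the existing link, -n treats the old
# link as a file rather than descending into the directory it names.
ln -sfn "$ROOT/cuda-9.0" "$ROOT/cuda"
```

Anything that resolves the generic `cuda` path (build scripts, LD_LIBRARY_PATH entries pointing at `cuda/lib64`) then picks up the other toolkit after the swap.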


#6

Yes, agreed, that seems like the way to go.