Local setup for Ubuntu 18.04 with NVIDIA GPU 1080

(Yiyi Chen) #1

I have a PC with Windows 10. At first I got frustrated by errors during the Linux setup, so I set up a Linux system and worked all night to get the GPU going on it. I’d like to share the experience here in case someone runs into the same problems.

(Yiyi Chen) #2
  1. First find a driver suitable for your operating system and the GPU.
    Nvidia Download Drivers

  2. The Nouveau kernel driver will most certainly get in the way. There are two ways to solve this:
    2.1. Edit the /etc/modprobe.d/blacklist.conf file to blacklist some modules so they do not interfere. (source: Nvidia GTX 1080 installation on Ubuntu 16.04 LTS)
    # blacklist added for nvidia gtx 1080 installation on ubuntu 18.04
    blacklist amd76x_edac
    blacklist vga16fb
    blacklist nouveau
    blacklist rivafb
    blacklist nvidiafb
    blacklist rivatv

If this does not work for you (it did not work for me; Nouveau was somehow still running after a reboot), try the next approach:
    2.2. Open a terminal and enter the following commands: (source: disable Nouveau)
    $ sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
    $ sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf"

    and update the kernel initramfs:

    $ sudo update-initramfs -u

    then reboot:
    $ sudo reboot
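Taken together, step 2.2 can be sketched as one small script. The `CONF` variable defaulting to a scratch file under /tmp is my addition so the steps can be dry-run safely; on a real system you would write to /etc/modprobe.d/blacklist-nvidia-nouveau.conf with sudo, exactly as above.

```shell
#!/bin/sh
# Sketch of step 2.2 as a single script. On a real system CONF would be
# /etc/modprobe.d/blacklist-nvidia-nouveau.conf (written with sudo); it
# defaults to a scratch file here so the steps are safe to inspect.
CONF="${CONF:-/tmp/blacklist-nvidia-nouveau.conf}"

echo "blacklist nouveau"         >  "$CONF"
echo "options nouveau modeset=0" >> "$CONF"
cat "$CONF"

# On the real file you would then rebuild the initramfs and reboot:
#   sudo update-initramfs -u && sudo reboot
# After the reboot, `lsmod | grep nouveau` should print nothing.
```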

  3. After the reboot, it’s time to install the driver:
    cd ~/Downloads/
    chmod +x NVIDIA-Linux-x86_64-390.59.run
    sudo ./NVIDIA-Linux-x86_64-390.59.run

then reboot again. It works!
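A quick way to confirm the install took (this check is my addition, not part of the original steps): nvidia-smi ships with the driver and prints a status table once the kernel module is loaded. The guard keeps the snippet harmless on machines without the driver.

```shell
# Sanity check after the final reboot: if the driver installed cleanly,
# nvidia-smi prints a table with the GTX 1080, driver version, and
# memory usage. Guarded so the snippet never errors out elsewhere.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi || echo "nvidia-smi present but could not talk to the GPU"
else
    echo "nvidia-smi not found - the driver is not installed (yet)"
fi
```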

(Dmitry Frumkin) #3

On 18.04, you can install Nvidia drivers with a single line:

sudo ubuntu-drivers autoinstall

as described in https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux
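Before running autoinstall you can also ask the tool what it would pick (this preview step is my addition; `ubuntu-drivers devices` comes from the same ubuntu-drivers-common package):

```shell
# List detected hardware and the candidate drivers; the entry marked
# "recommended" is the one `ubuntu-drivers autoinstall` will install.
if command -v ubuntu-drivers >/dev/null 2>&1; then
    ubuntu-drivers devices || true
else
    echo "ubuntu-drivers not found - install ubuntu-drivers-common"
fi
```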
Then follow the instructions at https://github.com/reshamas/fastai_deeplearn_part1/blob/master/tools/setup_personal_dl_box.md

Worked for me just a couple of days ago. Good luck!


(Ekam Singh) #4

Hi Dmitry,

What versions of CUDA and CUDNN did you install on 18.04?

(Dmitry Frumkin) #5

Hello Ekam!

The thing is that I just followed the instructions and things worked! :slight_smile:
(torch.cuda.is_available() == True and things ran fast)
Note that environment.yml contains cuda90 and cudnn as dependencies. I guess that takes care of it.
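To see for yourself what the environment ended up with (my addition; assumes the fastai conda env is activated), you can ask torch directly from a shell:

```shell
# Print whether PyTorch can see the GPU; run inside the activated env.
# The fallback keeps the snippet harmless where torch is not installed.
python -c "import torch; print('CUDA available:', torch.cuda.is_available())" \
    2>/dev/null || echo "torch not importable - activate the fastai env first"
```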


(Yiyi Chen) #6

Yeah. I didn’t install CUDA separately. And it worked.

(Ekam Singh) #7

Wow, thanks guys. I tried it out and it totally worked, that was so easy lol. Installing Ubuntu as dual boot with windows was more of a pain.

(Tait Larson) #8

Quick question. If you have one graphics card do you end up shutting off X and ssh’ing to your machine before you do deep learning on the GPU?

(Arne Schirmacher) #9

No, you just use the command “jupyter notebook” to start a local notebook server, then you navigate to the fastai/courses/dl1 directory, load the lesson notebook and run all cells.

Use the command “nvidia-smi” or “watch nvidia-smi” from a different terminal to inspect the memory usage, temperature etc. of your GPU.
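If the full table is more than you need, nvidia-smi can also print just selected fields (this variant is my addition; `--query-gpu` and `--format` are standard nvidia-smi options):

```shell
# Print only utilization, memory, and temperature in CSV form; add
# `-l 1` to refresh every second, similar to `watch nvidia-smi`.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total,temperature.gpu \
               --format=csv || echo "nvidia-smi present but could not query the GPU"
else
    echo "nvidia-smi not found"
fi
```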

(Tait Larson) #10

Thanks Arne. I’m familiar with how to run jupyter notebook.

I guess the question is, can X windows and pytorch / CUDA processes be run at the same time on the same GPU? If so, is there any significant performance hit this causes when you are training your model?

(Arne Schirmacher) #11

The GPU resources are shared between Linux OS and the machine learning applications. On my system, Linux uses about 400 MByte, the fastai lessons use typically at least 1.5 GByte. When training a model I can continue watching the youtube lessons although the video is a bit jerky. You can even run several training sessions in parallel. The GPU utilization is only occasionally going to 100%, even when training several models. Maybe on my system the main CPU is the limiting factor, not the GPU.


When running X windows and cuda in parallel, X will take up some amount of the GPU memory which is not available for deep learning purposes. If your mainboard came with an onboard Intel graphics card, you might consider using the Intel card for all X windows purposes, thus having the NVidia GPU 100% available for Cuda. See this thread for approaches on how to achieve that.

(Tait Larson) #13

Thank you! Just the info I needed.

(Tait Larson) #14

I’m going to try to follow the instructions here to allow me to startx manually. Not sure if anyone has tested them on 18.04.
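For reference, this is the usual systemd way to do it on 18.04 (a sketch of the typical approach, not the exact instructions from that link; 18.04 uses gdm3 as its default display manager, so substitute yours if it differs):

```shell
# Boot to a text console instead of the graphical login from now on:
sudo systemctl set-default multi-user.target
# Stop the display manager (and X) for the current session only:
sudo systemctl stop gdm3
# ...train on the GPU over ssh, then start X by hand when wanted:
startx
# To go back to booting straight into the desktop:
sudo systemctl set-default graphical.target
```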

(Marc) #15

Hi Tait, I can vouch for this method, which I have used successfully on 2 different PCs:

But the newer approach is simply to install only the headless drivers; see one of the last entries of that same thread.

Or do you not have an internal intel graphics?

(Tait Larson) #16

Or do you not have an internal intel graphics?

I don’t. I just bought this motherboard. Unless I’m missing something it has no onboard graphics card.

I’d happily buy a second, very cheap GPU. But it sounds like just shutting off X when needed or running pytorch/cuda alongside X for some workloads would work fine.


The Intel GPU is part of the CPU, not a separate chip. If you have an Intel CPU, you most likely have it. Your mainboard may or may not have a monitor connector. If it does not have it, you might still be able to route the display output through the connector of the external graphics card - but I have not worked with a configuration like that myself. It might require some BIOS settings to enable it.
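One way to check what display hardware the system actually exposes (my addition; `lspci` comes from the pciutils package):

```shell
# List all display-class PCI devices; an Intel iGPU shows up here as
# something like "Intel Corporation ... Graphics" next to the NVIDIA card.
if command -v lspci >/dev/null 2>&1; then
    lspci | grep -Ei 'vga|3d|display' || echo "no display devices listed"
else
    echo "lspci not found - install pciutils"
fi
```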