Using my own GPU locally (Titan X)

Sorry if this sounds like a terribly dumb question: do we need to use AWS/GCP for training if we already have a Titan X? Can't we just run things locally?

Locally is fine. There are a lot of threads on this topic already, so do a search, and reply on one of them if you get stuck.

(Note that this is considered an advanced topic, so best off using the AWS AMI if you’re not comfortable setting up Linux, libraries, etc.)


I went through the same experience a few months ago as I was discovering Part 1 (2016) and had a gaming PC at home (i.e., decent CPU/HD/RAM).

Would you like me to post some basic guides, in the Beginner Section, on getting your Windows PC to dual-boot with Ubuntu, installing the necessary Deep Learning software (CUDA, cuDNN, TensorFlow, PyTorch), and so on?

@Ekami has some fantastic guides, for example. Even though they were written for Python 2.7 (and not 3.6 etc.), I used them again yesterday to reinstall my dual-boot PC and it felt easy-peasy. :sunglasses:


A post on dual boot would be really good.

On dual booting:

I had issues installing different versions of Ubuntu: I had to do a workaround with Xorg to get my mouse and keyboard running properly under Ubuntu 17.10, and whenever I tried Ubuntu 16.04, it would fail to the GRUB CLI, which I also had issues with. It's fixed now, but it was such a pain to get up and running.

Re: install guides for CUDA, cuDNN, TF, and PyTorch would be good too. These were a pain to install on Windows before I dual-booted a while back, mostly because I kept missing things in the documentation and had installed a different version of cuDNN that threw everything off (so much easier on a Mac!).
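A mismatched cuDNN is easy to catch early by reading the version macros out of `cudnn.h` before building anything against it. Here is a minimal sketch of that check; the header path in the comment is an assumption, so adjust it to wherever your cuDNN headers actually live:

```python
import re

def cudnn_version(header_text):
    """Extract (major, minor, patchlevel) from cudnn.h version macros."""
    version = {}
    for macro in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        match = re.search(r"#define\s+%s\s+(\d+)" % macro, header_text)
        if match:
            version[macro] = int(match.group(1))
    return (version.get("CUDNN_MAJOR"),
            version.get("CUDNN_MINOR"),
            version.get("CUDNN_PATCHLEVEL"))

# Typical usage (path is an assumption -- adjust for your install):
# with open("/usr/local/cuda/include/cudnn.h") as f:
#     print(cudnn_version(f.read()))
```

If the major version printed here doesn't match what your framework's install page asks for, that's the mismatch that "throws everything off".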

Sure, I’ll try and create a post tomorrow.

Just to clarify, as @Ekami can attest, most guides become “partly obsolete” within 6 months in terms of software versions and links: you’ll find guides explaining how to install Python 2.7 on Ubuntu 14.04, while today it’s Python 3.6 on Ubuntu 16.04.

Just like Part 1 v1 in 2016 was based on Python 2.7, Theano (whose end of development has been announced), and Keras 1.2 (vs. Keras 2.0 today).

So you’ll have to keep that in mind when using them :sunglasses:

I would say it depends, but yes, most of the time guides get outdated really quickly! Here is part 1 of my guide, and here is part 2. These guides teach you how to set up your environment for DL. They are not outdated (I hope @EricPB can confirm :smiley:), really beginner-friendly, and I try to explain how everything you do actually "works". Just to clarify, these guides use Python 3, not 2, although they are not "Python-specific" guides :slight_smile:


I used Tuatini’s (@Ekami) guides when I set up mine… They are great!


Thanks! I installed Ubuntu 17.10 and I'm wondering if I should be using CUDA 9 (apparently the PyTorch team just said they're updating their binaries, so maybe I should install from source?).

I would recommend using CUDA 8 for now. The latest release of TensorFlow does not use CUDA 9, nor does PyTorch yet (according to their latest releases). Even though NVIDIA claims CUDA 9 has better performance, I saw someone post benchmarks showing that nothing changed. But anyway, you don't have much of a choice here if you want the latest releases of TF and PyTorch to work :slight_smile:
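The constraint above can be written down as a tiny lookup table. The version numbers below are a snapshot of what's discussed in this thread (late 2017), not current advice, so verify them against each framework's install page before relying on them:

```python
# Snapshot of the compatibility discussed in this thread (late 2017);
# the exact supported versions are assumptions -- check the official
# install pages for TensorFlow and PyTorch before relying on this.
SUPPORTED_CUDA = {
    "tensorflow": {"8.0"},        # stable wheels built against CUDA 8 + cuDNN 6
    "pytorch": {"7.5", "8.0"},    # CUDA 9 binaries not released yet
}

def cuda_ok(framework, cuda_version):
    """Return True if the framework's prebuilt binaries support this CUDA."""
    return cuda_version in SUPPORTED_CUDA.get(framework, set())
```

With a table like this, the CUDA 9 question answers itself: `cuda_ok("pytorch", "9.0")` is `False` until new binaries ship, which is exactly why building from source was the only alternative.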

Indeed, CUDA 9 isn't compatible with the current version of TensorFlow. I made that mistake three days ago and ended up re-installing Ubuntu because it was faster than fixing it.

Actually, I don't think it's compatible with anything yet, and still they present it as the go-to standard while making it hard to find the CUDA 8 version (it's in the "Legacy" section).

Note that the same goes for cuDNN: you need to get the version compatible with CUDA 8 (it was v6 for me).
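Before downloading a matching cuDNN, it helps to double-check which CUDA toolkit is actually on your path. One way is to parse the version banner that `nvcc --version` prints. A sketch, assuming `nvcc` is on `PATH` and prints a "release X.Y" line as the CUDA 8 toolkit did:

```python
import re
import subprocess

def parse_nvcc_release(banner):
    """Pull the 'release X.Y' number out of `nvcc --version` output."""
    match = re.search(r"release\s+(\d+\.\d+)", banner)
    return match.group(1) if match else None

def installed_cuda_release():
    """Return the CUDA release reported by nvcc, or None if nvcc is missing."""
    try:
        banner = subprocess.check_output(["nvcc", "--version"]).decode()
    except (OSError, subprocess.CalledProcessError):
        return None
    return parse_nvcc_release(banner)
```

If this reports `"8.0"`, grab the cuDNN build labeled "for CUDA 8.0" (v6 in this thread's case); any other combination is what caused the breakage described above.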

The guide for dual-boot Ubuntu + Windows is here.