Using a GTX950M


I have a notebook running Ubuntu 16.04, and the lspci command gives me

01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 950M] (rev a2)

Which is my card. But when I tried to use CUDA (I installed it from the NVIDIA website), I get this error:

WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu0 is not available (error: Unable to get the number of gpus available: CUDA driver version is insufficient for CUDA runtime version)

So the computation takes forever to complete.

I tried GPU0, GPU1, GPU, etc., but nothing worked. Has anyone had a similar problem? Did you fix it?

It means your driver is too old, I believe. That's a pretty slow GPU with limited RAM, however - I'd strongly suggest using Paperspace.
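The error boils down to a version check: the installed driver supports a lower CUDA version than the runtime needs. As a rough illustration of that check (the version numbers below are made up, not the poster's actual versions):

```python
# Sketch: the CUDA runtime refuses to start when the driver's maximum
# supported CUDA version is older than the runtime's own version.
def driver_supports_runtime(driver_cuda, runtime_cuda):
    """Both arguments are (major, minor) version tuples."""
    return driver_cuda >= runtime_cuda

# Hypothetical numbers: a driver capped at CUDA 7.5 cannot serve a CUDA 8.0 runtime.
print(driver_supports_runtime((7, 5), (8, 0)))  # → False
print(driver_supports_runtime((9, 0), (8, 0)))  # → True
```

Updating the NVIDIA driver raises the left-hand side of that comparison, which is why a driver update fixes this particular message.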


Try updating ~/.theanorc to be



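A common ~/.theanorc for GPU use looks something like this (these are typical settings for Theano's old CUDA backend, not necessarily the exact values originally posted):

```ini
# Assumed typical Theano GPU configuration
[global]
device = gpu
floatX = float32
```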
But if you are using Theano, you are on the old version of the course. Theano is past its sell-by date.

Try switching over to the current version (which uses PyTorch) and see if you can get things to work. You should be able to get some things to run faster than the CPU computation you experienced here, but still on the slow side. You can probably get a lot done on sample-sized datasets, but seriously consider Paperspace for full-dataset training. If you have the budget, save yourself some frustration and do everything on Paperspace: $0.40/hr x 10-20 hrs a lesson = $4-8/lesson, plus a few bucks for storage and a public IP. Of course, if you catch the fever, you will spend many more hours and will start hoping for a bitcoin crash so GPUs drop back down to retail price.
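The per-lesson estimate above is just rate times hours; a tiny sketch of that arithmetic (using the rate and hour range stated in the post):

```python
# Rough Paperspace cost estimate: hourly rate times hours spent per lesson.
rate = 0.40          # $/hour, as quoted in the post
hours = (10, 20)     # estimated hours per lesson
low, high = (rate * h for h in hours)
print(f"${low:.0f}-${high:.0f} per lesson")  # → $4-$8 per lesson
```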


Now I have switched, as recommended, to V2 of the course, and it seems that PyTorch is using the GPU because it is REALLY fast.
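A quick way to confirm which device PyTorch picks (a minimal sketch; it falls back to the CPU when PyTorch or CUDA isn't available):

```python
# Pick the GPU when PyTorch is installed and CUDA works; otherwise use the CPU.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
    if device == "cuda":
        # Reports the card name, e.g. "GeForce GTX 950M"
        print("Using GPU:", torch.cuda.get_device_name(0))
except ImportError:
    device = "cpu"
print("device =", device)
```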

Thanks all for the answers

That’s also the GPU on my computer (a Dell 7559 laptop, which I got recently). I’ve been told that this GPU isn’t great for DL, but I had no problems running a couple of examples off YouTube prior to this course… I suppose it won’t cut it for, say, Kaggle competitions, right?