@mariya If you already have the parts and have a few hours, why not try doing it yourself? The hardware part is easy… just plug and play if you follow the instructions. Getting Ubuntu running might be a bit tricky, but people here are very helpful.
You can just let them install Windows, then install Ubuntu once you get it home.
Think @sravya8 found someone to build hers.
You shouldn’t be intimidated by it though, hardest thing was buying the parts. Now they’re just expensive legos. Just ground yourself a bit to avoid static electricity and don’t spill coffee on it and you’re fine.
While that would be ideal, I don’t have a few hours to spare. Hence the desire to outsource as much as possible.
@sravya8 - I think you went to Central Computer right? Or did you build elsewhere?
Yes, Central Computers it is.
Are you running Ubuntu 16.04? If so, I’d love to know the magic chant to get the 1080 Ti setup running. As of now, CUDA doesn’t recognize the graphics card; I installed the 378.13 driver. X and OpenGL run OK, but no luck with CUDA:
“cudaGetDeviceCount returned 30 -> unknown error”
After taking a break with mud pie and coffee, I found the magic chant: https://devtalk.nvidia.com/default/topic/938988/-solved-cuda-8-0-on-ubuntu-16-04-gpu-not-available/. I purged the drivers + CUDA, rebooted into maintenance mode, installed the NVIDIA 378.13 driver and then CUDA 8.0 (without installing its bundled driver), then continued to boot into X.
Oh sorry I just saw this! Good to hear you figured it out.
Performance is not what I hoped for (about 340s on CatsDogs), but it surely beats the 10,000+ seconds for the CPU version. I’m using an old Z800 workstation; perhaps CPU speed or PCI Express 2 is a performance bottleneck (which would surprise me).
The CPU shouldn’t matter as much, but if you have PCI Express 2 instead of 3, I think it could definitely account for the difference. If you look at the PCIe bandwidth figures here: https://www.trentonsystems.com/industry-applications/pci-express-interface you’ll see that, assuming x16 links, PCIe 3 is twice as fast as PCIe 2: 32 GB/s versus 16 GB/s.
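To put rough numbers on that, here’s a quick back-of-the-envelope check. The per-lane effective rates are assumptions on my part (~500 MB/s for PCIe 2.0, ~985 MB/s for PCIe 3.0 after encoding overhead); the 16/32 GB/s figures above count both directions:

```shell
# Rough per-direction bandwidth of an x16 slot.
# Assumed effective per-lane rates: PCIe 2.0 ~500 MB/s, PCIe 3.0 ~985 MB/s.
lanes=16
pcie2=$(( lanes * 500 ))
pcie3=$(( lanes * 985 ))
echo "PCIe 2.0 x16: ${pcie2} MB/s per direction"
echo "PCIe 3.0 x16: ${pcie3} MB/s per direction"
```

So roughly 8 GB/s versus ~15.8 GB/s each way, which doubles when you count both directions.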
I have an X99-based motherboard with PCIe 3 and only a GTX 1070, and I’m getting 243s on catsdogs. So I think if you can upgrade your motherboard to one with a newer PCIe bus, you should see much faster speeds. With a GPU as nice as the 1080 Ti, it should be worth it.
CPU and PCI Express 2.0 can make a big difference in final training speeds. With an older Intel 3770K and PCI Express 2.0 I was seeing 324s training times; with an Intel 7700K and PCI Express 3.0 and the same 1070, it went down to 229s.
I moved the 1080 Ti into a two-year-old HP Z640 that has PCI Express 3 and an E5-1620 3.5 GHz (4-core Xeon). Cats and dogs now runs at 266s, which is faster than the Z800’s 340s but still far shy of the speeds reported by others.
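For what it’s worth, the relative gains from the two platform swaps reported in this thread work out like this (integer percentages, plain shell arithmetic):

```shell
# Speedup from the reported cats/dogs times (seconds).
echo "Z800 -> Z640, same 1080 Ti: $(( (340 - 266) * 100 / 340 ))% faster"
echo "3770K/PCIe2 -> 7700K/PCIe3, same 1070: $(( (324 - 229) * 100 / 324 ))% faster"
```

Both swaps land in the 20–30% range, so the platform around the GPU clearly matters, even if it doesn’t explain the whole gap.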
When watching NVIDIA’s graphics card information, I noted that GPU utilization is usually 100% on the Z640 but more “chunky” (sometimes dropping to 0%) on the Z800; the GPU is likely starving for data that has not been uploaded yet. I don’t think that’s a PCIe bandwidth issue (max utilization is 3% or less), but it might be latency, non-optimized DMA transfers, etc.
I’m very surprised by this; I expected performance to be purely GPU-bound, but it isn’t as of now. I presume my setup can be tweaked wrt performance (it uses theano.sandbox.cuda rather than gpuarray, per the default install-gpu.sh), but I’ll worry about that later.
Thanks for the feedback!
Kudos for the setup.
I also found the AWS route a bit off-putting.
I am just starting up, but I guess my home server will just work fine.
Found a speed improvement: instead of device=gpu, use device=cuda in .theanorc.
I’m now running catsdogs in 198s (Ubuntu 16.04, GTX 1080 Ti, no overclocking, performance level 2).
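In case it helps anyone, a minimal `~/.theanorc` sketch with that change (this assumes the new libgpuarray backend is installed; floatX=float32 is the usual setting for these notebooks):

```
[global]
device = cuda
floatX = float32
```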
Thanks for the input. I am gingerly applying these tips to my own new environment. Perhaps you have a point of view, as I was working through the install_gpu.sh script. So far I have followed that script but changed my cuda-repos version from 44-1 to 61-1 (I too have a 1080 Ti installed, so a later version seemed better). Then we come to the point where the cudnn libraries are installed. Do I need to change these? Do we know what they were built against? My feeling is they are at a higher level.
I too use a Z640. I have two GPUs, one to feed my ageing monitor (DVI-I); it only has a GB of memory. I placed my 1080 Ti in the bottom x16 slot and came across some issues. The cable from the flash drives (not the USB front ports) to the motherboard restricts the bottom slot when the GPU is large. A more serious issue is that the data cables were stopping the CPU fan from running; these I tied back to get clearance, and I removed the link from the flash drives. The cable is much too heavy, won’t bend, and restricts airflow around the GPU.
Noting what you guys have done, and seeing that you ran ‘run’ files etc., I thought I would do the same, even though I have not run a notebook yet, so I don’t know whether I have any problems. I went to download a run file for CUDA 8 and noted it was cuda_8.0.61_375.26_linux.run.
As I changed version 44-1 to 61-1 of the ‘deb’ at the start (I guess the numbers match), I now feel duty-bound to give my installation a shot. My nvidia-smi reports driver version 375.39 and the correct info for GPU 0 and GPU 1, then reports that one GPU is not supported; this is my GeForce GT 610 for the DVI-I monitor.
I’ll edit this reply after I have time for lunch and run a Jupyter notebook.
EDIT: I am working with the deb package cuda-repo-ubuntu1604_8.0.61-1_amd64.deb, which I have installed. I have cudnn version libcudnn.so.6.0.20 installed; the version of Theano is 0.9.0. When importing theano I get a warning from theano.sandbox.cuda to use gpuarray, so I installed libgpuarray into anaconda2. Trying to run one epoch of dogscats fails with: error: identifier “cudnnSetFilterNdDescriptor_v4” is undefined.
When I run deviceQuery, the Result = PASS, so my NVIDIA side looks okay. I do get the warning about the mismatch between cudnn and Theano.
The driver version is 375.39
So I have some issues, but they seem to be with Theano.
@kzuiderveld has a working setup, although the environment is deployed differently: run files to deploy CUDA 8.0.61 and driver 378.13, then cudnn 6.0.20 from a zip, and Theano 0.9.0 via pip.
After following the script install_gpu.sh with changes to versions and packages, I now have difficulty removing them: sudo apt --purge remove can’t find the package, and I had an awful time rm -R-ing, which resulted in a fresh install of Ubuntu.
Thanks for any help.
I expect there must be a ‘Make your own server for DL2’ thread; is it possible to link this forum to that one so we can share?
To use the 1080 Ti, you need a more recent driver than the one bundled with the CUDA distribution. I did the following AFTER executing the entire install-gpu.sh script (so Anaconda etc. was in place and I was running Theano on the CPU):
- Download and install the NVIDIA 378.13 driver.
- Grab the latest CUDA distribution from NVIDIA - not the .deb file but the regular installer. Then install CUDA without installing the driver.
- Things worked with this setup, but I wondered if I could make things faster with newer libs. So I downloaded a more recent version of libcudnn (6.0.20, which introduces support for NVIDIA Pascal) and installed it.
- The newer libcudnn breaks Theano, so you also have to upgrade your Theano version to the latest (0.9).
- Change to device=cuda in .theanorc.
I got my 1080 Ti in the second slot (“the other x16”); I didn’t run into the cable issues you mention. If your aging monitor doesn’t have a higher resolution than 1920x1200, I’d forget about the second card and drive your monitor via a DisplayPort-to-DVI adapter (one comes with the 1080 Ti; they’re cheap otherwise).
In my initial setup, nvidia-smi did recognize the graphics card but CUDA was not working. Make sure to go to the CUDA samples directory, compile 1_Utilities/deviceQuery and run it - if the program reports Result = PASS as the final line, you’re set.
Just to be clear, I downloaded the 61-1 version of the deb file, not the 44-1 as in the install_gpu script. I assume you used version 44-1 as per the original script before using run files. So I’ll try with the 61-1 deb first.
libcudnn I may have to change, as I have the one from platform.ai used in the script. I already have Theano 0.9.
I will try the CUDA sample to see where I am.
I noted the DVI adapter after forking out for the other display card; life is wonderful sometimes.
I was able to make deviceQuery.o but can’t see how to link it. It’s been quite a long time since I built at the command line.
Anyone thinking about AMD Ryzen-based builds? I was thinking about an AMD Ryzen 7 1700, 8 cores at 3.0 GHz. They can be had for $299.
@kzuiderveld Hi, I am having a weird issue with ~/.theanorc. Using the dpkg I have /usr/local/cuda, which is a link to cuda-8.0, which has further links to include/ and lib64/ that both point into cuda-8.0/targets.
I am testing with the code from this page:
THEANO_FLAGS=device=cuda0 python gpu_tutorial1.py
The result is either “used the cpu” or “used the gpu”.
Now the issue is that when I place
in the rc file, I get the error theano.gpuarray: could not init pygpu.
Similarly, if I write
I get the same.
But if I write
I get “used the gpu”.
However, although that works, it can’t find the header for cudnn, so it can’t compile; hence slower, I guess.
So my question is: in your setup, are there links like /usr/local/cuda-8.0/include -> targets/x86_64-linux/include?
Had to add a [dnn] section with pointers to lib64 and include in ~/.theanorc, but it took no effect.
Added CPLUS_INCLUDE_PATH to the Linux environment, and now it finally works and finds cudnn.h. However, this was version 5; after changing back to version 6.0.20, everything works fine outside of the notebook, but with a warning about cudnn. In the notebook it blows up, as it’s still looking for libcudnn5.
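For anyone hitting the same wall, this is the kind of thing I added to the environment. The paths are an assumption on my part (a default CUDA 8.0 install with cudnn.h copied into /usr/local/cuda/include); adjust them if yours differ:

```shell
# Make the cuDNN header and library visible to Theano's compile calls.
# /usr/local/cuda/include and /usr/local/cuda/lib64 are assumed locations.
export CPLUS_INCLUDE_PATH=/usr/local/cuda/include${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}
export LIBRARY_PATH=/usr/local/cuda/lib64${LIBRARY_PATH:+:$LIBRARY_PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "compiler include path now: $CPLUS_INCLUDE_PATH"
```

Putting these in ~/.bashrc makes them stick across sessions; for notebooks, the Jupyter server has to be started from a shell that already has them set.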