OK, here’s what I landed on in the end. It works out.
Whilst I’m waiting for it to arrive, I’m thinking about the software setup. I’d like Windows 10 on it eventually, but I’d also like to not install Windows 10 now. @slavivanov said to install Windows 10 first if you’re going to dual boot; I’m wondering if it’s possible to sidestep those issues by installing Windows on an external hard drive. Can you dual boot from 2 hard drives? I’d love to be able to just focus on getting Ubuntu running for the time being and work on Windows later, but I won’t do that if it will ruin my Ubuntu partition!
Hi, I am also leaning towards buying a gaming desktop from Costco for deep learning projects, especially Kaggle competitions. Can you please share your experience with the Costco computer you purchased?
With the help of this forum and the blog posts I listed below, I built my deep learning PC after a couple of months of research and have been using it for a couple of weeks. It was a very fun process!
Specs
Intel - Core i7-6850K 3.6GHz 6-Core Processor
Asus - X99-E ATX LGA2011-3 Motherboard
Corsair - Vengeance LPX 32GB (2 x 16GB) DDR4-2400 Memory
Samsung - 850 Pro Series 512GB 2.5” Solid State Drive
NVIDIA GeForce GTX 1080 Ti Founders Edition
Corsair - Air 540 ATX Mid Tower Case
Seasonic X-1250 1250W 80+ Gold Fully Modular Power Supply
Benchmark
I ran a couple of CNNs on it and was very happy with the performance. You can have a look at the code, results, and the time it took here.
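If anyone wants a quick way to get a rough per-epoch timing on their own box, a minimal sketch along these lines works (this is not my exact benchmark script, the layer sizes are just placeholders, and it assumes Keras with a GPU backend):

```python
# Minimal CNN timing sketch on random data (placeholder sizes, not the real benchmark).
import time
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# Fake dataset: 2,000 RGB images of 64x64 with 10 classes.
x = np.random.rand(2000, 64, 64, 3).astype("float32")
y = np.random.randint(0, 10, size=(2000,))
y = np.eye(10)[y]  # one-hot encode

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    Conv2D(64, (3, 3), activation="relu"),
    Flatten(),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

start = time.time()
model.fit(x, y, batch_size=64, epochs=1, verbose=0)
print("one epoch took %.1f s" % (time.time() - start))
```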
Hello @jeremy
I just wrote to @nolanchan that this pip install of keras does not work on my local computer (see Keras 2 Released), and the conda-forge install of keras 1.x is no longer available online.
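The only workaround I can think of is pinning the last 1.x release directly from PyPI (I believe that is 1.2.2, but I am not certain it matches the course notebooks):

```python
# Untested idea: install a pinned Keras 1.x from PyPI instead of conda-forge,
# then check which version actually gets imported.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "keras==1.2.2"])

import keras
print(keras.__version__)  # expecting something like 1.2.2
```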
Thanks in advance for any ideas to solve my issue.
I thought that getting the 7700K would be better because I could add more memory later. I mean, it’s easier and cheaper to upgrade the RAM than the processor.
But if the processor doesn’t help much, maybe it’s not worth getting the 7700K.
I was thinking about what @EricPB said, he is using a good CPU and just 16GB and he’s able to participate in Kaggle competitions.
More RAM is better, but 16 GB is plenty for many use cases. You may not be able to hold your entire dataset in RAM but for many datasets that wouldn’t be possible anyway.
I’m currently doing the Cdiscount competition on Kaggle, which is about 60GB of (compressed) data. My machine is only using 5 GB of RAM during training.
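The trick is to stream mini-batches from disk instead of loading everything up front. A simplified sketch (not my actual Cdiscount pipeline; it assumes the images have been extracted to individual .npy files and `labels` is a NumPy array):

```python
# Sketch: feed Keras from a generator so only the current mini-batch sits in RAM.
# Assumes `paths` is a list of .npy image files and `labels` is a NumPy array.
import numpy as np

def batch_generator(paths, labels, batch_size=64):
    while True:  # Keras generators are expected to loop forever
        order = np.random.permutation(len(paths))
        for start in range(0, len(order), batch_size):
            idx = order[start:start + batch_size]
            x = np.stack([np.load(paths[i]) for i in idx])  # load just this batch
            y = labels[idx]
            yield x, y

# model.fit_generator(batch_generator(paths, labels),
#                     steps_per_epoch=len(paths) // 64, epochs=10)
```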
Note that many motherboards recommend that you install memory in pairs because this gives a speedup. So you’d buy 2 sticks of 8 GB instead of 1x16 GB. The downside is that now in order to upgrade to 64 GB you can’t use your 2x8 sticks anymore and you’ll have to buy 4 sticks of 16 GB, so it will end up being more expensive. (Installing memory in pairs isn’t required, but check the manual for your motherboard.)
Hello. After agonizing over this for a very long time, I finally decided to buy a proper deep learning machine. Shown below are some of my experiences which I hope can help others.
I went with an open-box unit for $1,619.16. It included a 1-year warranty and space for 2 additional GPUs. I will probably need to upgrade the RAM. Don’t forget the surge protector. It did not come with much bloatware other than Windows.
Partition space in Windows – I tried to skip this step, but Ubuntu did not recognize Windows and I didn’t want to mess up the Windows OS.
While in Windows:
Right click start button
Select Disk Management
Right click on the Windows (C:) box in the middle and select Shrink Volume
I decided to split the C: drive in half – 50% Windows and 50% Ubuntu
The first time I did this, Ubuntu started to load, but eventually froze up with the following errors:
5.908731 nouveau fifo sched_error
5.908732 nouveau fifo sched_error
…
Apparently, this had something to do with the video drivers. To fix this:
Reboot Ubuntu
Wait for the purple screen with the keyboard logo at the bottom
Press any key
Select your language
Press F6
Toggle down to nomodeset and press the spacebar to mark it with an “x”
Press esc
Select “Try Ubuntu without installing” – you can actually install Ubuntu from this option eventually
Very nice, always good to see someone investing in their deep learning progression!!!
If it’s not too late though, I’d consider looking at a CPU with a higher # of max PCIe lanes. The 16 of the 7700K will be absolutely fine for 1 GPU, but if you ever upgrade to a second GPU, you will be bottlenecked by the PCIe lanes of the 7700K, which would then be split 8x/8x.
I just bought all my gear and fell into this same trap. Ended up returning my processor and mobo as a result. I’ve grabbed the 6850K instead, which has 40 PCIe lanes. This way two GPUs can run at full x16 with no bottleneck.
Just a heads up, in case you decide to upgrade to a dual GPU setup.
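If you want to verify what your cards actually negotiate, something like the following should do it (assuming nvidia-smi is on your PATH; note the current width can drop when the card is idle, so check it under load):

```python
# Report current vs. maximum PCIe link width per GPU (x16, x8, ...).
import subprocess

print(subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=index,name,pcie.link.width.current,pcie.link.width.max",
    "--format=csv",
]).decode())
```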
Anyways, really hope you have lots of fun with your new machine!!
You can check PCPartPicker for compatibility. It automatically filters out any results that aren’t compatible.
From what I found, it looks like all the high-end cards are compatible.
As for multi-GPU support, it says on the link you provided:
Supports AMD Quad-GPU CrossFireX™ Technology
Also, PCPartPicker didn’t give me the option to add a second video card when it normally would. Which is to say: no, I don’t believe you can add multiple Nvidia GPUs. And unfortunately, deep learning requires Nvidia, as I’m sure you know.
The loss in performance at x8 lanes vs x16 lanes for a 1080 Ti is pretty small - probably less than ~5%.
I can’t find any benchmarks for the 1080 Ti right now, but you can see almost no loss in performance for a 1080 (a slightly weaker card, so slightly harder to bottleneck)[1][2], and a little higher loss in performance for a Titan (a slightly better card, so easier to bottleneck)[3]; the 1080 Ti is in the middle.
So if you have 2 on a 16-lane CPU, you’re not losing much, as long as you don’t also share those lanes with an NVMe drive.
Thanks Corbin! As long as it supports one high end Nvidia GPU, I am happy already :).
One of my friends said that it makes sense to have one extra GPU (low end is fine) so that when the high-end one hangs due to a calculation or failure, the monitor still responds.
Is this a valid concern? I didn’t see people in this thread talking about this.
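If I do end up with a second card for the display, I guess something like this would keep training pinned to the big GPU (assuming the compute card shows up as device 0; I’d double-check with nvidia-smi):

```python
# Hide every GPU except the compute card from the framework.
# Must be set before importing TensorFlow/Theano/Keras.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # "0" assumed to be the big GPU
```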
How did you determine that using the 8x/8x split was a bottleneck for using two GPUs? Did you actually run into performance problems? If so, would you mind explaining what sort of neural network you were training at the time?
I’m curious to know this, since (as I’ve pointed out before in this thread) the GPU will spend a relatively long time doing its calculations versus the time it spends on transferring data to/from the CPU. So “only” 8x would still be fast enough.
But if you have a counter example, I’d like to see if I can reproduce that. (I think the whole “you need lots of PCIe lanes” thing is a bit of a myth, but I don’t mind being proven wrong.)
Note that those tests are not about doing GPU compute on deep neural networks (as far as I can tell), but playing games or just pure bandwidth tests.
For a game you want the GPU to deliver results in less than 16 ms per frame. So the GPU does a relatively short amount of work, and therefore it becomes more important that you can copy data to the GPU quickly.
When doing deep learning, a forward & backward pass may easily take up 100 ms or more, so the GPU cannot use data at the maximum rate that you can transfer it anyway. The bottleneck here is the GPU, not the PCIe bus.
Considering that those tests show that the performance loss is small even for games, it matters even less for deep learning.
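To put rough numbers on that, here is a back-of-the-envelope sketch (my own assumptions for the batch size, image size, and PCIe 3.0 throughput, not a measured benchmark):

```python
# Rough estimate: time to push one batch over PCIe x8 vs. time the GPU spends on it.
batch_size = 64
bytes_per_image = 224 * 224 * 3 * 4            # float32 ImageNet-sized input
batch_bytes = batch_size * bytes_per_image     # ~38 MB per batch

pcie_x8_bytes_per_s = 8 * 0.985e9              # PCIe 3.0 is ~985 MB/s per lane
transfer_ms = batch_bytes / pcie_x8_bytes_per_s * 1e3

compute_ms = 100.0                             # forward + backward pass, as above

print("transfer ~%.1f ms vs. compute ~%.0f ms" % (transfer_ms, compute_ms))
# ~5 ms vs. ~100 ms: even at x8 the bus sits idle most of the time.
```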