Building deep learning workstation for $2200


#1

Hey guys, this is the first PC I’m building myself, so I expect I’ll make a decent number of mistakes that might be obvious to more experienced people. I plan to use this PC for deep learning / computer vision tasks. I’m going to start with one GPU but plan to purchase up to three in the future. Kindly give your opinion.

https://pcpartpicker.com/list/M4DKkd


(Cedric Chee) #2

Check the existing threads on here for much richer discussions.


#3

I’ve been building PCs for 20 years and I think it looks like a nice one.

Any thoughts on going with a 1080 Ti vs a 2080/2080 Ti? I’ve signed up for the upcoming live course and was pondering whether to upgrade my 1060 6GB to one of these options. I’m wondering whether 8GB at half precision can handle as much as, or more than, 11GB at full precision.


#4

Tim Dettmers’ TL;DR advice:

Best GPU overall : RTX 2080 Ti
Cost-efficient but expensive : RTX 2080, GTX 1080
Cost-efficient and cheap : GTX 1070, GTX 1070 Ti, GTX 1060
I work with datasets > 250GB : RTX 2080 Ti or RTX 2080
I have little money : GTX 1060 (6GB)
I have almost no money : GTX 1050 Ti (4GB) or CPU (prototyping) + AWS/TPU (training)
I do Kaggle : GTX 1060 (6GB) for prototyping, AWS for final training; use fastai library
I am a competitive computer vision researcher : RTX 2080 Ti; upgrade to RTX Titan in 2019
I am a researcher : RTX 2080 Ti or GTX 10XX -> RTX Titan — check the memory requirements of your current models
I want to build a GPU cluster : This is really complicated, you can get some ideas here
I started deep learning and I am serious about it : Start with a GTX 1060 (6GB) or a cheap GTX 1070 or GTX 1070 Ti if you can find one. Depending on what area you choose next (startup, Kaggle, research, applied deep learning) sell your GPU and buy something more appropriate
I want to try deep learning, but I am not serious about it : GTX 1050 Ti (4 or 2GB)


#5

I’ve posted in the other pinned thread but just found this one, and my build is very similar. I’m actually amazed we both planned for up to three GPUs and picked equivalent motherboards!

https://au.pcpartpicker.com/list/Fq6VbX

Interesting differences:
I almost went Threadripper as well, but eventually decided on the Intel i7-7800X for compatibility with the MKL library. It has 28 PCIe lanes and 6 cores, so it should be OK for three GPUs (x8/x8/x8, with x4 left over for NVMe), even though the Threadripper has more cores.

I’ve chosen a second SSD instead of an HDD, since I understand you can split input and output across different drives to avoid I/O bottlenecks.

My 1000W power supply might be cutting it close; I need to research that more.

The CPU has been the hardest choice for me so far, since we need a good CPU with enough cores to avoid data augmentation bottlenecks.
The GPU is a relatively easy choice (multiple 1080 Tis for prototyping, with a 2080 Ti upgrade down the track if it makes sense performance/cost-wise; currently it doesn’t seem to).

Interested in any thoughts.


(Andrea de Luca) #6

This is a very interesting question. It’s my dilemma as well. It will depend on how much of the mixed-precision computation is actually done in FP16.

Until we have solid benchmarks on every typical task, I think it is better to stick with an 11/12GB Pascal.
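In the meantime, a rough way to sanity-check the FP16 memory question on whatever card you already have is to measure peak CUDA memory for the same forward/backward pass in FP32 and in half precision. A minimal PyTorch sketch, using a placeholder ResNet-50 and batch size, and naively casting everything with .half() rather than doing true mixed precision (so treat the measured savings as optimistic):

```python
import torch
import torchvision

def peak_mib(model, x):
    # Peak CUDA memory for one forward/backward pass, in MiB.
    torch.cuda.reset_max_memory_allocated()
    model(x).sum().backward()
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1024 ** 2

# FP32 baseline -- placeholder model and batch size, swap in your own workload.
model = torchvision.models.resnet50().cuda()
x = torch.randn(32, 3, 224, 224, device="cuda")
fp32 = peak_mib(model, x)
del model, x
torch.cuda.empty_cache()

# Naive FP16: weights and inputs cast to half. Real mixed precision keeps
# FP32 master weights and uses loss scaling, so savings are usually under 2x.
model = torchvision.models.resnet50().half().cuda()
x = torch.randn(32, 3, 224, 224, device="cuda").half()
fp16 = peak_mib(model, x)

print(f"peak memory  FP32: {fp32:.0f} MiB  FP16: {fp16:.0f} MiB")
```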


#7

His comparisons of the RTX cards in that post are theoretical, since they hadn’t come out yet. The article says he would update it after the GPUs came out; I’ve been checking back every week since launch, but still nothing. It would be great if the 2080 topped the chart in performance per dollar, as it does in his estimates.


#8

Looks like it should be soon:

Tim Dettmers (@Tim_Dettmers), Oct 4, 2018:

Given the new benchmark results for the RTX cards. I would currently recommend an RTX 2080 if the 8 GB of RAM is sufficient; RTX 2080 Ti otherwise. GTX 1080 Ti can be a good option if you can find a cheap (used) one. I will update my blog post this weekend.

And, retweeting @Master_Yoda_1’s link to the new Puget Systems benchmarks:

These benchmarks are exactly what I have expected from my theoretical analyses. Shows much better TensorFlow RTX 2080 and RTX 2080 Ti performance than previously shown. Even the LSTM performance matches the numbers that I have quite closely!

The benchmarks in question: https://www.pugetsystems.com/labs/hpc/NVIDIA-RTX-2080-Ti-vs-2080-vs-1080-Ti-vs-Titan-V-TensorFlow-Performance-with-CUDA-10-0-1247/


#9

Wow, that’s great to hear! I always forget that Twitter is a thing. I was holding off on pulling the trigger until he verified his results; I’m going to start shopping around now!


(Andrea de Luca) #10

Hold off on pulling the trigger a bit longer. The most interesting thing to know would be: how does the use of FP16 affect memory usage? The good Tim says that FP16 doubles the amount of memory available, which is true. But we do know that such a card will operate in mixed precision. Will the 2080’s 8GB be sufficient (or at least equivalent to the 11GB of the 1080 Ti)?

Thus far, the 2080 is some 500 euros cheaper than its Ti sibling, which is quite something. But if one gets limited by memory, that’s a dealbreaker.

Today I witnessed a ResNeXt-101 occupying 22GB on a V100 (which has 32GB) with medium-sized batches.

Be careful when it comes to memory. If you can afford it, buy a Quadro RTX 5000 ($2,300), which has more VRAM than the $3,000 Titan V.


#11

I bit the bullet and got a 2080 with AIO water cooling. $905 total cost, so it’s a very capable GPU for many/most tasks, but with enough savings over the 2080 Ti to cover quite a few cloud hours when I need more memory.

If/when I do paid work on my home box, I’ll start lusting after a Quadro RTX 8000 with 48GB of memory, but this should keep me happy for now.
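On the software side, actually getting the 2080’s tensor cores to pay off means training in mixed precision. In fastai v1 that should be a one-liner via to_fp16(), which handles the FP32 master weights and loss scaling for you; a minimal sketch, assuming fastai v1’s create_cnn API and using MNIST_SAMPLE purely as a placeholder dataset:

```python
from fastai.vision import *

# Placeholder data and model just to show the API -- swap in your own.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, bs=64)

learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn = learn.to_fp16()   # mixed precision training on RTX/Volta cards
learn.fit_one_cycle(1)
```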