Building deep learning workstation for $2200

Hey guys, this is the first PC I’m building myself, so I expect I’ll make a decent number of mistakes that might be obvious to more experienced people. I plan to use this PC for deep learning/computer vision tasks. I’m going to start with one GPU but purchase up to three in the future. Kindly give your opinion.

https://pcpartpicker.com/list/M4DKkd

Check the existing threads on here for much richer discussions.

I’ve been building PCs for 20 years and I think it looks like a nice one.

Any thoughts on going with a 1080 Ti vs a 2080/2080 Ti? I’ve signed up for the upcoming live course and was pondering whether to upgrade my 1060 6GB to one of these options. I’m wondering if 8GB at half precision can handle as much as, or more than, 11GB at full precision.

Dettmers’ TL;DR recommendations:

Best GPU overall: RTX 2080 Ti
Cost-efficient but expensive: RTX 2080, GTX 1080
Cost-efficient and cheap: GTX 1070, GTX 1070 Ti, GTX 1060
I work with datasets > 250GB: RTX 2080 Ti or RTX 2080
I have little money: GTX 1060 (6GB)
I have almost no money: GTX 1050 Ti (4GB) or CPU (prototyping) + AWS/TPU (training)
I do Kaggle: GTX 1060 (6GB) for prototyping, AWS for final training; use the fastai library
I am a competitive computer vision researcher: RTX 2080 Ti; upgrade to RTX Titan in 2019
I am a researcher: RTX 2080 Ti or GTX 10XX -> RTX Titan; check the memory requirements of your current models
I want to build a GPU cluster: this is really complicated, you can get some ideas here
I started deep learning and I am serious about it: start with a GTX 1060 (6GB), or a cheap GTX 1070 or GTX 1070 Ti if you can find one. Depending on which area you choose next (startup, Kaggle, research, applied deep learning), sell your GPU and buy something more appropriate
I want to try deep learning, but I am not serious about it: GTX 1050 Ti (4 or 2GB)


I’ve posted in the other pinned thread but just found this one, and my build is very similar. I’m actually amazed we both planned for up to three GPUs and chose equivalent motherboards!

https://au.pcpartpicker.com/list/Fq6VbX

Interesting differences:
I almost went Threadripper as well, but eventually decided on the Intel i7-7800X for compatibility with the MKL library. It has 28 PCIe lanes and 6 cores, so it should be OK for three GPUs plus an NVMe drive (presumably x8/x8/x8 for the GPUs plus x4 for the NVMe, which adds up to exactly 28), even though the Ryzen has more cores.

I chose a second SSD instead of an HDD, since I understand you can split input and output across different drives to avoid bottlenecks.

My 1000W power supply might be cutting it close; I need to research it more.

The CPU has been the hardest choice for me so far, since we need a good CPU with enough cores to avoid data-augmentation bottlenecks.
The GPU is a relatively easy choice (multiple 1080 Tis for prototyping, with a 2080 Ti upgrade down the track if it makes sense performance/cost-wise; currently it doesn’t seem to).

Interested in any thoughts.


This is a very interesting question. It’s my dilemma, as well. It will depend upon how much of the mixed precision computation will actually be done in FP16.

Until we have solid benchmarks on every typical task, I think it will be better to stick with an 11/12GB Pascal.

His comparisons of the RTXs in that post are theoretical, since the cards hadn’t come out yet. The article says he would update it after the GPUs came out; I’ve been checking back every week since their release, but still nothing. It would be great if the 2080 topped the chart in efficiency/$ as it does in his estimates.

Looks like it should be soon:

Tim Dettmers (@Tim_Dettmers), 4 Oct 2018:
“Given the new benchmark results for the RTX cards, I would currently recommend an RTX 2080 if the 8 GB of RAM is sufficient; RTX 2080 Ti otherwise. GTX 1080 Ti can be a good option if you can find a cheap (used) one. I will update my blog post this weekend.”

And, retweeting Master Yoda (@Master_Yoda_1), who had posted “These benchmarks just came out! @skc https://www.pugetsystems.com/labs/hpc/NVIDIA-RTX-2080-Ti-vs-2080-vs-1080-Ti-vs-Titan-V-TensorFlow-Performance-with-CUDA-10-0-1247/”:
“These benchmarks are exactly what I have expected from my theoretical analyses. Shows much better TensorFlow RTX 2080 and RTX 2080 Ti performance than previously shown. Even the LSTM performance matches the numbers that I have quite closely!”


Wow, that’s great to hear! I always forget that Twitter is a thing. I was holding off on pulling the trigger until he verified his results; I’m going to start shopping around now!

Hold the trigger a bit longer. The most interesting thing to know would be: how does the use of FP16 affect memory usage? The good Tim says that FP16 doubles the amount of memory available, which is true. But we do know that such a card will operate in mixed precision. Will the 2080’s 8GB be sufficient (or at least equivalent to the 11GB of the 1080 Ti)?

Thus far, the 2080 is some 500 euros cheaper than its Ti sibling, which is quite something. But if one gets limited by memory, that’s a dealbreaker.

Today I witnessed a ResNeXt-101 occupying 22GB on a V100 (which has 32GB) with medium-sized batches.

Be careful when it comes to memory. If you can afford it, buy a Quadro RTX 5000 ($2,300), which has more VRAM than the $3,000 Titan V.
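
If you want a rough sense of whether a model will fit in 8GB before committing to a card, you can probe the largest training batch that fits on any GPU you can rent. This is just a minimal sketch, assuming PyTorch and torchvision are installed; resnet50 and the batch sizes are illustrative stand-ins, and note that a pure .half() model saves more memory than real mixed-precision training, which keeps fp32 master weights:

```python
# Rough probe for the largest 224x224 training batch a card can hold, in
# fp32 or pure fp16. Assumes PyTorch + torchvision; resnet50 and the batch
# sizes are illustrative only (true mixed precision keeps fp32 master
# weights, so it saves somewhat less than a pure .half() model).
import torch
import torchvision.models as models

def largest_batch(fp16=False, limit=512):
    model = models.resnet50().cuda()
    if fp16:
        model = model.half()
    best, bs = 0, 8
    while bs <= limit:
        try:
            x = torch.randn(bs, 3, 224, 224, device="cuda")
            y = torch.randint(0, 1000, (bs,), device="cuda")
            if fp16:
                x = x.half()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()           # allocate gradients too
            model.zero_grad()
            best, bs = bs, bs * 2     # it fits: try twice the size
        except RuntimeError:          # CUDA out of memory
            break
        finally:
            torch.cuda.empty_cache()
    return best

for fp16 in (False, True):
    print(f"fp16={fp16}: largest batch that fits = {largest_batch(fp16)}")
```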


I bit the bullet and got a 2080 with AIO water cooling. $905 total cost, so a very capable GPU for many/most tasks, but with enough savings over the 2080 Ti to cover quite a few cloud hours when I need more memory.

If/when I do paid work on my home box, I’ll start lusting after a Quadro RTX 8000 with 48GB of memory, but this should keep me happy for now.


Hi, I’ve been doing ML for some time now and plan to dive into deep learning. I’ve used Google Colab quite a few times, but now plan to get a laptop with an 8th-gen i7, 16GB RAM, and a 1060 6GB (Black Friday!). Will it be better than Colab? Kindly reply. Thanks!

I had a 1060 with 6GB in my desktop before upgrading. You will be limited somewhat by memory size, but many things will work fine. I haven’t used Colab, so I can’t say which is ‘better’, but there is some benefit to having your own hardware at the ready and not having to worry about logging into the cloud.

OK, fine; if not Colab, then how does the 1060 fare against AWS GPU instances or Paperspace?

The only cloud offering I have much time on is the AWS P2. The 1060 is quite a bit faster than the K80 used by the P2, but has less GPU memory. Your laptop would have more CPU compute, but less system memory.
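
If you want to quantify the gap yourself, running one small script on both machines gives a rough throughput comparison. A minimal sketch, assuming PyTorch with CUDA on both sides; the matrix size and iteration count are arbitrary:

```python
# Rough GPU throughput probe: run the same script on both machines (e.g.
# the laptop 1060 and a P2's K80) and compare. Sizes are arbitrary.
import time
import torch

def bench_matmul(n=4096, iters=50):
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    torch.cuda.synchronize()        # finish any pending setup work
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()        # wait for all queued matmuls
    elapsed = time.time() - start
    flops = 2 * n ** 3 * iters      # ~2*n^3 FLOPs per n x n matmul
    print(f"{elapsed:.2f}s, ~{flops / elapsed / 1e12:.2f} TFLOPS")

bench_matmul()
```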

Oh, okay. Thank you.

With a resnet50 I’ve done comparisons between my own RTX 2070 in mixed-precision mode and a Paperspace V100 in full-precision mode, with otherwise the EXACT same configuration: memory usage with fp32 was 1.85 times that of fp16 (14159 MiB on the V100 vs 7641 MiB on my RTX 2070).
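
For anyone who wants to run the same kind of comparison on their own card, here is a minimal sketch using fastai v1 (whose to_fp16() call switches a Learner to mixed precision) plus PyTorch’s allocator statistics. The dataset and batch size are placeholders, not the configuration used in the test above:

```python
# Sketch of an fp32-vs-fp16 peak-memory comparison, along the lines of the
# test described above. Assumes fastai v1; MNIST_SAMPLE and bs=64 are
# placeholders, not the resnet50 setup actually used in that comparison.
import torch
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)      # small stand-in dataset
data = ImageDataBunch.from_folder(path, bs=64)
learn = create_cnn(data, models.resnet50)
# learn = learn.to_fp16()                 # uncomment for the fp16 run
learn.fit_one_cycle(1)
# Peak bytes held by PyTorch's allocator; nvidia-smi reports more because
# of the caching allocator and CUDA context overhead.
print(f"peak: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```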

Tim Dettmers has updated his blog post: The RTX 2070 is currently king of the value castle. :wink:

I do not concur with Dettmers on that: there are still problems with fp16; see the mixed-precision thread here.

Thanks, this is helpful information. Could you do the same test on a text model using an AWD-LSTM? That would really be great! :slight_smile:

Yes, I should have made clear that this test is obviously only valid for this sort of network architecture. I have read in various discussion forums that mixed precision is not such an easy story with recurrent-type networks.

I myself have not yet had the opportunity to test an LSTM in the same way.
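
In case it is useful, a bare-bones version of the same peak-memory probe for a recurrent net could look like the sketch below: plain PyTorch rather than fastai’s AWD-LSTM, with sizes borrowed from the AWD-LSTM defaults (400-d inputs, 1150 hidden units, 3 layers) but otherwise arbitrary. Run it once as-is and once with --fp16, then compare the two readings:

```python
# Bare-bones fp16-vs-fp32 memory probe for a recurrent net: a plain PyTorch
# LSTM, not fastai's AWD-LSTM. Sizes mirror AWD-LSTM's defaults (400-d
# inputs, 1150 hidden units, 3 layers); sequence/batch sizes are arbitrary.
import sys
import torch

dtype = torch.float16 if "--fp16" in sys.argv else torch.float32
lstm = torch.nn.LSTM(input_size=400, hidden_size=1150, num_layers=3)
lstm = lstm.cuda().to(dtype)

x = torch.randn(70, 64, 400, device="cuda", dtype=dtype)  # (seq, batch, features)
out, _ = lstm(x)
out.sum().backward()                 # include gradient allocations
torch.cuda.synchronize()
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```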

In the meantime, I did make this Ansible setup https://vxlabs.com/2018/11/21/a-simple-ansible-script-to-convert-a-clean-ubuntu-18-04-to-a-cuda-10-pytorch-1-0rc-fastai-miniconda3-deep-learning-machine/ so that anyone can easily provision a V100 and do this sort of comparative testing themselves. :wink:


There is absolutely no comparison or other useful information at the link you posted, only an advertisement for different computers with NVIDIA cards.

Did you paste an incorrect link by accident?