RTX 2080/2080 Ti for Deep Learning?

The new item of GPU lust: RTX Titan


  • 4608 CUDA cores vs 4352 for the 2080 Ti FE
  • Boost clock: 1770 MHz vs 1635 MHz
  • Memory: 24GB vs 11GB

1 Like

I just returned my RTX 2080Ti (plus the extra 32GB of DDR4 RAM I bought to support it in a multi-GPU setup) to the vendors.

As I found out that it didn't guarantee me a medal, over the 1080Ti, in the Kaggle Google Doodle competition.

:nail_care: @radek .

So I said "I want my money back!" :rofl:

Joke aside, it's still a beast of a card for Computer Vision, especially since the latest PyTorch + fastai now enable FP16 mixed-precision training (at least 50% faster than FP32).
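For anyone curious how little code that takes, here is a minimal sketch, assuming fastai v1 (the dataset, architecture and batch size are placeholders, and in the fastai versions around this thread the learner factory may be called `create_cnn` rather than `cnn_learner`):

```python
# A minimal sketch of mixed-precision training, assuming fastai v1.
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)             # tiny sample dataset (placeholder)
data = ImageDataBunch.from_folder(path, bs=64)   # FP16 leaves room for bigger batches
learn = cnn_learner(data, models.resnet34, metrics=accuracy)

# to_fp16() switches the Learner to mixed precision: FP16 forward/backward
# passes with an FP32 master copy of the weights and loss scaling.
learn = learn.to_fp16()
learn.fit_one_cycle(1)
```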

But I think the RTX 2070 is really the "King of the Hill" now.
You can't beat its DL performance for the price (2070-8GB for €550, 2080Ti-11GB for €1,300).

2 Likes

Some benchmark numbers from a Google search:

No idea why the FP32 score is so low for the 2080 Ti while FP16 is very close to the V100 (anyone know why?)
https://www.microway.com/knowledge-center-articles/comparison-of-nvidia-geforce-gpus-and-nvidia-tesla-gpus/
FP32

| GPU | FP32 throughput |
| --- | --- |
| GeForce RTX 2080 Ti | estimated ~0.44 TFLOPS |
| Tesla P100 | 4.7 – 5.3 TFLOPS |
| Tesla V100 | 7 – 7.8 TFLOPS |

FP16

| GPU | FP16 throughput |
| --- | --- |
| GeForce RTX 2080 Ti | 28.5 TFLOPS |
| Tesla P100 | 18.7 – 21.2 TFLOPS |
| Tesla V100 | 28 – 31.4 TFLOPS |
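As a sanity check on that FP32 row, here is a back-of-the-envelope calculation, assuming the usual 2-FLOPs-per-CUDA-core-per-clock (FMA) formula and the 2080 Ti's reference boost clock:

```python
# Theoretical FP32 peak = 2 FLOPs (one FMA) per CUDA core per clock cycle.
cuda_cores = 4352        # RTX 2080 Ti
boost_clock_ghz = 1.545  # reference boost clock

peak_fp32_tflops = 2 * cuda_cores * boost_clock_ghz / 1000
print(f"{peak_fp32_tflops:.1f} TFLOPS")  # ~13.4 TFLOPS
```

That works out to roughly 13.4 TFLOPS, so the ~0.44 figure in the quoted table looks more like an FP64 number than an FP32 one.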

Still thinking about which to buy: 2080 or 2080 Ti.

Eric sent back his 2080ti. I have the 2080 and while I am happy with it, I would buy a 2070 or two if I had a do-over.

1 Like

If you had the money and the will for a 2080Ti, I urge you to consider the TITAN. It has 24GB, and memory really is the big deal if you ask me. Big leap wrt the previous generation.
You can have a bit more patience if your card is not powerful enough, but you definitely cannot tuck your model into 8GB if it just doesn't fit.
Cutting the batch size too much can lead to an untrainable model (at least if you want satisfactory results).

And while you can somewhat circumvent the problem with CNNs (a setup with 3-4 2070s), the same does not hold for RNNs and other archs.

2 Likes

Just to clarify why I returned my RTX 2080Ti-11G, to be fair and transparent:

  • this is my current PC rig, with dual-boot Windows 10 and Ubuntu 18.04, plus a 3TB HDD.
    https://pcpartpicker.com/user/EricPB/saved/#view=9vvNNG

  • my current card is a 1080Ti-11G, blower version from Spring 2017.

  • adding the RTX 2080Ti (€1,300) on top of it in my rig required an extra 32GB of RAM (€250), so the two cards could crunch/model together (imho you need 64GB of RAM for dual GPUs).

  • overall, I found that the extra €1,550 didn't deliver vs my "basic" 1700X + 1080Ti + 32GB RAM.
    It's not like I'm spending 24 hours a day crunching numbers for Kaggle or some cutting-edge research.
    So I thought that much serious money would be better spent on a new Weber BBQ + a Dyson cordless vacuum cleaner (which are also First-World Problems, I admit).

If I were to build a new Deep Learning rig from scratch today, I would seriously consider either the 2080Ti as the main GFX card, or the 2070 (to be paired with a second 2070 later).

/salute

7 Likes

If I were going from scratch today, I would decide up front if I was EVER going to use more than one card. If I was only ever going to use one card, I would pick the 2080Ti (for the memory) or a 2070 (bang for buck) depending on my budget. If I was planning on adding a second card down the line, I would start with the 2080 (non-Ti) over the 2070, as the 2070 cannot take advantage of NVLink. Then add another 2080 when able.

2 Likes

I fully agree with your analysis, though the benefit of SLI or NVLink for dual-GFX in DL/ML is far from confirmed, when it comes to parallel computing.
(Check the blog and comments on http://timdettmers.com/2018/11/05/which-gpu-for-deep-learning/#)

Regarding your "I would decide up front if I was EVER going to use more than one card", that's a key point for anyone building a new PC rig: using more than one card will REQUIRE an appropriate Power Supply (PSU), at least 850W.
Plus your PSU MUST come with two dedicated "8pin+8pin" connectors.
I made that mistake when building my rig initially, only to find out that my PSU could not handle two GFX cards with "8pin+8pin" each.
It's a silly, and expensive to correct, mistake.
So don't try to save €40 on the PSU initially: the mandatory upgrade will cost you an extra €130 later, plus an annoying time figuring out how to disconnect the old PSU and reconnect the new one, plus a useless under-powered PSU in the garbage.
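To put rough numbers behind the 850W advice, here is a quick budget using the spec-sheet TDPs of the cards discussed in this thread (the 75W "rest of system" figure is my own assumption, and TDPs are board-power ratings, not measured draw):

```python
# Rough power budget for a dual-GPU rig, using spec-sheet TDPs.
rtx_2080_ti_w = 260    # Founders Edition board power
gtx_1080_ti_w = 250
ryzen_1700x_w = 95
rest_of_system_w = 75  # motherboard, RAM, drives, fans (assumed)

load_w = rtx_2080_ti_w + gtx_1080_ti_w + ryzen_1700x_w + rest_of_system_w
print(load_w)        # ~680W sustained under full load
print(load_w / 0.8)  # an ~850W PSU keeps that load around 80% of capacity
```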

1 Like

I was just typing up a reply on the NVLink issue - certainly nice to have, but 2x 2070 w/o NVLink for less than a 2080ti is hard to argue with.

1 Like

Disclaimer: I could be wrong on this:
My understanding of NVLink is that you can have multiple GPU cards connected with NVLink and they would be recognized as a single GPU with increased memory within the system. So then, in theory, if I had 2x 2080s connected with NVLink, I would have easy access to all the RAM (16GB) and all the cores (5888) without bandwidth limitations or code changes needed for a parallel/distributed setup. If I do not have the NVLink connection, I need code changes to access all that 16GB of RAM, and my core count (4608) would be lower with the 2070s. It would be nice if a forum member who has access to 2 RTX cards could try this out and verify how NVLink works with the fastai library.

I know that with my current 2x 1080Ti setup, some aspects of parallel/distributed processing using both cards were cumbersome with early fastai builds. I know they added some functionality in version 37 or so, which I will play with over the next two weeks to see if I can get it to work predictably.

3 Likes

Tim Dettmers did some benchmarking on plain PCI Express vs. NVLink. It seems NVLink starts to be useful with 4 cards. If you plan to use 2-3 cards you won't see any real benefit, even if you only have an x8 gen3 interconnect. Check his blog for further details.

1 Like

If you just want to see two cards as one (along with their memory), you can in fact do it just by using DataParallel on a CNN. As of now, it doesn't work as straightforwardly for RNNs/LSTMs.
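For reference, a minimal PyTorch sketch of that DataParallel approach (the model and batch size are placeholders):

```python
# A minimal sketch of DataParallel on a CNN; model and batch size are placeholders.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34()
if torch.cuda.device_count() > 1:
    # Splits each input batch across the visible GPUs and gathers the outputs,
    # so both cards' memory holds activations while each keeps its own weight copy.
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(64, 3, 224, 224).cuda()  # dummy batch
out = model(x)                           # with two cards, each processes 32 images
```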

I wonder if NVlink allows us to use multiple cards as one regardless of the network type.

Please share your results with us if you want. I want to upgrade, and I'm quite undecided between a single RTX Titan, 3x 2070, or 3x 2080 (non-Ti).
Establishing the exact capabilities of NVLink will weigh heavily on my final choice.

It depends. If NVLink overcomes the network-type limitations as @FourMoBro stated above, it could be worthy of serious consideration.

1 Like

Hi everyone,
I'm hoping to get a new PC soon.
These are the specs I've picked: https://in.pcpartpicker.com/list/NQrDyX

Looking forward to your suggestions/feedback.

Thanks & Regards,
Sanyam.

Feedback on your build (but double-check all my facts):

  1. You've selected a SATA SSD. I'd recommend paying more for a fast NVMe drive such as the Samsung 970 EVO.
  2. I'd recommend a Z390 motherboard instead of a Z370. A Z370 runs two x16 PCIe 3.0 slots at 16+4, but the Z390 does 8+8, which is somewhat better for a second high-performance PCIe 3.0 card such as a second GPU, RAID card, etc. Z390 usually has USB 3.1 Gen 2, better RAID support and faster memory speeds.
  3. 3600 memory is good for future-proofing, but this is an area where you can save some money by going with 2400 (or the motherboard's max speed) and not experience much speed loss.
  4. Make sure your Trio graphics card can fit into either PCIe x16 slot without being obstructed.
  5. As of this time, consider an AMD 3800X CPU and motherboard instead of Intel. The 3800X looks like an 8700K killer. We're supposed to learn more about the 3800X at next week's CES show.
4 Likes

Thanks @bsalita

  1. I can't find the NVMe drive on PCPartPicker, but yes, I'll add that
  2. I wasn't aware of this, could you point me to how I can check this?
  3. I think the difference is marginal, and I do get some RGB love :smiley:
  4. I've double-checked, two Trio cards should fit in without an issue.
  5. Sure, I'll wait until CES and probably order in mid-January.

Thanks again for the suggestions

Sorry, I need to correct my statement. Looks like the motherboard you selected, an ATX Z370, does support PCIe 3.0 x16 including an x8/x8/x4 config. Also, USB 3.1 Gen 2 is supported. My mATX Z370 did not support either.

1 Like

I donā€™t think you necessarily need a liquid CPU cooler, you could save some money with an air cooler.

1 Like

And it's safer :sunglasses:. I myself have a Noctua NH-D14 (the D15 is the latest version) and it works wonders.

1 Like

@wdhorton The price difference was just $20-40, so I preferred the liquid one.
But I'll do a little searching.

@lesscomfortable Uh-oh, that one is double the cost of the liquid one (in India).
But I'll find a few air coolers and ask for your opinion.

I need some advice about the M.2 drive.

I know it's highly recommended, and Jeremy always suggests using an NVMe M.2 if possible.

The 1TB Samsung 970 M.2 goes for about US$710 in India.
I'm now considering installing a 1TB SATA III SSD + a 512GB Samsung 970 M.2, which would come to US$400-500 in India.

SSD: OS + General stuff
M.2: SWAP + Current Datasets

Is this a good idea?