RTX 2080/2080Ti for Deep Learning?


The new item of GPU lust: RTX Titan

4608 CUDA cores vs 4352 for the 2080Ti FE
Boost clock: 1770 MHz vs 1635 MHz
Memory: 24GB vs 11GB

(Eric Perbos-Brinck) #104

I just returned my RTX 2080Ti (plus the extra 32GB of DDR4 RAM I bought to support it in a multi-GPU setup) to the vendors.

It turned out it didn’t guarantee me a medal over the 1080Ti in the Kaggle Google Doodle competition.

:nail_care: @radek .

So I said “I want my money back !” :rofl:

Joke aside, it’s still a beast of a card for Computer Vision, especially since the latest PyTorch + fastai now enable FP16/mixed-precision training (at least 50% faster than FP32).
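Mixed-precision training keeps an FP32 master copy of the weights precisely because small updates can vanish in FP16. A stdlib-only sketch of the underlying rounding issue (the numbers are illustrative, not from any particular model):

```python
import struct

def to_fp16(x: float) -> float:
    # round-trip a Python float through IEEE-754 half precision ('e' format)
    return struct.unpack('e', struct.pack('e', x))[0]

w = 1.0
update = 1e-4   # a typical small weight update (lr * grad)

# near 1.0, consecutive fp16 values are ~0.000977 apart, so an update of
# 1e-4 rounds away entirely; in fp32/fp64 it survives
print(to_fp16(w + update) == w)   # True  — fp16: update lost
print((w + update) == w)          # False — full precision: update kept
```

This is why fastai's mixed-precision mode applies updates to FP32 master weights and only runs the forward/backward passes in FP16.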

But I think the RTX 2070 is really the “King of the Hill” now.
You can’t beat its DL performance for the price (2070 8GB for €550, 2080Ti 11GB for €1,300).
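As a back-of-the-envelope check on the price/performance claim, here is a cost-per-TFLOPS comparison; the throughput figures are assumed ballpark FP32 numbers, not measurements from this thread:

```python
# prices from the post above; FP32 TFLOPS are rough published ballpark
# figures, used only to illustrate the price/performance gap
cards = {
    "RTX 2070":    {"price_eur": 550,  "fp32_tflops": 7.5},
    "RTX 2080 Ti": {"price_eur": 1300, "fp32_tflops": 13.4},
}

for name, c in cards.items():
    eur_per_tflops = c["price_eur"] / c["fp32_tflops"]
    print(f"{name}: ~€{eur_per_tflops:.0f} per FP32 TFLOPS")
```

On these assumptions the 2070 comes out around €73/TFLOPS vs roughly €97/TFLOPS for the 2080Ti, which is the “bang for buck” argument in numbers.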

(Elvin Wong) #105

Some benchmarks from Google.

No idea why the fp32 score is so low for the 2080 Ti while fp16 is very close to the V100 (anyone know?)

|Card|FP32|
|---|---|
|GeForce RTX 2080 Ti|estimated ~0.44 TFLOPS|
|Tesla P100*|4.7 ~ 5.3 TFLOPS|
|Tesla V100*|7 ~ 7.8 TFLOPS|

|Card|FP16|
|---|---|
|GeForce RTX 2080 Ti|28.5 TFLOPS|
|Tesla P100*|18.7 ~ 21.2 TFLOPS*|
|Tesla V100*|28 ~ 31.4 TFLOPS*|
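For what it’s worth, the ~0.44 TFLOPS FP32 figure is far below the card’s theoretical peak, which suggests a benchmark quirk rather than a hardware limit. A quick sketch of the standard peak-FLOPS estimate (2 FLOPs per CUDA core per cycle via fused multiply-add), using the core count and boost clock quoted earlier in the thread:

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    # each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

# 2080 Ti FE: 4352 cores at a 1635 MHz boost clock (from the first post)
print(round(peak_fp32_tflops(4352, 1635), 1))  # 14.2
```

So the theoretical FP32 peak is around 14 TFLOPS, some 30x above the benchmark’s 0.44 figure.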

Still thinking about which to buy: 2080 or 2080 Ti.


Eric sent back his 2080Ti. I have the 2080, and while I am happy with it, I would buy a 2070 or two if I had a do-over.

(Andrea de Luca) #107

If you have the money and the will for a 2080Ti, I urge you to consider the Titan. It has 24GB, and memory really is the big deal if you ask me. Big leap compared with the previous generation.
You can be a bit more patient if your card is not powerful enough, but you definitely cannot tuck your model into 8GB if it just doesn’t fit.
Cutting the batch size too much can lead to an untrainable model (at least if you want satisfactory results).

And while you can somewhat circumvent the problem with CNNs (a 3-4x 2070 setup), the same does not hold for RNNs and other architectures.

(Eric Perbos-Brinck) #108

Just to clarify why I returned my RTX 2080Ti-11G, to be fair and transparent:

  • this is my current PC rig, with dual-boot Windows 10 and Ubuntu 18.04, plus a 3TB HDD.

  • my current card is a 1080Ti 11GB, blower version, from Spring 2017.

  • adding the RTX 2080Ti (€1,300) on top of it required an extra 32GB of RAM (€250) so the two cards could crunch/model together (IMHO you need 64GB of RAM for dual GPUs).

  • overall, I found that the extra €1,550 didn’t deliver vs my “basic” 1700X + 1080Ti + 32GB RAM.
    It’s not like I’m spending 24 hours a day crunching numbers for Kaggle or cutting-edge research.
    So I figured that much serious money would be better spent on a new Weber BBQ plus a Dyson cordless vacuum cleaner (which are First-World Problems of their own, I admit).

If I were building a new Deep Learning rig from scratch today, I would seriously consider either the 2080Ti as the main card, or a 2070 (with a second 2070 added later).



If I were going from scratch today, I would decide up front if I was EVER going to use more than one card. If I was only ever going to use one card, I would pick the 2080Ti (for the memory) or a 2070 (bang for buck) depending on my budget. If I was planning on adding a second card down the line, I would start with the 2080 (non-Ti) over the 2070, as the 2070 cannot take advantage of NVLink. Then add another 2080 when able.

(Eric Perbos-Brinck) #110

I fully agree with your analysis, though the benefit of SLI or NVLink for dual GPUs in DL/ML is far from confirmed when it comes to parallel computing.
(Check the blog and comments on

Regarding your “I would decide up front if I was EVER going to use more than one card”: that’s a key point for anyone building a new PC rig, because running more than one card REQUIRES an appropriate power supply (PSU), at least 850W.
Plus your PSU MUST come with two dedicated 8-pin + 8-pin connectors.
I made that mistake when building my rig initially, only to find out that my PSU could not feed two cards with 8-pin + 8-pin each.
It’s a silly mistake, and an expensive one to correct.
So don’t try to save €40 on the PSU up front: the mandatory upgrade will cost you an extra €130 later, plus an annoying time disconnecting the old PSU and wiring in the new one, plus a useless under-powered PSU in the garbage.
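A rough way to size the PSU is to sum the component TDPs and add headroom for transient spikes. The TDP figures and the 40% headroom factor below are assumptions for illustration, not vendor specs:

```python
def recommended_psu_watts(gpu_tdps, cpu_tdp=105, other=75, headroom=1.4):
    """Rule-of-thumb PSU sizing: total component TDP plus ~40% headroom.

    gpu_tdps: list of per-GPU TDPs in watts
    cpu_tdp:  CPU TDP in watts (assumed Ryzen-class default)
    other:    motherboard, RAM, drives, fans (assumed lump sum)
    """
    return (sum(gpu_tdps) + cpu_tdp + other) * headroom

# two 2080Ti-class cards at an assumed ~260W each
print(round(recommended_psu_watts([260, 260])))  # 980
```

On these assumptions a dual-GPU rig lands just under 1000W, which is why an 850W unit is the bare minimum and a 1000W+ unit is the comfortable choice.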


I was just typing up a reply on the NVLink issue - certainly nice to have, but 2x 2070 w/o NVLink for less than a 2080ti is hard to argue with.


Disclaimer: I could be wrong on this:
My understanding of NVLink is that you can have multiple GPU cards connected with NVLink and they would be recognized as a single GPU with increased memory within the system. So then, in theory, if I had 2x 2080s connected with NVLink, I would have easy access to all the RAM (16GB) and all the cores (5888) without bandwidth limitations or code changes needed for a parallel/distributed setup. If I do not have the NVLink connection, I need code changes to access all that 16GB of RAM, and my core count (4608) would be lower with the 2070s. It would be nice if a forum member who has access to two RTX cards could try this out and verify how NVLink works with the fastai library.

I know with my current 2x 1080Ti setup, some aspects of parallel/distributed processing using both cards were cumbersome with early fastai builds. I know they added some functionality in version 37 or so, which I will play with over the next two weeks to see if I can get it to work predictably.

(Andrea de Luca) #113

Tim Dettmers did some benchmarking of plain PCI Express vs. NVLink. It seems NVLink starts to be useful with 4 cards. If you plan to use 2-3 cards you won’t see any real benefit, even with just an x8 Gen3 interconnect. Check his blog for further details.

(Andrea de Luca) #114

If you just want to see two cards as one (along with their memory), you can in fact do it using DataParallel on a CNN. As of now, it doesn’t work so straightforwardly for RNNs/LSTMs.
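A minimal sketch of the DataParallel approach (worth noting: it replicates the full model on each card and splits the batch across them, so it parallelizes compute rather than pooling the two cards’ memory into one address space):

```python
import torch
import torch.nn as nn

# toy CNN; the layer sizes are arbitrary, for illustration only
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# DataParallel splits each input batch across all visible GPUs, runs the
# replicas in parallel, and gathers the outputs; with no GPU present it
# simply falls through to the wrapped module
model = nn.DataParallel(model)

x = torch.randn(8, 3, 32, 32)
out = model(x)
print(out.shape)  # torch.Size([8, 10])
```

With fastai the usual pattern is to wrap `learn.model` the same way; RNNs need extra care because replicated hidden state does not play well with this scheme, which matches the limitation above.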

I wonder if NVlink allows us to use multiple cards as one regardless of the network type.

Please share your results with us if you want. I want to upgrade, and I’m quite undecided between a single RTX Titan, 3x 2070s, or 3x 2080s (non-Ti).
Establishing the exact capabilities of NVLink will weigh heavily on my final choice.

It depends. If NVLink overcomes the network-type limitations as @FourMoBro stated above, it could be worthy of serious consideration.

(Sanyam Bhutani) #115

Hi everyone,
I’m hoping to get a new PC soon.
These are the specs I’ve picked:

Looking forward to your suggestions/feedback.

Thanks & Regards,

(Robert Salita) #116

Feedback on your build but double check all my facts:

  1. You’ve selected a SATA SSD. I’d recommend paying more for a fast NVMe drive such as the Samsung 970 EVO.
  2. I’d recommend a Z390 motherboard instead of a Z370. A Z370 typically splits its two x16 PCIe 3.0 slots 16+4, while the Z390 does 8+8, somewhat better for a second high-performance PCIe 3.0 card such as a second GPU, a RAID controller, etc. Z390 boards usually have USB 3.1 Gen 2, better RAID support, and faster memory speeds.
  3. 3600 memory is good for future-proofing, but this is an area where you can save some money by going with 2400 (or your motherboard’s max supported speed) without much speed loss.
  4. Make sure your Trio graphics card can fit into either PCIe x16 slot without being obstructed.
  5. As of this time, consider an AMD 3800X CPU and motherboard instead of Intel. The 3800X looks like an 8700K killer. We’re supposed to learn more about the 3800X at next week’s CES show.

(Sanyam Bhutani) #117

Thanks @bsalita

  1. I can’t find the NVMe on PCPartPicker, but yes, I’ll add that.
  2. I wasn’t aware of this, could you point me to how to find this?
  3. I think the difference is marginal, and I do get some RGB love :smiley:
  4. I’ve double-checked, two Trio cards should fit in without an issue.
  5. Sure, I’ll wait until CES and probably order in mid-January.

Thanks again for the suggestions

(Robert Salita) #118

Sorry, I need to correct my statement. It looks like the motherboard you selected, an ATX Z370, does support PCIe 3.0 x16, including an x8/x8/x4 config. USB 3.1 Gen 2 is also supported. My mATX Z370 did not support either.

(William Horton) #119

I don’t think you necessarily need a liquid CPU cooler, you could save some money with an air cooler.

(Francisco Ingham) #120

And it’s safer :sunglasses:. I have a Noctua NH-D14 myself (the D15 is the latest version) and it works wonders.

(Sanyam Bhutani) #121

@wdhorton The price difference was just $20-40, so I preferred the liquid one.
But I’ll do a little searching.

@lesscomfortable Uh-oh, that one is double the cost of the liquid one (in India).
But I’ll find a few air coolers and ask for your opinion.

(Sanyam Bhutani) #122

I need some advice about the M.2 drive

I know it’s highly recommended, and Jeremy always suggests using an NVMe M.2 if possible.

The 1TB Samsung 970 M.2 is about US$710 in India.
I’m now considering installing a 1TB SATA III SSD + a 512GB Samsung 970 M.2, which would be US$400-500 in India.

SSD: OS + General stuff
M.2: SWAP + Current Datasets

Is this a good idea?