The new item of GPU lust: RTX Titan
4608 CUDA cores vs 4352 for the 2080ti FE
Boost clock 1770 vs 1635
Memory - 24GB vs 11GB
I just returned my RTX 2080 Ti (plus the extra 32 GB of DDR4 RAM I bought to support it in a multi-GPU setup) to the vendors,
as I found out that it didn't guarantee me a medal, over the 1080 Ti, in the Kaggle Google Doodle competition,
@radek.
So I said "I want my money back!"
Joke aside, it's still a beast of a card for Computer Vision, especially since the latest PyTorch + fastai now enable FP16/mixed-precision training (at least 50% faster compute vs FP32).
But I think the RTX 2070 is really the "King of the Hill" now.
You can't beat its DL performance for the price (2070 8 GB for €550, 2080 Ti 11 GB for €1,300).
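In fastai v1, mixed precision is a one-liner (`learn.to_fp16()`). The memory half of the argument is easy to sketch with back-of-the-envelope arithmetic; the parameter count below is a made-up illustration, not a benchmark:

```python
def model_memory_mb(n_params: int, bytes_per_param: int) -> float:
    """Approximate memory for storing model parameters alone
    (ignores activations, gradients and optimizer state)."""
    return n_params * bytes_per_param / 1024 ** 2

# Hypothetical 60M-parameter network (roughly ResNet-152 scale)
n_params = 60_000_000
fp32 = model_memory_mb(n_params, 4)  # float32: 4 bytes per weight
fp16 = model_memory_mb(n_params, 2)  # float16: 2 bytes per weight

print(f"FP32 weights: {fp32:.0f} MB, FP16 weights: {fp16:.0f} MB")
# → FP32 weights: 229 MB, FP16 weights: 114 MB
```

Halving the per-weight storage frees memory for larger batches, which is where much of the speedup comes from in practice.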
Some benchmarks from Google (the two groups of figures correspond to FP64 and FP16 throughput, going by NVIDIA's published specs):

|Card|FP64 TFLOPS|FP16 TFLOPS|
|---|---|---|
|GeForce RTX 2080 Ti|~0.44 (estimated)|28.5|
|Tesla P100*|4.7-5.3|18.7-21.2*|
|Tesla V100*|7-7.8|28-31.4*|
Still thinking about which to buy: the 2080 or the 2080 Ti.
Eric sent back his 2080ti. I have the 2080 and while I am happy with it, I would buy a 2070 or two if I had a do-over.
If you had the money and the will for a 2080 Ti, I urge you to consider the TITAN. It has 24 GB, and memory really is the big deal if you ask me. It's a big leap with respect to the previous generation.
You can have a bit more patience if your card is not powerful enough, but you definitely cannot squeeze your model into 8 GB if it just doesn't fit.
Cutting the batch size too much can lead to an untrainable model (at least if you want satisfactory results).
And while you can somewhat circumvent the problem with CNNs (say, a 3-4x 2070 setup), the same does not hold for RNNs and other architectures.
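A crude way to see the batch-size squeeze across cards (all numbers below are illustrative assumptions; real activation memory depends heavily on the architecture):

```python
def fits(batch_size: int, mb_per_sample: float, fixed_mb: float, card_mb: float) -> bool:
    """Rough feasibility check: fixed cost (weights, gradients, optimizer
    state) plus a per-sample activation cost must fit on the card."""
    return fixed_mb + batch_size * mb_per_sample <= card_mb

# Hypothetical model: 2.5 GB fixed footprint, 90 MB of activations per image
for card, mem_mb in [("2070 (8 GB)", 8000), ("2080 Ti (11 GB)", 11000), ("Titan RTX (24 GB)", 24000)]:
    largest = max(b for b in range(257) if fits(b, 90, 2500, mem_mb))
    print(f"{card}: largest batch size ~ {largest}")
# → 2070: ~61, 2080 Ti: ~94, Titan RTX: ~238
```

The point is that more VRAM buys batch size directly, and below some batch size training quality degrades, which is why memory, not raw speed, is often the binding constraint.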
Just to clarify why I returned my RTX 2080 Ti 11 GB, to be fair and transparent:
this is my current PC rig, with dual-boot Windows 10 and Ubuntu 18.04, plus a 3 TB HDD.
https://pcpartpicker.com/user/EricPB/saved/#view=9vvNNG
my current card is a 1080 Ti 11 GB, blower version from Spring 2017.
adding the RTX 2080 Ti (€1,300) on top of it in my rig required an extra 32 GB of RAM (€250), so the two cards could crunch/model together (you need 64 GB of RAM, imho, for dual GPUs).
overall, I found that the extra €1,550 didn't deliver vs my "basic" 1700X + 1080 Ti + 32 GB RAM.
It's not like I'm spending 24 hours a day crunching numbers for Kaggle or some cutting-edge research.
So I thought that much serious money would be better spent on a new Weber BBQ + a Dyson cordless vacuum cleaner (which are not First-World Problems either, I admit).
If I were to build a new Deep Learning rig from scratch today, I would seriously consider either the 2080 Ti as the main GFX, or the 2070 (adding a second 2070 later).
/salute
If I were going from scratch today I would decide up front if I was EVER going to use more than one card. If I was only ever going to use 1 card, I would pick the 2080Ti (for the memory) or a 2070(bang for buck) depending upon my budget. If I was planning on adding a second card down the line, I would start with the 2080(non-TI) over the 2070 as the 2070 cannot take advantage of NVLink. Then add another 2080 when able.
I fully agree with your analysis, though the benefit of SLI or NVLink for dual GFX in DL/ML is far from confirmed when it comes to parallel computing.
(Check the blog and comments on http://timdettmers.com/2018/11/05/which-gpu-for-deep-learning/#)
Regarding your "I would decide up front if I was EVER going to use more than one card": that's a key point for anyone building a new PC rig. Using more than one card will REQUIRE an appropriate Power Supply (PSU), at least 850 W.
Plus, your PSU MUST come with two dedicated "8-pin + 8-pin" connectors.
I made that mistake when building my rig initially, only to find out that my PSU could not handle two GFX cards with "8-pin + 8-pin" each.
It's a silly, and expensive to correct, mistake.
So don't try to save €40 on the PSU initially: the mandatory upgrade will cost you an extra €130 later, plus an annoying time figuring out how to disconnect the old PSU and reconnect the new one, plus a useless under-powered PSU in the garbage.
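A back-of-the-envelope PSU sizing sketch (the TDP figures are NVIDIA's and AMD's published numbers; the fixed budget for drives/fans/RAM and the headroom factor are common rules of thumb, not specs):

```python
def recommended_psu_watts(cpu_tdp: float, gpu_tdps: list, other_w: float = 100, headroom: float = 1.25) -> float:
    """Sum component TDPs, add a budget for drives/fans/RAM,
    then apply headroom so the PSU isn't run near its limit."""
    return (cpu_tdp + sum(gpu_tdps) + other_w) * headroom

# Ryzen 1700X (95 W) with two RTX 2080 Ti cards (250 W each)
watts = recommended_psu_watts(95, [250, 250])
print(f"Recommended PSU: ~{watts:.0f} W")  # → Recommended PSU: ~869 W
```

With two 250 W cards this lands just above the 850 W floor mentioned above, which is why a single-GPU PSU purchase rarely survives a second card.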
I was just typing up a reply on the NVLink issue - certainly nice to have, but 2x 2070 w/o NVLink for less than a 2080ti is hard to argue with.
Disclaimer: I could be wrong on this.
My understanding of NVLink is that you can have multiple GPU cards connected with NVLink and they would be recognized as a single GPU with increased memory within the system. So, in theory, if I had 2x 2080s connected with NVLink, I would have easy access to all the RAM (16 GB) and all the cores (5,888) without bandwidth limitations or code changes needed for a parallel/distributed setup. Without the NVLink connection, I would need code changes to access all 16 GB of RAM, and my core count (4,608) would be lower with the 2070s. It would be nice if a forum member who has access to two RTX cards could try this out and verify how NVLink works with the fastai library.
I know that with my current 2x 1080 Ti setup, some aspects of parallel/distributed processing using both cards were cumbersome with early fastai builds. I know they added some functionality in version 37 or so, which I will play with over the next two weeks to see if I can get it to work predictably.
Tim Dettmers did some benchmarking on plain PCI Express vs. NVLink. It seems NVLink starts to be useful with 4 cards. If you plan to use 2-3 cards you won't see any real benefit, even with just an x8 Gen3 interconnect. Check his blog for further details.
If you just want to see two cards as one (along with their memory), you can in fact do it just by using DataParallel on a CNN. As of now, it doesn't work as straightforwardly for RNNs/LSTMs.
I wonder if NVLink allows us to use multiple cards as one regardless of the network type.
Please share your results with us if you want. I want to upgrade, and I'm quite undecided between a single RTX Titan, 3x 2070, or 3x 2080 (non-Ti).
Establishing the exact capabilities of NVLink will weigh a lot on my final choice.
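For reference, a minimal `nn.DataParallel` sketch (it splits each batch across the visible GPUs; on a single-GPU or CPU-only machine it falls through to one device, and it does not merge the cards' memory into one pool — each replica still holds a full copy of the weights):

```python
import torch
import torch.nn as nn

# Toy CNN stand-in; any nn.Module is wrapped the same way
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Wrap once; forward() then scatters each batch over all visible GPUs
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()

out = model(torch.randn(8, 3, 32, 32))
print(out.shape)  # torch.Size([8, 10])
```

This is the data-parallel path the posts above refer to; whether NVLink changes the picture for recurrent architectures is exactly the open question in this thread.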
It depends. If NVLink overcomes the network-type limitations as @FourMoBro stated above, it could be worthy of serious consideration.
Hi everyone,
I'm hoping to get a new PC soon.
These are the specs I've picked: https://in.pcpartpicker.com/list/NQrDyX
Looking forward to your suggestions/feedback.
Thanks & Regards,
Sanyam.
Feedback on your build, but double-check all my facts:
Thanks @bsalita
Thanks again for the suggestions
Sorry, I need to correct my statement. It looks like the motherboard you selected, an ATX Z370, does support PCIe 3.0 x16, including an x8/x8/x4 config. Also, PCIe 3.1 Gen 2 is supported. My mATX Z370 did not support either.
I don't think you necessarily need a liquid CPU cooler; you could save some money with an air cooler.
And it's safer. I myself have a Noctua NH-D14 (the D15 is the latest version) and it works wonders.
@wdhorton The price difference was just $20-40, so I preferred the liquid one.
But I'll do a little searching.
@lesscomfortable Uh-oh, that one is double the cost of the liquid one (in India).
But I'll find a few air coolers and ask for your opinion.
I need some advice about the M.2 drive.
I know it's highly recommended, and Jeremy always suggests using an NVMe M.2 if possible.
The 1 TB Samsung 970 M.2 is about US$710 in India.
I'm now considering installing a 1 TB SATA III SSD + a 512 GB Samsung 970 M.2, which would be US$400-500 in India.
SSD: OS + general stuff
M.2: swap + current datasets
Is this a good idea?
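If you go that route, putting swap on the M.2 is just a matter of pointing the system at a swap partition (or swap file) on the NVMe drive; a hypothetical /etc/fstab entry might look like this (the device name is an example, yours may differ):

```
# /etc/fstab — swap on the NVMe drive (example device/partition name)
/dev/nvme0n1p3  none  swap  sw  0  0
```

The fast random reads of the NVMe also suit the "current datasets" role well, since dataloaders hammer the drive with many small reads.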