RTX 2080/2080Ti for Deep Learning?

As you know, NVIDIA has released a new generation of 20xx cards. My question is: does it make sense to consider these cards for training neural nets, or are you better off with the 1080/1080Ti models, from a price/performance point of view? It seems these cards are only useful if you’re going to practice Deep Learning techniques through computer games =)

3 Likes

You should check out the post here; it’s pretty comprehensive in terms of performance and price/performance: http://timdettmers.com/2018/08/21/which-gpu-for-deep-learning/. The 2080ti does represent a speed-up for deep learning training over the 1080ti; I’ve seen figures of roughly 50%-60% faster.

4 Likes

In the coming week I hope to have time to benchmark 1 x 2080 Ti vs. 1 x 1080 Ti, especially in FP16.

5 Likes

@antorsae I look forward to reading it! :slight_smile:

Hm, interesting. It’s probably worth waiting for the 20xx releases then.

I’ve been trying to get a 2080ti plus CUDA 10 plus the NVIDIA 410 driver plus PyTorch (compiled from source) working on Ubuntu 18.04. It’d be great if anyone who has successfully made it through the hoops could share their experience.

I got TensorFlow 1.11 working (CUDA 10 + the 410 driver on Ubuntu 16.04). Will attempt PyTorch tonight!
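Once the build finishes, a quick sanity check along these lines (plain PyTorch calls, assumed) confirms that PyTorch actually picked up CUDA 10, cuDNN, and the card:

```python
import torch

# Confirm the compiled PyTorch sees the CUDA 10 toolchain and the GPU.
print(torch.__version__)                    # the version you just built
print(torch.version.cuda)                   # should report 10.0
print(torch.backends.cudnn.version())       # the cuDNN build it was linked against
print(torch.cuda.is_available())            # True if the 410 driver is working
print(torch.cuda.get_device_name(0))        # e.g. 'GeForce RTX 2080 Ti'
print(torch.cuda.get_device_capability(0))  # (7, 5) for Turing cards
```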

The 2070 is out now with 75% of the power of a 2080 for 75% of the price, so the same price/performance ratio, but it has 100% of the memory (8GB), so it’s a better value.

5 Likes

With the 20xx series the answer is pretty easy: go for it if you want to do fp16 compute; otherwise the 1080ti is much better value.

…which you almost certainly do. :slight_smile:

6 Likes

Well, I definitely need to read more about these cards then. Thank you for the explanations! And sorry if the questions are too obvious. I’m glad to hear the opinions of people who work with Deep Learning a lot.

So, finally, if you were choosing between the 1080Ti and the 2080 (plain), which one would you prefer? Or, taking into account the whole range of cards (1080Ti, 2070, 2080, and 2080Ti), which one is the best choice considering their computational power, memory, etc.?

As far as I understand, the 2070/2080 seem like pretty decent options, and the 2080Ti is the best choice if you have the money for it.

Also, if I have a 1080Ti in my system and an extra 8x PCI-E slot, is that enough for another 10xx/20xx card? And do I need to install the 4xx drivers for both of them to work?

Out of curiosity, if I were to buy a card today for the class, would support for fp16 make a difference? I have a 1070ti but was looking for a second one to be able to run larger notebooks while coding/testing new ones. I was considering a 1080ti, but I’m willing to go for a 2080ti.

Is fast.ai automatically using fp16?

1 Like

FYI there are benchmarks of the 2080 Ti, 2080, and 1080-series cards with and without NVLink at the Puget Systems HPC blog.

3 Likes

It’s about twice as fast.

You have to add to_fp16() to your learner. See the docs.
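For reference, a minimal sketch of what that looks like, assuming the fastai v1 API (exact names such as cnn_learner vs. create_cnn vary slightly between releases):

```python
from fastai.vision import *

# Small example dataset and model, just to show the mixed-precision call.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, bs=64)

# to_fp16() converts the model to half precision and adds loss scaling;
# without it, training runs in fp32 and the Tensor Cores sit idle.
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)
```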

5 Likes

8x PCIe should be sufficient. I’ve read that the performance difference vs. 16x is quite small.

2 Likes

I’d say it’s worth it just for the fp16 precision (2x faster training, all else equal). Previously, GPUs with fast fp16 were mainly found in cloud instances like the V100 (AWS p3) or P100 (GCP).
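To get a feel for the raw fp16 throughput, a rough micro-benchmark sketch like the one below can help (plain PyTorch, assumed; real training speedups depend heavily on the model and input pipeline):

```python
import time
import torch

def bench(dtype, n=4096, iters=50):
    # Time repeated matmuls on the GPU in the given precision.
    a = torch.randn(n, n, device='cuda').to(dtype)
    b = torch.randn(n, n, device='cuda').to(dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return time.time() - start

# Tensor Cores only kick in for fp16 (and dimensions that are multiples of 8).
print('fp32:', bench(torch.float32))
print('fp16:', bench(torch.float16))
```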

3 Likes

Another suggestion:
If you can live with a roughly 40% slower card, you could almost get 2x 1080Ti for the price of 1x 2080Ti, so you could run more experiments in parallel.

Or get one card now for faster experiments, with the possibility of adding another card later to double the total memory.
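If you go the two-card route, each experiment can be pinned to its own GPU; a minimal sketch, assuming plain PyTorch:

```python
import os

# Option 1: hide all but one card from this process before importing torch,
# so a second experiment launched with '0' gets the other GPU to itself.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import torch

# Option 2: address a card explicitly (cuda:0 is the only visible one here).
device = torch.device('cuda:0')
model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(32, 10, device=device)
print(model(x).shape)
```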

4 Likes

Twice as fast sounds like a good deal :smile:

Yeah, that’s definitely something to think about! I didn’t know fp16 computation was supported by the new architectures. So it effectively means, like, twice “more” memory, right?
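Roughly, yes for anything actually stored in half precision: each element takes 2 bytes instead of 4, although mixed precision usually keeps fp32 master copies of the weights, so most of the saving comes from activations. A quick sketch (plain PyTorch, assumed):

```python
import torch

# fp16 uses 2 bytes per element vs. 4 bytes for fp32.
x32 = torch.zeros(1000, 1000)   # fp32 by default
x16 = x32.half()                # the same tensor cast to fp16
print(x32.element_size() * x32.nelement())  # 4000000 bytes
print(x16.element_size() * x16.nelement())  # 2000000 bytes
```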

3 Likes

Agreed; also, a 1080Ti + 2080 sounds like a good pair: train in the background on one, and run faster experiments on the other with less memory available.

Unfortunately no. Check out:
One of the best speedups I’ve seen in benchmarks is 50% when switching from fp32 to fp16: https://lambdalabs.com/blog/best-gpu-tensorflow-2080-ti-vs-v100-vs-titan-v-vs-1080-ti-benchmark/
And keep in mind that they mainly tested two architecture types for image recognition; if you are doing something with LSTMs, or just using PyTorch (which sees a different speedup when switching to fp16), your results will differ a lot.
Other benchmarks to check out:
https://www.reddit.com/r/nvidia/comments/9ikas2/rtx_2080_machine_learning_performance/
https://www.reddit.com/r/nvidia/comments/9jo2el/2080_ti_deep_learning_benchmarks_first_public/e6tarvw/?context=3
https://www.reddit.com/r/deeplearning/comments/99h5ol/1x_2080ti_vs_2x_1080ti_for_deep_learning/

I was monitoring deep learning support on the RTX cards really closely, since I’m looking to build my own deep learning machine right now, and overall my personal ranking looks like this:

- Go for the RTX 2080Ti if you can afford it. It’s costly, but it has the best performance, really nice fp16 compute, and 11GB of VRAM.
- RTX 2080 vs 1080Ti: this one is harder and depends on your needs. If you want to be able to reproduce almost any existing repo and can sacrifice fp16 speed, go for the 1080Ti. It’s a little cheaper at the moment and has 11GB of VRAM, which helps sometimes (just a few days ago I had problems reproducing YOLO results on COCO with one of the PyTorch implementations).
- If you need fp16 compute, or you know you’ll be doing more from-scratch work and less reproducing, then the RTX 2080 might be a better choice: same fp32 compute, better fp16 compute, 8GB of VRAM, similar price.
- RTX 2070 vs GTX 1080: go for the 2070 unless you find a really cheap 1080 and want to save money.
- 1070 vs ???: the 1070 is the cheapest acceptable option for deep learning; don’t go any cheaper.

If you are feeling adventurous you can look at the AMD side, but support for their ROCm stack isn’t as good as for NVIDIA CUDA, so it’s probably not recommended for novice users.

10 Likes