Recommendations on new 2 x RTX 3090 setup

It will. It’s one of the best PSUs in its tier (in case you are not aware: you can switch multi-rail/single-rail, since it’s a digitally controlled unit).
People with a 1000 W or even 1200 W class PSU (like me) will have to upgrade.


I wonder how long it will take the different manufacturers to restock those RTX 3090s. Apparently, almost every unit was bought up by resellers' bots.

Are there any comparisons between the thermal performance of the different 3090 models under load for more than 1 hour?

IIRC Guru3d.com had some really nice comparisons


It seems the Founders Edition’s cooling system is not that bad after all…

Wait for benchmarks; if they have disabled features, it may not be that good.
I hope they enable Titan drivers on them.

Keeping a 450W (peak) monster under 70C? It’s one of the best air cooling systems ever engineered.


It should not be an issue for us, though. AFAIK (but correct me if I’m wrong), all the features relevant to deep learning have been enabled.


I wouldn't be so sure.

Please, elaborate.

Hmm, it seems you were right, Thomas.

Look: https://youtu.be/YjcxrfEVhc8?t=577

And particularly:

"The same also applies for AI training"

Apart from that grotesque guy’s video, I couldn’t find any other reference to the AI use case. But I’d really like to know exactly what the consumer Ampere cards (or their drivers) are missing with respect to the professional Turings.

EDIT: Found this: https://www.pugetsystems.com/labs/hpc/RTX3090-TensorFlow-NAMD-and-HPCG-Performance-on-Linux-Preliminary-1902/

Particularly:

[image: benchmark chart from the Puget Systems article]


Yep, I’m also waiting for more benchmarks before buying anything.
Also, the RTX Turing cards were plagued with issues at release (frying, dying, etc…).
My Quadro 8000 is still a good performer =P.


You really invested in the big guns, eh? :grin:

Nvidia sent us that for free… (Nvidia AI incubator program)


Yep, I believe that’s the one I posted above… :wink: The only AI-oriented review as of yet.

Based on what I read, this might also be due to early versions of the drivers/CUDA not supporting everything?

Don’t think so. It’s called customer segmentation.

That Jensen bloke is greedy as hell.

Note the batch size used for the 3090. I think he used the same batch sizes for each of these runs. In theory, the 3090 can fit 2x the 2080 Ti’s batch size. Usually, this will mean ~2x the training speed.
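As a rough sketch of that reasoning: the maximum batch size is roughly whatever VRAM is left after the model's fixed footprint, divided by the per-sample activation cost. All the memory figures below are made-up placeholders for illustration, not measurements of any real model.

```python
# Back-of-the-envelope estimate of how much larger a batch can fit on a
# 24 GB RTX 3090 vs. an 11 GB RTX 2080 Ti. Integer MB values are used to
# keep the arithmetic exact. The overhead and per-sample numbers are
# assumptions, not measurements.

def max_batch_size(vram_mb, overhead_mb=4000, per_sample_mb=50):
    """Largest batch that fits, given an assumed fixed footprint for
    weights/optimizer state (overhead_mb) and an assumed per-sample
    activation cost (per_sample_mb)."""
    return (vram_mb - overhead_mb) // per_sample_mb

b_2080ti = max_batch_size(11_000)  # 2080 Ti: 11 GB -> 140
b_3090 = max_batch_size(24_000)    # 3090: 24 GB -> 400

print(b_2080ti, b_3090, b_3090 / b_2080ti)
```

With these assumed numbers the ratio comes out closer to 3x than 2x, because the fixed overhead eats a constant slice of both cards' memory; the real gain depends on how the model's fixed footprint compares to its per-sample activation memory, which is why "2x the batch size" is only a rule of thumb.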

In my experience, it’s not so, at least if you have a workstation that can feed the minibatches properly (CPU, SSD…). But even allowing for that, just compare the 3090 and the Titan.

I had one of the best conversations on my podcast with Tim Dettmers, about research (in general) and also the 3000 series. I have a long queue of episodes to release, so I’ll share the RTX 3000 series related Qs here (copy-pasting from my tweet thread):

Q: Which GPU should you get?

Look at Tim’s blog for a detailed answer
Upgrading from 20xx series -> 30xx isn’t a HUGE gain unless you badly need the memory

Q: How to make multiple of these work?

It’s really unclear at this point. It’s best to wait and see how someone makes these work together. It’s quite a big rabbit hole and there are many uncertainties.

Q: What about 2x cards?

Go for the biggest case you can get, use PCI-E extenders, and plan the mounting properly.

Q: Does the 3rd party/brand matter?

For multi-GPU setups, a 3rd-party card might have better cooling, so it’s an interesting factor to look into.

Given the FE’s new design, it’s unclear.

For 3rd-party cards following the older design: it might work, assuming you find a mobo that can fit these.

Q: The RTX series doesn’t have all of the “features” enabled, presumably in favour of the upcoming Quadro cards. So are these bad?

No, it was similar with the earlier series. The price/perf ratio for Quadro isn’t practical; don’t buy one unless you’re forced to.
