Buying new GPUs right now

Since I finally managed to put together a grand for one or two new GPUs, I’d like to know your opinion about what you would buy right now.

A couple of 1070s, a single 1080 Ti, or is it better to wait 6-8 months for the new Turing/Ampere devices?

As of now I have a single 1070, and my goal is to be able to run more experiments at once. I have enough PCIe lanes (40) and slots for 3 cards.

Also, assuming a total combined VRAM of ~20 GB, how much RAM would I need to avoid being bottlenecked by RAM scarcity?

NVIDIA is expected to announce the details around the release of the new GTX 1180 in the next couple of weeks. When these hit the stores, the prices of the cards you mentioned are likely to drop. Also, the 1180 may become the new gold standard for personal deep learning rigs (especially for those unwilling to pony up $3K for a Titan V).


Looks like GTC Taiwan was on May 29. I haven’t watched the keynote, but I googled for news about the 1180 and haven’t found anything.

I think we won’t see any Ampere/Turing GPU before fall 2018, or even Q1 2019. That’s because:

  1. Pascal production lines are still active.

  2. Demand for Pascal is still high, as well as prices.

  3. They would want to sell a substantial number of Voltas before throwing less costly solutions at us.

I think I’ll grab another 1-2 Pascals right now. Allow me one question:

I’d vastly prefer to go with two 1070s instead of one 1080 Ti (in addition to the 1070 I already own) for the sake of versatility: running multiple experiments at once, and preserving at least one GPU for interactive stuff.

My main concern, however, is VRAM.
What kind of dataset and NN architecture can one handle with 11 GB as opposed to just 8 GB?

About your point of adding up VRAM: I think SLI helps with running experiments in parallel, but won’t allow running them across a set of GPUs.
Not with fastai, AFAIK.

The 1080 Ti is hugely overpriced right now. If you want to go with a set, go with 1070s, so that even when you decide to chuck them out, you don’t regret having paid those inflated prices.

SLI is not used in DL. You can achieve model parallelism across multiple GPUs without it.

Yes, parallelism is not currently supported by fastai, but you can achieve it quite straightforwardly by meddling directly with PyTorch.
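For instance, data parallelism in plain PyTorch is just a matter of wrapping the model in nn.DataParallel, which splits each batch across the visible cards. A rough sketch (the tiny model below is only a placeholder):

```python
import torch
import torch.nn as nn

# Placeholder model, just to show the wrapping
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Each forward pass scatters the batch across the GPUs and gathers the outputs
    model = nn.DataParallel(model)

model = model.cuda()

x = torch.randn(64, 512).cuda()   # dummy minibatch
out = model(x)                    # runs on all visible cards
```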

However, I’m not too interested in parallelism. My goal is to run different experiments on different GPUs, so a single GPU’s VRAM capacity could limit very deep (and wide) models.
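Something like this per experiment, i.e. each training script pinned to its own card (rough sketch; you could equally launch each script with CUDA_VISIBLE_DEVICES=0 / =1 and leave the code untouched):

```python
import torch
import torch.nn as nn

# This experiment lives on the second card; another process can use 'cuda:0'
device = torch.device('cuda:1')

model = nn.Linear(100, 10).to(device)          # stand-in for a real architecture
batch = torch.randn(32, 100, device=device)    # minibatches go to the same card
out = model(batch)
```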

As of now, the 1080 Ti is priced around 700 euros in Europe, while the 1070 is ~400. Hard choice.


I didn’t know that, thanks.

I’d vouch for 2x1070.


It’s not as straightforward though: each GPU uses 16 PCIe lanes, so if expansion is planned in the future and the motherboard and CPU only have the capacity to handle two GPUs, getting 2x1070 now might limit the machine, meaning that one 1080 Ti would be the better option. It all depends on the need to expand later and on the current PC (MB/CPU) config. (More about PCIe lanes here: Picking a GPU for Deep Learning. Buyer’s guide in 2019 | by Slav Ivanov | Slav)

I’ve got slots for 3 GPUs and 40 lanes in total, 4 of which are for an NVMe drive.

I can do 16x/16x or 16x/8x/8x (but not 16x/16x/8x).

I was in a similar position: I had a 1070, ended up selling it and getting a used 1080. The speedup is noticeable. In my experience, if you run two open-style GPUs in the same case in slots 1 and 3, you can’t cool them enough with air (I don’t want to have them sit at >80 degrees all day). I couldn’t try slots 1 and 4 due to motherboard limitations. I built my own mount to separate them, which you can see here. Even in slots 1 and 4, airflow to the upper GPU could be a problem. If you have blower-style or AIO GPU cooling, it may be fine in slots 1 & 3. A custom water-cooling loop would be ideal if you had the time.

On GPU RAM: if you find 8 GB to be a problem and do get 2x 1070s, you could use PyTorch DataParallel. I haven’t used it yet though, so I can’t give any feedback.
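As an aside, before deciding that 8 GB is the limit, you can just print PyTorch’s memory counters after a forward/backward pass of your model. Rough sketch (not the DataParallel route, just a sanity check):

```python
import torch

# Call these after a forward/backward pass of your model on GPU 0
used = torch.cuda.memory_allocated(0) / 1024**3        # tensors currently allocated, in GiB
peak = torch.cuda.max_memory_allocated(0) / 1024**3    # high-water mark for this process
print(f"allocated: {used:.2f} GiB, peak: {peak:.2f} GiB")
```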

I suggested two 1070s given the price inflation, assuming there isn’t a problem with arranging them on the motherboard or giving them enough slots without compromising performance.

Interesting and clever solution. Bookmarked.

Right now I have an open-style 1070, although my next GPU will certainly be blower-style. My mainboard has plenty of space for a 2-GPU setup, but a third card would suffer, I suppose, even with a blower-style cooler.

Maybe I will be more content with 1070+1080ti.


This morning I bought 7 great-condition used Fractal Design and Cooler Master 120mm fans for $10 :grinning:, replaced the side 230mm Cooler Master fan with 4x 120mm ones, added a second 120mm to the base of the case and… basically no difference to GPU temps. Maybe 1 degree lower for the bottom GPU (70-72 deg), no change in the upper one, still sitting at 78 degrees under full load. A bit disappointing; I may be able to tweak the fans to get a degree or two cooler, but I’ve kind of reached the air-cooling limit. I think I’ll get a Kraken G12 for the upper GPU (I already have a Kraken X52 I can use); it will cost $85 for a new CPU cooler + $49 for the G12 + $20 for heatsinks. It will be the last mod for quite a while, I hope.
On new GPUs, AIO ones may be the way to go; I see them everywhere now, cooler than a turbo (blower) card and quieter too.

Yes, but you will add another possible failure point to your rig. A coolant spill would be very bad, or even just a pump failure.

My next build, as soon as E5 v3/v4 CPUs start to be less pricey, will be some 2011-3 system, possibly based upon the X99-E10, which provides a real 64 lanes (regardless of the CPU). If the GPUs start to overheat due to proximity, I’ll just buy extender cables and leave them out of the case.

Building a custom external case with those extenders would be great. Pity nobody has done it already (just those crappy Thunderbolt enclosures).

Have you considered going with the 1070 plus two 1060s instead?

Mhhh… Since I don’t want to go with parallelism (at least for now), with two 1060s I would be limited to 6 GB per model. Unwise, since new NN architectures are getting larger and larger.

Another issue, like I said, is RAM.
Suppose I go for the mighty 1080 Ti, plus my existing 1070. As you may know, it is recommended (by @jeremy too) that you don’t go under a 2:1 ratio between RAM and VRAM.
However, I would end up with 19 GB of VRAM (11 + 8) and just 32 GB of RAM, short of the ~38 GB that rule suggests (and I have all RAM slots occupied).

Last but not least, allow me another question: is it possible to do parallelization experiments with different GPUs (and different amounts of VRAM)?

I think I’d go for two completely separate computers rather than one computer with two cards. Also, I wouldn’t buy anything at all, and would use remote servers instead, if I could help it. Too expensive, too much noisy crap cluttering the house. The Hetzner EX51-GPU is unfortunately out of stock again, but maybe it will come back:

https://www.hetzner.com/dedicated-rootserver/ex51-ssd-gpu

That’s 99 euro/month (+ 99 euro setup fee) for a fairly powerful server with a 1080. Probably around 2000 euro for the hardware alone, and they host it for you (electricity, fast internet, etc.). That is probably what I want to do next (or something like it), if I outgrow Paperspace. They have been in and out of stock a few times this year. Right now Paperspace works pretty well, though.

I’ve researched parallelism in models. The upshot is that it all depends on what you want to parallelize: data parallelism with one model, N different models, or a pure speed boost. You can easily get data parallelism on N video cards with one model, but the other two options are more complicated. For model parallelism, you can go for Python multiprocessing. For speed boosting, I’d rather go for two different computers than two cards, because you will hit a bottleneck in data exchange between the CPU, memory, HDD/SSD and the video card. In my experience, one quick card (1080 Ti) is better than two weaker ones.
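A rough sketch of the multiprocessing route, i.e. N independent models, each one pinned to its own card (toy model and loop, untested):

```python
import torch
import torch.multiprocessing as mp
import torch.nn as nn

def run_experiment(gpu_id: int):
    # Train one toy model entirely on its own GPU
    device = torch.device(f'cuda:{gpu_id}')
    model = nn.Linear(100, 10).to(device)          # stand-in for a real architecture
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):                           # dummy training loop
        x = torch.randn(32, 100, device=device)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f'GPU {gpu_id} done, final loss {loss.item():.4f}')

if __name__ == '__main__':
    mp.set_start_method('spawn')                   # required when mixing CUDA and multiprocessing
    procs = [mp.Process(target=run_experiment, args=(i,))
             for i in range(torch.cuda.device_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```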

I’d also recommend switching to the 2066 platform because of more RAM, AVX-512 and “accelerated Python”.

Thank you all for your opinions. I ordered a 1080 Ti at 736 euros.

I also ordered an NVMe drive, which will be used solely to serve minibatches to the GPUs. The OS will continue to reside on a regular SATA SSD.
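For reference, on the software side the NVMe drive just needs to hold the dataset and be read by enough loader workers to keep the GPU fed. Rough sketch with torchvision (the mount path is hypothetical):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# '/mnt/nvme/imagenet' is a hypothetical mount point for the dataset on the NVMe drive
train_ds = datasets.ImageFolder('/mnt/nvme/imagenet/train', transform=transforms.ToTensor())

# Several workers read and decode from the fast drive in parallel; pin_memory speeds up host->GPU copies
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=8, pin_memory=True)
```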

I will keep you posted about benchmarks.


I do not concur with you. Let me explain why.

  1. Multiple machines, each with one GPU, are in my opinion not advisable. I don’t see any advantage apart from a bit more resilience to hardware failures, while there are multiple disadvantages: more hardware and software to maintain, more power consumption, more dispersion.

  2. The server you linked lacks both ECC RAM and NVMe drives, not to mention the 8 GB VRAM limit. Supermicro has very good GPU workstations (both 2011-3 and 2066). They are pricey, though.