1080Ti announced, beats Titan X

11GB of RAM, roughly 35% faster than the 1080, $699, and faster than the new Titan X.


Are you kidding? Why couldn’t they announce it a week ago, before I bought the 1080 :slight_smile:


If you can wait, return it. The difference is pretty big, especially the additional 3GB of RAM.

After seeing that my AWS bill was getting way past 100EUR I decided to order a 1070.

It’s still in delivery, supposed to arrive next week. 400EUR.

Now for 300€ more I can get a card that is 30% faster than the 1080… Seriously thinking of returning the 1070 when it arrives.


I called Newegg and they don’t accept returns on GPUs that have been unboxed. I’ve seen forum posts where people bought one just a day before the announcement and couldn’t return it. I think you should go for it; it seems definitely worth it.

I got a 1070 when it came out with plans to buy a few more but I will likely end up getting the 1080ti instead.

So would it be bad to have a 1080 and a 1080ti? For whatever reason I feel like if I have multiple cards they should be identical, but I have no actual evidence that it is a better idea…

For gaming, yes, but for ML I do not believe so. I’m not 100% sure, but 99% sure you would be fine. Worst case, you can just run two models at the same time, as two different experiments. Gaming uses SLI when using more than one card, and SLI requires cards of the same class, but even that is changing with Vulkan support.
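The "two experiments at once" approach needs no SLI and no matching cards. A minimal sketch, assuming a Linux box with the CUDA driver installed: pin each training process to one card via the `CUDA_VISIBLE_DEVICES` environment variable (the experiment script names below are hypothetical).

```python
import os
import subprocess
import sys

def launch_on_gpu(gpu_id, argv):
    """Start a subprocess that can only see GPU `gpu_id`.

    CUDA_VISIBLE_DEVICES hides all other cards from the CUDA runtime,
    so a 1080 and a 1080 Ti never have to cooperate.
    """
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    return subprocess.Popen(argv, env=env)

# Two hypothetical experiments, one per card:
#   launch_on_gpu(0, [sys.executable, "train_resnet.py"])
#   launch_on_gpu(1, [sys.executable, "train_vgg.py"])
```

Each process then sees exactly one device (always numbered 0 from its point of view), so no framework-level multi-GPU support is needed at all.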

I spoke with CentralComputers and despite the March 5th release date, they figured in-store availability of these chips is still roughly a month out. They mentioned that larger online distributors will likely get supplied first.

Thanks for the thoughts @dradientgescent. I’ve just ordered the parts for my first build; I started with one 1080, but got a motherboard that will allow up to three cards. I figure if I like where this is going I’ll get more cards and switch from Theano to TensorFlow…

Yeah, I suspect late March, and if it is like the 1070 and 1080 it will sell above the recommended price for a while until demand settles below supply.

You could buy the rest of the PC from Central, and buy the card online and plug it in yourself when it arrives…

The 1080Ti is a great choice, since more RAM makes such a big difference: bigger batch sizes mean quicker epochs and allow higher learning rates.


+1

Most Intel CPUs have integrated graphics, so the shop can use that to install and test the machine before releasing it to you; then you just drop the card in yourself. In fact, this is ideal, because the good GPU won’t be driving the operating system’s display, which leaves you even more of its memory. If you run the machine headless (no monitor) it doesn’t matter, but some shops won’t release (or warranty) a machine unless it is complete.

Anyone else pre-order it from NVIDIA’s online store? I ordered 2 and the SLI bridge. :wink: This is in addition to the Titan X Pascal I already have.

@jeff I only get a notify me link, not a preorder/buy. Where are you in the world?

Silicon Valley. I found it by following the link in their Twitter at 8am Pacific Time: https://twitter.com/NVIDIAGeForce/status/837074455125880832 I just tried again and it looks like they’ve stopped taking pre-orders.

Is SLI useful here? Does it make use of the combined cards’ memory, so it would “see” 2x the RAM and allow for quicker epochs, bigger batch sizes, etc.?

CUDA can’t use SLI, so no it isn’t useful.
SLI is for gaming.

@stevelizcnao You don’t need the SLI bridge, but yes, your system will benefit from having multiple GPU cards. What you do need is software that knows how to distribute the work across them.

My understanding is that you would need to switch from Theano to TensorFlow and configure your system to use the multiple GPUs. I’ve read some posts (which of course I can’t find at the moment) saying that it isn’t quite a doubling (or tripling) of performance, but it is very close. That is, the overhead of managing the work across multiple GPUs is much smaller than the gain of having them.
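The usual multi-GPU scheme here is data parallelism: each card computes gradients on its own shard of the batch, and the shards’ gradients are averaged before the weight update. A framework-agnostic sketch, with plain Python standing in for the GPU math (the loss and data below are made up for illustration):

```python
def gradient(w, shard):
    """d/dw of the mean squared error 0.5*(w*x - y)^2 over one shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_gradient(w, batch, n_devices):
    """Split the batch across devices, then average the per-device grads."""
    shard_size = len(batch) // n_devices
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(n_devices)]
    grads = [gradient(w, s) for s in shards]  # would run concurrently, one per GPU
    return sum(grads) / n_devices

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
# With equal shard sizes the averaged gradient equals the single-GPU one,
# so the math is unchanged -- only the wall-clock time per step shrinks.
print(data_parallel_gradient(1.0, batch, 2) == gradient(1.0, batch))  # True
```

The only serial work is the final averaging and broadcast of weights, which is why the reported speedup is close to, but not quite, linear in the number of cards.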

Theano has experimental support for multiple GPUs, but it is nowhere near as mature as TensorFlow’s, nor is it as plug-and-play.