Serious benchmarking: Titan V vs 1080 Ti (on Ryzen 1700X) with fastai?

Greetings,

I just got a new rig, thanks to a pleasant surprise, based on a Ryzen 1700X (8 cores / 16 threads) with 32 GB RAM and a 1 TB NVMe SSD.

Its current GPU for ML/DL is the well-respected Nvidia GTX 1080 Ti, which probably makes it a good reference point for a single-user workstation.

I also have an Nvidia Titan V (a cut-down version of the Volta V100 server card) on pre-order, to be delivered around Dec 30 in Sweden.

There are basically no benchmarks of the Titan V for DL/ML use today.

Assuming I get my Titan V in due time, I was wondering if we could use the fast.ai library as a solid approach for benchmarking it against the 1080 Ti.

I’ve never done this kind of benchmark; would it make sense?
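For instance, something along these lines (a rough PyTorch sketch of my own, nothing fastai-specific; the model and sizes are made up), run identically on both cards:

import time
import torch
import torch.nn as nn

# Toy model and a synthetic batch, just to keep both GPUs busy with identical work.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(128, 3, 224, 224, device='cuda')
y = torch.randint(0, 10, (128,), device='cuda')

torch.cuda.synchronize()  # make sure setup work is done before starting the clock
start = time.time()
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
torch.cuda.synchronize()  # wait for all queued GPU work before stopping the clock
print('%.1f images/sec' % (100 * 128 / (time.time() - start)))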

E.


fastai doesn’t currently support half-precision training. I’m hoping to add it soon-ish, however, since I want to benchmark the P3 properly. In the meantime, you can use the official PyTorch ImageNet training example, which Nvidia has modified for fp16.
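For reference, the fp16 approach boils down to something like this (my own rough sketch of the general idea, not the actual Nvidia code): run forward/backward in half precision, keep fp32 "master" copies of the weights, and scale the loss so small gradients don’t underflow.

import torch
import torch.nn as nn
import torch.nn.functional as F

scale = 512  # loss-scaling factor (illustrative value); guards against fp16 gradient underflow

# The model runs in half precision...
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda().half()

# ...but the optimizer updates fp32 "master" copies of the weights.
master_params = [p.detach().clone().float() for p in model.parameters()]
opt = torch.optim.SGD(master_params, lr=0.01)

x = torch.randn(64, 512, device='cuda', dtype=torch.half)
y = torch.randint(0, 10, (64,), device='cuda')

loss = F.cross_entropy(model(x).float(), y)  # compute the loss in fp32 for stability
(loss * scale).backward()

for master, p in zip(master_params, model.parameters()):
    master.grad = p.grad.detach().float() / scale  # unscale grads into the fp32 copies
opt.step()

for master, p in zip(master_params, model.parameters()):
    p.data.copy_(master.data)  # copy updated fp32 weights back into the fp16 model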

Check this thread


That thread doesn’t have the fixes to use the tensor cores, FYI.

Oh, sorry Jeremy, I should have addressed the reply to EricPB since the link was for him. I thought it might be useful to know about:

import torch

torch.cuda.synchronize()  # CUDA calls are async; synchronize before reading timers so measurements cover all queued GPU work
torch.backends.cudnn.benchmark = True  # let cuDNN autotune and cache the fastest conv algorithms for fixed input sizes

as in that thread people were also benchmarking the Titan V vs the 1080 Ti.

Oh yeah I know - sorry for my terse reply. The issue is that without using the tensor cores it’s a pretty meaningless comparison IMHO.
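One quick sanity check for whether the tensor cores are kicking in: compare fp16 vs fp32 matmul throughput on the card (another rough sketch of mine; on Volta the fp16 case should be dramatically faster):

import time
import torch

def avg_matmul_time(dtype, n=4096, reps=50):
    a = torch.randn(n, n, device='cuda', dtype=dtype)
    b = torch.randn(n, n, device='cuda', dtype=dtype)
    torch.cuda.synchronize()  # flush queued work before timing
    start = time.time()
    for _ in range(reps):
        a @ b
    torch.cuda.synchronize()  # wait for the matmuls to actually finish
    return (time.time() - start) / reps

print('fp32: %.5fs' % avg_matmul_time(torch.float32))
print('fp16: %.5fs' % avg_matmul_time(torch.float16))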


Thx @sermakarevich, it came up in another thread too. :+1: