EricPB
(Eric Perbos-Brinck)
December 19, 2017, 9:27pm
1
Greetings,
I just got a new rig, thanks to a pleasant surprise, based on a Ryzen 1700X (8 cores/16 threads) with 32 GB RAM and a 1 TB NVMe SSD.
Its current GPU for ML/DL is the well-respected Nvidia GTX 1080 Ti, so it's probably a good reference for a single-user workstation.
I have, as pre-order to be delivered around Dec 30 in Sweden, an Nvidia Titan V (a reduced version of the Volta V100 server card).
There are basically no benchmarks today on the Titan V for DL/ML use.
Assuming I get my Titan V in due time, I was wondering if we could use the Fast.ai library as a solid approach for benchmarking it vs the 1080 Ti.
I've never done this kind of benchmark, would it make sense?
E.
1 Like
jeremy
(Jeremy Howard)
December 19, 2017, 11:41pm
2
fastai doesn’t currently support half precision training. I’m hoping to add it soon-ish however, since I want to benchmark the P3 properly. In the meantime, you can use the official pytorch imagenet training example, which has been modified for fp16 by Nvidia.
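To illustrate the half-precision training Jeremy mentions, here is a minimal, hypothetical sketch (not fastai's API, and only loosely modeled on Nvidia's modified imagenet example): the model's parameters are cast to fp16, while fp32 "master" copies of the weights are kept for the optimizer step. The variable names are mine, for illustration only.

```python
import torch
import torch.nn as nn

# Illustrative sketch, not fastai's implementation: cast a model's
# parameters to half precision for fp16 training.
model = nn.Linear(16, 4)
model_fp16 = model.half()  # parameters are now torch.float16

# Mixed-precision recipes (e.g. Nvidia's imagenet example) keep fp32
# "master" copies of the weights; the optimizer updates these, and the
# results are copied back into the fp16 model each step.
master_params = [p.detach().clone().float() for p in model_fp16.parameters()]
```

On Volta, the fp16 path is what engages the tensor cores, which is why a benchmark without it misses most of the Titan V's advertised throughput.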
jeremy
(Jeremy Howard)
December 20, 2017, 5:10pm
4
sermakarevich:
Check this thread
That thread doesn’t have the fixes to use the tensor cores FYI.
Oh, sorry Jeremy, I should have added EricPB to the reply, as the link was for him. I thought it might be useful to know about:
torch.cuda.synchronize()
torch.backends.cudnn.benchmark=True
as in that thread people were also benchmarking the Titan V vs the 1080 Ti.
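For anyone following along, here is a short sketch of how those two calls are typically used when timing a GPU workload. The `timed` helper is my own illustrative name, not from the linked thread: `cudnn.benchmark` lets cuDNN auto-tune the fastest algorithm for fixed input shapes, and `cuda.synchronize()` matters because CUDA kernels launch asynchronously, so timing without it measures launch overhead rather than compute.

```python
import time
import torch

# Let cuDNN benchmark and cache the fastest algorithms for the
# (fixed) input shapes seen during the run.
torch.backends.cudnn.benchmark = True

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed seconds).

    Synchronizes before and after so the measurement covers the
    actual GPU work, not just the asynchronous kernel launch.
    """
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start

x = torch.randn(64, 64)
out, elapsed = timed(torch.mm, x, x)
```

Note that `cudnn.benchmark` can hurt if input shapes vary between batches, since each new shape triggers a fresh algorithm search.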
jeremy
(Jeremy Howard)
December 20, 2017, 5:49pm
6
Oh yeah I know - sorry for my terse reply. The issue is that without using the tensor cores it’s a pretty meaningless comparison IMHO.
3 Likes
EricPB
(Eric Perbos-Brinck)
December 20, 2017, 6:31pm
7
Thx @sermakarevich, it came up in another thread too.