Since I am using a Titan V (and a Titan Xp) and am trying to benchmark their performance, I have moved a previous post to this thread.
As background, I decided to subsidize/rationalize my deep-learning GPU purchase of a Titan V and a Titan Xp by using them for Ethereum crypto-mining. As a result, and as discussed below, I have come across a puzzling phenomenon.
When I run the following code with no other jobs on the GPU, it is significantly slower than when the GPU is also running another process (specifically, when it is under heavy load from crypto-mining software). I have repeated the trials numerous times to make sure that no pre-computing or caching differences were at play, and I have tested this off and on over several weeks with the same result. I have used nvidia-smi to verify what jobs are running on the GPU. Here is the code I am timing:
# assumes the standard Lesson 1 imports (fastai 0.7) and that PATH, arch, sz are defined earlier in the notebook
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
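For reference, here is roughly how I repeat the trials. This is only a minimal sketch, not my exact code: it assumes the Lesson 1 notebook state (PATH, arch, sz and the fastai imports), and time_precompute is just a hypothetical helper name.

import time

def time_precompute(n_trials=3):
    # Hypothetical helper: re-run the data/learner setup several times so a slow
    # first run (e.g. activation precompute hitting disk) stands out from the rest.
    for i in range(n_trials):
        start = time.perf_counter()
        data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
        learn = ConvLearner.pretrained(arch, data, precompute=True)
        print(f"trial {i}: {time.perf_counter() - start:.1f}s")

Running something like this once with the miner stopped and once with it under load is how I compare the two cases.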
This really doesn’t make sense to me.
In trying to figure it out, I was wondering if anyone else is using either a Titan V or a Titan Xp. If so, could you let me know how long the above code takes to run for you? It is straight out of Lesson 1. Note that in learn.fit(0.01, 5), I am running 5 epochs rather than the lesson's 3.
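If you do run this, it is worth confirming what else is on the GPU before timing, so the numbers are comparable. Beyond eyeballing nvidia-smi, a minimal check like the following should work, assuming the pynvml package is installed (it is not part of the course environment):

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0; adjust on multi-GPU machines
util = pynvml.nvmlDeviceGetUtilizationRates(handle)
procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
print(f"GPU util: {util.gpu}%  mem util: {util.memory}%  compute processes: {len(procs)}")
pynvml.nvmlShutdown()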