I was happy to run my first deep learning process from Lesson 1 on the AWS P2 instance, but noticed it took 2 hours to process the 20K images. Is that normal?
It's around that time, although it took me about 1 hr 30 min or so.
In this post Nathaniel mentions 700-800s with some possible configuration changes to Theano. Is 4 hours to fit vgg16 really normal?
Looks like the server wasn't using the GPU. It can be a bit flaky when you start the server, so you have to make sure the GPU is actually being used.
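One quick sanity check before kicking off training is to confirm the instance can see the GPU at all. This is just a sketch, not anything from the course notebooks: `gpu_available` is a made-up helper, and it only checks that `nvidia-smi` reports a device, not that Theano is actually configured to use it.

```python
import shutil
import subprocess

def gpu_available():
    """Return True if nvidia-smi is on the PATH and lists at least one GPU."""
    # nvidia-smi ships with the NVIDIA driver; if it's missing, no GPU is usable
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # `nvidia-smi -L` prints one line per detected GPU, e.g. "GPU 0: Tesla K80 ..."
        out = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True, timeout=10
        )
        return out.returncode == 0 and "GPU" in out.stdout
    except (subprocess.TimeoutExpired, OSError):
        return False

print("GPU visible:", gpu_available())
```

Even if this reports a GPU, Theano can still silently fall back to the CPU, so it's worth also checking that your `~/.theanorc` (or `THEANO_FLAGS`) sets `device=gpu` and that Theano prints a "Using gpu device" line when it starts up.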
It should be around 650s per epoch, and there are two epochs in the first lesson. So in total it should be around 22 minutes on a p2 if it is running properly, and about 8 minutes if you run it locally on a 1070.
I got 6 minutes for the batch of 22K pics when it uses the GPU.
About 500s to run on a local 970.