I’m a bit surprised to see that the AWS p2.xlarge is quite slow considering its high cost.
Currently working on the statefarm sample, I notice that each epoch takes 26–33 s, even though the initial output showed each epoch running in 11 s.
I’ve verified that I’m really using the GPU with Theano’s test script after setting cuda.use('gpu0') (as nvidia-smi shows, there’s only one GPU available on a p2.xlarge); a sketch of the check is below.
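In case it helps anyone reproduce this, here’s roughly the check I mean — a minimal sketch based on the GPU-test example in Theano’s documentation (the vector length and iteration count are just the arbitrary values from that example):

```python
# Minimal GPU check, adapted from the "Testing Theano with GPU" example
# in Theano's docs; vlen and iters are arbitrary values from that example.
import time

import numpy
from theano import function, config, shared, tensor

vlen = 10 * 30 * 768
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())  # GPU ops show up with a 'Gpu' name prefix

t0 = time.time()
for _ in range(iters):
    r = f()
print('Looping %d times took %f seconds' % (iters, time.time() - t0))

# If any elementwise op is still the plain CPU version, Theano fell back to CPU.
if numpy.any([isinstance(node.op, tensor.Elemwise) and
              ('Gpu' not in type(node.op).__name__)
              for node in f.maker.fgraph.toposort()]):
    print('Used the CPU')
else:
    print('Used the GPU')
```

It can also be run without touching the code by setting the device through flags, e.g. `THEANO_FLAGS=device=gpu0,floatX=float32 python check_gpu.py` (check_gpu.py being whatever name you save the snippet under); it should print "Used the GPU" when the device is picked up.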
Therefore, I’m curious what hardware was used when building the course. Can anyone shed some light on this?