Hi All,
I have been exploring the various options available for launching a cloud GPU instance.
I set up an account on GCP and launched a Jupyter notebook (with GPU) by following this tutorial.
I was able to set this up properly and launch the notebook. I then tried running the mnist_cnn example provided in the Keras GitHub repo (link).
The training time was around 70 sec per epoch (I had set up my instance with 1 GPU (Tesla K80) and 8 vCPUs).
In parallel, I was also trying out the Crestle service. The same example ran much faster there: just 9 sec/epoch. I'm really not sure why there is such a huge difference (I believe Crestle also provides a single K80 GPU).
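For what it's worth, one quick sanity check I ran (this assumes Keras is using the TensorFlow backend; a gap this large often means training silently fell back to the CPU) is to ask TensorFlow which devices it can actually see:

```python
# Sanity check: confirm TensorFlow (the Keras backend) can see the GPU.
# If no GPU shows up here, Keras will train on CPU, which would explain
# a ~70 sec/epoch time for mnist_cnn.
try:
    from tensorflow.python.client import device_lib

    gpu_names = [d.name for d in device_lib.list_local_devices()
                 if d.device_type == 'GPU']
    if gpu_names:
        status = "GPU visible: %s" % ", ".join(gpu_names)
    else:
        status = "No GPU visible - training will run on CPU"
except ImportError:
    status = "TensorFlow is not installed in this environment"

print(status)
```

If the GCP instance reports no visible GPU, the CUDA driver/toolkit setup from the tutorial is the first thing I'd re-check.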
Any thoughts?
Thanks!