I’ve been trying to find the time to complete the fast.ai course for two years now, and have started it about four times. During that period both the available compute services and the course itself have changed considerably.
This time I decided to see if I could get it running on my RTX 3060-based laptop and compare it to the free Gradient option.
Getting it running on Windows 10 turned out to be pretty straightforward using Miniconda and the latest GeForce drivers. It took me a while to convince myself that training was actually using the GPU, but Task Manager’s Performance tab (GPU) showed it eating memory. GPU utilisation itself appears to be barely ticking over.
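A quicker way to confirm this than watching Task Manager is to ask PyTorch directly. A minimal sketch, assuming a working PyTorch install (the device name shown will depend on your machine):

```python
import torch

# Does PyTorch see a CUDA device at all?
use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
print(f"CUDA available: {use_cuda}, using device: {device}")

if use_cuda:
    # Should report something like the 3060 on my laptop
    print(torch.cuda.get_device_name(0))

# Allocate a tensor on the chosen device and check where it landed
x = torch.randn(1024, 1024, device=device)
print(x.device)  # "cuda:0" if training will actually run on the GPU
```

If `x.device` comes back as `cpu` despite a CUDA-capable card, the installed PyTorch build is likely CPU-only and needs reinstalling with CUDA support.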
Compared with Gradient’s free Quadro M4000 (you need a subscription for the Quadro RTX 4000), the 3060 is maybe 50% faster at training. I suspect the speed is limited by memory capacity and bandwidth.
However, data loading was significantly slower, as my internet connection is no doubt much inferior to Paperspace’s servers.
Both seemed noticeably slower than the Google Compute Engine instance I was using with the $300 free credit, but sadly that ran out a while ago.
I’m curious whether anyone else has experience with data loading and compute speed on a Windows machine?