Cloud GPU services vs having your own, which is better for AI?

My experience is that running on your own machine can be significantly faster than an equivalently spec’d card in the cloud, but it has been several years since I’ve run things in the cloud. I think the 3090 w/24GB of VRAM provides good bang for your buck, especially now that prices have dropped. It all depends on how much you’re going to use it, but I made the switch after several $200/mo AWS bills. Going from a K80 to a 1080ti was dramatically faster, I think somewhere around 3-10x. If you’re curious I can run a speed test on a dataset on my machine so you can compare it to running in the cloud. I did run a test on the IMDB language models and posted the results here:
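If you want to run that kind of comparison yourself, a minimal timing-harness sketch looks like the following (the `benchmark` helper and the toy workload are my own illustration, not the harness used for the IMDB test); the idea is to time the identical workload on the local card and the cloud instance and compare per-iteration times:

```python
import time

def benchmark(fn, warmup=3, iters=10):
    """Return the average seconds per call of fn over `iters` timed runs."""
    # Warm-up runs exclude one-time costs (CUDA kernel compilation, caches)
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Stand-in workload; swap in your real training or inference step
elapsed = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"{elapsed * 1e3:.2f} ms per iteration")
```

The warm-up runs matter when comparing GPUs, since the first few iterations on a fresh instance are usually dominated by setup rather than compute.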

The results will vary based on the model type. I didn’t note which GPU I got in Colab when I ran this test, but I’m guessing it was a T4 or P100. It was not a K80.
