I sent this over to Jeremy as it would be ideal for Part II.
They totally reworked the entire system. It used to be limited to Python 3 and TensorFlow (no Jupyter Notebooks), but now it supports almost every configuration, including Jupyter. Pricing went up a bit, but it's far more flexible, and they'll have even faster GPUs than AWS soon.
It supports per-second billing and automatically shuts off your instance after your job has run, so it will cost significantly less than AWS, with less risk of cost overruns.
I still prefer to use my own hardware, but for very large jobs, or if you don't have your own GPU(s), it's a killer option.
It's actually 100 hours free, but that's still really generous. I'm definitely going to check it out. I've got a 980 in my machine, which has been pretty good so far, but I'm interested in seeing what you can accomplish with a K80.
The fact that they've set it all up specifically for deep learning is awesome, and even better, they support Jupyter, so it's really easy to get started.
Their GPU instances have 32 GB of RAM and 12 GB K80s for $0.43/hour, which seems incredibly reasonable. They also have high-performance instances coming soon, which I'm assuming will be multi-GPU clusters.
and Microsoft Azure. Azure's pricing is similar to AWS, but they bill at minute granularity rather than hourly, and those savings can add up. AWS also charges a full hour for every power cycle, which is a rip-off too.
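To make the billing-granularity point concrete, here's a quick sketch of what the same job costs under hourly, per-minute, and per-second billing. The $0.90/hour rate and 75-minute job are made-up numbers for illustration, not any provider's actual pricing:

```python
import math

# Hypothetical numbers for illustration only.
rate_per_hour = 0.90
job_minutes = 75

# Hourly billing: every started hour is charged in full,
# so a 75-minute job pays for two full hours.
hourly_cost = math.ceil(job_minutes / 60) * rate_per_hour

# Per-minute billing: you only pay for the minutes you used.
per_minute_cost = job_minutes * (rate_per_hour / 60)

# Per-second billing: effectively pay for exactly what you use.
per_second_cost = (job_minutes * 60) * (rate_per_hour / 3600)

print(f"hourly:     ${hourly_cost:.2f}")
print(f"per-minute: ${per_minute_cost:.2f}")
print(f"per-second: ${per_second_cost:.2f}")
```

An auto-shutdown feature amplifies this further: with hourly billing, a forgotten idle instance racks up whole hours, while per-second billing plus auto-stop caps the waste at seconds.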
Apologies! Yes, we're experiencing some heavy traffic (thanks, guys!) and are over capacity on our GPUs. Feel free to use our CPUs for free. We're actively working on provisioning more GPUs, but until then you might see your jobs being queued - sorry!
@geniusgeek We are a pretty lean startup ourselves, so we don't have the resources to offer GPUs for free. But we understand that GPUs are insanely expensive, and one of our primary goals is to lower their cost. We currently cost less than 50% of an AWS on-demand instance, and we'll do our best to keep it that way or lower.
Happy to talk to folks from Fast.ai to see how we can best help!
Thanks for sharing this resource. While I am still waiting for AWS to approve my request to use a P2 instance, it was natural for me to start exploring this alternative resource.
I have to admit that I am a newbie, so I had quite a number of failures to begin with. Eventually, I figured it out - for lesson 1 and for the sample dataset. I have a rough write-up of my setup process, hoping it will be helpful and that other learners won't have to replicate all my failures.
Here is my write up.
Any suggestions/comments on how to improve/optimize the setup process would be helpful.