We are Sergiy, Davit, and Jason, founders of Snark AI, currently in the Y Combinator summer batch. We're putting the idle GPUs in enterprises' private GPU clouds to work, providing low-cost GPUs for deep learning. We started Snark AI during our PhD programs at Princeton University, where we worked on hardware-specific deep learning inference optimization and large-scale distributed training.
We've been huge fans of the fast.ai courses since our time at Princeton. The fast.ai course was a great reference for the undergraduate deep learning class we taught as TAs there. That's why we decided to build native support for fast.ai and give free GPU credits back to the community.
It's very easy to launch a fast.ai Jupyter notebook with us:
$pip3 install snark
$snark start --pod_type fast.ai --jupyter
First, register on lab.snark.ai and use that username and password for snark login. While you're on lab.snark.ai, click Add Credit and enter the promo code FastAI2018 to get 100 hours of free GPU credits.
After running
$snark start --pod_type fast.ai --jupyter, open localhost:8888 in your browser to start running fast.ai notebooks.
Stop your pods when you're not running anything, so you aren't charged for idle time. Run
$snark ls to list the pods that are running, and
$snark stop pod_xxxx to stop one. Storage is persistent, so it's always safe to stop an idle pod and start it again later when you need it: stopping a pod does not destroy the files in your home folder, and they will still be there when you restart.
You can also log into the pod hosting your Jupyter notebook by running
$snark attach pod_xxxxx, where
$snark ls gives you the pod number.
We’re actively adding more features and would greatly appreciate any advice from the community! Feel free to leave messages at our website chat box.
Update (Aug 4, 2018): We have just expanded our GPU supply and dedicated sufficient resources for the fast.ai community.
Update (Aug 8, 2018): Added Frequently Asked Questions below
Pod Lifecycle: We noticed that many users start pods but forget to stop them. To list your active pods, run
snark ls; to stop a pod, run
snark stop pod_id. To connect to an already started pod, run
snark attach pod_id. For convenience, you can name a pod when you start it, e.g.
snark start foo, and then stop it with
snark stop foo.
GPU hours decrease at the same time: The Dashboard shows the number of GPU hours you have left for each GPU type. Behind the scenes, you have a single total credit, and it is decreased in proportion to the power of the GPU you use, so the displayed hours for every GPU type go down together.
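As a rough illustration of that accounting, here is a minimal Python sketch. The per-GPU rates below are made-up numbers for the example (Snark's actual pricing factors may differ); the point is only that one shared credit balance is drawn down at a rate proportional to GPU power.

```python
# Hypothetical per-GPU "power" rates -- illustrative only, not Snark's real pricing.
RATES = {"k80": 1.0, "1080ti": 2.0, "v100": 4.0}

def remaining_hours(credit, gpu):
    """Hours left on a given GPU type, all drawn from one shared credit."""
    return credit / RATES[gpu]

def charge(credit, gpu, hours):
    """Deduct usage from the shared credit, scaled by the GPU's rate."""
    return credit - hours * RATES[gpu]

credit = 100.0                           # e.g. the FastAI2018 promo credit
credit = charge(credit, "1080ti", 10)    # 10 hours on a 2x-rate GPU leaves 80.0
print(remaining_hours(credit, "k80"))    # 80.0 hours left at the 1x rate
print(remaining_hours(credit, "v100"))   # 20.0 hours left at the 4x rate
```

Using a faster GPU therefore shrinks the displayed hours for every GPU type at once, because they are all views of the same balance.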
Windows Support: You need to install ssh and Python on Windows before you can use our command-line interface.
Persistent storage, files vs software: Custom data you load onto a pod persists; however, if you install a package and then stop the pod, that dependency will be lost. If you want a persistent environment, build a customized Docker image on top of the fast.ai image and add your libraries to it. If this sounds slightly advanced, please shoot us an email and we will guide you.
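Such a custom image might look like the following minimal Dockerfile sketch. The base image tag and the example packages are assumptions for illustration; substitute the actual fast.ai image your pod runs and the libraries you need.

```dockerfile
# Hypothetical base image tag -- replace with the actual fast.ai image your pod uses.
FROM snark/fastai:latest

# Bake in the extra libraries you want to persist across pod restarts.
RUN pip3 install --no-cache-dir opencv-python albumentations
```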
You can find more information at docs.snark.ai.