Free GPU credits for Fast.ai Courses

Update (Aug 15, 2018): [scalability upgrade]
We really appreciate the interest from the fast.ai community, but we are bottlenecked on system scalability right now and are working hard on the upgrade. In the meantime the service may be unstable. We will announce here once the upgrade is done.

==============

We are Sergiy, Davit and Jason, founders of Snark AI, currently in the Y Combinator summer batch. We’re taking advantage of idle GPUs in enterprises’ private GPU clouds to provide low-cost GPUs for deep learning. We started Snark AI during our PhD programs at Princeton University, where we worked on hardware-specific deep learning inference optimization and large-scale distributed deep learning training.

We’ve been huge fans of the Fast.ai courses since we were at Princeton; the course was a great reference for the undergraduate deep learning class we TA’d there. That’s why we decided to build native support for Fast.ai and give free GPU credits back to the community.

It’s very easy to launch a Fast.ai Jupyter notebook with us:

  • $pip3 install snark
  • $snark login
  • $snark start --pod_type fast.ai --jupyter

Please register on lab.snark.ai and use that username and password for snark login. Once you’re in lab.snark.ai, click Add Credit and use the promo code FastAI2018 to retrieve 100 free hours of GPU credits.

After running the command $snark start --pod_type fast.ai --jupyter, open localhost:8888 in your browser to start running Fast.ai notebooks :slight_smile:
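
Before running a lesson notebook, it’s worth confirming the kernel actually sees a GPU. A minimal sanity check, assuming the pod image exposes the standard NVIDIA tooling, is to run this in a notebook cell:

!nvidia-smi

It should list the GPU assigned to your pod; if nothing shows up, ping us in the website chat box.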

Stop your pods when you’re not running anything so you aren’t charged for idle time. Use $snark ls to list the running pods and $snark stop pod_xxxx to stop them. Storage is persistent, so it’s easy to stop an idle pod and start it again later: stopping a pod does not destroy the files in your home folder, and they will still be there when you start again.

You can also log into the pod hosting your Jupyter notebook by running $snark attach pod_xxxxx; running $snark ls will give you the pod number.
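
Putting the lifecycle commands together, a typical session looks roughly like this (pod_xxxxx is whatever id $snark ls prints for you):

  • $snark ls                  # list your running pods and their ids
  • $snark attach pod_xxxxx    # open a shell inside the pod hosting the notebook
  • $snark stop pod_xxxxx      # stop the pod when you’re done; files in your home folder persist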

We’re actively adding more features and would greatly appreciate any advice from the community! Feel free to leave messages at our website chat box.

Update (Aug 4, 2018): We have just expanded our GPU supply and dedicated sufficient resources for the fast.ai community.

Update (Aug 8, 2018): Added Frequently Asked Questions below

Pod Lifecycle: We noticed that many users start pods but forget to stop them. To list your active pods, run snark ls; to stop a pod, run snark stop pod_id. To connect to an already started pod, run snark attach pod_id. For convenience, you can name a pod when you start it with snark start foo and then stop it with snark stop foo.

GPU hours decrease at the same time: On the dashboard you can see the number of GPU hours you have left for each GPU type. Behind the scenes you have a single total credit, and it is decreased in proportion to the power of the GPU you use, so the displayed hours for all GPU types go down together.
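
As a purely illustrative example (these rates are made up; the dashboard shows the real per-GPU figures): if a 1080 drew 2 credits per hour and a P106 drew 1 credit per hour, a 100-credit balance would display as 50 remaining 1080 hours and 100 remaining P106 hours, and time on either GPU would shrink both numbers:

  • $echo "100 / 2" | bc    # hours shown for a GPU drawing 2 credits/hour -> 50
  • $echo "100 / 1" | bc    # hours shown for a GPU drawing 1 credit/hour -> 100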

Windows Support: You have to preinstall ssh and Python on Windows to use our command-line interface.

Persistent storage, files vs. software: Custom data you load onto the pod persists; however, if you install a package and then stop the pod, that dependency will be lost. If you are looking for a persistent environment, you need to create a customized Docker image on top of the fast.ai image and add your libraries. If this sounds slightly advanced, please shoot us an email and we will guide you.
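
If rebuilding a Docker image sounds like too much for now, a lightweight workaround (just a sketch; ~/extra-packages.txt is a file you would create yourself in your persistent home folder, and you may need sudo or pip3 install --user depending on how the image’s Python is set up) is to keep a list of your extra pip packages there and reinstall after each start:

  • $pip3 install -r ~/extra-packages.txt    # re-run once after each fresh start of the pod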

You can find more information at docs.snark.ai.

64 Likes

Thanks so much for this!

After using the code you get 42 hours on a 1080, 52.5 hours on a 1070, and 110 hours on a P106, which is very helpful.

Thanks again for giving back to the Fastai community :smiley:

5 Likes

Thanks for sharing! Looking forward to hearing about students’ experiences with this.

5 Likes

@snarkai Super cool. I registered for Snark and started a pod for fast.ai in 5 minutes, no-brainer.
I am trying to run the lesson1 notebook and I get an error when doing learn.fit; it says
"Permission denied cannot create data/dogscats/tmp"

It seems the code cannot create this folder.
How can I ssh into the pod so I can create this folder manually? Or let me know if there is another way to get past this error.

I created a tmp dir using
!sudo mkdir /data/dogscats/tmp

However, it then fails as it is not able to create files within tmp.
But hats off, this looks super cool.

Sagar

1 Like

This is really cool! Literally took me less than 5min to set up.

However it didn’t take long to run into bugs.

I tried creating a folder from the Jupyter notebook and got a permission-denied error.

Then I tried using the terminal, but it was the same problem:

mkdir: cannot create directory 'newdata': Permission denied

(I solved it using sudo, but that shouldn’t have been necessary in the first place, correct?)

1 Like

Thank you @sagar_mainkar for testing our product! We’re looking at the bug right now. To ssh into the pod, first run snark ls to find your pod number and then run snark attach pod_xxxxx.

Thanks a lot @richardreeze for the bug report! We’re fixing it now.

1 Like

Thanks for offering the GPU access. I tried it and got an error:

~$ snark start --pod_type fast.ai --jupyter
Setting up the pod…
Error: Couldn’t successfully schedule pod execution. Please try again

Any suggestion is much appreciated.

1 Like

@snarkai Thanks for the cool stuff, but I came across the same error:
$ snark start --pod_type fast.ai --jupyter
Setting up the pod…
Error: Couldn’t successfully schedule pod execution. Please try again

1 Like

Hey thanks for letting us know @brian2005 ! We’re updating the backend for bug fixes. Please try again in an hour.

2 Likes

@CrazyTensor Thanks for letting us know! Please try again in a few hours. We’re updating the backend now and the system could be a bit unstable. Apologies for the inconvenience.

Hey @richardreeze we’ve fixed the bug. Feel free to give it a try!

1 Like

@sagar_mainkar we’ve fixed the issue. Can you try it again?

@brian2005 @CrazyTensor We’ve fixed the bug. Let us know if there’s any other problem. Thanks a lot, guys!

@snarkai Getting an error: "Couldn’t successfully schedule a pod execution. Please try later"
Not sure what that means?

1 Like

Thank you so much for sharing this! I love this project.

2 Likes

Very quick and easy installation. Thanks!

1 Like

Just tried it. Working great so far, trained a LM on it. Thanks a ton!

1 Like

thank you so much!

1 Like

How do you log off from a pod?