Emrys beta expansion -- free credits for GPU compute

tl;dr: Emrys is an easy & effective way to rent GPU compute (for training models or messing around in Jupyter notebooks), and I’m currently offering a lot of free credits for new users

I am expanding the beta for a project I’ve been working on for the past ~two years called Emrys. Emrys is a two-sided marketplace for GPU compute (“Uber for GPUs”, if I must). I won’t wax poetic on the details here, but if you want to learn more, check out the site, docs, or join our forum/Slack. (I can also be reached via email @ wminshew @ gmail.)

As a user, you can spin up a Jupyter notebook or execute a Python script with a single command. With notebooks, the kernel runs remotely but you still access it via localhost in your browser. With Python scripts, your logs are streamed back to you & the output is downloaded on completion. No fiddling with websites to find other GPUs, no SSHing. Just a single command on your local machine.
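To make the workflow concrete, here’s a minimal sketch of the kind of Python script you might submit. This isn’t Emrys-specific code, just an illustration that assumes PyTorch is available in the job environment: whatever the script prints to stdout is what gets streamed back as logs, and files it writes are what you’d download when the job finishes.

```python
# toy_train.py -- illustrative only; assumes PyTorch is installed in the job environment
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    x = torch.randn(64, 10, device=device)
    y = x.sum(dim=1, keepdim=True)              # toy regression target
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")   # streamed back to your terminal as logs

torch.save(model.state_dict(), "model.pt")             # returned to you as job output
```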

I’m offering the next 20 users who sign up $50 in credits [~= 250 GTX 1080 Ti hours @ current prices] and the 80 thereafter $25 [~= 125 GTX 1080 Ti hours @ current prices]. I will also be generously awarding additional credits for good feedback :slight_smile: I’m just one person working on this and wary of expanding too quickly, so I probably won’t accept any additional users beyond that for a few weeks (depending on how the systems hold up and how balanced the user vs. supplier market is).

As a miner, once you’re connected to the network you’ll bid on jobs & get paid for executing them if you win. (Bonus: you can mine crypto with your own command in between jobs, if you choose.) As mentioned above, the market rate for a GTX 1080 Ti is currently about $0.20/hr, so the yield is quite favorable compared to crypto mining. If you built a deep learning rig but it mostly sits idle, this is a pretty good way to monetize it. (I’m currently not taking any fees on jobs, though I will have to cover my server bills etc. eventually.)
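For a rough sense of the miner-side economics, here’s a back-of-envelope calculation using the $0.20/hr figure above; the utilization number is purely an assumption for illustration, not a measurement from the network.

```python
# back-of-envelope yield per GPU; utilization is an assumed figure, not real data
hourly_rate = 0.20        # $/hr for a GTX 1080 Ti at current market rates (from above)
utilization = 0.50        # assumed fraction of hours your card actually wins a job
hours_per_month = 24 * 30

monthly_revenue = hourly_rate * utilization * hours_per_month
print(f"~${monthly_revenue:.0f}/month per GPU, before electricity")  # ~$72/month under these assumptions
```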

Why did I build this? Three practical reasons and one technical(/philosophical?) opinion:

First, because AWS GPUs are still too expensive, and nobody can make money mining crypto with GPUs anymore. So why not build a bridge between the two groups? Emrys is ~90% cheaper than renting GPUs from AWS.

Second, dealing with AWS for GPU access is still kind of a pain in the ass. fast.ai has some excellent instructions on how to get it working with SSH tunneling back to your localhost, but this aims to make it even easier: one command from your terminal, no dealing with backend infrastructure.

Third, running deep learning jobs across multiple cloud GPUs (beyond, say, the 8 you can get in a single AWS instance) is still painful. By the end of the year, I expect users to be able to, with one command, run Python scripts across N GPUs (theoretically unbounded… in practice, we’ll see). Want to run a job on one GPU? No problem. Want to reduce the time of training a model by spreading it across 100 GPUs? Also no problem.
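To be clear about what “spreading training across N GPUs” looks like in practice, here’s a generic data-parallel sketch using PyTorch’s DistributedDataParallel. This is not Emrys’s API or launcher, just an illustration of the kind of script such a job would run; the rank/world-size environment variables are assumed to be set by whatever launches the processes.

```python
# generic data-parallel training sketch (PyTorch DDP); illustrative, not Emrys-specific
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # the launcher is assumed to set RANK, WORLD_SIZE, and LOCAL_RANK for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(1024, 10).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()   # gradients are averaged across all GPUs in the job
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process trains on its own slice of data and the gradients are synchronized automatically, which is what makes “N GPUs, one command” meaningful.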

On the technical/philosophical side, I think parallel compute is going to take an increasingly large share of global compute, and it only makes sense to have a liquid global endpoint where people can rent as much parallel compute as they desire, on demand and elastically. Long term, think of it as Expedia atop AWS, GCP, and other third-party providers: trusted to find the cheapest compute fitting your project’s needs at any point in time. (Though in the near term I expect most jobs to be routed to individuals ‘mining’ on the network.)

Please check us out if you have any interest!


a few updates based on early user feedback:

  • added macOS client
  • removed sudo requirement for users
  • added conda integration
  • more example tutorials [will release fast.ai repos ~next week; should bring the estimated cost of completing all of fast.ai down to just a few dollars]

Still looking for more users & more feedback: register w/ the FASTAI100 promo before September for $100 of Emrys credit (~500 GTX 1080 Ti hours @ current prices). For those who signed up / reached out w/ feedback last time, thank you!!