Jupyter Notebooks in the cloud w/ GPUs, starting at $0.35/hour!

Hetelek.com lets you run Jupyter Notebooks on powerful GPUs.

Main features

  • Pre-installed deep learning libraries: Keras, TensorFlow, OpenCV, etc.
  • Root SSH access for each instance
  • Data persists across all your instances through the /shared-data directory (see the example below)
  • Jupyter Notebook URL for each launched instance
  • Multiple powerful GPU options
  • Small instance option (ideal for uploading/downloading data)
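For example, a typical workflow with the shared directory looks like this (the dataset URL and paths are just illustrations):

    # on a cheap small instance: download data into shared storage
    wget -P /shared-data http://files.fast.ai/data/dogscats.zip
    # later, on any GPU instance: the same file is already there
    unzip /shared-data/dogscats.zip -d /shared-data/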

Check it out at https://www.hetelek.com. There is no signup cost, and a K80 GPU starts at $0.35/hour!

Hetelek vs Crestle Pricing

Resource                        | Hetelek (root access) | Crestle (no root access)
--------------------------------|-----------------------|-------------------------
Storage (per GB-month)          | $0.30                 | $0.42
CPU instance (per hour)         | $0.01                 | $0.059
1x K80 GPU instance (per hour)  | $0.35                 | $0.59
8x K80 GPU instance (per hour)  | $3.00                 | Not available
16x K80 GPU instance (per hour) | $5.00                 | Not available

Please let me know if there’s anything I could do to make the service better! :slight_smile:

8 Likes

Hello Stergios, this sounds very interesting and I want to try it out. Do you accept PayPal, or do you offer any signup credits to new signups, by the way? I just signed up as mike.moloch. Do you know of anyone doing fast.ai assignments on your platform? I’m just trying to get a sense of how much tweaking this platform needs for the fast.ai lessons.

I’m finding that on Paperspace I still have to reconfigure some things, even though it’s a pretty good setup and they offer a template specifically for fast.ai-related work.

Thanks,

Mike

Mike,

I currently do not support PayPal but will try to make it available in the near future. I am not aware of anyone using Hetelek for fast.ai assignments, but it shouldn’t be a hassle to get started. If any software is missing from the pre-configured machines, I will create a fast.ai-specific option to make this smoother. Just let me know if you’re missing anything.

Thanks,
Stergios.

OK, thanks Stergios, I will keep you posted!

Mike

1 Like

Hi @hetelek,
You have a very good machine learning platform. It is the best I have seen for datasets under 25 GB, in terms of storage cost; only above that size do other solutions become competitive on price and performance.
Good luck!

1 Like

Thank you Kouassi. We are trying to keep the prices as low as possible to allow everyone to run/train their models on powerful GPUs.

Here’s a quick pricing breakdown of Hetelek vs Crestle (also, notice Crestle doesn’t give root SSH access):

Resource                        | Hetelek (root access) | Crestle (no root access)
--------------------------------|-----------------------|-------------------------
Storage (per GB-month)          | $0.30                 | $0.42
CPU instance (per hour)         | $0.01                 | $0.059
1x K80 GPU instance (per hour)  | $0.35                 | $0.59
8x K80 GPU instance (per hour)  | $3.00                 | Not available
16x K80 GPU instance (per hour) | $5.00                 | Not available

:slight_smile:

2 Likes

Hetelek.com looks great! I’m looking forward to trying it, and I’m glad we have so many good options emerging, including Crestle, Paperspace, FloydHub, AWS, Google Cloud…

For a fair pricing comparison with Crestle, let’s look at this scenario from your pricing page:

You are charged for at least a full hour for every launched instance. For example, if you kill a K80 GPU instance after 10 minutes of use, you would be charged $0.35.

I believe Crestle bills by the minute, even during the first hour, so that scenario would cost less than $0.10 on Crestle ($0.59/hour × 10/60 ≈ $0.098).

So figuring out which platform is less expensive depends on a user’s usage patterns, right?
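As a rough sanity check with the rates from the tables above (the session lengths here are made-up examples):

    # 10-minute K80 session: Crestle's per-minute billing wins
    echo "0.59 * 10 / 60" | bc -l   # Crestle: ~$0.098, vs. Hetelek's $0.35 full-hour minimum
    # 3-hour K80 session: Hetelek's lower hourly rate wins
    echo "3 * 0.35" | bc -l         # Hetelek: $1.05
    echo "3 * 0.59" | bc -l         # Crestle: $1.77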

This is pretty cool. My initial plan was to use spot p2.xlarge instances and mount a persistent EBS volume as the notebook root, but I’ve had trouble getting p2 instances to stick around for more than an hour or so, and this appears to be a little cheaper than what I was getting spot instances for anyway.
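For reference, the spot-plus-EBS setup I had in mind looks roughly like this; the device name, mount point, and user are assumptions that depend on the instance:

    # one-time: format the persistent EBS volume (this erases it!)
    sudo mkfs -t ext4 /dev/xvdf
    # on each boot: mount it and point Jupyter at it
    sudo mkdir -p /home/ubuntu/nbs
    sudo mount /dev/xvdf /home/ubuntu/nbs
    sudo chown ubuntu:ubuntu /home/ubuntu/nbs
    jupyter notebook --notebook-dir=/home/ubuntu/nbs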

My initial experience with it was mostly good: it was really easy to get set up and get into the box. The only thing that seems slightly unfortunate is the I/O: unzipping dogscats.zip took over 10 minutes on this box, but takes under a minute on a p2.xlarge with an attached EBS volume. Not sure why it is so slow or whether that can be improved, but it isn’t terrible.
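For anyone who wants to reproduce the comparison, the measurement is just (assuming dogscats.zip is in the current directory):

    time unzip -q dogscats.zip   # 10+ minutes here, under a minute on p2.xlarge + EBS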

I tried to get the first lesson from this course running on it, and probably due to how quickly some of these libraries are changing, I had to make a handful of changes to the environment and still didn’t get it fully working…

Here’s what I had to do (I’m fairly new to both Python and deep learning, so let me know if there is anything I’m missing):

  1. Switch to the Python 2.7 env:
    source activate py2.7-env
  2. Install a bunch of missing libraries:
    conda install Pillow
    conda install scikit-learn
    conda install bcolz
  3. Remove the installed versions of Keras and Theano and re-install them via pip:
    conda remove keras
    conda remove theano
    pip install theano==0.9.0
    pip install keras==1.2.2
  4. Tell Keras to use Theano instead of TensorFlow (I got an error running Vgg16 with TF; not sure whether it’s supported, but I know the Amazon AMI is set up to use Theano). See the sketch after this list.
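For step 4, one way to do it (this assumes the default ~/.keras/keras.json that Keras 1.x generates on first import; check the file before running the sed commands):

    # switch the Keras 1.x backend from TensorFlow to Theano
    sed -i 's/"backend": "tensorflow"/"backend": "theano"/' ~/.keras/keras.json
    sed -i 's/"image_dim_ordering": "tf"/"image_dim_ordering": "th"/' ~/.keras/keras.json
    # or override for a single session without editing the file
    KERAS_BACKEND=theano jupyter notebook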

After all of that, I get the following now when running it:

> Exception: ('The following error happened while compiling the node', GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='valid', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), '\n', 'nvcc return status', 2, 'for cmd', '/usr/local/cuda/bin/nvcc -shared -O3 -Xlinker -rpath,/usr/local/cuda/lib64 -arch=sm_37 -m64 -Xcompiler -fno-math-errno,-Wno-unused-label,-Wno-unused-variable,-Wno-write-strings,-DCUDA_NDARRAY_CUH=c72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,-fPIC,-fvisibility=hidden -Xlinker -rpath,/home/ubuntu/.theano/compiledir_Linux-4.4--aws-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/cuda_ndarray -I/home/ubuntu/.theano/compiledir_Linux-4.4--aws-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/cuda_ndarray -I/usr/local/cuda/include -I/home/ubuntu/anaconda3/envs/py2.7-env/lib/python2.7/site-packages/theano/sandbox/cuda -I/home/ubuntu/anaconda3/envs/py2.7-env/lib/python2.7/site-packages/numpy/core/include -I/home/ubuntu/anaconda3/envs/py2.7-env/include/python2.7 -I/home/ubuntu/anaconda3/envs/py2.7-env/lib/python2.7/site-packages/theano/gof -L/home/ubuntu/.theano/compiledir_Linux-4.4--aws-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/cuda_ndarray -L/home/ubuntu/anaconda3/envs/py2.7-env/lib -o /home/ubuntu/.theano/compiledir_Linux-4.4--aws-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/tmpYDFYaZ/ea4e203b6529466794536f8a1bfa77ae.so mod.cu -lcudart -lcublas -lcuda_ndarray -lcudnn -lpython2.7', "[GpuDnnConv{algo='small', inplace=True}(<CudaNdarrayType(float32, 4D)>, <CudaNdarrayType(float32, 4D)>, <CudaNdarrayType(float32, 4D)>, <CDataType{cudnnConvolutionDescriptor_t}>, Constant{1.0}, Constant{0.0})]")

I’m guessing I’m missing some of the CUDA libraries/drivers or have the wrong versions, but when I tried to follow the steps from install-gpu.sh, I ran out of disk space (it looks like the root volume only has 20 GB). I’ll try freeing some space by deleting the Python 3 env and attempt to get it working again tomorrow.
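In case it helps anyone else, this is roughly what I plan to try; the env name is a guess, so check conda env list first:

    df -h /                         # confirm free space on the root volume
    conda env list                  # find the exact name of the Python 3 env
    conda env remove -n py3.6-env   # hypothetical name; substitute the real one
    conda clean --all --yes         # also drop conda's package cache and tarballs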

@hetelek have you thought about creating a tutorial or video that walks through the whole setup process, including file upload?
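(In the meantime, since each instance comes with root SSH access, I assume a plain scp upload into the shared directory works; the address here is a placeholder:)

    scp dogscats.zip root@<instance-ip>:/shared-data/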

So, with a bit of trial and error, I was able to get one of these instances in a state where I could run several of the notebooks from the first course without any issue.

The biggest hurdle is the fact that the root volume has less than 2 GB of free space right out of the gate, which makes it impossible to install/upgrade everything you need for these notebooks without first deleting a whole bunch of other stuff.

@hetelek Any chance you could add 5 - 10 GB to the root volume to give a little more breathing room for installing/upgrading packages? I’m not sure if there are better ways to handle it, but that seems like the most straightforward way.

1 Like