Platform: JarvisCloud Simple & Affordable

Hi Everyone,

Thanks to the brave hearts who tried JarvisCloud in its early days when it had minimal functionality.

We have added several new features based on our initial plan and on the feedback received from early adopters. So we believe it’s time to create a new thread sharing the key features of JarvisCloud, links to the documentation, and upcoming features.

Key Features

:rocket: 1-click JupyterLab in under 30 seconds
:rocket: Pause the instance and resume from where you left off.
:rocket: SSH into the instance.
:rocket: Scale GPUs up/down on resume.
:rocket: Auto-pause using jarviscloud.pause() in your code.
:rocket: Pay per use - per-minute billing (after the first 15 minutes)
:rocket: Affordable pricing.
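For those curious how auto-pause fits into a script, here is a minimal sketch. Only the call `jarviscloud.pause()` is named above; the import style and the guard for running outside JarvisCloud are my assumptions, not the documented setup:

```python
# Sketch only: the `jarviscloud` module is assumed to be preinstalled on
# JarvisCloud instances; elsewhere we fall back to None so the script still runs.
try:
    import jarviscloud
except ImportError:
    jarviscloud = None  # running locally, outside JarvisCloud

def train_model():
    """Placeholder for your actual training loop."""
    pass

train_model()
if jarviscloud is not None:
    jarviscloud.pause()  # pause the instance; only storage is billed from here
```

The point of calling it as the last line is that a long training run can finish unattended without burning GPU minutes afterwards.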

Discount

As a fastai user, you get a 20% discount on RTX 5000 cards, which come with 16 GB of VRAM.

Quick Start

  1. Create an account at cloud.jarvislabs.ai
  2. Add your payment information in the Billing section
  3. Add funds using Recharge wallet
  4. Select a machine type and click Launch
  5. Start, pause, or delete with the action buttons
  6. Scale the GPU count/type up or down on resume.

Note: Paused instances are only charged for storage.

Docs

Detailed documentation can be found at fast.ai’s course website.

Talk to us

We will be happy to help you spin up your first instance and much more. You can use one of these channels to reach us.

  1. Chat option on the website.
  2. Email us - hello@jarvislabs.ai
  3. Comment in the thread.

If you would like to see a new feature in the platform or to discuss upcoming features, you can join us in GitHub Discussions. We love to keep development open to our users.

Thanks to Jeremy; without him I would not have started my DL journey, and JarvisCloud would not have been possible.

Thanks to our early adopters, who have helped in refining the product. Hope you all enjoy it.

3 Likes

Hi,

I have used Jarvislabs a couple of times.
I first tried it at its start last December, then checked back again and again, especially after I saw some great news in this forum :slight_smile:
So I also checked again this year after the internet speed upgrade news.
Actually, it got way more upgrades than that, as the key features in this thread show; I will write about them in my next post.

But first, internet speed.
I don’t compare its speed with my home machine; I like to compare it with other solutions like Paperspace (which is a more reasonable comparison in my opinion).
I have uploaded many datasets and downloaded network weights many times, so I have a picture of their speed distributions (I have also checked some other solutions in Europe, but more on that maybe another time).
Why am I talking about a speed distribution and not a single speed? Because there is no such thing as a single speed :slight_smile:
The speed varies depending on when you use it and where you are - but that’s true for all providers around the world: when you use them in the busy hours they tend to be slower, it’s that simple.

Here is a comparison table for Jarvislabs and Paperspace with a 1 GB zip file, tested in parallel at the same time, outside both of their busy hours.
(I decided to run the test around midnight European time, which is 4.5 hours later in India and 6 hours earlier in New York.)
Paperspace web browser, directly from JupyterLab:
avg download speed: 6.5 MB/sec (1 GB down in 2.62 mins)
Jarvislabs web browser, directly from JupyterLab:
avg download speed: 7.3 MB/sec (1 GB down in 2.34 mins)

I ran the test 3 times; the averages are based on that.
So they are in a similar range, but I also note that running this again on another day can give the opposite order, so we should think only in ranges.
I also note that I would quote an average speed even for a single download, because on Paperspace the speed changes between 5.5-7.3 MB/sec within a single file; it also averages around 6.5 MB/sec, but with a large standard deviation.

Let’s see another kind of comparison:
Jarvislabs web browser, drag-and-drop into JupyterLab:
avg download speed: 7.3 MB/sec (1 GB down in 2.34 mins)
avg upload speed: 2.4 MB/sec (1 GB up in 7.11 mins)
Jarvislabs WinSCP copy:
avg download speed: 4.5 MB/sec (1 GB down in 3.79 mins)
avg upload speed: 8.5 MB/sec (1 GB up in 2.01 mins)
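For reference, the minute figures in these tables follow directly from the MB/sec averages, taking 1 GB as 1024 MB. A tiny helper (the function name is mine, not from any provider’s API) reproduces them:

```python
def transfer_minutes(size_gb: float, speed_mb_per_s: float) -> float:
    """Minutes needed to move size_gb gigabytes at speed_mb_per_s (1 GB = 1024 MB)."""
    return size_gb * 1024 / speed_mb_per_s / 60

# Reproduce the figures above:
print(round(transfer_minutes(1, 7.3), 2))  # 2.34 (JupyterLab download)
print(round(transfer_minutes(1, 2.4), 2))  # 7.11 (drag-and-drop upload)
print(round(transfer_minutes(1, 8.5), 2))  # 2.01 (WinSCP upload)
```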

The picture is not that simple :wink:
So both Paperspace and Jarvislabs use JupyterLab, and you can just drag and drop files into their windows or right-click to download.
I really like these features, and downloading this way is simple and fast, so I will always use it on both (because there is no need to log in to the server, and I really like to use just the browser for everything if I can).
BUT you should not upload big files with drag and drop, because even though it’s simple and convenient, it can be really slow. (I still use it for small files like a Jupyter notebook or a small CSV.)

The interesting thing is that the upload speed in WinSCP is higher than the download speed measured any other way :slight_smile: (so I can really recommend using it)
BUT, another interesting thing is that the download speed in WinSCP is slower than in the browser o.O
(so that’s another reason I will keep using JupyterLab’s right-click download - not just because it’s simpler, but because it’s also faster ;))
(note: I used WinSCP from Win10, but I also have Manjaro Linux; I use both OSes interchangeably)

For completeness: I live in Hungary, in Europe, but the Paperspace servers are in the USA and the Jarvislabs servers are in India. (I know the question is why I don’t use something in Europe if I’m in Europe - the answer is that there aren’t many options here, maybe 1, Genesis Cloud, but they only use 1080 Ti or Radeon MI25 cards and the setup process is not that convenient.)
I know the Paperspace servers are near New York on the East Coast, with another set near San Francisco on the West Coast - and both are near Google Cloud datacenters.
So the speed can also depend on which one is used when you use the service.
Someone wrote in the forum that they wanted to use Kaggle datasets - this also sheds light on why downloading Kaggle datasets there can be fast.
If you are in Europe like me, I suggest downloading the dataset to your home machine first and then uploading it to Jarvislabs; it will be faster. (Also, a lot of the time the original datasets contain TFRecords too, which make them 2x larger; you don’t need those if you use PyTorch or fastai, because they are TensorFlow artifacts, so remove them from the original and upload only what you really need ;))
And do not drag and drop into the JupyterLab window; use SSH, or similarly WinSCP or FileZilla, because it will be about 3.5x faster that way :slight_smile:
When it comes to downloading the results or network weights, just compress them into 1 big file and right-click download from JupyterLab for maximum speed.

Why am I writing this in such great detail?
Well, if you are as poor as me and/or a perfectionist, you can optimize your time/money to the extreme this way ^^
Maybe it can help somebody else too :slight_smile:

Let’s see the features.

  • fast spinup: I still really like the fast spinup time, only seconds (the fastest of all the competitors)
  • instance pause/resume: I also like this feature, because you don’t need to install libraries and packages again and again.
    You only need to do it once. (in theory, but more on that later)
  • gpu scale up/down: I haven’t used it yet, but if you have the money, it can be a game changer to test something on 1 GPU and then scale up.
    So I like the idea of having it ^^
  • auto-pause from code: Yep, I like it; it can save money, and you don’t need to check manually from time to time.
  • minute-based billing: Billing changed from hourly to per-minute, which is also more user-friendly, so I really like it.
  • affordable pricing: Well, the price was $0.39/hour for the RTX 5000, but it increased to $0.49.
    Okay, I understand this is still a reasonable price, but I liked the previous one more :stuck_out_tongue:
  • JupyterLab: It changed from vanilla Jupyter notebooks to JupyterLab, and I like this move.
    I also like that it’s a full-fledged JupyterLab, so I can switch to the dark theme as I prefer.
    In the beginning the drag-and-drop feature in the file browser window didn’t work, as I noticed one day, but by the time I wanted to ask support about it the next day,
    it was already supported :smiley:
    So you can ask support anything anytime, but sometimes you don’t even have to ask, they are that fast :DDDD
  • website improvements: The site is much clearer than in the early days about the prices, what you use, and how much you use in time and money.
    There is also a new community chat forum and a new, more detailed FAQ if you have any initial questions.
    There is new, very detailed documentation if you have more questions :slight_smile:
    And if you still have questions, you can ask them any time in the support chat - I always got fast answers to all of my questions :slight_smile:
  • Recharge wallet: I list this separately from the website improvements.
    It’s simple and fast.
    The only problem I have with it is a minor one.
    The recharge finishes in seconds (which is cool, that’s not the problem :D), but the page doesn’t refresh when it’s done, so you don’t see the new wallet value on the account page unless you refresh with F5.
    The Recharge button also keeps showing the processing animation even after it has finished - so if you are a first-time user and don’t know this, you could wait forever
    (okay, you won’t, because once you get bored you will refresh or go back to the home page, realise the wallet value has already changed, and close the account page).
    So I think it would be nice if it showed when the process has finished.
  • persistent storage: Here I’m happy and sad at the same time. In the early days there was no persistent storage, but you didn’t pay for paused instances.
    Now a 20 GB drive costs $0 (zero) extra per hour while the instance is in use,
    but a paused instance with a 20 GB drive costs $0.01/hour, so 24 hours paused costs $0.24, which is half an hour of training time on the RTX 5000.
    In that case I think I will delete the instance every time I finish, because I can upload all the data again faster than that half hour (for example, 7 GB of data takes 14 mins < 30 mins).
    But this kills the other features entirely - I don’t need instance pause/resume when I don’t have the instance anymore.
    Another calculation is this:
    on Paperspace, for $24/month I can rent a 1 TB persistent drive, so I keep the “paused” instance there plus all the results
    and don’t worry too much about when I download them.
    (“paused” is in quotation marks because it’s not really paused there; that’s why it takes minutes to spin up the instance and why you need to reinstall anything not in the standard package)
    But here I get 20 GB, and pausing it for 1 month would cost $7.2 - on a 51.2x smaller drive,
    so 1 TB would cost $368.64 by this logic.
    Okay, these are not the same thing - totally different business models - but I’m comparing them anyway.
    If we consider that the max drive here is 500 GB, but it has an additional usage cost of $0.24/hour,
    then I’m not sure it’s still a really good choice.
    So this storage thing is still a question for me.
    I think I don’t like it - but I’m not sure :confused:
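The storage arithmetic in that last point, written out (the rates come from the post; the linear scaling to 1 TB is of course only back-of-the-envelope logic, not an actual price):

```python
paused_rate = 0.01                     # $/hour for a paused instance with a 20 GB drive
per_day = paused_rate * 24             # cost of one day paused
per_month = per_day * 30               # cost of a 30-day month paused
scaled_to_1tb = per_month * 1024 / 20  # 1 TB is 51.2x larger than 20 GB

print(round(per_day, 2), round(per_month, 2), round(scaled_to_1tb, 2))
# 0.24 7.2 368.64
```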

Afternote: my internet connection’s maximum speed is 11.7 MB/sec, so no site can be faster than that for me.
If we take into account that the different protocols need some bandwidth too (which is why the measured numbers are lower), the 8.5 MB/sec is probably not that bad.
If you have a better connection you may get better numbers; you can share yours to compare with mine.

Adding data directly to JupyterLab depends on client bandwidth, as you mentioned. To download datasets from Kaggle you can use either wget or the Kaggle API, which will result in increased speeds. In some of our experiments, we have seen download speeds in the range of 30 to 100 MB/sec.

1 Like

Thanks for sharing your experience. I am happy to know that you are enjoying most part of our service. We will fix some of the challenges that you raised.

  • Recharge - Yes, the balance should get reflected once you recharge. It should be fixed today.

  • Persistent Storage - The storage that Paperspace provides is SSD, which is in general cheaper and has lower IO. What we offer currently is NVMe, which has several times higher IO and significantly improves your model training performance. The current storage system is not equivalent to Paperspace persistent storage. I understand your cost constraint; we will try to provide more affordable storage options in the future.

  • Resume - To make things clear: you need not install any libraries again in JarvisCloud.

  • For storing large volumes of data, we may provide a cheaper alternative.

  • Pricing - For fastai students, an RTX 5000 is still $0.39/hour.

2 Likes

Yes, I will try that. Also, I didn’t know I could use the Kaggle API on the server, so that will change this whole picture. It’s even in the new documentation, but I hadn’t read the updated version - my fault.
I hadn’t tried it because it didn’t work on Paperspace Gradient when I tried there months ago - so I assumed it wouldn’t work here either.

So now the best scenario: even if something was not created on Kaggle, just upload the dataset to Kaggle from the home machine and download it with the Kaggle API from there to Jarvis :slight_smile: (if it’s not sensitive data, and a lot of the time it isn’t)
And when it’s originally a Kaggle dataset, well, then it’s a win-win: just download it :slight_smile:

1 Like

Yes, the Kaggle API download speed varies between 20-100 MB/sec.
For me today it downloaded 11.7 GB of data in 9 mins 7 secs, so the avg speed was 21.9 MB/sec ^^
It’s better than my slow home internet :slight_smile:
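(The 21.9 MB/sec figure is just the size over the elapsed time, again taking 1 GB as 1024 MB:)

```python
size_gb = 11.7
elapsed_s = 9 * 60 + 7               # 9 mins 7 secs in seconds
avg_mb_per_s = size_gb * 1024 / elapsed_s
print(round(avg_mb_per_s, 1))        # 21.9
```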

I saw the features you are working on.
Here: https://github.com/jarvislabsai/JarvisCloud-ChitChat/discussions/3

I think for me the most important 3 from that list are:

    1. Allow storage to be modified post creation
    2. Fix the ports while resuming the instance
    3. Clone an instance

I also suggest 1 very important feature which is not yet on your list, but for me it is more important than all the others.

  • GPU Type: None (or GPU Type: CPU)

Because we can already change the GPU type post-creation from RTX 5000 to RTX 6000, or change the number of GPUs.
BUT when I just want to copy something to or from the instance, I don’t want to use any GPU for those minutes, so I would like to switch the type to CPU.
(On Paperspace you can choose CPU or GPU when you start an instance.)

You don’t need to fear CPU-only usage too much, because nobody trains on CPUs (too slow); it’s only for copying or for setting up the instance - and first-time setup can take anywhere from 10-45 mins, so that’s really wasted time on a GPU…

Thanks, @AmorfEvo for the feedback. Supporting a CPU-only system is definitely in the pipeline but I cannot commit on time.

We are not a VC-funded company, so we will definitely not be able to do a lot of the things that companies like Paperspace do. Our focus is on staying simple and affordable. Survival is vital for us to continue supporting the DL community. As we grow to the next stages, we will continue optimizing the platform. I hope you agree, as you will have seen the evolution of the product over the last few months.

1 Like

Yes, I understand.
I suggested these because, as a user, I don’t always know which features are priorities for you and which are easy to implement in the current architecture - they are not always the same, so you can decide ^^

1 Like

How do we let you guys know we are FastAI users when we pay?

Just use the link from the docs, or directly click any link in the top post made by me.

1 Like

I am happy to share that we have been accepted into the Nvidia Inception program for startups.

We have also added A100 cards.

1 Like

Hi all, I have launched Jarvislabs.ai on ProductHunt. Please come and check us out on https://www.producthunt.com/posts/jarviscloud.

1 Like

Hi All, Here is my experience for using Jarvis - Deep Learning Workstation With JarvisLabs | by satish1v | May, 2021 | Medium

2 Likes

Is it possible to use TensorBoard or Visdom when training on JarvisCloud? It would be very helpful to monitor the model as it trains (especially when training takes some time).

Tensorboard is on the roadmap. It should be available by next week.

1 Like

Hi, as promised, we now have TensorBoard loaded into all newly launched machines. Just add
your logs to the run folder; we take care of the rest.

2 Likes