Floyd - alternative to AWS P2 instance


(Sai Prashanth Soundararaj) #48

There is no error :)

What you see is the previously saved notebook. When you actually run the cell, you should see:

Using cuDNN version 5110 on context None
Preallocating 10867/11439 Mb (0.950000) on cuda
Mapped name None to device cuda: Tesla K80 (0000:00:1E.0)
Using Theano backend.

(Susan Li) #49

Hi, I just signed up, but when I logged in, my browser did not open, so I did not get an authentication token. Can you please help? Thanks

Susan


(Susan Li) #50

Never mind, I was able to figure it out. But now I am having a problem uploading data. Is it because my 3600-second trial ran out? My Jupyter notebook has become a “404”.

Susan


#51

Is there any chance that you’ll offer the free 100-hour trial again (or perhaps something similar)?


(Susan Li) #52

I am having the same issues uploading data to Floyd. What did you end up using? I’m getting a “Segmentation fault (core dumped)” error on AWS when downloading the dogscats data. So frustrated.


(Sai Prashanth Soundararaj) #53

If you don’t have access to a browser for floyd login, please use the --token flag. See http://docs.floydhub.com/commands/login/
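
For example:

    floyd login --token

It should then prompt you to paste the authentication token from the FloydHub website (the linked page has the exact steps).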


(Sai Prashanth Soundararaj) #54

Our 100 hours free promotional plan has ended. We are working on a new pricing plan, including a new free tier that should be out in a couple of weeks!


(Sai Prashanth Soundararaj) #55

Sorry to hear you’re facing problems with the data upload. What exactly are you seeing? Any logs or screenshots would be helpful. You can also reach out to us: support@floydhub.com

We are aware that our data uploads are slow and we are actively working on fixing this!


(Prasad Chalasani) #56

Loving FloydHub so far.

@sai @narenst

I have a question about downloading datasets from AWS S3. My PyTorch models are trained on huge amounts of data that we generate with a Spark process and dump to an AWS S3 location. My Python code loads files of data from there on demand. Of course, when I try to do that from the Floyd instance, it fails because the instance is not authorized to access my S3 data.

How do you suggest getting around this? I think your dataset-creation workflow only supports uploading data from my local computer. It would be great to support downloading from an AWS S3 bucket, and if the bucket has access restrictions, there should be a way to supply the necessary credentials (access/secret key).
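
For context, the download step in my code is essentially the following (a minimal boto3 sketch; the bucket and key names are placeholders). What I’m missing is a way to get these credentials onto the Floyd instance:

    import os
    import boto3

    # The credentials have to come from somewhere on the instance,
    # e.g. environment variables; that is exactly the missing piece.
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    )

    # Pull one file of training data on demand (placeholder names).
    s3.download_file('my-training-bucket', 'spark-output/part-00000', '/tmp/part-00000')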

Once I know how to do this, I can switch my workflow to FloydHub.


(Vikrant Behal) #57

Hi All,

I’m new to DL and very new to most of the software mentioned in the setup video, so I have a question about the environment setup if we prefer Floyd.

For AWS, there are setup steps we have to follow before proceeding with the course. I wanted to know if anyone here has chosen Floyd and can tell me which steps I have to alter, and where I can find the altered commands.

@sai, does Floyd provide any document or help guide on which commands I have to run in place of the AWS ones while configuring the environment with Floyd?


(Ketan Kumar Todi) #58

Hi, the link for the dataset posted here is not available. Can you please help with it?


(joel) #59

Hi @narenst, sorry to revive this thread. From what I see in the rest of the conversation, there’s a big roadblock around the dataset. In this post from March it sounds like you made the dogscats dataset publicly available to anyone, but as of today (Aug 27) the link takes you to a 404, and I can’t find any reference to a publicly available dataset.
I understand we’re able to mount any publicly available dataset as per http://docs.floydhub.com/guides/data/mounting_data, but we need its details. Can you share them, or confirm that it is no longer available and everyone needs to mount their own?
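
For reference, the mount syntax from that guide looks something like this (every name below is a placeholder, since the dataset path is exactly the detail that’s missing):

    floyd run --mode jupyter --data <user>/datasets/<dataset-name>/<version>:<mount-name>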
Thanks again for the support and for FloydHub!


(Vikrant Behal) #60

Do you have any Docker image that can be used for setup?


#61

I ran into a couple of issues with backward compatibility in Keras 2, which is the current default on Floyd.
My issues were fixed by following the notes in https://github.com/YuelongGuo/floydhub.fast.ai/issues/1#issuecomment-323515380
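
For anyone hitting the same errors, the fixes boil down to Keras 1 to Keras 2 renames of this flavor (a sketch of the kind of change involved, not the exact code from that issue):

    from math import ceil
    from keras.models import Sequential
    from keras.layers import Conv2D, Flatten, Dense
    from keras.preprocessing.image import ImageDataGenerator

    # Keras 1 spelled this Convolution2D(64, 3, 3, ...); Keras 2 renames
    # the layer and takes the kernel size as a tuple.
    model = Sequential([
        Conv2D(64, (3, 3), activation='relu', input_shape=(224, 224, 3)),
        Flatten(),
        Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy')

    batches = ImageDataGenerator().flow_from_directory(
        'data/train', target_size=(224, 224), batch_size=64)

    # Keras 1: fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3)
    # Keras 2 counts steps (batches) per epoch instead of samples:
    model.fit_generator(batches,
                        steps_per_epoch=ceil(batches.samples / batches.batch_size),
                        epochs=3)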

Thanks everyone! Jasper


(Fernando Santos) #62

Hey @todiketan and @joelg,

You can search for Floyd public datasets here. Some datasets used in the course are already there, like catsvsdogs and catsvsdogs redux.

And I just saw there’s also a project there with the files for the first lesson of the course: https://www.floydhub.com/fastai/projects/lesson1_dogs_cats. I was able to clone it and create my own project with the exact same files. It was quite simple to do.

Fernando


(Giulio Matteucci) #63

Hi @sai!

Is there a way to customize Jupyter running in Floyd using jupyterthemes (https://github.com/dunovank/jupyter-themes)?

I tried to do so by installing the package via the terminal and then typing “jt -t grade3”, but the appearance of my Jupyter notebook did not change (even after a refresh)…
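
From the jupyterthemes README, my understanding is that it works by writing a custom CSS file (~/.jupyter/custom/custom.css), so maybe the browser keeps serving a cached copy? The sequence I tried was:

    pip install jupyterthemes
    jt -t grade3

Do I also need to fully restart the notebook server and hard-refresh with the cache cleared so the new custom.css gets picked up?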


(Francisco Ingham) #64

Hi kijes, if you could share prepare.py, that would be very useful.

I am having some trouble building up the data structure from within FloydHub, given that the /input directory where I have my data is read-only.
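
For reference, the kind of thing I’ve been attempting (a sketch; the paths are just my assumptions about the standard Floyd layout, with /input read-only and /output writable):

    import os
    import shutil

    SRC = '/input/dogscats'    # read-only mounted dataset (assumed path)
    DST = '/output/dogscats'   # /output is writable, so work on a copy there

    shutil.copytree(SRC, DST)

    # ...then build the train/valid/sample structure under DST,
    # where files can be moved and renamed freely.
    os.makedirs(os.path.join(DST, 'valid'), exist_ok=True)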

Thanks!


(Rafal Kijewski) #65

Hi,
You can check this gist: https://gist.github.com/kijes/163bbceca3b6fb25f210f1d2033a4f97
I’ve refactored the code since the last post. Generally the preparation is a separate step and then I use its output as input to training phase.
In the gist:

  1. I call “prepare_train_full_fh_cmd.sh” to run FH command to prepare the data
  2. The script calls another one “prepare_train_full_fh.sh” which just calls python script “prepare_train_full_fh.py”
  3. “prepare_train_full_fh.py” downloads and unpack data from Google Drive into “/output” (you need to provide file id that you can get when you share your file)
    I download everythin that’s is needed for the experiment: training data, test data or pretrained models.
    You will need to adjust the paths.
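
A trimmed-down sketch of that Google Drive step (not the exact gist code; the file id is the one from your share link):

    import requests
    import zipfile

    FILE_ID = 'your-file-id-here'   # taken from the Google Drive share link
    OUT = '/output/data.zip'        # /output is the writable directory on Floyd

    url = 'https://drive.google.com/uc?export=download&id=' + FILE_ID
    session = requests.Session()
    response = session.get(url, stream=True)

    # Large files make Drive interpose a virus-scan warning page;
    # re-request with the confirm token from its cookie.
    token = next((v for k, v in response.cookies.items()
                  if k.startswith('download_warning')), None)
    if token:
        response = session.get(url, params={'confirm': token}, stream=True)

    with open(OUT, 'wb') as f:
        for chunk in response.iter_content(chunk_size=32768):
            if chunk:
                f.write(chunk)

    with zipfile.ZipFile(OUT) as z:
        z.extractall('/output')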

(Francisco Ingham) #66

Thanks kijes!


(Vivek Reddy) #67

Hi kijes,

Thank you for your post.
You mentioned earlier that you had a shareable Google Drive link to the lung data.
I was wondering if by any chance you would be willing to share it? I have had trouble downloading the data, and I have no space on my personal computers.