Just a lost embryo... where is my data?

I am an embryo to programming and data science. This is a new environment for me.
Not sure how I found this course… Thank God. Forgive my ignorance.

I watched the first lesson, twice. The first time everything went over my head.

Finally, I can log into my Paperspace machine and Jupyter notebook. (This was challenging for me; thanks for the class notes and examples.)

I have a very simple and small problem.
I don’t know how to download datasets to my Paperspace machine or Jupyter notebook. I created a small folder of 30 pictures that I downloaded from Google: 15 of MJ and 15 of Kobe. I wanted to test the three lines of code on this small set before moving to a serious dataset.

I don’t know how to add/upload a folder to the Jupyter notebook, nor how to create the PATH to run the three lines of code.

Just a few more questions about data. How do you create several folders for the data? Is this like supervised learning? Do I just make the folders myself?

1st folder = All pictures
2nd folder = test
3rd folder = train
4th folder = valid

Thank you (Jeremy Howard and Rachel Thomas).
Thank you very much.

Hey,

You’ve done a great job so far!

You can use scp:
scp -r /path/to/foo user@your.server.example.com:/home/ubuntu/data/
where -r recursively copies entire directories.

If you use a private key for the connection, you may want to add this option too:
-i ~/.ssh/private_key_file
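
For example, copying your small folder of pictures from your own computer to the Paperspace machine might look like the sketch below. The folder name mjkobe and the local path ~/Downloads/mjkobe are just placeholders; replace them with wherever your 30 pictures actually live, and drop -i if you are not using a key file.

# run this from your local machine, not from the Paperspace machine
scp -r -i ~/.ssh/private_key_file ~/Downloads/mjkobe user@your.server.example.com:/home/ubuntu/data/

The folder and everything inside it will then be at /home/ubuntu/data/mjkobe/ on the server.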

Here is how to split your data into test/train/valid: Wiki: Lesson 1

And yes, you need to make the folders yourself, and then in the Jupyter notebook you specify the PATH to them.

You should have a structure like this, where PATH = '/home/ubuntu/data/folder/'
[screenshot: example folder structure under PATH]
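
As a rough sketch of making the folders yourself (mjkobe, mj and kobe are placeholder names for your dataset folder and its two classes; use whatever names you like), you could run something like this on the Paperspace machine:

# one subfolder per class inside train/ and valid/
mkdir -p /home/ubuntu/data/mjkobe/train/mj /home/ubuntu/data/mjkobe/train/kobe
mkdir -p /home/ubuntu/data/mjkobe/valid/mj /home/ubuntu/data/mjkobe/valid/kobe
# then move most of each class (say 12 of the 15 pictures) into train/
# and the remaining few into valid/, keeping the MJ and Kobe pictures separate

In the notebook you would then set PATH = '/home/ubuntu/data/mjkobe/' and point the three lines of code at it.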
