So I’m using GCP and couldn’t find an easy way to get files onto the VM (like scp) without creating buckets, so I found another way.
- Make a zip/rar of your data and upload it to Google Drive.
- Make the file public.
- On the remote machine, install gdown and download the file by typing in a terminal:
pip install gdown
gdown https://drive.google.com/uc?id=file_id
Where file_id is this part of the shareable url: https://drive.google.com/file/d/1oc7HA5pHOr_UlrqlgbUSOvXmkHNd1IV0/view?usp=sharing
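If you do this a lot, the id can also be pulled out of the share link programmatically. A tiny sketch (drive_file_id is just a hypothetical helper name, and it assumes the usual .../file/d/&lt;id&gt;/view link shape):

```python
def drive_file_id(share_url):
    # share links look like .../file/d/<file_id>/view?usp=sharing,
    # so the id is the path segment right after "/d/"
    return share_url.split("/d/")[1].split("/")[0]

url = "https://drive.google.com/file/d/1oc7HA5pHOr_UlrqlgbUSOvXmkHNd1IV0/view?usp=sharing"
print(drive_file_id(url))  # the part gdown needs
```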
This is great, but why would one download the data to a local machine first and then upload it to the VM? What are you using this for?
I was downloading X-rays showing hip dysplasia in dogs; finding files for this takes a lot of time, and using my laptop is free.
Great! So you downloaded the dataset, cleaned it manually, uploaded it to Drive, and got it to the VM from there?
Almost. I made the dataset by hand (it turns out this is really non-trivial: the data quality was quite meh and learning failed miserably). Sadly, you can’t get some data from a simple Google search.
Yes it will. I was googling this wrong, because I’d only seen info in the GCP manual about transferring data to/from buckets.
Do you know a good way to work with Google Drive?
I accidentally created 50 thousand small files in my main Google Drive directory.
And now it is a huge pain in the ass to remove them through the Google Drive UI, which is super slow.
Have you tried Google Colab?
Then mount your drive with this code:
from google.colab import drive
drive.mount('/content/gdrive')
Then you can work with your files with normal python commands.
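For the 50k-files cleanup, something like this should work once the drive is mounted (a sketch: remove_matching is a made-up helper, and the directory/pattern are guesses you'd adjust to your case):

```python
import glob
import os

def remove_matching(directory, pattern):
    """Delete every file in `directory` whose name matches `pattern`;
    return how many were removed."""
    removed = 0
    for path in glob.glob(os.path.join(directory, pattern)):
        os.remove(path)
        removed += 1
    return removed

# On Colab, after drive.mount('/content/gdrive'), your drive root is
# under '/content/gdrive/My Drive'. The pattern here is hypothetical:
# remove_matching('/content/gdrive/My Drive', 'chunk_*.csv')
```

This is much faster than clicking through the Drive UI, since each delete is just a normal filesystem call.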