I’m just getting started with Part 1 2019 using GCP myself, and I ran into the same error message about “Quota ‘GPUS_ALL_REGIONS’ exceeded. Limit: 0.0 globally.” while following the setup directions you posted.
A little Googling turned up this suggestion. I followed the instructions in the only answer posted, and by setting the filter (1) and then “EDIT QUOTAS” (2), I was able to submit a request to increase my GPU count from 0 to 1.
Just a quick update: I received a couple of emails from Google Cloud Support between yesterday and today, giving me periodic updates about their review of my request. About 24 hours after my request was submitted, I received an email saying that my request had been approved.
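If you want to double-check the new limit from the command line instead of the console, something like this should work (it just dumps the project-wide quotas and greps for the GPU one):

# show the current GPUS_ALL_REGIONS quota for the project
gcloud compute project-info describe | grep -B 1 -A 1 GPUS_ALL_REGIONS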
I saved it by pressing Ctrl + X,
then it asked me whether to save it in .bashrc and I just hit Enter,
but when I go back it shows me what's in the picture and just stays there. When I type the whole command it works, but just typing 'gc' doesn't.
Hey Christian, sorry it isn't working. Can you post the exact line you put in your .bashrc file? To save, make sure you type 'y' and then Enter. Did you make sure to refresh it by running source .bashrc? Here's my exact line; looks like we chose the same instance name, haha.
alias gc='gcloud compute ssh --zone=us-west2-b jupyter@my-fastai-instance -- -L 8080:localhost:8080'
Post more details and we’ll get it figured out. It’s a huge time saver so worth any trouble imo.
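For reference, here's roughly what the whole thing looks like end to end (same instance name and zone as above, so adjust to yours):

# append the alias to ~/.bashrc, reload it in the current shell, then use it
echo "alias gc='gcloud compute ssh --zone=us-west2-b jupyter@my-fastai-instance -- -L 8080:localhost:8080'" >> ~/.bashrc
source ~/.bashrc
gc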
I’m new here as well, but let me take a shot at answering your questions.
Those images are downloaded to the Google Compute Engine instance. When you ran gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080 you were logged into that GCP instance, so whatever you do in the Jupyter notebooks only affects that instance and not your local system.
The images reside in the /home/jupyter/.fastai/data folder of your GCP instance. You won't see these folders in Jupyter because they're hidden, but you can use the terminal window we ran the above command in to explore these files with the usual Linux commands.
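For instance, something like:

# list the datasets fastai has downloaded, and check how much space each one takes
ls /home/jupyter/.fastai/data
du -sh /home/jupyter/.fastai/data/*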
You can find out how to build your own dataset in the lesson2-download notebook. If you already have the dataset on your local machine, you can just upload it to the Jupyter hub as well.
Follow up to #2.
So, say I have my dataset on my virtual machine; it's over 6,000 images. Dragging and dropping into the Jupyter hub prompts me to click "upload" on every single file, and clicking 6,000+ times doesn't look efficient, so there has to be a better way.
"Convert it into a single Zip file and upload that. to unzip the folder use the code down bellow
import zipfile as zf

# open the uploaded archive and extract everything into the target folder
files = zf.ZipFile("ZippedFolder.zip", 'r')
files.extractall('directory to extract')
files.close()
That’s great, but typing this in my GCE Jupyter Hub:
import zipfile as zf
reveals the following GCE folders…
And I am not really sure if there is a way to navigate to my VM desktop.
Are there better solutions than what I described? Is there a way to navigate from GCE to my VM desktop in the Jupyter hub, or, better yet, a way to just store these images in Google Cloud Storage and pull them from there?
I don't think there's a trivial way to read files on your desktop directly from your VM. You'd either have to copy them over to some cloud location like you've just done, or you can use the scp command to transfer the files directly to the VM.
Instructions for doing this on GCP are here. Check the examples section. It's pretty much the same as the ssh command, and you can transfer whole folders without zipping them first.
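For example, something along these lines should copy a whole folder in one go (the instance name and zone are just the ones from earlier in the thread, and my-dataset is a placeholder, so swap in your own):

# copy a local dataset folder straight to the instance's home directory
gcloud compute scp --recurse ./my-dataset jupyter@my-fastai-instance:/home/jupyter/ --zone=us-west2-b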
Nice! Thanks! I will give it a try as well. Not sure how fast Google Cloud Storage is vs. having the files directly on the VM.
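If you do go the Cloud Storage route, I believe the usual pattern is something like this (the bucket name here is just a placeholder):

# from your local machine: push the folder into a bucket
gsutil -m cp -r ./my-dataset gs://my-fastai-bucket/
# from the VM's terminal: pull it back down next to your notebooks
gsutil -m cp -r gs://my-fastai-bucket/my-dataset /home/jupyter/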
Update: I just found another way, even simpler, but it's one file at a time… :(
Same as before, click SSH under Google Cloud Platform.
Click on the gear-looking icon in the right-hand corner. There is an option to upload a file…