Is it in the Google Cloud Shell you suggested, or in my locally downloaded Google Cloud SDK shell, or are they the same thing?
I see I can access http://localhost:8080/tree after I run the gcloud command in the SDK shell (I had to replace
$ZONE and $INSTANCE_NAME with their full strings because the SDK shell did not recognize export).
Surprisingly, I see a tutorials folder containing data, fastai, and pytorch folders. Where did this come from / when did I define them?
I can’t do git clone in the SDK shell after typing the first gcloud command because it does not accept any more input. How do I get the course-v3 stuff then?
Does the cloud shell interact with my local files?
Could you please explain the big picture of what is going on both locally and remotely when those two commands are used?
P.S. My Windows 7 does not have WSL, so I can't just copy-paste Linux commands from the guide.
The first one runs in the cloud shell, which SSHes into the server you created.
Just run the second one after you have SSHed into the server. (Basically, all of this happens inside the cloud shell.)
I have also written a guide which goes through GCP step by step. Try that if you are still having problems.
You can type it in any shell after you download the Google Cloud SDK.
The Google Deep Learning image preinstalls them for you as demos.
You are supposed to clone them in the Google Cloud instance you get connected to.
No, it doesn’t.
Basically, you are controlling the remote computer and projecting its shell into your local shell. SSH also tunnels the data in the Jupyter notebook to your localhost:8080, so that you can work directly with the notebook running in the remote server in the convenience of your browser.
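To make that concrete, the two commands from the guide boil down to something like this (the zone and instance name below are illustrative placeholders, not values from this thread):

```shell
# Step 1 (run locally, in Cloud Shell or the SDK shell): SSH into the
# instance and tunnel remote port 8080 to your localhost:8080.
# On Windows, substitute literal values since `export` is unavailable.
gcloud compute ssh --zone=us-west1-b jupyter@my-fastai-instance -- -L 8080:localhost:8080

# Step 2 (run at the prompt you get AFTER the SSH connects, i.e. on
# the remote instance): clone the course repo there, not locally.
git clone https://github.com/fastai/course-v3
```

After step 1 your prompt belongs to the remote machine, which is why the git clone in step 2 puts course-v3 on the server rather than on your local disk.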
Apologies if this is a silly question, but how do I access the Jupyter notebook after creating the VM? I have an external IP address available; do I just copy-paste it into my browser window along with the port (8888)?
Note that if you forget this the first time you access an instance, you’ll need to create a whole new instance, since it uses that first run to set up security.
Just a note: the official fastai image already has that set up for you. SSH is just used to create the tunnel, so you don’t have to open any ports other than SSH.
I found a simplification for this, to avoid switching between users.
The Deep Learning platform allows adding jupyter-user metadata when creating an instance.
So if we add --metadata='install-nvidia-driver=True,jupyter-user=[MAIN_USER_NAME]'
to the installation command, the instance will not create a jupyter user, and the default Jupyter server (the one running on 8080) will run as the chosen user instead.
[MAIN_USER_NAME] is your gcloud user in snake_case notation (e.g. Thomas Anderson becomes thomas_anderson).
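A sketch of what that creation command could look like; the instance name, zone, and image family here are illustrative assumptions, not values from the post:

```shell
# Derive the snake_case user name from the account display name
# ("Thomas Anderson" is the example from above).
JUPYTER_USER=$(echo "Thomas Anderson" | tr '[:upper:]' '[:lower:]' | tr ' ' '_')
echo "$JUPYTER_USER"   # thomas_anderson

# Create the instance with jupyter-user set, so the Jupyter server on
# :8080 runs as that user instead of a separate "jupyter" user.
gcloud compute instances create my-fastai-instance \
    --zone=us-west1-b \
    --image-family=pytorch-latest-gpu \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --metadata="install-nvidia-driver=True,jupyter-user=${JUPYTER_USER}"
```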
It seems I can’t run gcloud on my Mac, so I’ve run the GCP commands (including gcloud) on my Linode Ubuntu instance. However, I host multiple domains on that Linode, and this seems to be interfering with the SSH tunnel. Any hints on how to get this working?
Can someone please help me with installing the Kaggle API? I tried !pip install kaggle and !pip install --user kaggle, but I still get a /bin/sh: 1: kaggle: not found error when trying !kaggle --help.
You should use Python 3, because fastai requires Python 3.6.
Basically, the issue is that there is a “standalone” Python 2.7 installed outside of conda, which is what sudo uses by default.
So when using sudo, you have to specify that you want the conda Python (3.6), not the standalone Python (2.7).
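Concretely, the fix looks something like this; the conda prefix below is an assumption, so check where conda actually lives on your image first (e.g. with `which python` as your normal user):

```shell
# Option 1: give sudo the full path to conda's pip, so the package
# lands in the Python 3.6 environment instead of the system 2.7.
# /opt/anaconda3 is a guess; adjust to your image's conda prefix.
sudo /opt/anaconda3/bin/pip install kaggle

# Option 2: install without sudo into your user site-packages, and
# make sure ~/.local/bin (where the kaggle CLI script lands) is on PATH.
pip install --user kaggle
export PATH="$HOME/.local/bin:$PATH"
kaggle --help
```

The `kaggle: not found` error often just means the directory holding the `kaggle` script isn't on PATH, even though the package itself installed fine.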
I read through the official guide and noticed that ZONE="us-west1-b". So, since I live in the UK, is there any additional benefit to choosing a zone nearer to where I live? I realise that GCP charges a bit extra for European zones.
Question about boot disk size. If we want more than 120 GB, should we just increase the boot disk size, or should we have a separate disk (possibly SSD) that contains our data and somehow attach it? Anyone with some experience on that front?
I’m asking because some Kaggle image classification competitions have a few hundred GB of data.
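One common approach is to keep the boot disk modest and attach a separate persistent disk for data. A sketch with placeholder names; the size, the `/dev/sdb` device path, and the mount point are all assumptions to adapt:

```shell
# Create a separate 500 GB persistent disk for competition data
gcloud compute disks create my-data-disk --size=500GB --zone=us-west1-b

# Attach it to the existing instance
gcloud compute instances attach-disk my-fastai-instance \
    --disk=my-data-disk --zone=us-west1-b

# Then, inside the instance: format it ONCE and mount it.
# /dev/sdb is typical for the first extra disk, but verify with `lsblk`.
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data
```

The advantage over simply growing the boot disk is that a data disk can be resized, detached, or re-attached to a new instance independently of the OS image.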