Platform: GCP ✅

SDK Shell vs Cloud Shell (in browser)

When I use the following command from the SDK shell I am able to access the fastai notebooks, but the same command doesn’t work in Cloud Shell (the command finishes without error, but I can’t access http://localhost:8080/tree):

gcloud compute ssh --zone=us-west2-b jupyter@my-fastai-instance -- -L 8080:localhost:8080

Am I missing anything in Cloud Shell?

I am working on a Windows 7 laptop. I even tried starting the Jupyter notebook from within Cloud Shell, but that didn’t work either.
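A likely explanation, though it’s an assumption on my part: Cloud Shell itself runs on a remote VM, so -L 8080:localhost:8080 forwards the instance’s port to the Cloud Shell VM, not to your laptop, and your browser’s http://localhost:8080 never reaches it. The SDK shell runs locally, which is why the same tunnel works there. One workaround is Cloud Shell’s Web Preview feature:

# run the tunnel in Cloud Shell as before
gcloud compute ssh --zone=us-west2-b jupyter@my-fastai-instance -- -L 8080:localhost:8080
# then click "Web Preview" > "Preview on port 8080" in the Cloud Shell toolbar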

I have been having issues getting the test app deployed using Google App Engine. I followed the guide, but I’m getting a timeout error. The full details are below, along with a link to my App Engine repository on GitHub. I have tried deploying both the resnet34 and resnet50 versions of the model. Does anyone have experience with the app_start_timeout_sec setting mentioned in the error, or an idea of where I can find more detailed logs?

Updating service [default] (this may take several minutes)…failed.
ERROR: (gcloud.app.deploy) Error Response: [4] Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the ‘app_start_timeout_sec’ setting in the ‘readiness_check’ section.

*I have the .pth file pre-staged in the models directory, so there is no URL in my server.py file. But I don’t believe that is causing any issues, because I have gotten errors when testing and forgetting to change the cnn_create line to use the correct architecture. It is loading the model up to the point where it can tell that much.
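For anyone hitting the same timeout: the setting the error mentions lives in app.yaml under readiness_check (App Engine flexible environment). A minimal sketch, assuming the app simply needs longer to load the model; 900 is an arbitrary example value in seconds:

readiness_check:
  app_start_timeout_sec: 900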


Hey @ShawnE, I got the same problem. My deployment crashes at Updating service as well. You can check logs at https://console.cloud.google.com/logs/viewer. I found that in my case the problem was:

_check_disk_space.sh: Free disk space (792 MB) is lower than threshold value 954 MB. Reporting instance permanently unhealthy. Note: different instances will have a different threshold value.

That said, I still have no idea what exactly I should do about it. I am pretty new to GCP and still a little confused. Is the app we are trying to run deployed on the same instance we use for the fastai course, is a new one created, or something else?
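If the logs point at _check_disk_space.sh as above, one thing to try (an assumption on my part, not a confirmed fix for this exact check) is giving the App Engine flexible-environment VM a larger disk in app.yaml:

resources:
  disk_size_gb: 20   # 20 here is an arbitrary example; the default is smaller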

How do I use CurlWget in GCP? I downloaded a dataset by pasting the link into my cloud terminal, but I couldn’t find where the dataset was saved.
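In case it helps: wget (which CurlWget generates a command for) saves into whatever directory the shell was in when you ran it. A quick way to locate the file, using a hypothetical file name dataset.zip:

pwd                          # the directory the download landed in, if you haven't cd'd since
find ~ -name 'dataset.zip'   # or search your home directory for it by name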

I got the same error. I solved it by doing this:

  1. Restart my instance in GCP.
  2. Change the command to: gcloud compute ssh --zone "us-west2-b" jupyter@"my-fastai-instance" -- -L 8080:localhost:8080

I think it may be because of arguments like $ZONE.
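For reference, the variable form of the command also works, as long as the variables are actually set in the current shell; a minimal sketch with placeholder values:

ZONE=us-west2-b                    # replace with your zone
INSTANCE_NAME=my-fastai-instance   # replace with your instance name
gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080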

Do you know by any chance how to actually load a dataset into a Jupyter Notebook on GCP?
Thanks.

Do you mean you want to save a dataset, like images, into the Jupyter directory?

  1. You can use the “upload” button in the Jupyter interface.
  2. You can use a shell command that copies files to or from the server:

gcloud compute scp
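For example (the instance name, zone, and paths below are placeholders):

# copy a local folder of images up to the instance
gcloud compute scp --recurse ./my-images jupyter@my-fastai-instance:~/data --zone=us-west2-b
# copy a single file back down from the instance
gcloud compute scp jupyter@my-fastai-instance:~/results.csv . --zone=us-west2-b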

Is it possible to upload image folders? I thought you could only upload notebooks.

Hi, I have a GCP instance with fastai and Jupyter Notebook, and so far it is working fine. But I had a problem with the git checkout . and git pull commands: they say permission denied, and I am not sure why or what I have to do.
As I said, notebooks work fine and conda update fastai worked, but git does not, and I would like to have some other things on that instance, so I definitely want to be able to run git.

So please help. I know this is basic and I do know how to work with git, but I am not sure how to set it up on a Google instance or why I don’t have permission. Perhaps it is something to do with the roles I have on the instance, but I am the owner, since it is my project and account.

Thank you.
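On the git permission errors above: one common cause (a guess, not a confirmed diagnosis of this instance) is that some files under .git ended up owned by another user, e.g. root, after a command was once run with sudo. A minimal sketch for checking and fixing that, assuming you are logged in as the jupyter user the course image creates:

cd ~/tutorials/fastai/course-v3   # or wherever the repo lives
ls -la .git/objects               # look for entries not owned by your user
sudo chown -R jupyter:jupyter .   # take ownership of the whole repo back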

It should be FileLink, with a capital L.


I’m sure you have resolved this issue by now, but since I ran across the same problem I will post what worked for me for any future students.

After running pip install --user kaggle
I received the warning The script kaggle is installed in '/home/jupyter/.local/bin' which is not on PATH.

Which means I needed to add '/home/jupyter/.local/bin' to my PATH environment variable.
To ensure that the change persists across shell sessions I did the following:

  1. vim ~/.bash_profile
  2. Paste export PATH=$PATH:/home/jupyter/.local/bin inside your ~/.bash_profile
  3. Save and exit.
  4. Finally, run source ~/.bash_profile so the change takes effect immediately.
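The same change as two commands, if you’d rather not open vim (this just appends the line the steps above describe):

echo 'export PATH=$PATH:/home/jupyter/.local/bin' >> ~/.bash_profile
source ~/.bash_profile   # apply it to the current shell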

Before, on Paperspace for example, I would ssh into my instance, start tmux or screen, and run jupyter notebook, so that the notebook could keep running when training took a long time and the ssh connection broke. If the connection was lost, I could just reattach the screen session, press Enter a couple of times in the terminal where jupyter notebook was running, and the training progress bar would start to move again in the browser.

How does one do the above with GCP? In following this guide:
https://course.fast.ai/start_gcp.html, nothing like screen/tmux is involved, so what can one do if the ssh connection is lost and the training progress bar stops moving?

If your connection drops, simply restart the ssh session with the same command. That is what I normally do.

gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080

You don’t need to manually run jupyter notebook because the notebook server is already running in the VM.
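If you want to confirm the server really is still alive after reconnecting, the Deep Learning VM images run Jupyter as a systemd service; the service name here is what I’ve seen on those images, so treat it as an assumption:

sudo systemctl status jupyter   # on the VM: check the notebook server's status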


Thanks. If the notebook is still unresponsive after executing this command, clicking the red button near the top right that says ‘Not Connected’ might help.

Getting the above message while nothing seems to happen in the notebook. Does it mean that things are actually still running, just that the results can’t be seen in the browser because the server is “temporarily” not sending any output? How long is “temporary” (it’s been several hours!)? The ssh session is open in this case.

I’ve been trying to set up the older 0.7.0 version of fastai in one of my conda environments, to keep it separate from the newer version and to test some of the older notebooks so I can understand the logic more thoroughly. But my Jupyter notebook only runs directly from the base environment. I’m using the deep learning PyTorch image with CUDA 10, which by default uses fastai 1.0. Is there a way to run a notebook in another environment, instead of being forced either to rebuild the environments to switch between versions of fastai or to run all the notebooks in the base conda environment?
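One way to do this, sketched below, is to register the old environment as an extra Jupyter kernel, so the single notebook server in base can run notebooks against either install. The environment name and Python version are assumptions, and fastai 0.7 has its own dependency constraints, so expect some pinning:

conda create -n fastai-0.7 python=3.6            # separate env for the old version
conda activate fastai-0.7
pip install fastai==0.7.0 ipykernel              # old fastai plus the kernel package
python -m ipykernel install --user --name fastai-0.7 --display-name "fastai 0.7"

After restarting the notebook page, “fastai 0.7” should appear under Kernel > Change kernel.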

Hi. Were you able to resolve this issue?

Hi! Thanks for the great course.

I have it running on GCP, but I’m having some issues updating via git.

cd tutorials/fastai/course-v3
git pull
remote: Enumerating objects: 33, done.
remote: Counting objects: 100% (33/33), done.
error: insufficient permission for adding an object to repository
fatal: failed to write object
fatal: unpack-objects failed

Any clues as to why the permissions are not correct for the logged-in user?
I’m using the terminal from within Jupyter in the browser on GCP, not a local terminal setup on Windows.

I can’t use sudo as it asks for a password for jupyter@fastaiv3-vm, but there’s no password given in the VM details that I can see.

Edit: Never mind. It works properly if I use a terminal opened from within the VM cloud console via SSH, using my Google account. Best to avoid the terminal inside Jupyter, I guess.

solved the issue for me (using GCP). Because the problem threw an error only at the data-download stage, at first I mistakenly thought there was something wrong with the download itself.

Anyone know how to get ImageCleaner(ds, idxs, path) to work in Lesson 2?
It just shows the kernel as busy and nothing else happens. The latest git pull and fastai update completed OK.