Platform: GCP ✅

Hi,

I’m having issues connecting to Jupyter notebook after successfully connecting to my GCP instance. I’m following the ‘Returning to GCP’ document at https://course.fast.ai/update_gcp.html. I ran gcloud compute ssh --zone=us-west1-b jupyter@my-fastai-instance-p100 -- -L 8080:localhost:8080. I was then able to successfully update the course repo and fastai library. But when I try to open http://localhost:8080/tree in a browser, I get Connection refused. I’m on a Mac. Could someone point out what I’m missing?
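(For anyone hitting the same error: if the gcloud ssh command itself succeeded, the tunnel is up, so one common culprit is simply that no Jupyter server is listening on port 8080 on the instance. A quick check, assuming the stock fastai image, is to run inside the SSH session:

    jupyter notebook list                       # shows any running servers and their ports
    jupyter notebook --no-browser --port=8080   # start one on the tunnelled port if none is listed

and then reload http://localhost:8080/tree.)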

Hi. Like @Shubhajit I’m running into SSH problems with GCP.
When I run the nn-vietnamese.ipynb notebook from Rachel’s NLP course on my GCP instance, everything goes fine until the training of the learner: after some time (different each time), the connection to my instance is dropped by GCP (the instance keeps running) and I get the following error message in my Ubuntu terminal (I’m on Windows 10):

Connection reset by xx.xx.xxx.xxx port 22
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].

I did a lot of web searching and tried the solutions from Troubleshooting SSH and this post on I’m Coder, but without success.

Also, I turned off the Preemptible option.

Any ideas? How can I train an NLP model (which takes time) if GCP keeps dropping the SSH connection?

@pierreguillou I found a temporary workaround here: changing the network (mobile hotspot -> public wifi) helped most of the time. I don’t know why this is the case, but it’s working.
At first I thought my ISP was blocking port 22, but later discovered it wasn’t.
This is frustrating!
I would really appreciate it if someone from the GCP team would look at it.

Hi, I tried setting up GCP as per the tutorial (I used Google Cloud Shell instead of the Ubuntu terminal).

After I ran:

gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080

I got the following error:

ssh: connect to host port 22: Connection refused
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].

How can I go about fixing this issue?
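(One thing worth checking first, since this error often shows up right after creating the instance: the VM may still be booting, or the shell variables from the tutorial may not be set in the new Cloud Shell session. A quick, hedged check:

    echo $ZONE $INSTANCE_NAME     # both should print non-empty values in this shell
    gcloud compute instances describe $INSTANCE_NAME --zone=$ZONE --format='value(status)'

If the status is RUNNING, waiting a minute or two for sshd to come up and then retrying may be all that’s needed.)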

1 Like

Hello @Shubhajit. Thank you for your answer, but in my case I use my Internet connection at home (not a cellular connection or public wifi). If your experience is right, it means the loss of the SSH connection comes from the ISP, not from GCP. If that’s the case, it would mean there is no way to use a cloud GPU to train large DL models such as a language model, at least in the conventional way (i.e., from a home computer):

gcloud compute ssh --zone=ZONE jupyter@MY-FASTAI-INSTANCE -- -L 8080:localhost:8080

If true, it would be better to launch the connection to the GCP instance not from my home computer’s terminal but from an online terminal (to avoid having my ISP sitting between me and my instance).

Does this idea make sense? Is it possible? Could Cloud Shell on GCP allow me to do that?

1 Like

The answer from Jeremy: launch your notebook in a tmux session on your cloud GPU platform.
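In practice that looks something like this (a minimal sketch; the session name is arbitrary):

    # on the GCP instance, after ssh-ing in
    tmux new -s fastai                          # start a named tmux session
    jupyter notebook --no-browser --port=8080   # or kick off the training script here
    # if the SSH connection drops, the session keeps running on the instance;
    # after reconnecting, pick it up again with:
    tmux attach -t fastai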

I’m not sure if this is the right place or if I should open a new thread. Anyway, the pricing section in

https://course.fast.ai/start_gcp.html

is either wrong or outdated (meaning that GCP has become vastly more expensive and the guide needs to be updated). As a matter of fact, the price of the standard compute option is estimated there as follows (80 hours of homework, plus 2 hours of working through each lesson, plus 2 months of storage):

  • Standard Compute + Storage: (80+27)*$0.38 + $9.6*2 = $54.92

Using the official GCP pricing calculator, we get instead:

[screenshot: GCP pricing calculator estimate, cost per month]

Since the course duration is 2 months (in the above scenario), the total cost will be $112.9. The main error in the https://course.fast.ai/start_gcp.html estimate is the cost of storage per month, which is $40.8, not $9.6.

This is the command I used to build the instance:

gcloud compute instances create $INSTANCE_NAME \
    --zone=$ZONE \
    --image-family=$IMAGE_FAMILY \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --accelerator="type=nvidia-tesla-p4,count=1" \
    --machine-type=$INSTANCE_TYPE \
    --boot-disk-size=200GB \
    --metadata="install-nvidia-driver=True" \
    --preemptible

and the pricing I got was consistent with that estimate ($8.9 for just 3 days of storage).

I think that in the fastai GCP guide the ‘Standard Provisioned Space’ option was used, whereas in your calculator estimate you chose ‘SSD Provisioned Space’, which is more expensive. @AndreaPi Here is the estimate for the standard option for the region you picked:
[screenshot: GCP pricing calculator estimate, standard provisioned space]
:slightly_smiling_face: (Note: I think I did not pick the same estimated usage time, so my price is a bit higher than yours because of the slightly higher cost for the instance, which was around 17 USD.)

Did your command activate the SSD or the standard disk option?
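(A quick way to settle that from the command line, assuming the boot disk kept the gcloud default name, i.e. the same name as the instance:

    gcloud compute disks describe $INSTANCE_NAME --zone=$ZONE --format='value(type)'

The output URL ends in pd-standard for a standard persistent disk and pd-ssd for an SSD one.)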

Hi Elena,

Thanks for the answer! You make an interesting point; however, I used the command I posted above, which I copy here again:

    gcloud compute instances create $INSTANCE_NAME \
        --zone=$ZONE \
        --image-family=$IMAGE_FAMILY \
        --image-project=deeplearning-platform-release \
        --maintenance-policy=TERMINATE \
        --accelerator="type=nvidia-tesla-p4,count=1" \
        --machine-type=$INSTANCE_TYPE \
        --boot-disk-size=200GB \
        --metadata="install-nvidia-driver=True" \
        --preemptible

This is exactly the same command as given in the fastai docs for GCP, so there are only two possibilities:

  • either the fastai docs are wrong, and the command above activates an expensive SSD disk
  • or the command is correct, and GCP is cheating, because it’s charging me the rate of an SSD persistent disk even though I created a standard persistent disk.

I’m not sure what to think. Which command did you use, and how much does it cost you per day when you don’t use the VM, i.e., what’s the storage cost you’re incurring?

Hi AndreaPi,
I just set up my fastai project on GCP and used the exact same commands as stated in the fastai docs for GCP. When I look at ‘Compute Engine’ > ‘Disks’ for my project, it says ‘Type: Standard persistent disk’ with a size of 200 GB. There aren’t any costs shown yet, but I will tell you as soon as I see them in my account.

What does the disk type for your instance say? Is there something like ‘SSD’ mentioned?

1 Like

Hi Elena,

I think I found the issue: if I’m correct, then the GCP fastai docs are essentially correct and no modification is needed.

  1. My boot disk type is “Standard persistent disk”, size 200 GB.
  2. It appears that the monthly cost is deducted upfront from your free credits as soon as you create the instance, rather than gradually each day (so, by deleting and recreating the instance, I paid twice :disappointed:), which is why I overestimated the cost. The reason for the misunderstanding is connected to this:

The costs are actually already there :slightly_smiling_face: but you don’t see them because of the $300 free credits. To see the actual costs, go to Billing > Reports and uncheck “one time credits” in the bottom right corner. You’ll see how much of your free credits you have actually consumed so far. If you can do this check and let me know what you see, that would be great.

Hi AndreaPi,

Nice, thanks for that tip! Now I can see the costs: they are €0.76 right now, but I guess I have not used the instance for more than 2 hours or so up until now :smile:

I also found that the API documentation says the default disk type is ‘pd-standard’, so the commands shown in the fastai course docs should indeed create such a disk :slight_smile:
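(If anyone prefers not to rely on that default, the create command also takes an explicit flag; a trimmed-down sketch, using the same variables as the course docs:

    gcloud compute instances create $INSTANCE_NAME \
        --zone=$ZONE \
        --image-family=$IMAGE_FAMILY \
        --image-project=deeplearning-platform-release \
        --machine-type=$INSTANCE_TYPE \
        --boot-disk-size=200GB \
        --boot-disk-type=pd-standard   # pin the standard (non-SSD) disk type explicitly

plus the accelerator/preemptible flags from the original command.)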

Right now I am trying to detach the boot disk and create a second, CPU-only instance with that disk, so that I can use the same disk for both instances and don’t get charged for GPU time when I don’t need it. This strategy was mentioned in some posts within this thread, so I hope it will work :smiley:
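(For reference, a rough, untested sketch of the gcloud commands involved; the names are placeholders and the boot disk is assumed to keep the gcloud default name, i.e. the instance name:

    # stop the GPU instance so its boot disk can be detached
    gcloud compute instances stop my-fastai-instance --zone=$ZONE
    gcloud compute instances detach-disk my-fastai-instance --disk=my-fastai-instance --zone=$ZONE
    # create a CPU-only instance and hand it the existing disk as its boot disk
    gcloud compute instances create my-fastai-cpu \
        --zone=$ZONE \
        --machine-type=n1-standard-4 \
        --disk=name=my-fastai-instance,boot=yes,auto-delete=no
)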

That said, training seemed to be quite slow yesterday when I was experimenting with the course notebooks :thinking: I think I will have to check whether the GPU is actually being used.

1 Like

@noskill That’s a neat procedure! One question: I just created a CPU+GPU instance as shown in the fastai course docs, with a 200GB boot disk. I guess it is not possible to detach this boot disk and attach it to a CPU-only instance like you mentioned, because the disk’s source image is ‘pytorch-latest-gpu’ (and for a CPU-only instance the boot disk image probably has to be ‘pytorch-latest-cpu’)?

Also, you wrote about an ‘external disk’: do your CPU and GPU instances each have a small boot disk (10GB or so) and share a larger external disk which is attached and detached as needed?

1 Like

Were you able to do this? I would love a guide on this. I will try on my own later too and update this thread.

So, I had the SSH passphrase issue.
This link solved the problem. It took a few hours of trying various solutions to get to this final one.
Feel free to ask if you have any doubts.

The last couple of weeks, my instances have been preempted with such regularity that they are almost useless. Has anyone else experienced this?

3 Likes

Well, not exactly useless, but they have indeed been preempted quite frequently. I’ve been able to train models all the same, though. It probably helps that I recently started, so I’m still at lessons 1-2, which are probably less compute-intensive than the others :slightly_smiling_face: Anyway, I found that sometimes changing zone helps: us-west2-b is maybe the GCP zone with the highest demand right now?! You could follow my suggestions to find a zone which still has P4 GPUs but not as much demand as us-west2-b.

Alternatively, you could create a standard (non-preemptible) instance, but of course that would be considerably more expensive.

4 Likes

Hi, I’m just getting started.
I tried to follow the tutorial instructions, but when I run the command to create an instance, I get an error saying:

ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - The resource 'projects/{...}/zones/us-west2-b/acceleratorTypes/nvidia-tesla-p100' was not found

I referred to Google’s GPUs on Compute Engine docs to find a zone providing nvidia-tesla-p100, changed the zone in the command, and it worked. Maybe the tutorial should be updated?
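(For anyone else hitting this, the zones that actually offer a given GPU type can also be listed directly from gcloud, e.g.:

    # list every zone offering the P100; swap the name for other accelerator types
    gcloud compute accelerator-types list --filter="name=nvidia-tesla-p100"

and then pass one of those zones to the create command.)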

1 Like

I’d like to bounce this idea off of y’all about managing the GCP environment.

I’ve created a preemptible fastai instance based on the tutorials and have been using it for the fastai notebooks and Kaggle competitions. Lately, like other posters have mentioned, the connectivity for preemptible instances has been frustrating.

Question: how easy is it to use the same persistent drive with both a preemptible and standard (persistent) instance?

Would I be able to point both instances at the same disk (maybe with an image?), or will I need to ‘detach’ this disk and ‘reattach’ it to whichever instance I am using at the moment?

Does anyone else have a workflow like this? (Different instances used with the same disk?)

Many thanks in advance
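(On the ‘maybe with an image?’ part: as far as I know a persistent disk can only be attached read-write to one instance at a time, so the two usual routes are the detach/re-attach dance sketched earlier in the thread, or baking the disk into an image and creating each instance from it. A rough, untested sketch of the image route, with placeholder names:

    gcloud compute instances stop my-fastai-instance --zone=$ZONE
    gcloud compute images create my-fastai-image \
        --source-disk=my-fastai-instance --source-disk-zone=$ZONE
    # then pass --image=my-fastai-image instead of --image-family=... when creating
    # the preemptible and the standard instance

Note that with the image route each instance gets its own copy of the data, so changes made on one won’t show up on the other.)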

I’m a relative noob to Linux; I have done some limited fiddling in a prior work setting, but nothing serious. In order to save GCP billing time, I’ve cloned the fastai v3 files and libraries to my local laptop to do practice operations which don’t involve training, such as downloading images and other stuff.

So I’ve used the lesson2-download notebook to download a bunch of images from Google and arrange them into folders as prescribed, but on my laptop. I then manually scrubbed the images to remove the obviously irrelevant and outlier ones. I even ran the training on my local machine, which was very slow as expected, but that is not my point. What I attempted to do was use scp to copy the image directories from my laptop into my GCP instance, but for the life of me I can’t figure out what I’m doing wrong. I tried to follow a number of guides on the web, but to no avail.

After logging into my GCP server, I entered the following command:
$ scp -r HP-ENVY@[2600:1700:6470:15a0:c8b7:4e4d:ba18:348d]:"C:\Users\redex\OneDrive\Documents\Education\Fastai_Course_v3\nbs\dl1\data\autos" /home/jupyter/tutorials/fastai/course-v3/nbs/dl1/data

It returns the following error:
ssh: connect to host 2600:1700:6470:15a0:c8b7:4e4d:ba18:348d port 22: Network is unreachable

I’m using the -r option since I’m copying a directory; HP-ENVY is my laptop name, and the IPv6 address is from my wifi connection, which sits behind my home gateway. The path is in quotes since my laptop runs Windows 10. I’m assuming that my laptop is the remote device relative to the server terminal, hence the IP address in the command.

I realize this forum thread is not necessarily about Linux, but I figured it’s at least related to GCP. Advice from any Linux experts would be much appreciated.

Note: I eventually just replicated the image download process in GCP, but it sure would be good to know how to make scp work in case I need it in the future.
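(For the record, the usual fix here is to run the copy from the laptop rather than from the instance, since a home machine behind a router/NAT generally isn’t reachable on port 22 from the outside. A hedged sketch, run from a terminal on the laptop where the Cloud SDK is installed, with the instance name as a placeholder:

    gcloud compute scp --recurse "C:\Users\redex\OneDrive\Documents\Education\Fastai_Course_v3\nbs\dl1\data\autos" jupyter@MY-FASTAI-INSTANCE:/home/jupyter/tutorials/fastai/course-v3/nbs/dl1/data --zone=$ZONE

This pushes the folder up to the instance using the same gcloud-managed SSH keys as the notebook tunnel.)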