Platform: GCP ✅

Hi AndreaPi,
I just set my fastai project at GCP up and I used the exact same commands stated in the fastai docs for GCP. When I look at my ‘Compute Engine’/‘Disks’ for my project, it says ‘Type: Standard persistent disk’ with the size of 200 GB. There aren’t any costs showed yet, but I will tell you as soon as I see them in my account.

What does the disk type for your instance say? Is there something like ‘SSD’ mentioned?
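If the console view is awkward to find, I believe the Cloud SDK shows the same information; on my side this listed the disk with a TYPE column saying pd-standard:

$ gcloud compute disks list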


Hi Elena,

I think I found the issue: if I’m correct, then the GCP fastai docs are essentially correct and no modification is needed.

  1. My boot disk type is “Standard persistent disk”, size 200 GB
  2. It appears that the monthly cost is deducted upfront from your free credits as soon as you create the instance, rather than gradually each day (so, by deleting and recreating the instance, I paid twice :disappointed:), which is why I overestimated the cost. The reason for the misunderstanding is connected to this:

The costs are actually already there :slightly_smiling_face: but you don’t see them because of the $300 free credits. To show the actual costs, go to Billing > Reports and uncheck “one time credits” in the bottom right corner. You’ll see how much of your free credits you have actually consumed so far. If you can do this check and let me know what you see, that would be great.

Hi AndreaPi,

Nice, thanks for that tip! Now I can see the costs: they are €0.76 right now, but I guess I have not used the instance for more than 2 hours or so up until now :smile:

I also found that the API documentation says the default disk type is ‘pd-standard’, so the commands shown in the fastai course docs should indeed create such a disk :slight_smile:
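If you’d rather be explicit than rely on the default, I believe the create command also accepts the disk type directly. This is just the tutorial’s command from memory with the extra flag added, so double-check the other flags against the docs before using it:

$ gcloud compute instances create $INSTANCE_NAME \
        --zone=$ZONE \
        --image-family=$IMAGE_FAMILY \
        --image-project=deeplearning-platform-release \
        --machine-type=$INSTANCE_TYPE \
        --boot-disk-size=200GB \
        --boot-disk-type=pd-standard \
        --accelerator="type=nvidia-tesla-p4,count=1" \
        --metadata="install-nvidia-driver=True" \
        --preemptible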

Right now I am trying to detach the boot disk and create a second CPU-only instance with it, so that I can use the same disk for both instances and not get charged for GPU time when I don’t need it. This strategy was mentioned in some posts within this thread, so I hope it will work :smiley: (see the sketch below)
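In case it helps anyone else, this is roughly what I am about to try. It is untested; I’m assuming the boot disk inherited the instance’s name (which I believe is the GCP default), and ‘my-cpu-instance’ is just a placeholder of mine:

$ # stop the GPU instance so its boot disk can be detached
$ gcloud compute instances stop $INSTANCE_NAME --zone=$ZONE
$ gcloud compute instances detach-disk $INSTANCE_NAME --disk=$INSTANCE_NAME --zone=$ZONE
$ # create a CPU-only instance that boots from the same disk
$ gcloud compute instances create my-cpu-instance \
        --zone=$ZONE \
        --machine-type=n1-standard-4 \
        --disk=name=$INSTANCE_NAME,boot=yes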

However, training seemed quite slow yesterday when I was experimenting with the course notebooks :thinking: I think I will have to check whether the GPU is actually being used.
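Two quick checks I plan to run from the instance’s terminal (assuming the standard fastai/PyTorch image, which has the NVIDIA tools and PyTorch installed):

$ nvidia-smi        # should list the GPU and any processes using it
$ python -c "import torch; print(torch.cuda.is_available())"    # should print True if PyTorch sees the GPU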


@noskill That’s a neat procedure! One question: I just created a CPU+GPU instance as shown in the fastai course docs, with a 200 GB boot disk. I guess it is not possible to detach this boot disk and attach it to a CPU-only instance like you mentioned, because the disk’s source image is ‘pytorch-latest-gpu’ (and for a CPU-only instance the boot disk image probably has to be ‘pytorch-latest-cpu’)?

Also, you mention an ‘external disk’: do your CPU and GPU instances have a small boot disk (10 GB or so) and share a larger external disk which is attached and detached as needed?


Were you able to do this? I would love a guide on this. I will try it on my own later too and update this thread.

So, I had the SSH passphrase issue.
This link solved the problem. It took a few hours of trying various solutions to arrive at this final one.
Feel free to ask if you have any doubts.

For the last couple of weeks, my instances have been preempted with such regularity that they are almost useless. Has anyone else experienced this?


Well, not exactly useless, but they have indeed been preempted quite frequently. I’ve been able to train models all the same, though. It probably helps that I recently started, so I’m still on lessons 1-2, which are probably less compute-intensive than the others :slightly_smiling_face: Anyway, I found that sometimes changing zones helps: us-west2-b may well be the GCP zone with the highest demand right now. You could follow my suggestions to find a zone which still has P4 GPUs, but not as much demand as us-west2-b.
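For example, I believe this lists which zones currently offer P4s, so you can pick a less crowded one:

$ gcloud compute accelerator-types list --filter="name=nvidia-tesla-p4"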

Alternatively, you could create a standard (non-preemptible) instance, but of course that would be considerably more expensive.


Hi, I’m just getting started.
I tried to follow the tutorial instructions, but when I run the command to create an instance, I get an error saying:

ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - The resource 'projects/{...}/zones/us-west2-b/acceleratorTypes/nvidia-tesla-p100' was not found

I referred to Google’s ‘GPUs on Compute Engine’ docs to find a zone providing nvidia-tesla-p100, changed the zone in the command, and it worked. Maybe the tutorial should be updated?


I’d like to bounce an idea off of y’all about managing the GCP environment.

I’ve created a preemptible fastai instance based on the tutorials, and have been using it for the fastai notebooks and kaggle competitions. Lately, like other posters have mentioned, the connectivity for the preemptible instances has been frustrating.

Question: how easy is it to use the same persistent disk with both a preemptible and a standard (non-preemptible) instance?

Would I be able to point both instances at the same disk (maybe with an image?), or will I need to ‘detach’ this disk and ‘reattach’ it to whichever instance I am using at the moment?

Does anyone else have a workflow like this? (Different instances used with the same disk?)

Many thanks in advance

I’m a relative noob to Linux; I did some limited fiddling in a prior work setting, but nothing serious. In order to save GCP billing time, I’ve cloned the fastai v3 files and libraries to my local laptop to practice operations which don’t involve training, such as image downloading and similar tasks.

So I’ve used the lesson2-download notebook to download a bunch of images from Google and arrange them into folders as prescribed, but on my laptop. I then manually scrubbed the images to remove the obviously irrelevant and outlier ones. I even ran the training on my local machine, which was very slow as expected, but that is not my point. What I attempted to do was use scp to copy the image directories from my laptop into my GCP instance, but for the life of me I can’t figure out what I’m doing wrong. I tried to follow a number of guides on the web, but to no avail.

After logging into my GCP server, I entered the following command:
$ scp -r HP-ENVY@[2600:1700:6470:15a0:c8b7:4e4d:ba18:348d]:"C:\Users\redex\OneDrive\Documents\Education\Fastai_Course_v3\nbs\dl1\data\autos" /home/jupyter/tutorials/fastai/course-v3/nbs/dl1/data

It returns the following error:
ssh: connect to host 2600:1700:6470:15a0:c8b7:4e4d:ba18:348d port 22: Network is unreachable

I’m using the -r option since I’m copying a directory; HP-ENVY is my laptop name; the IPv6 address is from my Wi-Fi connection, which sits behind my home gateway. The path is in quotes since my laptop runs Windows 10. I’m assuming that my laptop is the remote device relative to the server terminal, hence the IP address in the command.

I realize this forum thread is not really about Linux, but I figured this is at least related to GCP. Any Linux experts who can advise me on this would be much appreciated.

Note: I eventually just replicated the image download process in GCP, but it sure would be good to know how to make scp work in case I need it in the future.
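One thing I plan to try next time (untested, so treat it as a sketch): run the copy from the laptop instead, pushing the folder up to the instance, since the instance can’t reach a machine sitting behind a home router. With the Cloud SDK installed on Windows, something like this should do it (fill in your zone, and the instance name here is a placeholder for yours):

$ gcloud compute scp --recurse --zone=$ZONE "C:\Users\redex\OneDrive\Documents\Education\Fastai_Course_v3\nbs\dl1\data\autos" jupyter@my-fastai-instance:/home/jupyter/tutorials/fastai/course-v3/nbs/dl1/data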

I’m having difficulties working on GCP with a preemptible instance recently. I either cannot start the instance or get disconnected every 10 to 20 minutes.
I’ve been trying different regions (us-west, us-central, us-east, europe-west) but could not find a zone where I can work well.
What regions are you using?


I started with us-west, and then duplicated my instance on us-central after the us-west servers went down.

On central, I’ve been experiencing issues similar to the ones you mentioned. It’s been annoying me so much that I now keep a log of whenever my instance can’t start or is remotely shut down. Anecdotally, I’ve found that I’ll have 2-4 sessions of just a few minutes, and then a longer session that lasts a few hours.


Yeah, same here. I only got my setup sorted today, but it’s been pretty frustrating, as constantly being kicked has really hampered my ability to even complete the first notebook (I generally get down to the fine-tuning section, then get kicked and have to re-run everything again).

Thinking about moving to Kaggle - at least while I get my feet wet and get a study routine in place.

Loving the course so far, and the notebooks are awesome - so no issue with the fastai side at all! Thanks :wink:


You have to start your instance from inside cloud.google.com. Go to the Compute Engine > VM instances page, click Start, and make sure it’s running.

This error comes from trying to ssh into an instance that isn’t turned on.
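If you prefer the command line, the equivalent should be roughly this (variable names as in the fastai tutorial, so substitute your own):

$ gcloud compute instances start $INSTANCE_NAME --zone=$ZONE
$ gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080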


The instance was on, but I fixed this.
In my case, I believe my internet provider was blocking the port, and hence I got this error.
When I tried the same thing using another connection, it worked.


It seems the tutorial was updated and now shows us-west1-b for an nvidia-tesla-p4 GPU. However, I got the same error as you. According to the link you posted, the P4 is no longer available in that location, so the tutorial needs to be updated again.

I am getting this error on step 3, while closely following the GCP tutorial (https://course.fast.ai/start_gcp.html):

ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - The resource 'projects/…/zones/us-west1-b/acceleratorTypes/nvidia-tesla-p4' was not found

What happened?

@dries

You could try export ZONE="us-west2-b", because that worked for me.


Thanks! That did the trick.