Thank you @Harvey. I was trying to use my free credits. It seems this condition was added recently, because for Part 1 2019 I was able to run a preemptible instance with the free credits.
@salmaniai @steef if you can't set it up from the CLI, you can also set up VM instances through the Google Cloud console; see below for more info.
After reading some GCP docs, I realized that N2D machines are in beta: they are no longer available in the west zone, and they no longer support the P100 GPU.
I got the following setup to work. It has a little more memory than the recommended setup but the same GPU as recommended.
@jeremy FYI, N2D machines no longer support the west zone or the P100 GPU, so you might want to update your documentation. @rachel FYI too.
export IMAGE_FAMILY="pytorch-latest-gpu"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-fastai-instance"
export INSTANCE_TYPE="n1-highmem-16" # It seems like the N2D machines are in beta and are no longer available in all zones + not working with p100 anymore.
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-p100,count=1" \
--machine-type=$INSTANCE_TYPE \
--boot-disk-size=200GB \
--metadata="install-nvidia-driver=True"
# --preemptible omitted on purpose; preemptible instances gave me issues before.
# (Note: no trailing backslash on the --metadata line, or the comment would be pulled into the command.)
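If you want to double-check availability before creating the instance, gcloud can list which zones actually offer the P100. A quick sketch (assumes the Cloud SDK is installed and authenticated):

```shell
# List the zones where the Tesla P100 accelerator is currently offered,
# so you can pick a valid value for $ZONE before running the create command.
gcloud compute accelerator-types list \
  --filter="name=nvidia-tesla-p100" \
  --format="table(zone, name)"
```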
@salmaniai I ran into the same issue that you described, and it is also resolved by this solution.
I think you made a mistake copying INSTANCE_TYPE.
Try
export INSTANCE_TYPE="n2d-highmem-8"
instead. It should work fine. Also, I suggest not using us-west1-b as your zone: it is quite busy, and your instance frequently gets preempted there. I suggest europe-west1-b or something else.
All the best
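To see which zones currently offer a given machine type before you pick a zone, something like this should work (a sketch; the machine type is the one from this thread):

```shell
# List the zones that currently offer the n2d-highmem-8 machine type,
# along with its vCPU count and memory.
gcloud compute machine-types list \
  --filter="name=n2d-highmem-8" \
  --format="table(zone, name, guestCpus, memoryMb)"
```

If your preferred zone is missing from the output, pick one that appears and set $ZONE accordingly.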
I need some help related to the setup. I tried to increase my GPU quota to 1. I followed all the steps mentioned in the server setup for Google Cloud and made a request to increase the quota. I got a confirmation email saying that the request was successfully received, but within a few seconds I got another email saying:
Unfortunately, we are unable to grant you additional quota at this time. If
this is a new project please wait 48h until you resubmit the request or
until your Billing account has additional history.
Your Sales Rep is a good Escalation Path for these requests, and we highly
recommend you to reach out to them.
My project is new and I have waited for weeks, yet the quota didn't change. I have tried many times, and every single time I get the same email within a few seconds of the request confirmation email. Can someone help me resolve this?
Note: I have upgraded my account, and my project is linked to a billing account.
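For what it's worth, you can also check your current GPU quota from the CLI while you wait on the request (a sketch; the region name is just an example):

```shell
# Describe a region and filter its quota list for GPU entries.
# A GPU metric with "limit: 0.0" means no GPUs have been granted there yet.
gcloud compute regions describe us-west1 | grep -i gpu
```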
Dear all, I think there are n2d-highmem-8 machines in some zones and P100 GPUs in other zones, and I believe there used to be n2d-highmem-8 machines with a P100 GPU in US-West-?? somewhere before. So I would like to hear what other budget machine/GPU combinations are available. Can anyone share their success story and info about the cost? BTW, I am using Colab as a free option for fastai2.
Thanks @duerrseb for the post "FastAI2 notebooks in Kaggle", updated March 22: https://github.com/seduerr91/fastAI_v4/blob/master/fastai2%20on%20colab.md
Hi @Vineeth
I'm not sure of the exact issue, but it may be that you submitted more than one request for quota allocation. In that case, go to your GCP console and check how many quota requests you have; you should have 1 (most probably, or at least).
If you don't have any, try resubmitting a request, and you should be good to go.
Cheers, stay safe
You can follow the rest as mentioned in fastai's installation guide.
But I think you won't be able to do any computation on CPUs because of that. That's okay; we do most of our computation on GPUs anyway. For example, in the fastai course v3 (Part 1) there's only one place in one of the notebooks where inference was done on CPUs, as far as I remember, and you can easily carry out that process on GPUs instead. It shouldn't create a problem.
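Once the instance is up, connecting is the same as in the fastai guide. Roughly (a sketch; the instance and zone names are the ones used earlier in this thread, and the `jupyter` user and port 8080 follow the deep learning image defaults):

```shell
# SSH into the instance and forward the notebook server's port to localhost,
# so Jupyter is reachable at http://localhost:8080 in your browser.
gcloud compute ssh --zone=us-west1-b jupyter@my-fastai-instance -- -L 8080:localhost:8080
```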