Hello, here is the command I used to create a Google Cloud instance:
gcloud compute instances create "fastai" \
  --zone="us-west2-b" \
  --image-family="pytorch-latest-gpu" \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-p4,count=1" \
  --machine-type="n1-highmem-8" \
  --boot-disk-size=200GB \
  --metadata="install-nvidia-driver=True" \
  --preemptible
I got the following warning message, which I don’t think I should ignore.
WARNING: Some requests generated warnings:
- Disk size: ‘200 GB’ is larger than image size: ‘30 GB’. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
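In case it helps, here is a quick way to check whether the root partition actually picked up the full 200 GB (recent Deep Learning VM images usually auto-resize on first boot, in which case the warning is harmless). The device name `/dev/sda` and partition number `1` are assumptions; check the `lsblk` output before running the resize commands:

```shell
# Inspect the partition layout and the mounted root filesystem.
lsblk        # block devices and partition sizes
df -h /      # size of the root filesystem as mounted

# Only if the root filesystem is still ~30 GB: grow the partition
# and the filesystem to fill the disk. Device names are assumptions;
# verify them against the lsblk output first.
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
```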
The reason I think this should not be ignored is that when I run the lesson 1 notebook twice in a row, I run out of memory.
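For what it's worth, "out of memory" could mean GPU memory, system RAM, or disk space, and each has a different fix. These commands, run on the instance while the notebook is executing, should show which one is actually exhausted (`nvidia-smi` assumes the NVIDIA driver installed correctly, which the `install-nvidia-driver=True` metadata requests):

```shell
nvidia-smi   # GPU memory in use, per process
free -h      # system RAM and swap usage
df -h /      # free space on the root filesystem
```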
Has anyone run into this issue too? What can I do to avoid running out of memory? This also happened to me on my AWS instance. I think I am missing something really obvious. Thank you for your help.