type “cd tutorials”, then “cd fastai”, then “cd course-v3”,
then try “git pull”.
If that doesn’t work, type “git stash” and then try “git pull” again.
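Put together, the whole sequence looks like this (a minimal sketch; the tutorials/fastai/course-v3 path is assumed from the steps above, and the || fallback just automates the “if that doesn’t work” step):

# go to the course repo under the home directory
cd ~/tutorials/fastai/course-v3
# pull the latest notebooks; if local edits block it, stash them and retry
git pull || { git stash && git pull; }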
Thank you for helping us
Hi all, I’m using fastai for the Google Inclusive Images Challenge on Kaggle. The training dataset (1,743,042 training images) is very large and I’ve been running into the CUDA Out of Memory issue.
I’m using an instance on GCP with the following specs:
custom (32 vCPUs, 310 GB memory)
Intel Ivy Bridge
4 x NVIDIA Tesla V100
Any suggestions, please? I have reduced bs but am still getting the same Out of Memory error:
data = ImageDataBunch.from_csv(path, folder=path_img, ds_tfms=get_transforms(), suffix='.jpg', size=299, bs=8)
try with sudo:
sudo /opt/anaconda3/bin/conda install ...
You probably just want one GPU, unless you already know how to do multi-GPU training.
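If you want to make sure the notebook only sees a single GPU, one common approach (my suggestion, not something from the course docs) is to set CUDA_VISIBLE_DEVICES before starting Jupyter:

# expose only the first V100 to PyTorch; the other three stay hidden
export CUDA_VISIBLE_DEVICES=0
jupyter notebook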
My bad, I should have read the comment properly. I followed the instructions on http://course-v3.fast.ai/update_gcp.html and it worked.
Hello,
After getting the new update using “sudo /opt/anaconda3/bin/conda update fastai”, when I try to use download_images it says it is not defined. Does that mean that my version was not updated?
I checked the fastai version.py file and it says version = “1.0.12” .
PS: When I do sudo /opt/anaconda3/bin/conda update fastai it says all packages are already installed.
I have filled out the form @czechcheck. Thanks for this amazing opportunity.
Are you using the gcloud command from your local machine or your instance? I got this same message when I mistakenly typed it into my instance terminal instead of my local machine.
Thank you! Fellows here are so nice!
Standard SCP works for me with this format (note jupyter as the username):
scp -i ~/.ssh/google_compute_engine ~/.kaggle/kaggle.json jupyter@YOUREXTERNALIPADDRESS:~/.kaggle/
You can find your instance’s external IP address on the page you use to stop it. Similar to AWS.
To reverse the transfer direction, swap the source and destination; refer to this link.
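For example, pulling kaggle.json back down from the instance would look like this (same key and username as above, with the local destination swapped in):

scp -i ~/.ssh/google_compute_engine jupyter@YOUREXTERNALIPADDRESS:~/.kaggle/kaggle.json ~/.kaggle/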
Solved this issue. If you are facing the same problem where the library is not getting updated to “1.0.14” even after following the steps on http://course-v3.fast.ai/update_gcp.html, please try replacing:
sudo /opt/anaconda3/bin/conda update fastai
with
sudo /opt/anaconda3/bin/conda install -c fastai fastai
It worked for me. Could an admin please check this?
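To confirm which version actually ended up installed, you can ask the same environment conda installs into (a quick sanity check, assuming /opt/anaconda3 is your install path as in the commands above):

# print the installed fastai version from the conda environment’s Python
/opt/anaconda3/bin/python -c "import fastai; print(fastai.__version__)"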
Thanks for catching that. I was on the instance.
Thanks Mauro !
Here’s what I did to solve my issue.
I start my Google Cloud instance, run gcloud compute ssh … in an Ubuntu terminal, and open Jupyter Notebook; two folders (course-v3 and tutorials) appear.
I cd into course-v3 and do ‘git pull’, and the course repo is then updated.
So my steps are ‘cd course-v3’, then ‘git pull’.
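In copy-pasteable form (run from the home directory on the instance):

cd course-v3   # the course repo that appears in the home directory
git pull       # fetch and merge the latest course updates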
Fixed in the docs now - thanks.
Many thanks for outlining this option - worked perfectly! The key advantage is that no SSH is required on the local machine, hence no need to create a terminal using Cygwin for Windows 7, etc.
gcloud compute scp with one file is working fine,
but gcloud compute scp -r is not working for a folder.
Any solution?
I tar’ed the folder first before doing the scp.
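Roughly like this (my-instance and the data folder name are placeholders; run the tar on the instance, then copy from your local machine):

# on the instance: bundle the folder into a single archive
tar -czf data.tar.gz data/
# on the local machine: copy the single file, which plain scp handles fine
gcloud compute scp my-instance:~/data.tar.gz .
# unpack locally
tar -xzf data.tar.gz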
OK, a workaround I found for this is to use copy-files in place of scp. It works, but it shows a warning that it is deprecated.
I made an account recently. If you refer me, will the extra $500 get added to my account, or do I need to make an account from scratch for this?