Fully featured fastai setup on Google Cloud (starting from $0.2/hour)

FYI we recommend avoiding jupyterlab since we often come across incompatibilities, and also students won’t see the same UI that we show in the course.

Oh really. I’ll update it back to notebooks.

Hey, I tried the GCP default image today.
It defaults to JupyterLab.

I also had some issues with the progress bar previously. But after updating to the latest version of fastai everything worked fine.

Yeah, the GCP guide we wrote links to /tree to work around this.

Ah cool. I’ll update the Jupyter systemd script after the instance is created.
I’ve made some real progress with https://github.com/arunoda/fastai-shell
I think I can finish it by Monday.

@arunoda
How can I configure your fastai instance to run Jupyter notebooks on my localhost, the way the official fastai GCP guide does (as shown below)?

gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080

i.e. instead of using http://external-ip:8888 I want to use http://localhost:8080,
because it takes more than 30 minutes for the fastai kernel to initialize when I use http://external-ip:8888.

I tried the following:
gcloud compute ssh --zone=us-west1-b jupyter@fastai -- -L 8080:localhost:8080
or
gcloud compute ssh --zone=us-west1-b jupyter@fastai -- -L 8888:localhost:8080

but this doesn't work; it just opens PuTTY and I get the following prompt:
jupyter@fastai:~$ jupyter@fastai:~$

Not exactly sure what you are asking. For my method, I’m building a fastai node from scratch. That’s why it’s taking some time.
But that’s a one-time operation.

After that, you can start the instance in just a few seconds.

There’s no need to use the method suggested in the official guide, as port 8888 is always open via the firewall.
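
If you’d still rather browse via localhost, something like this should work (just a sketch: it assumes Jupyter is listening on port 8888 on the instance, and the zone and instance name are examples, not necessarily what fastai-shell uses):

# Forward local port 8888 to the instance's Jupyter port (assumed to be 8888),
# then keep this SSH session open and browse http://localhost:8888
gcloud compute ssh --zone=us-west1-b fastai -- -L 8888:localhost:8888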

I’m working on a much simpler tool to do this workflow using the official image, so we can cut down that 30-minute wait as well.
ETA: Early next week.

Follow the progress here: https://github.com/arunoda/fastai-shell

Basically, based on your blog, I have created a fastai VM instance.

When I access the notebooks using http://external-ip:8888 and try to run any notebook, let’s say lesson2-download.ipynb, the fastai kernel starts only after more than 40 minutes.

Then if I try to access a different notebook, the fastai kernel again takes more than 40 minutes to start, i.e. for each notebook it takes around 40 minutes.

Wow! That shouldn’t happen. I’ve followed Arunoda’s setup, too, but I’ve had different results. After I connect to the http://..., I see a Jupyter interface with a sidebar on the left with multiple options. The main portion of the screen on the right shows a tabbed interface where I can open multiple Jupyter NBs at once. It might take a minute to load an NB if it already contains quite a bit of code, but blank ones open right away.

Do you see the Jupyter interface? If so, is it taking 40 minutes to open the NB in the main part of the interface? Or something else?

That’s very weird. Not sure how that happens.

What you are looking at is JupyterLab. It’s the same as the notebook, but with a different interface.

Jeremy asked me to change it back. Will do it.

Indeed, I noticed it was JupyterLab. Also, I had followed your conversation w/ Jeremy wrt changing it back. Using it as-is hasn’t been a problem so far, though. We can rule it out as the culprit for 40 minute load times, for sure.

I can open any or all of the notebooks and see their complete content. But the problem is I can’t execute any cell, as the fastai kernel takes more than 40 minutes to initialize.

For the time being I have deleted the instance and will try again next week with arunoda’s updated script.

But anyway, great work arunoda.

Hi @arunoda… Nice work buddy. However, I am getting this error:
(gcloud.compute.instances.create) Could not fetch resource:
- Quota ‘GPUS_ALL_REGIONS’ exceeded. Limit: 0.0 globally.
Does it have something to do with increasing the GPU quota permission (as on AWS), or something else?

Also, please tell me how to change the instance type (like P100 to V100) and the boot disk size from 50GB (your default) to 200GB (as recommended by Jeremy).

This might be due to some project limitations.
Search this thread: https://forums.fast.ai/t/platform-gcp/27375
You can increase the GPU quota (usually it’s 1 for the whole account) and upgrade your account (you still won’t get charged until you spend your credits).
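
If it helps, you can check the current project-wide quota from the command line before requesting an increase (a standard gcloud command; the grep is just a quick way to pick out the GPUS_ALL_REGIONS entry):

# Show the GPUS_ALL_REGIONS quota (limit and usage) for the current project;
# if the limit is 0, request an increase via the GCP Console (IAM & Admin -> Quotas)
gcloud compute project-info describe | grep -B 1 -A 1 GPUS_ALL_REGIONS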

When you run fastai start, it’ll show a list of GPUs supported by the region you set it up in. Then you can select the GPU. (See this video: https://www.youtube.com/watch?v=ui_y60ZtE5c)

About the 200GB: Jeremy asked to use a standard disk, and 200GB is needed to get decent IOPS with standard disks.
But with 50GB (still the same cost), you’ll get much better IOPS. For learning purposes, 50GB is enough.
But you can change that in this script: https://github.com/arunoda/fastai-shell/blob/master/fastai.sh#L208
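
For reference, if you’d rather not edit the script, these are the standard gcloud flags that control the boot disk when creating an instance yourself (a rough sketch only; the instance name, zone, and image are placeholders, and the GPU/machine-type flags are left out, so this isn’t exactly what fastai-shell runs):

# Create an instance with a 200GB standard persistent boot disk
# (swap in your own name, zone, and image; add accelerator/machine-type flags as needed)
gcloud compute instances create fastai \
  --zone=us-west1-b \
  --image-family=pytorch-latest-gpu \
  --image-project=deeplearning-platform-release \
  --boot-disk-size=200GB \
  --boot-disk-type=pd-standard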

Thanks a lot… !!! I will try that…

Is your setup identical to the fastai setup on their page? I was getting an error when I had one machine and wanted to add one more.

Hi @arunoda, is it possible to install fastai==0.7.0 here? I tried running !pip install fastai==0.7.0 in the Jupyter notebook and it threw an error saying setup.py file not found.