[HELP] Training Speed (Lesson 1)

Training the first model in Lesson 1 takes me very long - one or two minutes. Compared to the training time in the lecture, which is only about 30 seconds, something seems wrong on my end. And it's not just this example; other models are very slow to train too.

My system: AWS

Hi Ran, training speed depends on the performance of the system you're running on, especially the GPU. If you're on a low-cost AWS instance, it may simply run slower than whatever Jeremy used in the lecture. I don't think that's a bug.
As a free alternative, try running the notebooks on Google Colab. It offers free GPUs (with some limits on session run time) and is great for experimenting. You don't get any guaranteed performance, but you often end up with a pretty good GPU.
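If you want to see which GPU (if any) the notebook can actually use, a quick check with plain PyTorch (which fastai runs on) is to run this in a cell:

import torch

if torch.cuda.is_available():
    # which GPU PyTorch, and therefore fastai, will train on
    print(torch.cuda.get_device_name(0))
else:
    print("No GPU visible - training falls back to the much slower CPU")

If that prints a GPU name but training is still slow, the bottleneck is most likely just the instance itself.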


Hi guys!

I started the Fast.AI video course on YouTube.
I set up an AWS EC2 instance, installed all the software, and started working through Lesson 1.

I connect via SSH with

ssh -L localhost:8888:localhost:8888 ubuntu@'myIP'

to see Jupyter in my local browser.
Is that right, by the way?

I ran 01_intro from the 'clean' folder, starting at the 'Running Your First Notebook' section, and the training speed is much lower than in Jeremy's videos.

In my case, the times are:

  1. 8:43
  2. 11:51

Am I doing everything right?
Or is it just AWS being slow?

Any thoughts?

What kind of EC2 instance did you get? Did you get one with a GPU?

Instance type:
g4dn.xlarge

I set it up like in the tutorial.

Check the GPU usage while training. If it's close to 100%, you're getting your money's worth. You can check it with the nvidia-smi command-line tool.
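By the way, a g4dn.xlarge does have a GPU (a single NVIDIA T4), so the question is whether training actually uses it. You can also call nvidia-smi from a Jupyter cell with the ! prefix, which hands the line to the shell:

!nvidia-smi

Look at the GPU-Util column. To watch it while a training cell is running, open a separate terminal (or a second SSH session) and run nvidia-smi there, since the notebook only executes one cell at a time.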

I'll check it.
In the meantime I tried to set up Google Cloud, but there was an error during the conda install.
I'll post screenshots later.
The best solution for me for now is Colab :)

After the low speed on Amazon, I tried to set up Google Cloud and got an error during installation.
At the step

pip install -Uqq fastbook

I get the error:

"jupyterlab-git 0.11.0 requires nbdime<2.0.0,>=1.1.0, but you have nbdime 3.1.0 which is incompatible."

Help! =)
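That message comes from pip's dependency resolver: the jupyterlab-git package already on the machine wants an nbdime older than 2.0.0, while 3.1.0 ended up installed. It doesn't necessarily mean the fastbook install itself failed - it may only be a warning. One way to check is to run the next cell from 01_intro and see whether the import works:

import fastbook
fastbook.setup_book()

If that runs without an ImportError, you can most likely ignore the nbdime message and continue with the notebook.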