Most likely the dataset is already downloaded in the video, and Jeremy is probably running it on a private server with a much faster GPU, CPU, RAM, and storage, all of which can make a difference (the GPU being the biggest).
My 1080 Ti was much faster than a K80. I can’t remember by how much, but I believe it was more than 2x faster, so I would expect your 1070 Ti to be faster as well.
Playing with batch sizes can also make a huge difference. I did some benchmarking of the NLP model here on Colab and on my personal machine with a 3090.
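For a rough intuition on why batch size matters, here is a tiny sketch. The sample count is made up for illustration, and this only shows the step-count side of the story (GPU utilization per batch is usually the bigger factor in practice):

```python
import math

# For a fixed dataset size, the batch size determines how many
# optimizer steps one epoch takes. Bigger batches mean fewer steps
# (and usually better GPU utilization), which is why tuning bs
# changes wall-clock training time.
n_samples = 50_000  # hypothetical dataset size
for bs in (16, 64, 256):
    steps = math.ceil(n_samples / bs)
    print(f"bs={bs}: {steps} steps per epoch")
```

Each per-step overhead (kernel launches, data loading, optimizer bookkeeping) is paid once per batch, so cutting the step count from 3125 to 196 adds up fast, as long as the batch still fits in GPU memory.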
I ran a benchmark of the 01_intro notebook on my local machine for reference as well. I’m not sure exactly how long the dataset downloads took, but you can see the training times; it was definitely several minutes including the download time. On Colab you have to re-download the datasets and models each time your notebook instance is released, which makes it take longer than a dedicated machine with all of that pre-downloaded. In my case I did not have the models or datasets pre-downloaded on my local machine for several of the models. If you’re using a dedicated instance on AWS that you turn on/off each time you use it, you should not have to re-download the models and datasets, which saves time.
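The caching behaviour described above can be sketched in a few lines. This is a generic, hypothetical helper, not fastai’s actual code (fastai’s `untar_data` does something similar, keeping archives under a local cache directory so a dedicated machine never re-downloads them; the `cache_dir` default here is my own choice for illustration):

```python
import urllib.request
from pathlib import Path

def fetch_once(url: str,
               cache_dir: Path = Path.home() / ".cache" / "datasets") -> Path:
    """Download url into cache_dir only if it isn't already there."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    dest = cache_dir / Path(url).name
    if not dest.exists():
        # First run (or a fresh Colab instance): pay the download cost.
        urllib.request.urlretrieve(url, dest)
    # Later runs on a persistent machine: the file is already cached.
    return dest
```

On Colab the cache directory lives on ephemeral instance storage, so it vanishes when the instance is released; on a stopped-and-restarted AWS instance the EBS volume persists, which is exactly why the downloads only happen once there.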
01_intro-Copy1.pdf (1.4 MB)