Looking for suggestions on trying Lesson 1's routine on a very large dataset

I am planning to try the routine described in Lesson 1 on an active competition.

Among the currently active competitions, the Quick, Draw! Doodle Recognition Challenge [https://www.kaggle.com/c/quickdraw-doodle-recognition] has the task most similar to the ones we have been doing.

However, the dataset is 73 GB.

I would like some advice on repeating our procedure on a dataset of this size.

What can I do, besides renting a more expensive machine, to speed up the process?

You can switch between two modes (see the sketch after this list):

1. While developing your code, train on a subset of the data. You can also reduce the image size. This way you get immediate feedback on how your pipeline performs.
2. Once you are happy with it, train on a larger dataset and/or a larger image size for better accuracy.
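
If it helps, here is a minimal sketch of the first mode: sampling a few rows per class and rendering them as small images, so the Lesson 1 image-classification flow can run on a tiny slice of the 73 GB. The paths, `ROWS_PER_CLASS`, `SIZE`, and the `strokes_to_image` helper are my own assumptions, based on the competition shipping one CSV per class in `train_simplified/` with the strokes in a `drawing` column; adjust them to your actual setup.

```python
import ast
from pathlib import Path

import pandas as pd
from PIL import Image, ImageDraw

# Hypothetical paths and sizes -- adjust to your setup.
DATA = Path('data/quickdraw/train_simplified')  # one CSV per class
OUT = Path('data/quickdraw/subset')             # small image folder for dev
ROWS_PER_CLASS = 1000                           # tiny sample for fast iteration
SIZE = 64                                       # small images for the dev loop

def strokes_to_image(drawing: str, size: int = SIZE) -> Image.Image:
    """Render a simplified Quick Draw stroke list (0-255 coordinates)
    into a grayscale image."""
    img = Image.new('L', (256, 256), color=255)
    draw = ImageDraw.Draw(img)
    # 'drawing' is a string like "[[[x0, x1, ...], [y0, y1, ...]], ...]"
    for xs, ys in ast.literal_eval(drawing):
        draw.line(list(zip(xs, ys)), fill=0, width=3)
    return img.resize((size, size))

for csv_path in sorted(DATA.glob('*.csv')):
    label = csv_path.stem.replace(' ', '_')
    out_dir = OUT / label
    out_dir.mkdir(parents=True, exist_ok=True)
    # nrows keeps both the read and memory bounded; a truly random sample
    # would need a full pass over the file, which defeats the purpose here.
    df = pd.read_csv(csv_path, usecols=['drawing'], nrows=ROWS_PER_CLASS)
    for i, drawing in enumerate(df['drawing']):
        strokes_to_image(drawing).save(out_dir / f'{i}.png')
```

With the subset folder in place, the Lesson 1 flow should apply unchanged (in fastai v1, something like `ImageDataBunch.from_folder` plus a ResNet learner). For the second mode, rerun the export with more rows per class and a larger `SIZE`, then fine-tune from your small-image model rather than starting from scratch.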