How to train on 100+ GB of data with a limited GPU

How can I train on 100+ GB of data with a single Nvidia GPU (GTX 1080)? Any suggestions about training a large dataset with limited GPU resources would be appreciated. I am not in a position to afford an expensive GPU right now, but I still want to compete in a Kaggle competition. Pointers to promising papers or topics would help me tackle a complex problem and make the best use of Kaggle's TPUs and GPUs to take my experiments to the next step.

What have you tried so far? GCP gives new users around $100 or $300 in credits, and if you know a thing or two about Linux you will do just fine using it.
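Also worth noting: dataset size is mostly a disk/RAM concern, not a GPU-memory one, because training reads one minibatch at a time. A minimal sketch of out-of-core batching with NumPy's `memmap` (file name, shapes, and batch size here are made-up illustration values, not anything from a specific competition):

```python
import numpy as np

def make_dummy_dataset(path, n_rows=1000, n_features=8):
    # Stand-in for a real 100+ GB file; the same memmap pattern scales to any size.
    data = np.random.rand(n_rows, n_features).astype(np.float32)
    data.tofile(path)
    return n_rows, n_features

def stream_batches(path, n_rows, n_features, batch_size=128):
    # memmap maps the file lazily: only the slices you index are read from disk,
    # so the full dataset never has to fit in RAM (or GPU memory).
    mm = np.memmap(path, dtype=np.float32, mode="r", shape=(n_rows, n_features))
    for start in range(0, n_rows, batch_size):
        # Copy just this batch into RAM; feed it to your training step here.
        yield np.asarray(mm[start:start + batch_size])

if __name__ == "__main__":
    n_rows, n_features = make_dummy_dataset("train.bin")
    n_batches = sum(1 for _ in stream_batches("train.bin", n_rows, n_features))
    print(n_batches)  # 1000 rows / 128 per batch -> 8 batches
```

The same idea is what `torch.utils.data.IterableDataset` or `tf.data` input pipelines do under the hood, so you can usually lean on those instead of rolling your own.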