Later in the chapter we fine-tune the model for 10 epochs after unfreezing it, which I cannot afford:
learn.unfreeze()
learn.fit_one_cycle(10, 2e-3)
QUESTION:
How do I make it train for less time? I figured I could limit the number of entries in dls_lm. This would hurt the accuracy of the model, but I'm OK with that. However, I cannot figure out how to do it.
Or maybe there is another way to train for less time?
No real way to make it go faster, I'm afraid. LSTM, RNN, and NLP model architectures in general take longer (and more GPU memory) to train.
Not sure what you are trying to do here, but if you are not bothered by the accuracy and just want to go through the steps, you could simply train for fewer epochs. For example, after unfreezing, run learn.fit_one_cycle(1, 2e-3) instead of 10 epochs. That section is mainly there to show you the training metrics and the changes you should watch out for anyway, so you could also just read the output in the provided notebooks if you cannot run all the training steps on your machine.
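As for limiting the number of entries, one hedged option (not from the book itself) is to subsample the list of text files before you build dls_lm, so the language model sees less data per epoch. Here is a minimal standalone sketch of the idea; subsample is a hypothetical helper, and the files list stands in for what get_text_files(...) would return in the notebook:

```python
import random

# Hypothetical helper: keep only a random fraction of the items,
# deterministically, so training is faster at some cost in accuracy.
def subsample(items, frac=0.1, seed=42):
    rng = random.Random(seed)
    k = max(1, int(len(items) * frac))
    return rng.sample(list(items), k)

# Stand-in for the real file list from get_text_files(path, folders=[...])
files = [f"review_{i}.txt" for i in range(1000)]
small = subsample(files, frac=0.1)
print(len(small))  # 100 of the original 1000
```

In the notebook you would apply this to the actual file list and pass the reduced list into the DataBlock's get_items (e.g. get_items=lambda p: small) before calling .dataloaders(...). Fewer items means faster epochs, but expect noticeably worse perplexity.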