I'm running the IMDb notebook on a custom dataset (a year's worth of news articles from 20+ media outlets). Each epoch of language-model training takes about 2.5 hours on a Paperspace P6000 machine. The notebook was interrupted on epoch 7 of 15, but learner.save('lm1')
did work and saved a file.
I wonder what was saved, and whether I can pick up the training where it left off. Is there a way to look at what's inside the lm1
and lm1_enc
files? Is there a way to resume training from epoch 7?
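For what it's worth, as far as I know fastai's learner.save wraps torch.save on the model's state dict, so the saved file can be inspected with plain PyTorch. Here is a minimal sketch with a hypothetical stand-in model (the real language model's layer names will differ):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the language model; the real architecture
# and parameter names will be different, but the file format is the same.
model = nn.Sequential(nn.Embedding(100, 16), nn.Linear(16, 100))

# Mimic what learner.save('lm1') does: serialize the model's weights.
torch.save(model.state_dict(), 'lm1.pth')

# Inspect the file: it's a dict mapping parameter names to tensors.
state = torch.load('lm1.pth')
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```

If that's right, then to resume you would recreate the learner the same way as before, call learn.load('lm1') to restore the weights, and fit for the remaining epochs. The lm1_enc file should be the encoder saved by learn.save_encoder, which is meant for loading into the classifier later. One caveat: depending on the fastai version, the optimizer/scheduler state may not be included in the save, so resuming continues from the saved weights but not necessarily from the exact point in the learning-rate schedule.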