Creating a custom pre-trained LM with WikiText103

What would be the easiest way to create a pre-trained LM with WikiText103? I want a LM with a smaller architecture: 1 layer instead of 3, and 200 activation units instead of 1150. The dataset I am using for text classification is really small, and I think the default network architecture settings are more than I need and are causing some overfitting.
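For reference, here is a minimal sketch of how I imagine shrinking the architecture, assuming fastai v1's text API (the dict below mirrors v1's default `awd_lstm_lm_config`; `data_lm` is a placeholder for a WikiText103 DataBunch). Corrections welcome if the config keys have changed:

```python
# Start from fastai v1's default AWD-LSTM language-model config
# (these key/value pairs mirror awd_lstm_lm_config in fastai v1):
config = dict(
    emb_sz=400, n_hid=1150, n_layers=3, pad_token=1, qrnn=False,
    bidir=False, output_p=0.1, hidden_p=0.15, input_p=0.25,
    embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True,
)

# Shrink to the sizes described above: 1 layer, 200 hidden units.
config.update(n_hid=200, n_layers=1)

# Then (untested sketch, requires fastai v1 and a DataBunch `data_lm`):
# from fastai.text import language_model_learner, AWD_LSTM
# learn = language_model_learner(data_lm, AWD_LSTM, config=config,
#                                pretrained=False)  # train from scratch
# learn.fit_one_cycle(10)
```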

Has anyone done something like this using v1? If so, how long did it take to train, and which GPU did you use?

Thanks for your help!