No training progress using AWD_LSTM model with human numbers

Hello,
in Chapter 12 of the book, an LSTM language model is built from scratch (LMModel7).
I am trying to get a comparable training result with fastai's AWD_LSTM model, but while running it there is no noticeable training progress.
Here is what I did:

```python
awd_model = AWD_LSTM(vocab_sz=30, emb_sz=64, n_hid=64, n_layers=2)
awd_learn = Learner(dls, awd_model, loss_func=CrossEntropyLossFlat(), metrics=accuracy)

awd_learn.fit_one_cycle(30, 0.05, wd=0.4)
```

The first 15 epochs:

|epoch |train_loss |valid_loss |accuracy |time|
|---|---|---|---|---|
|0 |2.886273 |2.994271 |0.469320 |00:01|
|1 |2.877679 |2.996679 |0.469157 |00:01|
|2 |2.877726 |2.991746 |0.331950 |00:01|
|3 |2.871159 |2.988177 |0.470133 |00:01|
|4 |2.869933 |2.971005 |0.467529 |00:01|
|5 |2.867466 |2.984934 |0.461589 |00:01|
|6 |2.875038 |2.987801 |0.330078 |00:01|
|7 |2.877980 |2.972339 |0.468018 |00:01|
|8 |2.873280 |2.968270 |0.468343 |00:01|
|9 |2.873732 |2.918520 |0.471191 |00:01|
|10 |2.869561 |2.967269 |0.468506 |00:01|
|11 |2.872257 |2.988611 |0.468343 |00:01|
|12 |2.870201 |2.986552 |0.470459 |00:01|
|13 |2.864710 |2.985961 |0.470052 |00:01|
|14 |2.869624 |2.992571 |0.469238 |00:01|

The input data is unchanged: the Human Numbers dataset from the chapter, where the next word is predicted after each single word.
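To make the data layout concrete, here is a minimal plain-Python sketch of how the chapter arranges the token stream into (input, target) pairs, with the target shifted by one token. The toy tokens and the helper variable names are my own for illustration; the real dataset spells out the numbers from one to 9999.

```python
# Toy token stream standing in for the Human Numbers text.
tokens = "one . two . three . four . five .".split()

sl = 3  # sequence length per sample

# Non-overlapping (input, target) pairs: the target is the input
# shifted forward by one token, so each position predicts the next word.
seqs = [
    (tokens[i : i + sl], tokens[i + 1 : i + sl + 1])
    for i in range(0, len(tokens) - sl - 1, sl)
]

for x, y in seqs:
    print(x, "->", y)
# ['one', '.', 'two'] -> ['.', 'two', '.']
# ['.', 'three', '.'] -> ['three', '.', 'four']
```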
Is the dataset not compatible with the predefined AWD_LSTM model?

Any hints to put me on the right track?