Pre-trained ULMFiT slow to train

Hey all,
As I posted earlier, I trained ULMFiT on my native language and got a higher score on one benchmark.

While training the model, I noticed some strange behavior. To see what difference pre-training makes, I created two notebooks: one without a pre-trained model, and one with a model pre-trained on Wikipedia.
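
To make the comparison concrete, the language-model setup differs between the two notebooks roughly like this (a simplified sketch using the fastai v1 text API; `data_lm`, the `drop_mult` value, and the file names are placeholders, not my exact values):

```python
from fastai.text import *

# `data_lm` is a placeholder for my language-model DataBunch

# Notebook A: start from the weights I pre-trained on my language's Wikipedia
# (the file names here are hypothetical)
learn_lm_wiki = language_model_learner(
    data_lm, AWD_LSTM, drop_mult=0.3,
    pretrained_fnames=['wiki_lm_weights', 'wiki_lm_vocab'])

# Notebook B: same architecture, but no pre-trained weights at all
learn_lm_scratch = language_model_learner(
    data_lm, AWD_LSTM, drop_mult=0.3, pretrained=False)
```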

Now, as you can see, I used pretty much the same LR in both, but when I trained the classification model, the pre-trained one took much more time and many more epochs. I trained one cycle at each `freeze_to(-n)` stage, and in the final `unfreeze` cycle I got 0.30 error with the non-pre-trained model and 0.42 with the pre-trained one.
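
The classifier fine-tuning schedule is the same in both notebooks, roughly as follows (again a simplified sketch; `data_clas`, the encoder name, and the LR values are placeholders rather than my exact numbers):

```python
from fastai.text import *

# `data_clas` is a placeholder for my classification DataBunch
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ft_enc')  # encoder saved from the fine-tuned LM of that notebook

# One cycle per gradual-unfreezing stage, same LRs in both notebooks
learn.fit_one_cycle(1, 2e-2, moms=(0.8, 0.7))

learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2), moms=(0.8, 0.7))

learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3), moms=(0.8, 0.7))

learn.unfreeze()
learn.fit_one_cycle(1, slice(1e-3 / (2.6 ** 4), 1e-3), moms=(0.8, 0.7))
# after this final cycle: ~0.30 error without pre-training vs ~0.42 with it
```
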
Any idea why this happens?
Thanks!