Training language_model_learner error

Hi all,

I am following the fastai ULMFiT tutorial and keep running out of memory when training the language_model_learner. When I reduce the batch size to 4, I get the following error:

dls_lm = TextDataLoaders.from_df(df, text_col=1, label_col=2, valid_pct=0.3, bs=8)

learn = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()], wd=0.1).to_fp16()

learn.fit_one_cycle(1, 1e-2)

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2214     if input.size(0) != target.size(0):
   2215         raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2216                          .format(input.size(0), target.size(0)))
   2217     if dim == 2:
   2218         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

ValueError: Expected input batch_size (46804) to match target batch_size (4).

You built classification DataLoaders rather than language model ones (there should be an is_lm flag, IIRC). That is also why the batch sizes disagree: the language model produces one prediction per token (46804 of them), while your classification DataLoaders supply one label per sample (4).
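A minimal sketch of the corrected call, assuming the same df and columns as above; with is_lm=True the texts themselves become the targets (shifted by one token), so no label_col is needed:

# Build language-model DataLoaders: is_lm=True makes fastai use the text as its own target
dls_lm = TextDataLoaders.from_df(df, text_col=1, is_lm=True, valid_pct=0.3, bs=8)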


Thank you for pointing that out. How did I miss that? I was sure I had the flag on it!

Copy+paste error, I think.