ULMFiT - during fit, my loss is showing #na#

I'm experimenting with AWD_LSTM and Transformer using ULMFiT on a dataset of text messages with labels (text, label). I get #na# when I try to fit my text classification learner, while the language model learner fits just fine. The #na# is printed by format_stats() in fastai's basic_train.py, which reports #na# whenever a stat is None.
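For reference, here's a minimal sketch of that formatting logic (condensed by me from fastai v1's basic_train.py, so treat it as an approximation rather than the exact source):

def format_stat(stat):
    # A stat that comes back as None (e.g. a validation metric that was
    # never computed) prints as #na#; anything else prints as a float.
    return '#na#' if stat is None else f'{stat:.6f}'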

Can anybody help? I'd appreciate any direction.

Some code below:

Language model data

data_lm = TextLMDataBunch.from_df(train_df = df_trn, valid_df = df_val, path = "")

Classifier model data

data_clas = TextClasDataBunch.from_df(path = "", train_df = df_trn, valid_df = df_val, vocab=data_lm.train_ds.vocab, bs=32)

learn = language_model_learner(data_lm, arch=AWD_LSTM, pretrained=True, drop_mult=0.7)

learn.fit_one_cycle(500, 1e-1) # <-- This works fine

learn.save_encoder('ft_enc')
learn = text_classifier_learner(data_clas, arch=AWD_LSTM, drop_mult=0.7)
learn.load_encoder('ft_enc')

learn.fit_one_cycle(1, 1e-2) # <-- This reports #na#. I don't see an accuracy or val loss.
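For what it's worth, here's what I plan to try next. It's only a guess on my part that the loss is exploding to NaN and turning the stats into None; lr_find and recorder.plot are standard fastai v1 calls:

learn.lr_find()               # sweep learning rates on the classifier
learn.recorder.plot()         # inspect the loss-vs-LR curve (notebook only)
learn.fit_one_cycle(1, 1e-3)  # retry with a smaller learning rate than 1e-2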


I also get this, but why? Does that mean the loss was too high?

I've run into this issue a couple of times now but haven't figured out what's causing it. Did you ever figure it out?

Same problem here