I'm experimenting with AWD_LSTM and Transformer architectures using ULMFiT on a dataset of text messages with (text, label) pairs. The language model learner fits just fine, but I get a #na# when I try to fit my text classification learner. The #na# is reported by format_stats() in fastai's basic_train.py, and it seems to be printed whenever a stat is None.
Can anybody help? I'd appreciate any direction.
Some code below:
# Language model data
data_lm = TextLMDataBunch.from_df(train_df=df_trn, valid_df=df_val, path="")
# Classifier model data
data_clas = TextClasDataBunch.from_df(path="", train_df=df_trn, valid_df=df_val, vocab=data_lm.train_ds.vocab, bs=32)
learn = language_model_learner(data_lm, arch=AWD_LSTM, pretrained=True, drop_mult=0.7)
learn.fit_one_cycle(500, 1e-1)  # <-- This works fine
learn = text_classifier_learner(data_clas, arch=AWD_LSTM, drop_mult=0.7)
learn.fit_one_cycle(1, 1e-2)  # <-- This reports #na#; I don't see an accuracy or validation loss.
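Since #na# appears whenever a stat is None, one thing I'm checking is whether the classifier's frames contain missing labels or texts, since the language model ignores labels but the classifier's metrics can't be computed from NaN targets. A minimal sketch of that sanity check, assuming the frames have `text` and `label` columns as in the calls above (the sample data here is made up for illustration):

```python
import pandas as pd
import numpy as np

# Toy stand-in for df_val; in practice run this on the real df_trn / df_val.
df_val = pd.DataFrame({
    "text": ["hello there", "free prize now", None],
    "label": ["ham", "spam", np.nan],
})

# Count rows with missing values, which the classifier metrics would choke on.
bad_labels = int(df_val["label"].isna().sum())
bad_texts = int(df_val["text"].isna().sum())
print(bad_labels, bad_texts)
```

If either count is nonzero, dropping those rows with `df_val.dropna(subset=["text", "label"])` before building the DataBunch would be my first attempt at a fix.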