Lesson 3 NLP

Hi,
I’m very excited about getting things working so quickly.
I’m trying to build a multi-class text classification model.
The dataset is structured like this:
text,category,sub_category1,sub_category2
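
For reference, this is roughly how I sanity-check the columns before training (same path and file as in the code below, nothing fancy):

import pandas as pd

# path = folder that contains all.csv
df = pd.read_csv(path/'all.csv')
print(df.columns.tolist())   # ['text', 'category', 'sub_category1', 'sub_category2']
print(df[['category', 'sub_category1', 'sub_category2']].nunique())   # number of distinct labels per column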

I use the code below, but it doesn’t seem to work: I get a negative valid_loss and no accuracy.
Is there any way to get multi-class predictions on text data?

# load my data, build a language model, and train it a little
from fastai.text import *   # fastai v1

bs = 48   # batch size, just a value that fits in memory
# path points to the folder that contains all.csv

data_lm = (TextList.from_csv(path, 'all.csv', cols='text')
                .split_by_rand_pct(0.1)   # hold out 10% for validation
                .label_for_lm()           # label each text for language modelling
                .databunch(bs=bs))
data_lm.show_batch()

learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn.fit_one_cycle(1, 1e-2)    # train the new head first
learn.unfreeze()
learn.fit_one_cycle(3, 1e-3)    # then fine-tune the whole model
learn.predict("This is a review about", n_words=10)

# load my data and build the multi-class classification DataBunch
data_df = pd.read_csv(path/'all.csv')
data_clas = (TextList.from_df(data_df, path, cols='text', vocab=data_lm.vocab)
             .split_by_rand_pct(0.2)
             .label_from_df(cols=['category', 'sub_category1', 'sub_category2'])   # all three label columns
             .databunch(bs=bs))

learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.fit_one_cycle(1, 2e-2)                     # train the new head
learn.unfreeze()
learn.fit_one_cycle(3, slice(2e-3/100, 2e-3))    # fine-tune with discriminative learning rates
learn.predict("I really loved that movie, it was awesome!")

Regards