Trouble with Language Model

Hi,
I am trying to train a language model on the text of medical notes and I am having trouble with the accuracy. When I trained just the last layer, the accuracy of my language model was about 27%, but when I unfroze all the layers and fine-tuned, the accuracy dropped to about 2%. Can anyone comment on what is going on and how I can improve it? I copied the code below:
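In short, the flow is: take a 10K-note subset, build a language-model databunch, train the new head, then unfreeze and keep training. Here is a condensed version of the same code (the full snippets with results follow); it assumes the fastai v1 text API, and `df_notes` and `bs` are defined earlier in my notebook and not shown here:

from fastai.text import *   # fastai v1 text API (TextList, language_model_learner, AWD_LSTM)

# df_notes (cleaned notes) and bs (batch size) are defined earlier, not shown in this post
df_10K = df_notes.iloc[:10000, :]                           # 10,000-note subset

data_lm_10K = (TextList.from_df(df_10K, cols='clean_TEXT')  # column with the cleaned text
               .split_by_rand_pct(0.1)                      # 10% random validation split
               .label_for_lm()                              # next-word prediction labels
               .databunch(bs=bs))

learn_10K = language_model_learner(data_lm_10K, arch=AWD_LSTM, drop_mult=0.3)
learn_10K.fit_one_cycle(1, 5e-1, moms=(0.8, 0.7))           # stage 1: train the head only
learn_10K.unfreeze()
learn_10K.fit_one_cycle(1, 5e-1, moms=(0.8, 0.7))           # stage 2: fine-tune all layers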

Subset of the data frame

df_10K = df_notes.iloc[:10000, :]  # keep the first 10,000 notes

Data Preparation

data_lm_10K = (TextList.from_df(df_10K, cols='clean_TEXT')  # column with the cleaned note text
               .split_by_rand_pct(0.1)                      # hold out 10% for validation
               .label_for_lm()                              # next-word prediction labels
               .databunch(bs=bs))                           # bs is set earlier (not shown)
data_lm_10K.show_batch()

Instantiation and training

learn_10K = language_model_learner(data_lm_10K, arch=AWD_LSTM, drop_mult=0.3)
learn_10K.lr_find()                                         # learning-rate finder
learn_10K.recorder.plot(skip_end=15)
learn_10K.fit_one_cycle(1, 5e-1, moms=(0.8, 0.7))           # train the new head only
learn_10K.save('fit_head_10K_2')

| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 5.049602 | 4.778851 | 0.279365 | 17:24 |

Unfreezing and Training

learn_10K.unfreeze()                                        # unfreeze all layer groups
learn_10K.fit_one_cycle(1, 5e-1, moms=(0.8, 0.7))           # fine-tune the whole model

| train_loss | valid_loss | accuracy |
|---|---|---|
| 7.418520 | 7.139403 | 0.020575 |
| 33.320972 | 13.103899 | 0.021186 |
| 7.165906 | 7.220859 | 0.027120 |
| 54.145081 | 10.419573 | 0.027120 |