Validation Loss Does Not Improve

I have been having trouble with tabular datasets. I can't get the validation loss to keep improving past a certain point, and it usually plateaus within the first or second epoch. What can I do to improve this? I trained one model for 4 hours and still can't seem to get anywhere…

The data is fairly large (9 million rows and 90 columns for the training set alone, almost all of them categorical variables), so I thought it would be a good fit for a neural network. I have tried different learning rates, dropout, and weight decay settings, but none of them seem to make much of a difference. Any tips on strategy would be appreciated.
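
For context, here is roughly the kind of setup I mean, just a minimal sketch assuming fastai's tabular API; the column names, validation split, batch size, and layer sizes below are placeholders rather than my actual values:

```python
# Minimal sketch of a tabular NN setup (fastai assumed; names/values are placeholders).
from fastai.tabular.all import *
import pandas as pd

df = pd.read_csv("train.csv")  # ~9M rows, ~90 columns in my case

# Placeholder column lists: almost everything is categorical in my data.
cat_names = [c for c in df.columns if c not in ("target", "amount")]
cont_names = ["amount"]

dls = TabularDataLoaders.from_df(
    df,
    y_names="target",
    cat_names=cat_names,
    cont_names=cont_names,
    procs=[Categorify, FillMissing, Normalize],
    valid_idx=list(range(len(df) - 500_000, len(df))),  # placeholder hold-out split
    bs=1024,
)

# Dropout, embedding dropout, and weight decay are the knobs I've been varying.
learn = tabular_learner(
    dls,
    layers=[400, 200],
    config=tabular_config(ps=[0.2, 0.1], embed_p=0.05),
)
learn.fit_one_cycle(3, lr_max=1e-3, wd=0.01)
```

Varying `lr_max`, `ps`, `embed_p`, and `wd` in a setup like this is essentially what I have been doing so far, without much change in the validation loss.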