Validation loss seems to stay the same

I am using fast.ai while participating in a competition. When I train the model for about 5 epochs, the training loss decreases each time, but the validation loss just fluctuates around the same level.
Why is this happening?
Can I do anything to avoid this in the future?
Where have I gone wrong?

If the training loss is decreasing but your validation loss decreases at first and then starts increasing (or, as in your case, stays roughly flat), it means that your model is overfitting. It is learning only the training data and is unable to carry over what it has learnt to unseen data.
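You can see this directly in fastai, since the Recorder tracks both curves during training. A minimal sketch, assuming `learn` is the Learner you already trained:

```python
# Assuming `learn` is your trained fastai Learner:
# plot_loss() overlays the training and validation loss curves,
# so a training curve that keeps falling while the validation
# curve flattens out or rises indicates overfitting.
learn.recorder.plot_loss()
```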

Techniques to avoid overfitting (covered in the course; see the sketch after this list)

  1. Get more data
  2. Use data augmentation
  3. Use generalizable architectures
  4. Use regularization
  5. Reduce complexity of your architecture
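In fastai, techniques 2, 4, and 5 are each close to a one-liner. A minimal sketch, not your exact setup: `path` is a hypothetical folder of images organised one class per subfolder, and `wd=0.1` is just an illustrative value.

```python
from fastai.vision.all import *

# `path` is a hypothetical image folder, one class per subfolder
dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,                # hold out 20% of the data for validation
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),  # technique 2: flips, rotations, zooms, etc.
)

# wd adds weight decay (L2 regularization, technique 4); resnet18 is a
# smaller architecture than e.g. resnet50 (technique 5).
learn = vision_learner(dls, resnet18, metrics=accuracy, wd=0.1)
learn.fine_tune(5)
```

If augmentation alone is not enough, increasing `wd` or stepping down to an even smaller architecture are the usual next knobs to turn.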

Well, thanks Hamza. The fact that I applied data augmentation and it still led to this really concerned me; nevertheless, I will take a look into it.