Am I overfitting at this point? Optimal number of cycles?

After fitting, training, and fine-tuning my image classification model, I think I've found the optimal number of epochs to run fit_one_cycle for without overfitting the model. Or so I suspect…

As can be seen in the image below, after 6 epochs of fit_one_cycle my model's error rate and accuracy hit a wall. Interestingly, my training loss continues to drop while my validation loss is virtually unchanged. Although it's great that my training loss is notably lower at the 8th epoch than at the 6th, the same isn't true of my validation loss. I suspect this means I'm just overfitting from that point on. Am I thinking about this correctly?
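For context, this is roughly how I've been training and then comparing the two losses afterwards (a minimal sketch rather than my exact code — the dataset, label function, and architecture here are placeholders, and I'm assuming the fastai v2 API):

```python
from fastai.vision.all import *

# Placeholder data/model setup -- my real pipeline is different
path = untar_data(URLs.PETS) / 'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=lambda f: f.name[0].isupper(), item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=[error_rate, accuracy])

learn.fit_one_cycle(8)
# Plot training vs. validation loss to see where they start to diverge
learn.recorder.plot_loss()
```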

Here's another example… the 5th epoch produces the same error rate/accuracy as the 4th, but the training loss is nearly 7% lower. Perhaps after the 4th epoch all I'm doing is overfitting?
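Relatedly, if it does turn out I'm overfitting past that point, would something like this be the idiomatic way to stop at the "optimal" epoch automatically instead of eyeballing the table? (Again just a sketch assuming the fastai v2 tracker callbacks; `learn` is the same learner as above, and the monitor/patience values are guesses on my part.)

```python
from fastai.callback.tracker import EarlyStoppingCallback, SaveModelCallback

learn.fit_one_cycle(
    12,
    cbs=[
        # Stop once validation loss hasn't improved for 2 consecutive epochs
        EarlyStoppingCallback(monitor='valid_loss', min_delta=0.01, patience=2),
        # Save the weights from the best epoch; fastai reloads them after fit
        SaveModelCallback(monitor='valid_loss', fname='best'),
    ],
)
```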