How can I tell if I'm overfitting when I'm using dropout?

Traditionally, we know we are overfitting when our training loss is much lower than our validation loss, because that tells us that our model has learned to fit specifically to the instances of the training set and won’t generalize well to new data (such as that in the validation set).

When we use dropout, on the other hand, the random zeroing of activations is applied only at training time. This inflates the training loss relative to the validation loss, so we frequently observe a validation loss that is *lower* than the training loss.
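As a toy illustration of why this happens (a minimal NumPy sketch of inverted dropout; the data, weights, and the `forward` helper are made up purely for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, p_drop, train):
    """One linear layer with inverted dropout on its input.
    During training each activation is kept with probability
    (1 - p_drop) and rescaled by 1/(1 - p_drop); at eval time
    dropout is a no-op."""
    if train and p_drop > 0:
        mask = rng.random(x.shape) >= p_drop
        x = x * mask / (1.0 - p_drop)
    return x @ w

# toy data: targets the noise-free (eval-mode) model fits exactly
x = rng.normal(size=(512, 20))
w = rng.normal(size=(20, 1))
y = x @ w

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

train_loss = mse(forward(x, w, p_drop=0.5, train=True), y)   # dropout active
eval_loss  = mse(forward(x, w, p_drop=0.5, train=False), y)  # dropout off

print(train_loss > eval_loss)  # the dropout noise alone inflates the train-mode loss
```

The same weights produce a higher loss in train mode than in eval mode, so the gap between logged training loss and validation loss partly reflects dropout noise rather than (lack of) overfitting.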

I feel that in this case the simple sign of overfitting (train loss << val. loss) gets obscured. How do I know, then, that I'm overfitting? Maybe "train loss ≈ val. loss" already indicates overfitting. Or, if my dropout rate is quite large, maybe I'm already overfitting even when the train loss is twice the val. loss.

Would there be other, better criteria to check?