I was wondering if someone has explored the topic a bit and has some good rules of thumb (e.g. with the goal of maximizing a certain metric like F1). A very similar thread on the topic (but not for images) is here, thanks to @maxmatical.
I mean, always, except when it doesn’t work.
Which cases tend not to work so well? Is there any pattern, or is everyone, like me, trying it on a case-by-case basis?
From the docs:
Training with mixup improves the best accuracy. Note that the validation loss is higher than without mixup, because the model makes less confident predictions: without mixup, most predictions are very close to 0. or 1. (in terms of probability) whereas the model with mixup makes predictions that are more nuanced. Before using mixup, make sure you know whether it’s more important to optimize lower loss or better accuracy.
This is also an interesting point.
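For anyone who hasn't seen it, the core idea behind mixup is just a convex combination of two training examples and their targets, with the mixing coefficient drawn from a Beta distribution. Here is a minimal NumPy sketch of that idea; the function name `mixup_batch` and the default `alpha=0.4` are my own choices for illustration, and fastai's actual callback implementation differs in its details:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.4, rng=None):
    """Mix a batch with a shuffled copy of itself (mixup sketch).

    x: (batch, ...) inputs; y: (batch, n_classes) one-hot targets.
    alpha: Beta-distribution parameter controlling how strong the mixing is.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # pick a partner example for each item
    # Convex combination of each example with its partner, for inputs and targets alike.
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return x_mixed, y_mixed
```

The softened targets are what produce the "more nuanced" predictions the docs mention: the model is trained against labels like 0.7/0.3 rather than hard 0/1, so its output probabilities are pulled away from the extremes even when its argmax accuracy improves.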
For image segmentation, I tried it and it didn't work well.
I tried mixup on the CAMVID dataset. It seemed to work pretty well in my experiments.