Should you turn off mixup at test time, and if so, how?

I was reading about the mixup data augmentation in fastai; very cool stuff.

When launching the model in production, or even testing it on a held-out test set, I’m guessing that mixup should be turned off. I don’t see any commands for doing that and was wondering what the best practice is here.

Here’s my pseudocode:

# Create the learner and add mixup.
learn = learn.mixup()
# Adjust settings.
learn.fit_one_cycle(...)
learn.unfreeze()
# Adjust more settings.
learn.fit_one_cycle(...)
learn.save('savethis')
learn.load('savethis')
loss, err = learn.validate(data.train_dl)
print("Training", loss, err)
loss, err = learn.validate(data.valid_dl)
print("Validation", loss, err)
# Create a new tst_data databunch that loads the test set, with no splitting.
loss, err = learn.validate(tst_data.train_dl)
print("Test", loss, err)

I’ve noticed that the validation metrics computed with learn.validate are the same as those reported by learn.fit, which suggests that mixup was not disabled at test time. So how do I do that?
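For example, would the right fix be to strip the mixup callback off the learner before validating? Here is a sketch of what I mean (assuming learn.mixup() registers a MixUpCallback in learn.callback_fns; I haven’t verified that):

from fastai.callbacks.mixup import MixUpCallback

# Hypothetical: drop any MixUpCallback factory that learn.mixup() may have added,
# so later calls are guaranteed to run without mixup.
learn.callback_fns = [cb for cb in learn.callback_fns
                      if getattr(cb, 'func', cb) is not MixUpCallback]
loss, err = learn.validate(data.valid_dl)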


If you look at the code for the mixup callback, you will see this part:

def on_batch_begin(self, last_input, last_target, train, **kwargs):
    "Applies mixup to last_input and last_target if train."
    if not train: return

So, normally, it should only happen during training.
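To make that concrete, here is a rough, simplified sketch of what mixup does to a training batch (from memory; the real callback also wraps the loss function so it can combine the two sets of targets):

import numpy as np
import torch

def mixup_batch(x, y, alpha=0.4):
    # Sample the mixing coefficient from a Beta(alpha, alpha) distribution.
    lam = float(np.random.beta(alpha, alpha))
    # Shuffle the batch to pick the partner each example gets mixed with.
    shuffle = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[shuffle]
    # The loss is then computed as lam * loss(pred, y) + (1 - lam) * loss(pred, y[shuffle]).
    return x_mixed, y, y[shuffle], lam

During validation, the `if not train: return` line above means none of this runs, so the reported validation loss is an ordinary, un-mixed loss.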

My guess is that you are creating a new databunch which doesn’t seem like the standard way to do things.

loss, err = learn.validate(tst_data.train_dl)
Above, you are using the new databunch’s train_dl, so you would still be in train mode.

I think your issue comes from having your test dataset be the train ds of a new databunch, instead of being the test dataset of your new or main databunch.

My whole post is just a guess, but I hope this puts you in the right direction.


What’s the ‘standard way’ for test? The issue with the test set is that fastai assumes there are no labels. My test set does have labels…
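One thing I could try is building a second databunch where the labelled test set sits in the validation slot, then validating on that. An untested sketch (test_ds_with_labels would be whatever labelled dataset I build from the test data):

from fastai.basic_data import DataBunch

# Hypothetical: reuse the original train_ds, but put the labelled test set in the
# validation slot so learn.validate() runs it in eval mode (no shuffling, no mixup).
tst_data = DataBunch.create(data.train_ds, test_ds_with_labels, bs=64)
loss, err = learn.validate(tst_data.valid_dl)
print("Test", loss, err)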

Actually, I’ve noticed something, and I don’t think mixup is being called when using learn.validate.

The reason is that my loss value looks “normal” (i.e. as it would without mixup) when I call learn.validate(data.train_dl), and the value for learn.validate(data.valid_dl) matches the one reported during training. Given Seb’s point that the code only runs mixup during training and not during validation (so the mixup training loss is different and expected to be higher), this implies that mixup is not applied when using learn.validate.
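If anyone wants to double-check, something along these lines should print the train flag that the callback system passes during validation (a sketch; it assumes learn.validate accepts a callbacks argument in the fastai version you are on):

from fastai.callback import Callback

class TrainFlagLogger(Callback):
    # Prints the `train` flag passed to on_batch_begin for each batch,
    # so you can confirm that learn.validate() runs with train=False.
    def on_batch_begin(self, train, **kwargs):
        print('train =', train)

loss, err = learn.validate(data.valid_dl, callbacks=[TrainFlagLogger()])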