The line that causes the error:

```python
self.learn.yb = tuple(L(self.yb1, self.yb).map_zip(torch.lerp, weight=unsqueeze(self.lam, n=ny_dims-1)))
```
The issue reporter solved it by setting `y_int = True` on their custom loss function.
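If you're writing your own loss, a minimal sketch of that fix might look like this (the class name and its body are hypothetical; the `y_int` attribute is the part that matters to MixUp):

```python
import torch.nn as nn
import torch.nn.functional as F

class MyCustomLoss(nn.Module):
    y_int = True  # tells fastai's MixHandler that the targets are integer class indices

    def forward(self, preds, targs):
        # hypothetical body; any loss logic works, the class attribute is the fix
        return F.cross_entropy(preds, targs)
```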
Inspecting the code for `MixUp`, we only fall into the error-causing code when `self.stack_y = False`. The value of `self.stack_y` is set by `MixHandler` (the parent class of `MixUp`) here (quoting the fastai source from memory, so the exact line may differ between versions):
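```python
# From fastai's MixHandler.before_train (reconstructed; check your version's source):
self.stack_y = getattr(self.learn.loss_func, 'y_int', False)
```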
In other words, `stack_y` ends up `False` if your `loss_func` doesn't have the `y_int` attribute, or if that attribute is set to `False`.
`y_int` is an attribute fastai sets on its own loss functions, which is why the author of the issue had to set it manually on their custom loss. PyTorch doesn't set it on its losses either, which is why `MixUp` isn't working right for you. So basically, all you have to do is change your loss function from `nn.CrossEntropyLoss` to fastai's `CrossEntropyLossFlat`.
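A minimal sketch of the fix, assuming a standard fastai image-classification setup (the dataset and model here are placeholders for illustration; `vision_learner` is called `cnn_learner` in older fastai versions):

```python
from fastai.vision.all import *

# Placeholder dataset and model; any image classification setup applies
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)

# Broken: nn.CrossEntropyLoss has no y_int attribute, so stack_y is False
# and MixUp tries to torch.lerp the integer targets, which errors out.
# learn = vision_learner(dls, resnet18, loss_func=nn.CrossEntropyLoss(), cbs=MixUp())

# Fixed: CrossEntropyLossFlat sets y_int=True, so MixUp stacks the targets
# and mixes the two loss values instead of the labels themselves.
learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=MixUp())
learn.fine_tune(1)
```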