Does/can LabelSmoothingCrossEntropy support an "ignore_index"?

Working on integrating seq-to-seq model training into v2, and I'd love to be able to pass an ignore_index telling LabelSmoothingCrossEntropy not to look at certain token ids when calculating the loss (e.g., ignore any -1 token ids) when comparing the generated text to the actual.

In v1, not knowing any better, I just created my own class with this one modification in the forward pass:

def forward(self, output, target):
    c = output.size(-1)
    # drop positions whose target is the ignored token id
    mask = target != self.ignore_index
    output, target = output[mask], target[mask]
    log_preds = F.log_softmax(output, dim=-1)
    # label smoothing: eps * uniform term + (1 - eps) * NLL
    loss = -log_preds.sum(dim=-1).mean() * self.eps / c
    return loss + (1 - self.eps) * F.nll_loss(log_preds, target)

Is there a better way in v2?

No, it does not support this. You can create your own custom loss function, as you mentioned.
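The same approach carries over to v2. Here's a sketch of a standalone version in plain PyTorch (mean reduction only; the class name and eps default are just illustrative):

```python
import torch
import torch.nn.functional as F
from torch import nn

class LabelSmoothingCE(nn.Module):
    "Label smoothing cross entropy that skips targets equal to `ignore_index`."
    def __init__(self, eps=0.1, ignore_index=-1):
        super().__init__()
        self.eps, self.ignore_index = eps, ignore_index

    def forward(self, output, target):
        # output: (N, C) logits, target: (N,) token ids
        mask = target != self.ignore_index
        output, target = output[mask], target[mask]
        c = output.size(-1)
        log_preds = F.log_softmax(output, dim=-1)
        # smoothed loss = eps * uniform term + (1 - eps) * NLL
        loss = -log_preds.sum(dim=-1).mean() * self.eps / c
        return loss + (1 - self.eps) * F.nll_loss(log_preds, target)
```

With eps=0 this reduces to plain cross entropy on the kept positions. (Newer PyTorch releases also accept a label_smoothing argument on F.cross_entropy alongside ignore_index, which may remove the need for a custom class entirely.)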


Ok thanks.

Is there a way to access the Learner from the loss function? Or would I need to pass in the Learner as an argument when creating it?

No, but a callback naturally has the Learner as an attribute and can modify the loss.
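For example, a callback can read or rewrite the loss after it's computed, using the Learner state it holds. A minimal schematic of that pattern, with stub classes standing in for fastai's Learner/Callback (the class and event names here are illustrative, not the real v2 API):

```python
class Callback:
    "Stub: fastai wires the `learn` attribute onto each callback."
    learn = None

class ScaleLoss(Callback):
    "Illustrative: adjust the loss the Learner just computed."
    def after_loss(self):
        # any adjustment that needs Learner state goes here
        self.learn.loss = self.learn.loss * 0.5

class Learner:
    "Stub Learner: computes a loss, then runs callback events."
    def __init__(self, cbs):
        self.cbs = cbs
        for cb in cbs:
            cb.learn = self  # each callback gets the Learner as an attribute
    def one_batch(self, loss):
        self.loss = loss
        for cb in self.cbs:
            cb.after_loss()
        return self.loss
```

So instead of passing the Learner into the loss function, the loss-modifying logic lives in the callback, which already sees everything on the Learner.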

Ok that makes sense.

And one last kinda related question … when I use splitter to create my own “layer groups”, how can I display those layer groups in v2? Tried len(learn.layer_groups) but that looks deprecated now.

They are in the optimizer: learn.opt (create it first if needed, with learn.create_opt).
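To actually see what a splitter produced, you can count parameters per group on the optimizer. A sketch with a plain PyTorch optimizer standing in (learn.opt / learn.create_opt come from the reply above; the model, splitter, and printing here are just illustrative):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# a "splitter" in the v2 sense: return one list of params per layer group
def splitter(m):
    return [list(m[0].parameters()), list(m[2].parameters())]

groups = splitter(model)
opt = torch.optim.SGD([{"params": ps} for ps in groups], lr=1e-2)

# inspect the groups the optimizer ended up with
for i, g in enumerate(opt.param_groups):
    print(f"group {i}: {len(g['params'])} tensors, "
          f"{sum(p.numel() for p in g['params'])} params")
```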


Alright … thanks, much appreciated. That's all my questions for the day (most likely) :slight_smile: