I have a multi-label vision classification task. My training data is potentially noisy and I suspect some of the training labels are incorrect, so I think label smoothing would help.
I tried to implement label smoothing for the multi-label case using a callback that runs at the beginning of each batch:
class MultiLabelSmoothing(Callback):
    def __init__(self, epsilon=0.05):
        self.epsilon = epsilon

    def begin_batch(self):
        self.learn.yb = (torch.where(self.y == 1., 1 - self.epsilon, self.epsilon),)
The idea is to alter the labels at the start of every batch into the smoothed labels. For example, the label [1, 0, 0, 1] is changed to [0.95, 0.05, 0.05, 0.95] before the loss is computed. (Note that there is no need for the labels to sum to 1 in the multi-label case.) Then nn.BCEWithLogitsLoss is used to compute the loss from the predictions and the new smoothed labels.
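To illustrate what I mean, here is a minimal standalone sketch of the transformation (outside of any callback, with made-up logits), showing the smoothed targets fed into nn.BCEWithLogitsLoss:

```python
import torch
import torch.nn as nn

epsilon = 0.05
y = torch.tensor([[1., 0., 0., 1.]])  # original hard multi-label targets

# Replace 1s with 1 - epsilon and 0s with epsilon
y_smooth = torch.where(y == 1., 1 - epsilon, epsilon)
# y_smooth is now [[0.95, 0.05, 0.05, 0.95]]

logits = torch.tensor([[2.0, -1.0, -3.0, 0.5]])  # made-up model outputs
loss = nn.BCEWithLogitsLoss()(logits, y_smooth)
```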
However, this seems to have no effect on training, and no errors are thrown. Is my code correct? Can I actually alter the labels this way, or is it not possible?
Thanks in advance for any help!