L1 cost penalty for a specific layer

In my continued conversation with myself: maybe I can try another approach, which I think is the fastai way to do this, using a Learner callback. Something like:

import torch
from fastai.callback.core import Callback

class L1RegCallback(Callback):
    "Add an L1 penalty on the parameters of one specific layer to the loss."
    def __init__(self, reglambda=0.0001):
        self.reglambda = reglambda

    def before_backward(self):
        # mean absolute value of the weights of the layer I care about (cnv2)
        regularization_loss = 0.0
        for param in self.learn.model.cnv2.parameters():
            regularization_loss += torch.mean(torch.abs(param))
        # scale and add to the loss before backward() is called
        self.learn.loss += self.reglambda * regularization_loss

And then something like:

learn = Learner(dls, mynetwork,
                opt_func=RMSProp,
                loss_func=nn.MSELoss(reduction='mean'),
                metrics=nn.MSELoss(reduction='mean'),
                cbs=[L1RegCallback()])
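
Then a normal fit call should pick the callback up on every batch, e.g. something like this (the epoch count and learning rate are just placeholders):

learn.fit(10, lr=1e-3)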

And it would add the L1 penalty to the loss just before backpropagation.
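
For reference, this is roughly what I want the callback to be equivalent to in a plain PyTorch training step (the model, xb, yb, opt names and the cnv2 layer are just placeholders from my own setup):

import torch
import torch.nn.functional as F

def training_step(model, xb, yb, opt, reglambda=0.0001):
    pred = model(xb)
    loss = F.mse_loss(pred, yb)
    # L1 penalty restricted to the parameters of the cnv2 layer only
    l1 = sum(torch.mean(torch.abs(p)) for p in model.cnv2.parameters())
    loss = loss + reglambda * l1   # added before backward(), same as the callback
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()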

@sachinruk and @sgugger, you have had some other threads on this:

But I think some of that was based on a previous version of fastai, so I was wondering what your thoughts are: is the approach above correct for adding an L1 penalty for a specific layer?