L1 regularization on weight parameters

Is it possible to modify the loss function in order to add an L1 constraint (regularization) on some/all of the weight parameters?

You will have to write your own Callback for this, but you can do anything you want to the loss function in the on_backward_begin call, and the new value you return will be used for backpropagation.
An example is in RNNTrainer in callbacks.rnn, where we add AR and TAR to the loss.
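Here is a minimal sketch of such a callback, assuming the fastai v1 callback API in which on_backward_begin receives the current loss as last_loss and the value you return is used for the backward pass (the class name L1Regularizer and the alpha parameter are just illustrative):

```python
from fastai.callback import Callback

class L1Regularizer(Callback):
    "Sketch: add an L1 penalty on the model's weights to the loss before backprop."
    def __init__(self, learn, alpha=1e-4):
        self.learn, self.alpha = learn, alpha

    def on_backward_begin(self, last_loss, **kwargs):
        # Sum of absolute values of the parameters we want to penalize
        # (here all of them; filter the generator to target specific layers).
        l1 = sum(p.abs().sum() for p in self.learn.model.parameters())
        new_loss = last_loss + self.alpha * l1
        # Assumption: returning the new value replaces the loss used for
        # backpropagation. Recent fastai v1 versions expect a dict instead:
        # return {'last_loss': new_loss}
        return new_loss
```

You would then register it before training, e.g. `learn.callbacks.append(L1Regularizer(learn, alpha=1e-4))`, and compare against how RNNTrainer adds AR/TAR in callbacks.rnn for the exact return convention of your fastai version.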


I was checking the callbacks section just now. Many thanks for confirming this info.