Constraining weight tensor element-wise manually


I have made a 2-layer linear net from two 2x2 tensors. Very simple network. It is based on an actual equation, so I roughly know what the weight matrices should look like and what rules should hold in them, e.g. w1 == w2, w3 == 0, etc. The problem is, I can't figure out where or how to apply these constraints.
I did create nn.Parameters in my model:

def __init__(self):
    super().__init__()  # needed before registering submodules
    self.m1 = nn.Linear(2, 2)
    self.m1.weight = torch.nn.Parameter(torch.zeros(2, 2))
    self.m2 = nn.Linear(2, 2)
    self.m2.weight = torch.nn.Parameter(torch.zeros(2, 2))
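For context, a complete runnable version of that module might look like the sketch below. The class name `TwoLayerNet` and the forward pass (feeding `m1`'s output into `m2`) are my assumptions for illustration; only the `__init__` body comes from the post.

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    """Hypothetical full module around the snippet above."""
    def __init__(self):
        super().__init__()
        self.m1 = nn.Linear(2, 2)
        self.m1.weight = nn.Parameter(torch.zeros(2, 2))
        self.m2 = nn.Linear(2, 2)
        self.m2.weight = nn.Parameter(torch.zeros(2, 2))

    def forward(self, x):
        # Assumed: chain the two linear layers.
        return self.m2(self.m1(x))

model = TwoLayerNet()
out = model(torch.ones(1, 2))
# With both weight matrices zeroed, the output is just m2's bias.
```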

But I have no idea when or how to access them later. Should it be via a custom loss function, an optimizer, a callback?
Help will be much appreciated.

Part 2 of the course goes straight into creating your own models from scratch. You should start there.

Hi Madis,

I am not sure I understand your needs exactly. But you can access the weights with

    model.m1.weight

To change a weight:

    model.m1.weight.data[1, 0] = 99

Analogously for model.m1.bias. You can change the parameters at any point that makes sense, though I’d be cautious about changing them between model.forward() and optimizer.step(). It might mess up the gradient calculations.
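A minimal sketch of that pattern, using a bare `nn.Linear` as a stand-in for `model.m1`. Wrapping the in-place edit in `torch.no_grad()` is one way to keep the assignment out of autograd's bookkeeping, per the caution above:

```python
import torch
import torch.nn as nn

layer = nn.Linear(2, 2)  # stand-in for model.m1

# Reading a single weight entry:
w01 = layer.weight[0, 1].item()

# Changing a weight in place; no_grad() keeps autograd from
# recording the assignment.
with torch.no_grad():
    layer.weight[1, 0] = 99.0
```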

As I am using learn.fit_one_cycle, I can't easily slip this model.m1.weight assignment in between my training steps unless I make the whole Learner code custom (so that I can edit what is run and when).
So my problem is rather accessing that model variable inside a loss or optimizer function. Not sure how to do that. For example, optimizer object only has parameters:

    Parameter Group 0
        amsgrad: False
        betas: (0.95, 0.999)
        eps: 1e-08
        lr: 0.0004
        weight_decay: 0

But how to access my model parameters from inside the optimizer or the loss function?
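For what it's worth, the printed summary hides the tensors, but the optimizer's param groups do hold references to the very same parameter objects as the model, so mutating one mutates the other. A small sketch (the hyperparameter values mirror the printout above; the specific model is mine):

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
opt = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.95, 0.999))

# param_groups[0]["params"] is a list of the model's own tensors,
# not copies -- the first entry *is* model.weight.
params = opt.param_groups[0]["params"]
shared = params[0] is model.weight
```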

Got it, learned about callbacks.

Made a custom class:

class WeightConstrainer(LearnerCallback):
    def __init__(self, learn:Learner):
        super().__init__(learn)
    def on_backward_end(self, **kwargs):
        # re-impose the element-wise constraints each batch, e.g.:
        self.learn.model.m1.weight.data[1, 0] = 0.
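In plain PyTorch terms, on_backward_end corresponds to the point after loss.backward() and before optimizer.step(). A fastai-free sketch of the same idea (the zeroed gradient entry is my addition, so the step doesn't immediately undo the constraint; with the callback this matters less since it re-runs every batch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 2), torch.randn(8, 2)
loss = F.mse_loss(model(x), y)
loss.backward()

# The "on_backward_end" moment: gradients exist, step not yet taken.
with torch.no_grad():
    model.weight[1, 0] = 0.0       # e.g. force w3 == 0
    model.weight.grad[1, 0] = 0.0  # so the step doesn't undo it
opt.step()
```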

And added it to my learner's callbacks with: