Learnable threshold in PyTorch

Hi all,

I am trying to implement this paper but have got a little stuck.

What I need to do is as follows:

  1. Take a tensor, p, of dimensions (bs, ch, h, w)
  2. From this subtract a learnable threshold/bias tensor of dimensions (1, ch, 1, 1), i.e. subtract a value for each channel.
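The per-channel subtraction in step 2 is just broadcasting: reshape the threshold to (1, ch, 1, 1) and PyTorch expands it across the batch and spatial dimensions. A minimal sketch (the sizes here are made-up examples):

```python
import torch

bs, ch, h, w = 2, 3, 4, 4           # example sizes, not from the paper
p = torch.randn(bs, ch, h, w)
threshold = torch.rand(1, ch, 1, 1)  # one value per channel
out = p - threshold                  # broadcasts over bs, h, w
assert out.shape == (bs, ch, h, w)
```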

It seems simple, but I can't get any tensor I define to update in the backward pass. I tried using:

self.threshold = nn.Parameter(torch.rand(channels), requires_grad=True)

and variations on it, but the threshold still never updates. It's entirely possible I've also missed a simpler way of solving this problem.

Any help much appreciated.

Thanks in advance,

Mark


Update: I think the way to do it is to define the operation inside an nn.Module subclass, as follows:

import torch
import torch.nn as nn

class BiasLayer(torch.nn.Module):
    def __init__(self, n_channels, subtract=True):
        '''Subtract a learnable per-channel bias, or multiply by the negative bias
        https://academic.oup.com/bioinformatics/article/32/12/i52/2288769 '''
        super().__init__()
        # nn.Parameter registers the tensor with the module, so the optimizer
        # sees it; requires_grad=True is already the default for Parameters
        self.bias = nn.Parameter(torch.rand(n_channels))
        self.subtract = subtract

    def forward(self, x):
        # reshape to (1, ch, 1, 1) so it broadcasts against (bs, ch, h, w);
        # note unsqueeze(0) alone gives (1, ch), which broadcasts wrongly
        bias_clamped = self.bias.clamp(min=0, max=1).view(1, -1, 1, 1)
        if self.subtract:
            return x - bias_clamped
        else:
            return -x * bias_clamped
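A quick sanity check that gradients now reach the bias (the layer is repeated here so the snippet runs standalone; the sizes are made-up examples):

```python
import torch
import torch.nn as nn

class BiasLayer(nn.Module):
    def __init__(self, n_channels, subtract=True):
        super().__init__()
        self.bias = nn.Parameter(torch.rand(n_channels))
        self.subtract = subtract

    def forward(self, x):
        # (1, ch, 1, 1) broadcasts against (bs, ch, h, w)
        bias_clamped = self.bias.clamp(min=0, max=1).view(1, -1, 1, 1)
        if self.subtract:
            return x - bias_clamped
        return -x * bias_clamped

layer = BiasLayer(3)
x = torch.randn(2, 3, 8, 8)
out = layer(x)
out.sum().backward()
# a non-None grad confirms the parameter participates in backprop
assert layer.bias.grad is not None
```

An optimizer constructed with `layer.parameters()` will then update the bias each step, which is what was failing before.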