Loss that depends on model and outputs (not just outputs)

I’m trying to migrate from fastai v1.

My loss function needs to use the model itself, not just the output of the model. So what I used to do was wrap the model in a ModelWrapper like this:

from torch import nn

class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *x):
        # pass the model itself through along with the raw inputs,
        # so the loss function can use both
        return (self.model, *x)

That way my loss function could take the model (and the batch inputs) as arguments:

def loss_func(model, *x, y):
    # do stuff with model, x and y, then return a scalar loss
    ...
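
I’d then wire it up roughly like so (a sketch; data stands in for my actual DataBunch):

# fastai v1 wiring (sketch): pass the wrapped model and the loss to the Learner
learn = Learner(data, ModelWrapper(model), loss_func=loss_func)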

This used to work fine in fastai v1, but in fastai v2 I get:

TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not Sequential
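
As far as I can tell, fastai v2 now calls tensor utilities directly on whatever forward returns, and the first element of my tuple is the wrapped model (an nn.Sequential in my case), which reproduces the error on its own:

import torch
from torch import nn

# minimal repro: is_floating_point expects a Tensor, not a Module
torch.is_floating_point(nn.Sequential())
# TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not Sequential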

I love fastai, but I think fastai v2 assumes way too much about what data and models are supposed to look like.

Wouldn’t this work if you changed the order of the loss_func arguments and, for example, passed the model as a keyword argument?
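
Something like this, maybe (a sketch, assuming a standard fastai v2 Learner; dls and model are whatever you already pass to it):

from functools import partial

def loss_func(pred, y, model=None):
    # the model is available here via the bound keyword argument
    ...

learn = Learner(dls, model, loss_func=partial(loss_func, model=model))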