Any way to use a metric (KappaScore) as the loss function for my Learner?

I’m a beginner to DL and ML in general, especially with fastai. I’ve been researching how to penalize my cnn_learner based on quadratic weighted kappa (QWK), and I found two papers that implement a custom loss function based on QWK and a reformulation as a simple squared error. Is there any way to use the already-written KappaScore metric as my loss function? Or do I need to write one based on nn.Module (I have no idea how to do this)? Thanks!

Hi @winterChroma,
You can definitely direct your learner to use a new loss function. You can set it at learner creation like this:
learn = Learner(data, models.xresnet50(),
                metrics=[accuracy, top_k_accuracy], wd=1e-3, opt_func=opt_func,
                bn_wd=False, true_wd=True, loss_func=LabelSmoothingCrossEntropy())

Coding the loss function itself is the harder part.
Do you already have PyTorch code for it?
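If you don’t have it yet: a PyTorch loss is just an nn.Module (or even a plain function) that takes the model output and the targets and returns a scalar tensor, which the learner then backprops through. A minimal skeleton (MyLoss is a placeholder name and the body just wraps cross-entropy, so you can see where your own math would go):

import torch.nn as nn
import torch.nn.functional as F

class MyLoss(nn.Module):
    # the learner calls loss_func(output, target) and backprops through
    # whatever scalar tensor this returns
    def forward(self, output, target):
        return F.cross_entropy(output, target)  # replace with your QWK-based math

learn = Learner(data, models.xresnet50(), loss_func=MyLoss())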

@LessW2020

I don’t. I haven’t used PyTorch yet, so I have no idea how to do it; I came to fastai straight from Andrew Ng’s old ML course.

Could you post the papers you mentioned? I’d be interested in implementing this, but I wouldn’t know how to reformulate QWK as a loss function without doing some reading.

If you want a crack at it yourself, I’d start by looking at how the current loss functions are implemented and go from there.


Here are the two papers, but one is behind a paywall.

https://www.sciencedirect.com/science/article/abs/pii/S0167865517301666
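
For anyone who finds this thread later: the usual trick for making QWK differentiable is to build a “soft” confusion matrix from the softmax probabilities instead of the argmax predictions, then minimise 1 - kappa. Below is a rough, untested sketch of that idea in PyTorch; the class name, the epsilon, and the exact normalisation are my own guesses, not necessarily the formulation used in either paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftQWKLoss(nn.Module):
    "Sketch of a differentiable 'soft' quadratic weighted kappa loss."
    def __init__(self, num_classes, eps=1e-9):
        super().__init__()
        self.num_classes, self.eps = num_classes, eps
        # quadratic disagreement weights: w[i, j] = (i - j)^2 / (C - 1)^2
        idx = torch.arange(num_classes).float()
        self.register_buffer('w', (idx[:, None] - idx[None, :]) ** 2 / (num_classes - 1) ** 2)

    def forward(self, logits, target):
        p = F.softmax(logits, dim=1)                     # (N, C) soft predictions
        y = F.one_hot(target, self.num_classes).float()  # (N, C) one-hot targets
        w = self.w.to(p.device)
        O = y.t() @ p                                    # observed "soft" confusion matrix
        E = y.sum(0)[:, None] * p.sum(0)[None, :] / y.shape[0]  # expected matrix under independence
        num = (w * O).sum()
        den = (w * E).sum() + self.eps
        return num / den                                 # equals 1 - kappa, so lower is better

Then it should just be learn = cnn_learner(data, models.resnet34, loss_func=SoftQWKLoss(num_classes=data.c)), keeping KappaScore as a metric so you can still monitor the real QWK. Again, treat this as a sketch to check against the papers, not a drop-in implementation.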