So it turned out to be much easier than expected. After a little digging through the code, I saw that *RNN_Learner* overrides its superclass *Learner*'s **_get_crit** method to simply return PyTorch's *F.cross_entropy* function. That function already accepts a `weight` argument, so you can just pass your calculated weights like this:

```
from functools import partial
import torch
import torch.nn.functional as F

loss_weights = torch.FloatTensor(trn_weights).cuda()
learn.crit = partial(F.cross_entropy, weight=loss_weights)
```
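In case it helps to see what the `weight` argument actually does: with the default `reduction='mean'`, PyTorch scales each sample's loss by the weight of its target class and normalizes by the sum of those weights. Here is a minimal pure-Python sketch of that behavior (just the math, not the fastai or PyTorch code):

```python
import math

def weighted_cross_entropy(logits, targets, weight):
    # Mirrors F.cross_entropy with reduction='mean': each sample's
    # negative log-likelihood is scaled by its target class weight,
    # and the total is normalized by the sum of those weights.
    total, weight_sum = 0.0, 0.0
    for row, y in zip(logits, targets):
        m = max(row)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(v - m) for v in row))
        log_prob_y = row[y] - log_z
        total += -weight[y] * log_prob_y
        weight_sum += weight[y]
    return total / weight_sum
```

With equal weights this reduces to the ordinary mean cross entropy; raising a class's weight pulls the loss toward the samples of that class.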

I calculated my weights simply with this code:

```
# Per-class counts and row totals for the training and validation sets
trn_labelcounts = df_trn.groupby(["labels"]).size()
val_labelcounts = df_val.groupby(["labels"]).size()
trn_label_sum = len(df_trn["labels"])
val_label_sum = len(df_val["labels"])
# Class proportions (each list sums to 1)
trn_weights = [count / trn_label_sum for count in trn_labelcounts]
val_weights = [count / val_label_sum for count in val_labelcounts]
trn_weights, val_weights
```
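For a concrete picture of what those lines produce, here is the same computation run on a tiny made-up `df_trn` (the labels and counts are hypothetical):

```python
import pandas as pd

# Toy stand-in for df_trn, just to show the shapes involved
df_trn = pd.DataFrame({"labels": ["a", "a", "a", "b", "c", "c", "c", "c", "c", "c"]})

trn_labelcounts = df_trn.groupby(["labels"]).size()   # counts per class: a=3, b=1, c=6
trn_label_sum = len(df_trn["labels"])                 # total number of rows: 10
trn_weights = [count / trn_label_sum for count in trn_labelcounts]
print(trn_weights)  # → [0.3, 0.1, 0.6]
```

Note that `groupby(...).size()` sorts by label, so the weight order matches the sorted class labels, and the weights are the class proportions, summing to 1.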

To check that your weights were passed correctly, you can simply print the criterion:

`print(learn.crit)`

which should return something like:

```
functools.partial(<function cross_entropy at 0x00000282813B3268>, weight=tensor([0.1815, 0.1816, 0.1414, 0.4956], device='cuda:0'))
```

If you have any trouble, don't hesitate to write me @knesgood