Strange result of NLLLoss with class weights

I wanted to set class weights in a binary classification problem. My code looked like this:

import torch
from fastai.text import *  # fastai v1: text_classifier_learner, AWD_LSTM, FlattenedLoss

weights = [1, 20]
class_weights = torch.FloatTensor(weights).cuda()
class_learner = text_classifier_learner(data_cls, AWD_LSTM)
class_learner.loss_func = torch.nn.NLLLoss(weight=class_weights)

and I got negative loss values. At first they were quite small, around -0.01, but then they kept drifting towards minus infinity. Does anyone know why this might happen? I also tried flattening the loss function like this, but the results were the same:

def NLLLossFlat(*args, axis:int=-1, **kwargs):
    # Same pattern as fastai's CrossEntropyFlat, but wrapping torch.nn.NLLLoss instead
    return FlattenedLoss(torch.nn.NLLLoss, *args, axis=axis, **kwargs)
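
In case it helps, here is a minimal standalone snippet (plain PyTorch with made-up tensors, no fastai) where weighted NLLLoss also goes negative once its inputs are not log-probabilities. I am not sure whether this is actually what happens inside the learner, so take it as a sketch of the behaviour rather than a diagnosis.

import torch

weights = torch.FloatTensor([1, 20])
loss_fn = torch.nn.NLLLoss(weight=weights)
targets = torch.tensor([1, 0, 1])

# Raw, unnormalised scores, like logits straight out of a linear head
raw_scores = torch.tensor([[0.2, 5.0], [3.0, 0.1], [0.5, 8.0]])
print(loss_fn(raw_scores, targets))   # negative, about -6.41

# The same scores passed through log_softmax give a small positive loss
log_probs = torch.log_softmax(raw_scores, dim=-1)
print(loss_fn(log_probs, targets))    # positive, close to 0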

By the way, when I used CrossEntropyFlat instead of torch.nn.NLLLoss, it worked fine.
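
For completeness, the working variant looked roughly like this (written from memory, fastai v1 naming, and assuming data_cls is the same databunch as above):

import torch
from fastai.text import *  # CrossEntropyFlat should come in with the star import

weights = [1, 20]
class_weights = torch.FloatTensor(weights).cuda()
class_learner = text_classifier_learner(data_cls, AWD_LSTM)
# nn.CrossEntropyLoss, which CrossEntropyFlat wraps, applies log_softmax itself
class_learner.loss_func = CrossEntropyFlat(weight=class_weights)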