Weighting the cross-entropy loss function for binary classification

Hi everyone. I am dealing with the Breast Histopathology Images dataset from Kaggle. The class distribution is:

  • 198,738 negative examples (i.e., no breast cancer)
  • 78,786 positive examples (i.e., indicating breast cancer was found in the patch)
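A common way to choose such weights (rather than picking them by hand) is inverse class frequency, normalized over the number of classes. The sketch below is a hypothetical derivation from the counts above, not taken from the original post; note that the resulting negative:positive ratio of about 1:2.52 is close to the 0.4:1 ratio used later.

```python
# Hypothetical sketch: derive per-class weights from the class counts
# above using inverse class frequency (weight_c = total / (n_classes * count_c)).
n_neg, n_pos = 198_738, 78_786
total = n_neg + n_pos
n_classes = 2

w_neg = total / (n_classes * n_neg)  # weight for the majority (negative) class
w_pos = total / (n_classes * n_pos)  # weight for the minority (positive) class

print(w_neg, w_pos)  # ≈ 0.70 and ≈ 1.76
```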

I am defining the loss function as follows (adapted from here):

# Assign the class weights and move them to the GPU
import torch
from torch import nn

weights = [0.4, 1.0]
class_weights = torch.FloatTensor(weights).cuda()

learn = cnn_learner(data, models.resnet50, metrics=[accuracy]).to_fp16()
learn.loss_func = nn.CrossEntropyLoss(weight=class_weights)
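As a quick sanity check (a standalone CPU sketch with made-up logits, not part of the training code above), you can verify how `nn.CrossEntropyLoss` applies per-class weights: with the default `reduction='mean'`, the per-sample losses are scaled by the weight of their target class and divided by the sum of those weights.

```python
import torch
from torch import nn

# Two toy samples, two classes: sample 0 is negative (class 0),
# sample 1 is positive (class 1).
logits = torch.tensor([[2.0, 0.5], [0.3, 1.5]])
targets = torch.tensor([0, 1])

# Unreduced per-sample losses, for reference
per_sample = nn.CrossEntropyLoss(reduction="none")(logits, targets)

# Weighted mean: each sample's loss is scaled by its class weight,
# and the sum is divided by the total weight of the targets (0.4 + 1.0).
weighted = nn.CrossEntropyLoss(weight=torch.tensor([0.4, 1.0]))(logits, targets)
expected = (0.4 * per_sample[0] + 1.0 * per_sample[1]) / 1.4
```

So with `weights = [0.4, 1]`, mistakes on the positive (cancer) class cost 2.5x as much as mistakes on the negative class.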

Is this the right way, or is there a better approach for this case? Thank you in advance.


[Changing default loss functions](http://this post) answers the question, I believe. Also, fastai loss functions are indeed torch objects.

When doing my research, this was the first result; I hope it will help future visitors too.

Hi Sayak! How did you define those weights?