Class_weight

Do any of you know the proper method, or have an example, of using class_weight with either fit or fit_generator when you have a multi-output model?

I have an MSE output as well as a binary cross-entropy output (think fisheries competition), and I’d like to assign class weights as a function of the class type, but not have them affect the bounding-box output. I’m guessing that simply mapping a class index -> weight isn’t going to work in this case…

For the recent Kaggle Lung Cancer Prediction competition there was a 3:1 ratio between the no-cancer and cancer classes, so in order to run some experiments with balanced classes I used the class_weight parameter of fit_generator.

Given class 0=no cancer, class 1=cancer, I defined a dictionary:
my_class_weight = {0: 1., 1: 3.}

Then I was able to use it with fit_generator:
model.fit_generator(…, class_weight=my_class_weight)
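For completeness, here is a minimal self-contained sketch of that call (the model, input size, and generator are made up purely for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# toy binary classifier, standing in for the real lung-cancer model
model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='adam', loss='binary_crossentropy')

# hypothetical generator yielding (x, y) batches
def batch_generator(batch_size=32):
    while True:
        x = np.random.rand(batch_size, 10)
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

my_class_weight = {0: 1., 1: 3.}  # upweight the rarer 'cancer' class
model.fit_generator(batch_generator(), steps_per_epoch=10, epochs=1,
                    class_weight=my_class_weight)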

Generalizing to multi-class should be simple (see the dictionary sketch below); however, I do not clearly understand the implications of also having a bounding-box output.
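In the multi-class case, the dictionary simply gets one entry per class index. A hypothetical example for a 4-class problem:

my_class_weight = {0: 1., 1: 3., 2: 2., 3: 1.5}  # one weight per class index
model.fit_generator(train_generator, steps_per_epoch=100, class_weight=my_class_weight)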

Check the last comment here: https://datascience.stackexchange.com/questions/13490/how-to-set-class-weights-for-imbalanced-classes-in-keras

I think none of these answers truly answers the question from @haresenpai.
The question is: how can we add weights to the loss function when the target is multi-hot encoded? This is the typical multi-label setting, where a given sample can be assigned to several classes at once and is encoded like [0 1 0 0 1 0 0 0 1], so the loss function used is binary_crossentropy, not categorical_crossentropy.

I am interested because I have the very same question.
In essence, each output class is independent of the others, and the problem reduces to n simple yes/no classification problems, for each of which we could use a .fit(…, class_weight = {0: x, 1: y}), with x and y being weight corrections for the imbalance between the yes and no samples of that class.
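As a concrete example of choosing x and y for one of those yes/no problems, the usual 'balanced' heuristic weighs each class inversely to its frequency (a sketch with made-up labels; sklearn's compute_class_weight with the 'balanced' option computes the same thing):

import numpy as np

y = np.random.randint(0, 2, size=1000)  # hypothetical labels for one class column

n_samples = len(y)
n_pos = y.sum()
n_neg = n_samples - n_pos

# n_samples / (n_classes * count_per_class): the rarer class gets the larger weight
class_weight = {0: n_samples / (2. * n_neg), 1: n_samples / (2. * n_pos)}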

However, in my case, I have around 200 classes, each one heavily imbalanced towards ‘no’. I am not exactly excited about training 200 different models sequentially just to properly correct all my imbalances, when, without correcting the imbalance, I can train all 200 classifications jointly.

If only I could pass a list of weight dictionaries, one per class, that would solve the problem:
class_weight = [{0: x1, 1:y1}, {0: x2, 1: y2}, …]
Wouldn’t that be neat?

OK, from my little investigation, it seems that, in the Keras code, the class_weight parameter works correctly for 2D targets only in single-label classification, not in multi-label classification.

In more detail:
in keras/engine/training.py we have the following condition:

if y.shape[1] > 1:
    y_classes = np.argmax(y, axis=1)

This means that if the target for a given sample is [1 0 0 1 1 0], the sample is considered to belong to class 0, and class 0 only (the ones at indices 3 and 4 are ignored), and is weighted according to class_weight[0] from the user-provided class_weight dictionary.
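You can see the collapse in a couple of lines of numpy:

import numpy as np

y = np.array([[1, 0, 0, 1, 1, 0]])
print(np.argmax(y, axis=1))  # -> [0]: only the first 1 counts, the labels at 3 and 4 vanish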

So currently we can say: no, Keras does not support class_weight in a multi-label classification task.
Bummer…

So what to do?
I hope the fantastic brains floating around this forum will help us find another way than training many single models in a for loop…

@Ptilulu I found this stackoverflow reply that I think can do what you described.
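For reference, here is a sketch of the kind of per-label weighted binary cross-entropy such replies typically propose (the function name and pos_weights are my own, and this is not necessarily identical to the linked reply):

import keras.backend as K

def weighted_binary_crossentropy(pos_weights):
    # pos_weights: one positive-class weight per label, shape (n_labels,)
    pos_weights = K.constant(pos_weights)
    def loss(y_true, y_pred):
        bce = K.binary_crossentropy(y_true, y_pred)     # element-wise, shape (batch, n_labels)
        weights = y_true * pos_weights + (1. - y_true)  # weight positives, negatives stay at 1
        return K.mean(weights * bce, axis=-1)
    return loss

# usage with hypothetical weights for 3 labels:
# model.compile(optimizer='adam', loss=weighted_binary_crossentropy([3., 1.5, 10.]))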