Default metrics in multi-label classification

(Alexander) #1

I would like to know what the default metric is for multi-label classification. It is shown as ‘accuracy’ for binary classification, but shows up as ‘lambda’ for multi-label.


Do you have any idea how the sigmoid works for a multi-label problem?
I got stuck in the satellite notebook. Is it true that we just replace the softmax with a sigmoid in the final layer? But how can a sigmoid output an N-d vector? Or do we have to train N sigmoids, one for each of the N labels?
What I also don’t understand is how to calculate the loss for multi-label classification.
If you have any clue about this, please share some details, thanks!

(Aditya) #3

Use the binary cross-entropy loss function, applied to each label independently.
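To make that concrete, here is a minimal pure-Python sketch of per-label binary cross-entropy (the function name and the example numbers are just illustrative, not from any particular library):

```python
import math

def binary_cross_entropy(preds, targets):
    """Mean BCE over all labels: preds are sigmoid outputs in (0, 1),
    targets are 0/1 multi-hot labels."""
    eps = 1e-7  # clamp to avoid log(0)
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(preds)

# predictions [0.8, 0.4, 0.6, 0.1] against true labels [1, 0, 1, 0]
loss = binary_cross_entropy([0.8, 0.4, 0.6, 0.1], [1, 0, 1, 0])
```

In practice you would use your framework’s built-in version (e.g. PyTorch’s `nn.BCEWithLogitsLoss`, which also folds the sigmoid in for numerical stability), but the arithmetic is the same.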


Thanks! But what I don’t understand is how sigmoid could output something like [0.8, 0.4, 0.6, 0.1, …]. I mean, sigmoid is just 1-d, right?

(Aditya) #5

Sigmoid and softmax are not the same thing.
With softmax, the probabilities necessarily add up to 1; with sigmoid they don’t.
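A quick sketch of the difference (plain Python, illustrative numbers): sigmoid is applied to each logit independently, while softmax normalises across all of them.

```python
import math

def sigmoid(xs):
    # applied independently to each logit; outputs need NOT sum to 1
    return [1 / (1 + math.exp(-x)) for x in xs]

def softmax(xs):
    # normalised over all logits; outputs ALWAYS sum to 1
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, -1.0, 0.5]
sig = sigmoid(logits)   # three independent probabilities
soft = softmax(logits)  # one distribution over three classes
```

This is exactly why sigmoid fits multi-label problems: each output can be high or low on its own, whereas softmax forces the classes to compete.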

(Alexander) #6

Sigmoid just gives a probability from 0 to 1 for each label.

(Aditya) #7

The sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression (a.k.a. MaxEnt, multinomial logistic regression, softmax Regression, Maximum Entropy Classifier).

The answer is from a regression point of view, but it generalizes.


yes, I know what you mean, but what i don’t get is sigmoid just output a probability between [0, 1], but when it comes to multi-label problem, how could we just use 1 sigmoid cell to get something like [0.8, 0.4, 0.6, 0.1, …] (and the real label is [1, 0, 1, 0,…])

Or do we have to use N sigmoid cells, one per label (so each one is a binary classification)?


A sigmoid in the last layer will yield N “binary classifiers” if you train with multi-hot-encoded labels over N classes, producing probabilities like the ones in your example.
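Putting it together, a minimal sketch of that last layer at inference time (the function name and threshold are illustrative assumptions): the network emits N logits, the sigmoid is applied elementwise, and each probability is thresholded independently to get a multi-hot prediction.

```python
import math

def predict_labels(logits, threshold=0.5):
    """One sigmoid per output unit: each of the N logits becomes an
    independent probability, then each is thresholded on its own."""
    probs = [1 / (1 + math.exp(-z)) for z in logits]
    preds = [1 if p > threshold else 0 for p in probs]
    return probs, preds

# e.g. 4 output units -> 4 independent "binary classifiers"
probs, labels = predict_labels([1.4, -0.4, 0.4, -2.2])
```

So there is no single sigmoid producing a vector: the final linear layer already has N outputs, and the same scalar sigmoid is simply applied to each one.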