Evaluation Measures

How can I get the accuracy given the true labels t and the predicted labels p? I am getting lost!
t = (1,3,2,4,3,2,1,4,3,2,3,4)
p = (1,2,2,4,4,2,1,4,1,2,3,4)

I made it in https://colab.research.google.com/drive/1zp256J5rgDShMBZ-mL_SSUB-dZPYOCSF?authuser=2#scrollTo=OZQiUbItxU8u

You need to convert your predictions to one-hot encoding to mimic the probabilities output.

import torch

def one_hot_embedding(labels, num_classes):
    """Embed labels in one-hot form.

    Args:
      labels: (LongTensor) class labels, sized [N,].
      num_classes: (int) number of classes.

    Returns:
      (tensor) encoded labels, sized [N, #classes].
    """
    y = torch.eye(num_classes)  # identity matrix: row i is the one-hot vector for class i
    return y[labels]            # index rows by label to get one-hot vectors

p_one_hot = one_hot_embedding(torch.tensor(p), number_of_classes)
accuracy(p_one_hot, t)

NOTE: I assume that the encoding of your p and t is zero-based (i.e., the first class id is 0).
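If you just want the accuracy number itself, you can also compute it directly from the two label sequences, without going through one-hot encoding. A minimal pure-Python sketch using your t and p (the helper name accuracy_from_labels is mine, not a library function):

def accuracy_from_labels(true_labels, pred_labels):
    """Fraction of positions where the prediction matches the truth."""
    correct = sum(int(a == b) for a, b in zip(true_labels, pred_labels))
    return correct / len(true_labels)

t = (1,3,2,4,3,2,1,4,3,2,3,4)
p = (1,2,2,4,4,2,1,4,1,2,3,4)
print(accuracy_from_labels(t, p))  # 9 of 12 match -> 0.75

The one-hot route is mainly useful when the accuracy function you are calling expects per-class probability scores rather than hard labels.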


I have a CSV dataset and I want to use two or more clustering algorithms to build an unsupervised time-series classifier that identifies characteristic day-length patterns. Each column in the CSV contains sensor measurements of the same kind: light levels in a room (units in Lux). I want to use appropriate quantitative metrics to determine the number of time-series clusters and to evaluate their quality. Given the data and the differences between the algorithms, I also want to speculate on why a given method yielded quantitatively better clusters. I have tried many different approaches but have not been successful; I think I am going wrong somewhere in my overall direction. Any ideas?
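One common way to pick the number of clusters quantitatively is to fit k-means for a range of k and compare silhouette scores (higher is better). Below is a minimal sketch of that idea; the data here is synthetic stand-in "day profiles" I made up (your real input would be rows built from your CSV columns), and the choice of k-means plus silhouette is just one of several valid metric/algorithm pairs:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical stand-in for daily light profiles: each row is one day's
# Lux readings; three distinct "day patterns" plus noise.
rng = np.random.default_rng(0)
base = np.array([[0, 0, 5, 50, 80, 50, 5, 0],    # short bright day
                 [0, 5, 30, 60, 60, 60, 30, 5],  # long moderate day
                 [0, 0, 0, 10, 15, 10, 0, 0]])   # dim day
X = np.vstack([b + rng.normal(0, 2, size=(40, 8)) for b in base])

# Score each candidate k by the silhouette of the resulting labeling.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k)  # picks 3 for these well-separated synthetic patterns

For comparing *why* one algorithm did better, computing the same metric (silhouette, or Davies-Bouldin / Calinski-Harabasz) over each algorithm's labels gives you a like-for-like number to reason from; note that plain k-means assumes Euclidean distance between profiles, which may or may not suit your day-length shapes (elastic measures such as DTW are an alternative for time series).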