Hi,
I’m currently using fast.ai v1 for three kinds of problems: image classification, image regression, and image multi-label classification. After learn.fit_one_cycle, I call res = learn.get_preds(DatasetType.Valid, with_loss=False) followed by metric(*res) to check the quality of the trained model on the validation data. The metric is accuracy, mean_absolute_error, and accuracy_thresh/fbeta for the three problems, respectively.
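Here is a minimal sketch of the multi-label case, just to show the workflow I mean. The dataset path, csv name, architecture, and number of epochs are placeholders, not my real setup:

```python
from fastai.vision import *   # fastai v1

# Hypothetical multi-label data block; paths and column layout are assumptions.
path = Path('data/multilabel')
data = (ImageList.from_csv(path, 'labels.csv', folder='train')
        .split_by_rand_pct(0.2)
        .label_from_df(label_delim=' ')
        .transform(get_transforms(), size=128)
        .databunch())

# Same metrics I use during training for the multi-label problem.
learn = cnn_learner(data, models.resnet34, metrics=[accuracy_thresh, fbeta])
learn.fit_one_cycle(4)

# Validation check after training: get_preds returns (predictions, targets).
res = learn.get_preds(DatasetType.Valid, with_loss=False)
print(accuracy_thresh(*res))   # comes out much lower than the value printed during training
print(fbeta(*res))             # same here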
The problem is that the metric values (accuracy_thresh and fbeta) for the multi-label classification look fine during training but come out very low in this validation call. After some experimentation, I found that setting the sigmoid parameter of the metric to False produces the expected values. So without this “patch”, the same metric works during training but not on the result of the get_preds call.
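For reference, this is the workaround. My guess (an assumption on my part) is that get_preds already applies the final sigmoid activation, so the metric applies it a second time unless told not to:

```python
# Same predictions/targets as above, taken from the validation set.
res = learn.get_preds(DatasetType.Valid, with_loss=False)

# With sigmoid=False the numbers match what was printed during training;
# with the default sigmoid=True they are far too low.
print(accuracy_thresh(*res, sigmoid=False))
print(fbeta(*res, sigmoid=False))
```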
I suspect it is a bug…