But my metric doesn’t seem to have any impact on my model’s predictions. Am I mistaken in thinking that the metric influences what’s predicted? I would have imagined that if I call
then the probabilities would be compared to 0.95, and a label would be considered predicted only if its probability is greater than that. But the ClassificationInterpretation is treating a label as predicted if its probability is over 0.6 or so. I’m definitely misunderstanding something here.
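To make my confusion concrete, here’s a tiny plain-Python sketch of the two behaviours I’m imagining (the numbers and labels are made up, not from my actual model):

```python
# Toy per-class probabilities for one sample.
# (Made-up numbers; my real outputs come from the model itself.)
probs = {"cat": 0.62, "dog": 0.30, "bird": 0.08}

# What I expected: a label counts as "predicted" only if its
# probability clears the threshold I passed to the metric (0.95).
predicted_at_95 = [label for label, p in probs.items() if p > 0.95]

# What ClassificationInterpretation seems to be doing: using a much
# lower bar (~0.6), so "cat" at 0.62 already counts as predicted.
predicted_at_60 = [label for label, p in probs.items() if p > 0.6]

print(predicted_at_95)  # []
print(predicted_at_60)  # ['cat']
```

With my threshold nothing would be predicted for this sample, yet the interpretation object still reports “cat” as the prediction.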
So is my model using some preset threshold for predictions? I guess I don’t understand how or when the model decides an output is confident enough to count as a prediction.
This is coming from interp.plot_top_losses(), by the way — I guess I wasn’t clear on that. Does ClassificationInterpretation consider a label as “predicted” when its probability is above some threshold? I’m not sure where that threshold would be coming from.