There isn’t. Generally you take the argmax of these probabilities and that is your prediction, which is how fastai does it. Argmax gives you the index of the highest value, and that index is the same whether you take it over the probabilities or over the raw pre-probability outputs (the logits).
So for example, my model doesn’t output values that sum to 1. Really it outputs what we call logits, which are a bunch of positive or negative numbers. In this example my model predicts three classes.
x,_ = dls.train.one_batch()
x will be a batch of data. I will then use raw PyTorch to get the model logits:
with torch.no_grad():
    logits = learn.model(x)
These logits (on a batch of 1) may look like the following:
tensor([[0.4, 10.2, -20.]])
What we then do is perform softmax and argmax to translate this into something comparable.
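For example, a minimal sketch of the softmax step (probs is just a name I’m using here, and the printed values are approximate):

probs = logits.softmax(dim=-1)
probs
tensor([[5.5449e-05, 9.9994e-01, 7.6610e-14]])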
These all sum to 1, and can be called the “probabilities”
The “class” that we say the image most represents is found by taking the argmax of that tensor (and since softmax is monotonic, the argmax is the same whether you take it over the logits or over the softmax’d probabilities):
logits.argmax(dim=-1)
tensor([1])
So if our dls.vocab is something like ['bird', 'snake', 'dog'], we want the name in position 1, so our model classified the input as a snake.
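In code, that lookup is just indexing the vocab with the argmax (a minimal sketch; pred_idx is simply a name I’m introducing, and it assumes dls.vocab can be indexed like a list, which it can for a standard classification DataBlock):

pred_idx = logits.argmax(dim=-1).item()
dls.vocab[pred_idx]
'snake'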
Does this make more sense?
(This is also what predict and get_preds are doing under the hood)
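For reference, the convenience methods look roughly like this (a sketch: img stands in for whatever single item you want to classify, and dls.valid for whichever DataLoader you care about):

# single item: returns the decoded class name, the class index, and the softmax'd probabilities
pred_class, pred_idx, probs = learn.predict(img)

# whole DataLoader: returns the softmax'd probabilities and the targets
probs, targets = learn.get_preds(dl=dls.valid)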