Best practices for showing model confidence

I know this has been asked many times, but I’m curious whether anything has changed recently. For example, take a cat vs dog image classifier.

We can train a vision model that outputs the probability it’s a cat vs the probability it’s a dog. But how should we present the confidence in that outcome? E.g., maybe it’s 30% cat and 70% dog, but is there a way to show some confidence metric that the image is either one at all?
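
To make that concrete, here’s a minimal sketch of the standard softmax-based metrics (assuming a PyTorch model with a two-logit head; `confidence_metrics` is just an illustrative name). Note that both metrics only measure confidence *between* cat and dog, which is exactly the limitation I’m asking about:

```python
import math

import torch
import torch.nn.functional as F

def confidence_metrics(logits: torch.Tensor) -> dict:
    """Summarize how peaked the predicted class distribution is.

    logits: shape (num_classes,), raw scores from the classifier head.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum().item()
    return {
        "probs": [round(p, 3) for p in probs.tolist()],
        "max_prob": probs.max().item(),                     # maximum softmax probability
        "norm_entropy": entropy / math.log(probs.numel()),  # 1 = uniform, 0 = certain
    }

# ~30% cat / 70% dog: uncertain *between* the two classes, but this says
# nothing about whether the image is a cat or a dog at all.
print(confidence_metrics(torch.tensor([0.0, 0.847])))
```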

Another way to rephrase this: if I wanted a cat vs not-cat detector, is there a way to feed in only pictures of cats and, using the model’s activations, get some sense of how many features are detected and how strongly they fire?
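
Here’s roughly what I mean in code, a sketch assuming a pretrained torchvision ResNet-18 as the backbone; the hook on `avgpool` and the particular statistics are just my guess at how to operationalize “how many features, how strongly”:

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

feats = {}
def hook(module, inp, out):
    feats["pool"] = out.flatten(1)  # (batch, 512) pooled feature vector

model.avgpool.register_forward_hook(hook)

def feature_summary(img) -> dict:
    """img: a PIL image. Returns coarse activation statistics."""
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        model(x)
    f = feats["pool"][0]
    return {
        # fraction of feature channels with any positive activation
        "frac_active": (f > 0).float().mean().item(),
        "mean_activation": f.mean().item(),
        "max_activation": f.max().item(),
    }
```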

After researching this more, could I just use a one-class classification approach?

For instance, in the cat vs dog case I could use a standard CNN classifier to get the relative probability of cat vs dog, and then fit a one-class classifier separately on the cat and dog training sets to estimate how similar a new image is to cats alone or to dogs alone.
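
Something like this sketch, assuming `cat_feats` and `dog_feats` are `(n_samples, d)` feature embeddings extracted beforehand (e.g. with the ResNet backbone above); `OneClassSVM` is one standard choice here, and `IsolationForest` would be another:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_one_class(feats: np.ndarray) -> OneClassSVM:
    # nu bounds the fraction of training points treated as outliers;
    # 0.05 is an arbitrary illustrative choice.
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(feats)

# cat_model = fit_one_class(cat_feats)
# dog_model = fit_one_class(dog_feats)

def membership_scores(x: np.ndarray, cat_model, dog_model) -> dict:
    """x: (1, d) embedding of a new image. Higher decision_function
    values mean 'more like the training class'; negative means the
    image sits outside that class's learned boundary."""
    return {
        "cat_score": float(cat_model.decision_function(x)[0]),
        "dog_score": float(dog_model.decision_function(x)[0]),
    }
```

The idea being that the softmax output handles the relative cat-vs-dog question, while negative scores from *both* one-class models would flag an image as neither.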