I am trying to apply Lesson 06 to an external dataset.

I need to figure out how to interpret what I see in `show_results` and correlate it with the `accuracy_multi` findings.

Here is my learner, loss and accuracies:

And here are the visual results:

Questions:

- I understand from the binary_cross_entropy section in the lesson that the metric is computed for each label in each image.

The accuracy was good, more than 80% according to the learner. But when I look at `show_results`, I see a lot of misclassifications.

What I need answered to close the loop is: what does the `accuracy_multi` in the learner reflect? Is it the accuracy per image, i.e., that all the target objects are predicted? Or is it per label, regardless of the image?

- In `show_results`, what do the labels represent? Is the top label the target, and the bottom one the prediction?

Thank you!

Maria

Hi Maria

It takes the **mean** of all predictions twice: the mean accuracy over every record in the test set, and the mean over all classifications per image. I guess this is not very helpful on its own… I encourage you to play around with the functions and check what happens to the data.

Create some test tensors and check what happens to the shape and content of the tensors as you go step by step through the computation.

What happens when you run `inp>thresh`?

And next, what happens when you run `(inp>thresh)==targ.bool()`?


```python
def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True):
    "Compute accuracy when `inp` and `targ` are the same size."
    if sigmoid: inp = inp.sigmoid()
    return ((inp>thresh)==targ.bool()).float().mean()
```
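To make that concrete, here is a small runnable sketch (pure PyTorch, with made-up numbers) stepping through the same computation. The final `.mean()` averages over every label slot of every image at once, so the metric is per label, not per image:

```python
import torch

def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True):
    "Compute accuracy when `inp` and `targ` are the same size."
    if sigmoid: inp = inp.sigmoid()
    return ((inp > thresh) == targ.bool()).float().mean()

# Two images, three labels each: raw activations and multi-hot targets.
inp  = torch.tensor([[ 2.0, -1.0,  0.5],
                     [-2.0,  3.0, -0.5]])
targ = torch.tensor([[1., 0., 1.],
                     [0., 1., 1.]])

probs   = inp.sigmoid()           # per-label probabilities, shape (2, 3)
preds   = probs > 0.5             # boolean prediction for every label slot
correct = preds == targ.bool()    # element-wise comparison, still (2, 3)

per_label = correct.float().mean()             # what accuracy_multi reports
per_image = correct.all(dim=1).float().mean()  # 1 only if ALL labels match

print(per_label)  # tensor(0.8333) -- 5 of 6 label slots are right
print(per_image)  # tensor(0.5000) -- only 1 of 2 images is fully right
```

So an image with one wrong label out of several still scores well on `accuracy_multi`, which is why the learner can report 80%+ while `show_results` still shows visible mistakes.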

For your second question: you can use `get_preds` and inspect the preds and the targets to find out. Try running `learn.show_results(shuffle=False)` to compare. If I'm not mistaken, it also uses the test set, and both functions start with the first batch.
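As a rough illustration of what the two rows of labels correspond to (pure PyTorch, with an invented vocabulary), the targets and the thresholded predictions are both decoded back to label strings:

```python
import torch

vocab = ['car', 'person', 'tree']          # hypothetical label vocabulary
targ  = torch.tensor([1., 0., 1.]).bool()  # ground-truth multi-hot vector
probs = torch.tensor([0.9, 0.2, 0.4])      # predicted probabilities, one image

def decode(mask):
    "Turn a boolean multi-hot mask back into label strings."
    return [v for v, m in zip(vocab, mask) if m]

print('target:   ', decode(targ))         # ['car', 'tree']
print('predicted:', decode(probs > 0.5))  # ['car']
```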


I wrote a blog post on this, to help out other beginners.

Link:
