Getting an error "AssertionError: ==: 64 384" when trying to use error_rate as a metric for a Multi-Label bear classifier (full code gist included)

Hey folks, I’m rewriting my “Bear Classifier” model to use MultiCategoryBlock, so that it can apply more than one label (or no label at all) to the images it analyzes. The model trains successfully when I use the default metrics, although at first the numbers seemed much higher than the ones output by the original model.

That’s when I noticed that I was looking at the default “valid_loss” column (which, if I’ve understood correctly up to this point, is the loss rather than the metric we actually care about). The original bear classifier (the one which used CategoryBlock instead of MultiCategoryBlock) passed metrics=error_rate to the cnn_learner function, so I thought I’d try that here too.

When I did so, I got the following error with a backtrace (which I included in the gist):

```
AssertionError: ==:
64
384
```

I made a Github gist with my Jupyter Notebook code and the full stack trace of the error, for context.

The error is raised from /opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/test.py in test(a, b, cmp, cname), but the message doesn’t immediately reveal how adding the metrics param led to it. Does anyone recognize this line of code or know what these two numbers represent? Any suggestions for debugging would be much appreciated.
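One guess at what the two numbers mean (this is speculation, not confirmed from the traceback): error_rate argmaxes over the class dimension, yielding one predicted class index per sample, while multi-label targets carry one 0/1 value per (sample, label) pair, so the two sides of the comparison have different element counts. The pair 64/384 would be consistent with a batch size of 64 and six labels:

```python
# Hypothetical illustration of the count mismatch. The batch size of 64
# is visible in the error; the label count of 6 is an assumption chosen
# because 384 / 64 == 6.
batch_size, n_labels = 64, 6

preds_count = batch_size              # argmax -> one class index per sample
targets_count = batch_size * n_labels # one binary value per (sample, label)

print(preds_count, targets_count)     # the same pair of numbers as in the assertion
```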

error_rate is only for single-label classification, not multi-label. You’ll need to write your own multi-label error-rate function to use here instead.
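Following up on that advice, here’s a minimal pure-Python sketch of what such a metric could compute (real fastai metrics operate on tensors, and fastai also ships accuracy_multi, which handles the sigmoid-and-threshold step for you). The name multilabel_error_rate and the 0.5 threshold are my own choices, not anything from the library:

```python
import math

def multilabel_error_rate(preds, targs, thresh=0.5):
    """Fraction of individual label predictions that are wrong.

    preds: raw model outputs (logits); targs: 0/1 labels.
    Both are lists of lists with shape (n_samples, n_labels).
    """
    wrong = total = 0
    for row_p, row_t in zip(preds, targs):
        for p, t in zip(row_p, row_t):
            prob = 1 / (1 + math.exp(-p))     # sigmoid squashes the logit to (0, 1)
            pred = 1 if prob > thresh else 0  # threshold each label independently
            wrong += pred != t
            total += 1
    return wrong / total

# Two samples, two labels each: all four predictions correct -> 0.0
print(multilabel_error_rate([[2.0, -2.0], [-2.0, 2.0]], [[1, 0], [0, 1]]))
```

The key difference from error_rate: instead of picking the single argmax class per sample, each label gets its own independent yes/no decision, which is what MultiCategoryBlock targets require.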


OK thanks.

Separately, I just found the section of the docs which describes which metrics go with which problem types. Posting it here in case it helps others: