We often discuss how to avoid bias and how to deal with imbalanced training data.
But what if we are in a scenario where bias is expected in production and does not create ethical issues?
For example, for a vegetable classifier in a supermarket: if we have stats about purchase history, that's probably information we could leverage without ethical concerns, and it should improve accuracy, especially when the model is uncertain.
Are there best practices to achieve this? A few ideas that come to mind:
- Naive approach: take a weighted average of the predicted probabilities and the class frequencies observed in production, then take the argmax. With a small weight on the frequencies, this should be enough to "flip" low-confidence predictions.
- The same, but as a final layer in the model, with the weights being learnable parameters.
- Similar to target normalization in regression, we could normalize our one-hot encoded target vectors by the class frequencies, so the model learns the deviation from the average class frequency.
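To make the first idea concrete, here's a minimal sketch in numpy. All the numbers are made up for illustration (4 vegetable classes, a single uncertain prediction, and assumed purchase-history frequencies):

```python
import numpy as np

# Hypothetical values: `probs` is the classifier's softmax output for one
# image, `freq` the class frequencies taken from purchase-history stats.
probs = np.array([0.40, 0.38, 0.12, 0.10])  # uncertain: top-2 classes are close
freq = np.array([0.05, 0.60, 0.20, 0.15])   # class 1 sells far more often

def blend(probs, freq, w=0.2):
    """Weighted average of model output and production class frequencies."""
    mixed = (1 - w) * probs + w * freq
    return mixed / mixed.sum()  # renormalize (already sums to 1 here, but safe)

mixed = blend(probs, freq)
print(probs.argmax())  # 0: raw model prediction
print(mixed.argmax())  # 1: low-confidence prediction flipped by the prior
```

A high-confidence prediction (e.g. `probs = [0.9, 0.05, 0.03, 0.02]`) would survive the same blending unchanged, which is the behavior we'd want.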
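The second idea (a learnable final layer) could be reduced to its simplest form: a per-class bias added to the frozen model's logits, trained with cross-entropy on production-distributed data. A numpy sketch with synthetic data and a hand-derived gradient (for mean softmax cross-entropy, the gradient w.r.t. the bias is `mean(softmax - onehot)`); everything here is an illustrative assumption, not a tested recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 4
true_freq = np.array([0.05, 0.60, 0.20, 0.15])  # assumed production frequencies

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for the frozen classifier: near-uniform random logits,
# i.e. a model that is uncertain on every example.
logits = rng.normal(0.0, 0.5, size=(2000, n_classes))
labels = rng.choice(n_classes, size=2000, p=true_freq)  # labels follow the prior
onehot = np.eye(n_classes)[labels]

# Learnable per-class bias acting as a final layer; plain gradient descent
# on mean cross-entropy, whose gradient w.r.t. the bias is mean(p - onehot).
bias = np.zeros(n_classes)
for _ in range(500):
    p = softmax(logits + bias)
    bias -= 1.0 * (p - onehot).mean(axis=0)

# The learned bias should roughly encode the log class frequencies (up to an
# additive constant), so the adjusted model now favors frequent classes.
learned_prior = softmax(bias[None, :])[0]
print(np.round(learned_prior, 2))
```

In a real model this bias would just be one more parameter tensor trained jointly (or after) the backbone; the point of the sketch is that the optimum it converges to is essentially the production class prior in log space.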
Do you know any papers or resources addressing this?