Is overfitting sometimes okay?

I was thinking that the next step for @skalyan-anu’s nice “serif/sans” classifier might be inferring the font name of “any” font presented to it. On one hand, this might be considered a closed domain, where generalization is not so useful as memory, so it might be reasonable to overfit so as to rely more on the latter. Arial is Arial is Arial, especially if Arial Black and Arial Narrow are considered distinct fonts, at least on some operating systems.

On the other hand, different operating systems might render fonts slightly differently, and the presented data might come from physical scans containing noise. In that case, generalization rises in importance.
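To make the noise point concrete, here is a minimal sketch (my own illustration, not anything from the course): a purely “memorizing” classifier that matches stored pixel arrays exactly works perfectly on clean renders but fails on a simulated noisy scan. The function names and the tiny 8×8 “glyph” are hypothetical stand-ins.

```python
import numpy as np

def add_scan_noise(img, sigma=10.0, seed=0):
    """Hypothetical augmentation: simulate scanner noise by adding
    Gaussian pixel noise to a grayscale image array (values 0-255)."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def memorize_predict(exemplars, img):
    """A 'memorizing' classifier: exact pixel match against stored exemplars."""
    for name, ex in exemplars.items():
        if np.array_equal(ex, img):
            return name
    return None  # fails on anything it hasn't literally seen before

clean = np.full((8, 8), 128, dtype=np.uint8)  # stand-in for a rendered glyph
exemplars = {"Arial": clean}

print(memorize_predict(exemplars, clean))                  # → Arial
print(memorize_predict(exemplars, add_scan_noise(clean)))  # → None
```

Pure memory is fine while the inputs are bit-identical; the moment scan noise perturbs even one pixel, the closed-domain assumption stops paying off.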

Curious how long-time practitioners might muse about such trade-offs.


I’m not quite sure how “generalization is not so useful as memory” applies in this situation. The classifier would still have to generalise over different font sizes, colors, orientations, etc.

The main difference from my existing model is that it would have a much larger set of classes to learn (though still a more or less closed domain, as you point out, at least if we restrict it to widely-used fonts).
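The “much larger set of classes” mainly changes the size of the final layer: one output per font name. A minimal numpy sketch of that idea (the font list and the random stand-in embedding are my own hypothetical examples, not the actual model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical closed-domain label set: widely-used fonts only.
fonts = ["Arial", "Arial Black", "Arial Narrow", "Helvetica", "Times New Roman"]

rng = np.random.default_rng(42)
features = rng.normal(size=512)                    # stand-in for a CNN embedding
head = rng.normal(size=(len(fonts), 512)) * 0.01   # final layer: one row per class

probs = softmax(head @ features)   # one probability per font name
print(fonts[int(np.argmax(probs))])
```

Going from 2 classes to hundreds only grows this last layer; the harder part is collecting enough labelled samples per font so the extra rows are actually learnable.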

Another project that would be interesting is to build a system that assigns a font name to a given image of text, trained on the names of existing fonts. Such a system could generate “thematically appropriate” names for fonts it has never seen before; it would indeed have to generalise over observed mappings between images of text and font names (which may have been what you were getting at). I know that there are architectures for transforming between images and text, but we haven’t covered those in the course, and I’m not yet confident enough to work it out on my own.

Thanks for your interest in my font classifier! It’s nice to have my work noticed :). By the way, you may be interested in the underlying Kaggle notebook: https://www.kaggle.com/code/skalyan91/is-it-a-bird-creating-a-model-from-your-own-data.
