I am trying to do my lesson 1 independent work by building an image recognizer for the SET card game. Quick summary: each card has one of 3 shapes, in one of 3 colors, with one of 3 fill patterns, and 1, 2, or 3 copies of the shape. So there are 4 features with 3 classes each: shape, color, fill, and number. I'm starting by training a model to recognize a single feature, and my assumption is that I can just repeat that 4 times. Problem number 1, of course, is training data.
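For dataset planning, the 4 features × 3 classes work out to 81 distinct cards, which is easy to sanity-check (the attribute names below are the standard SET vocabulary, not anything from my pipeline):

```python
from itertools import product

shapes = ["oval", "diamond", "squiggle"]
colors = ["red", "green", "purple"]
fills = ["solid", "striped", "open"]
numbers = [1, 2, 3]

# Every card is one combination of the four features
deck = list(product(shapes, colors, fills, numbers))
print(len(deck))  # 81 distinct cards
```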
Typically SET is played by looking at 12 cards and finding 3 that meet certain criteria. I'm assuming that extracting the individual cards from an image of the full layout is not well suited to DL, so I've built an OpenCV-based pipeline for extracting them. You can see it here:
So, my questions:
- Is this approach the right one for now? (get rectangular card images manually, train a model)
- As I progress, I'm sure I'll have more questions. What's the easiest way to share progress? Is a git repo with notebooks and images the expected way to interact?
- Since I am mostly taking pictures manually, does anyone have a sense of how many images per class I might need to differentiate among the three classes? 50? I suppose writing a mobile app to let me easily gather and tag more images is the best way. Does something like that exist?
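In lieu of a tagging app, one low-tech option I'm considering is encoding the labels in the filename at capture time and parsing them out when building the training set. A sketch of that idea, where the `color-shape-fill-number_index.jpg` naming scheme is just my hypothetical convention:

```python
from pathlib import Path

FEATURES = ("color", "shape", "fill", "number")

def parse_label(filename):
    """Parse a name like 'red-oval-striped-2_001.jpg' into feature labels."""
    stem = Path(filename).stem       # drop the extension
    label_part = stem.split("_")[0]  # drop the running photo index
    values = label_part.split("-")
    return dict(zip(FEATURES, values))

print(parse_label("red-oval-striped-2_001.jpg"))
# {'color': 'red', 'shape': 'oval', 'fill': 'striped', 'number': '2'}
```

That way each photo carries all four labels, and I can train any one of the four single-feature models from the same folder of images.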