Image Classification Idea (Looking for opinions)

Hello,
So I’m working on a project involving image classification, and I’m looking for input from others on how best to implement it.
I’m planning to build an application that can classify many kinds of objects in images, like people, pets, cars, etc.
Now, instead of writing a single classifier and training it to distinguish between all of the above-mentioned categories, I’m going to write a generic binary classifier that simply learns to recognize a single object from positive and negative examples. For example, I would have one classifier for cars, another for people, and so on.
I think this approach gives me more flexibility because, as I want the application to learn new objects, I simply train another classifier and add it to the “master” model, which contains all the trained classifiers. A rough sketch of the structure I have in mind is below.
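This is only a minimal sketch (the layer sizes, input shape, and the MasterModel name are just placeholders, not my actual model):

```python
import tensorflow as tf

def make_binary_classifier():
    # The generic classifier: one object class, trained on positive/negative examples.
    # The architecture and input shape here are placeholders.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(object present)
    ])

class MasterModel:
    """The 'master' model: a registry of independently trained binary classifiers."""
    def __init__(self):
        self.classifiers = {}  # e.g. {"car": model, "person": model}

    def add_object(self, name, model):
        # Learning a new object = training one new classifier and registering it here.
        self.classifiers[name] = model

    def predict(self, images):
        # Run every classifier on the batch and return a score per object class.
        return {name: clf.predict(images, verbose=0)
                for name, clf in self.classifiers.items()}
```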

Does anyone have any thoughts about this?

As a side note, I have created the generic classifier from scratch with TensorFlow and it seems to work well so far. I also wrote one in Keras, which was a lot easier, but I got poor results. I’m just curious whether anyone has thoughts on why, whenever I write a classifier either truly from scratch (no ML frameworks) or with the base TensorFlow libraries, I seem to get better results right off the bat than with higher-level libraries like Keras or TensorFlow’s Estimator API. I never really dug into why, because once I see the initial poor results with a higher-level library I say “screw it” and just write one (painfully) from scratch, and it always comes out better.
Any thoughts?

Sounds like one-vs-rest classification. Using a single classifier with a softmax is actually a simplification of that approach, but I’m not 100% sure whether it’s mathematically equivalent (if it is, then using a softmax is the smarter approach).
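To make the comparison concrete, here’s a minimal sketch of the two output heads (the feature size and the three classes are just placeholders): one-vs-rest uses N independent sigmoids trained with binary cross-entropy, while a softmax head couples the N scores so they sum to 1.

```python
import tensorflow as tf

features = tf.keras.Input(shape=(128,))  # placeholder shared feature vector

# One-vs-rest: N independent sigmoids, each answering "is this class present?"
ovr_outputs = tf.keras.layers.Dense(3, activation="sigmoid")(features)
ovr_model = tf.keras.Model(features, ovr_outputs)
ovr_model.compile(optimizer="adam", loss="binary_crossentropy")

# Single softmax classifier: the 3 scores compete and sum to 1.
softmax_outputs = tf.keras.layers.Dense(3, activation="softmax")(features)
softmax_model = tf.keras.Model(features, softmax_outputs)
softmax_model.compile(optimizer="adam", loss="categorical_crossentropy")
```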

Thanks for the feedback. Yeah, the only issue with one-vs-all is that every time I want to add a new object to the classifier, I would need to retrain the entire model.

Why do you need to retrain the entire model? You could fine-tune the current one: just discard the last layer’s weights (the one outputting the N classes) and train a new output layer with N + 1 classes. It may also be possible to initialize the new weight tensor of size N + 1 with the current model’s weights, which could speed up learning even more.
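Here’s a rough sketch of what I mean, assuming a Keras model whose last layer is the N-way Dense output (the function name and the (N + 1)-way softmax head are just my assumptions about your setup):

```python
import tensorflow as tf

def extend_classifier(old_model, num_old_classes):
    # Keep everything up to (but not including) the old output layer as the backbone.
    backbone = tf.keras.Model(old_model.input, old_model.layers[-2].output)

    # New output layer with one extra class.
    new_head = tf.keras.layers.Dense(num_old_classes + 1, activation="softmax")
    new_model = tf.keras.Model(backbone.input, new_head(backbone.output))

    # Optionally seed the first N columns of the new head with the old head's
    # weights; the column for the new class keeps its random initialization.
    old_kernel, old_bias = old_model.layers[-1].get_weights()
    new_kernel, new_bias = new_head.get_weights()
    new_kernel[:, :num_old_classes] = old_kernel
    new_bias[:num_old_classes] = old_bias
    new_head.set_weights([new_kernel, new_bias])

    return new_model  # fine-tune this on data that includes the new class
```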

I’m really interested in image recognition projects and I’m working on something myself. How do I contact you?

jamiemicro10@gmail.com