Building a model on Black and White images to reduce overfitting?

Although I can’t share many specifics, I’ve been working with an image dataset that gets good results after training a model using the principles in Lesson 1. However, when I test on a new image with the subject in the foreground but a vastly different background, the model fails pretty miserably. I suspect the model is overfitting to the image backgrounds and not generalizing when the background changes; in any case, neither the background nor the colors of the image should have much bearing on my objects’ classifications.

Would it be worthwhile to try converting the images to black and white and training the model on those? If so, would I also need a black-and-white version of the ImageNet-pretrained model to go with it?
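Concretely, the preprocessing I have in mind is something like the sketch below (assuming images come in as H×W×3 uint8 NumPy arrays; the luma weights and the channel replication are my own choices, not anything from the lesson). Replicating the single grayscale channel back to three channels would let me keep using a model pretrained on RGB ImageNet without changing its input layer:

```python
import numpy as np

def to_grayscale_rgb(img: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 image to grayscale, but return it
    as H x W x 3 with all channels equal, so a model pretrained on
    RGB ImageNet still accepts the input shape unchanged."""
    # Standard ITU-R BT.601 luma weights for R, G, B.
    luma = img @ np.array([0.299, 0.587, 0.114])
    # Replicate the single luminance channel three times.
    return np.repeat(luma[..., None], 3, axis=2).astype(np.uint8)
```

That way the color information is destroyed before the model ever sees it, while the tensor shapes the pretrained network expects stay the same.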

Is my idea of preprocessing my images to black and white too far-fetched? If not, has anyone tried this approach with success?