Classifier training set contains unclassifiable images

I’m wondering what’s the best way to handle this:

In our study group, students who built projects from work data often had noisy datasets containing images that didn't fit any of the classes in our multi-class classification problem. A few of us labelled these as a "junk" (or "records") class, so we are effectively training our models to distinguish junk from valid images. Is this the right way to handle it, or should we remove these images from training, hope they produce low confidence scores at inference, and use the confidence score as our cutoff? What strategies are commonly used for this situation?
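
To make the second option concrete, this is roughly what I mean by using the confidence score as a cutoff (just a rough PyTorch-style sketch — the model, feature size, and threshold here are placeholders, not our actual setup):

```python
import torch
import torch.nn.functional as F

# Placeholder model: any classifier that returns raw logits per class.
# A tiny linear layer stands in for a real trained network here.
NUM_CLASSES = 4
model = torch.nn.Linear(512, NUM_CLASSES)

CONFIDENCE_CUTOFF = 0.8  # arbitrary value; would need tuning on a validation set


def predict_or_reject(features: torch.Tensor) -> int | None:
    """Return the predicted class index, or None when the top softmax
    probability falls below the cutoff (treated as out of scope / junk)."""
    with torch.no_grad():
        logits = model(features)
        probs = F.softmax(logits, dim=-1)
        confidence, predicted = probs.max(dim=-1)
    if confidence.item() < CONFIDENCE_CUTOFF:
        return None  # flag as out of scope instead of forcing a class
    return int(predicted.item())


# Example call with a random feature vector, just to show the shape.
print(predict_or_reject(torch.randn(512)))
```

The worry is whether a model that has never seen junk during training will actually give it low confidence, or whether it will confidently assign it to one of the real classes.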

I also need an answer to the question above: how should we deal with the out-of-scope items?
My hope is that we don't have to include such data in our training dataset.