Image Segmentation data augmentation

I’m trying to detect A4 paper documents and profile photos from images (classes: 0 - background, 1 - document, 2 - profile photo). I have a labelled dataset of 500 images.

I’ve found that the trained model predicts non-document pixels as document pixels. What data augmentation approaches could I consider?

Would adding additional images with only background masks help?

What have you tried so far? Have you tried the same augmentation techniques as CAMVID? IIRC there are also oversampling methods for segmentation.
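For the CAMVID-style augmentation, something like the following might be a starting point (just a sketch for fastai v1; `path_img`, `get_mask_fn`, and `codes` are placeholders for your own dataset, and the transform values are only example settings):

```python
from fastai.vision import *

# Standard fastai v1 augmentations; tfm_y=True applies the same spatial
# transforms to the masks so image and mask stay aligned.
tfms = get_transforms(flip_vert=False, max_rotate=10., max_zoom=1.1,
                      max_lighting=0.2, max_warp=0.2)

data = (SegmentationItemList.from_folder(path_img)   # path_img: your image folder
        .split_by_rand_pct(0.2)
        .label_from_func(get_mask_fn, classes=codes) # get_mask_fn maps image -> mask path
        .transform(tfms, size=256, tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats))
```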

Hi,
I am a newbie in fastai and deep learning. I am using image segmentation for land cover classification, following the techniques used in CAMVID. Most of my images have multiple labels, and I am facing a problem oversampling them.
I have used:
```python
class OverSamplingCallback(LearnerCallback):
    def __init__(self, learn:Learner):
        super().__init__(learn)
        # Per-item labels and their inverse frequencies as sampling weights
        self.labels = self.learn.data.train_dl.dataset.y.items
        _, counts = np.unique(self.labels, return_counts=True)
        self.weights = torch.DoubleTensor((1/counts)[self.labels])
        self.label_counts = np.bincount([self.learn.data.train_dl.dataset.y[i].data
                                         for i in range(len(self.learn.data.train_dl.dataset))])
        self.total_len_oversample = int(self.learn.data.c * np.max(self.label_counts))

    def on_train_begin(self, **kwargs):
        # Replace the training batch sampler with a weighted one
        self.learn.data.train_dl.dl.batch_sampler = BatchSampler(
            WeightedRandomSampler(self.weights, self.total_len_oversample),
            self.learn.data.train_dl.batch_size, False)

learn = unet_learner(data, models.resnet34, metrics=metrics,
                     callback_fns=[partial(OverSamplingCallback)], wd=1e-3)

learn.lr_find()
learn.recorder.plot(suggestion=True)
```
but this is the error:
IndexError: arrays used as indices must be of integer (or boolean) type
Is there any other solution for oversampling the training data?
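The IndexError presumably comes from `self.labels` holding segmentation mask items rather than integer class labels, so they cannot be used to index `(1/counts)`. One possible alternative (just a sketch, not fastai's built-in callback; the weighting scheme below is my own assumption) is to weight whole training images by the rarity of the classes present in their masks:

```python
import numpy as np
import torch
from torch.utils.data import BatchSampler, WeightedRandomSampler

train_ds = learn.data.train_dl.dataset
n_classes = learn.data.c

# Count, per class, how many training images contain at least one pixel of it.
presence = np.zeros(n_classes)
per_image_classes = []
for i in range(len(train_ds)):
    cs = np.unique(train_ds.y[i].data.numpy())
    per_image_classes.append(cs)
    presence[cs] += 1

# Weight each image by the largest inverse frequency among its classes,
# so images containing rare classes are drawn more often.
img_weights = torch.DoubleTensor(
    [max(1.0 / presence[c] for c in cs) for cs in per_image_classes])

# Same trick as the callback above: swap in a weighted batch sampler before fitting.
sampler = WeightedRandomSampler(img_weights, num_samples=len(train_ds))
learn.data.train_dl.dl.batch_sampler = BatchSampler(
    sampler, learn.data.train_dl.batch_size, drop_last=False)
```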