I’m working on a medical segmentation task and am currently experimenting with synthetic noise on the training and validation data. I noticed that the network’s predictions end up closer to the clean labels than the noisy ground truth it was actually trained on. I found that kinda cool.
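For context, by "synthetic noise on the labels" I mean something like this minimal numpy sketch (the `add_label_noise` helper, the flip fraction, and the toy square mask are just made up for illustration; real noise models could be fancier):

```python
import numpy as np

def add_label_noise(mask, flip_frac=0.1, seed=None):
    """Flip a random fraction of pixels in a binary segmentation mask."""
    rng = np.random.default_rng(seed)
    noisy = mask.copy()
    flip = rng.random(mask.shape) < flip_frac
    noisy[flip] = 1 - noisy[flip]
    return noisy

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1  # toy "organ": a filled square
noisy = add_label_noise(mask, flip_frac=0.1, seed=0)
print((noisy != mask).mean())  # fraction of flipped pixels, roughly 0.1
```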
Anyway, now I’m imagining a workflow that goes something like this:
- set up training and validation data using the same samples, but add noise to the training labels
- train the network on the noisy labels until its predictions on the validation set are more accurate than the noisy training labels themselves
- predict labels for the training set and use them as new training labels
- start all over
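The loop above can be sketched roughly like this. To keep it self-contained I’m using a 3x3 majority filter as a crude stand-in for the real "train a net, then re-predict the training set" step, and a toy square mask as the clean ground truth; with enough label noise averaged away, accuracy against the clean labels climbs over the rounds (all names here are made up for illustration):

```python
import numpy as np

def majority_filter(mask):
    """3x3 majority vote over a binary mask -- a crude stand-in for
    'train a network on the labels, then predict the training set'."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    # count of ones in each pixel's 3x3 neighborhood (including itself)
    counts = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (counts >= 5).astype(mask.dtype)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64), dtype=np.uint8)
clean[16:48, 16:48] = 1                 # clean "ground truth" mask

# step 1: corrupt the training labels with synthetic noise
labels = clean.copy()
flip = rng.random(clean.shape) < 0.2    # flip 20% of the pixels
labels[flip] = 1 - labels[flip]

# steps 2-4: relabel the training set and start all over
accs = [(labels == clean).mean()]
for _ in range(3):
    labels = majority_filter(labels)    # "predict" new training labels
    accs.append((labels == clean).mean())
print(accs)  # accuracy vs. the clean labels at each round
```

In the real version the filter would of course be a segmentation network trained from scratch (or fine-tuned) each round, with early stopping driven by the validation check from the list above.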
And then I thought: hey, doesn’t that kinda sound like a GAN? So now I’m wondering: how would I set something like this up as a GAN? And also: does any of this make sense?