I am training a 2D lung vessel and airway segmentation network (Tiramisu in PyTorch, implemented by one of fastai's past students <- thanks!) on CT scans. It relies on pretty noisy labels (some are incorrect, many are missing) that come out of standard image-processing methods plus some human correction. I was wondering whether there are any techniques out there to let the network teach itself and learn from the correct predictions it has already made. Should I go down the route of using my strong predictions as new training data? I imagine that approach also adds another layer of noise…
You may want to look into pseudo labeling or other semi-supervised learning methods. As you pointed out, there are drawbacks, such as adding noise back into training, but if there is no systematic error in the training-set labels this approach might help a bit.
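For segmentation, pseudo labeling can be done per pixel: run the current model on unlabeled (or poorly labeled) slices, keep only the pixels where the predicted class probability is high, and mask the rest out of the loss. Here is a minimal sketch of that idea; the tiny conv net, the `threshold` value, and the function name `pseudo_label` are all hypothetical stand-ins, not the actual Tiramisu code:

```python
import torch
import torch.nn as nn

# Hypothetical tiny net standing in for the Tiramisu segmentation model:
# 1-channel CT slice in, 2 classes (background / vessel+airway) out.
model = nn.Conv2d(1, 2, kernel_size=3, padding=1)

def pseudo_label(model, unlabeled, threshold=0.9, ignore_index=-1):
    """Per-pixel pseudo labels; pixels below the confidence threshold
    get ignore_index so CrossEntropyLoss(ignore_index=...) skips them."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled), dim=1)  # (N, C, H, W)
        conf, labels = probs.max(dim=1)                 # both (N, H, W)
        labels[conf < threshold] = ignore_index         # mask uncertain pixels
    return labels

# Usage: mix pseudo-labeled batches into training with the usual loss.
x = torch.randn(4, 1, 32, 32)                  # fake CT slices for illustration
y_pseudo = pseudo_label(model, x)
loss_fn = nn.CrossEntropyLoss(ignore_index=-1)  # ignores the masked pixels
```

The confidence threshold is the knob that controls how much noise you feed back in: higher means fewer but cleaner pseudo labels.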
It might help a bit, but it will to some extent "reinforce" what the network has already learned.
I did it while tackling the Carvana challenge, and it helped!