Adding external features to help convergence

Hey Team,

I just started working on a medical project where I want to predict a person's age from a specific biometric feature visible in MRI scans. The project is both interesting and frustrating because the dataset contains only 150 samples. On the plus side, the samples are 3D and can be sliced into 150*32 2D images. I tried training a simple 3D CNN with a regression output, but after playing with it for a day the results were disappointing.
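For context, this is roughly the kind of model I mean by "simple 3D CNN with regression output" (layer sizes and names here are just placeholders for the example, not my exact setup):

```python
import torch
import torch.nn as nn

class Simple3DRegressor(nn.Module):
    """Small 3D CNN that maps an MRI volume to a single age prediction."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the head tiny
        )
        self.head = nn.Linear(32, 1)  # single continuous output: age

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

# one dummy batch: 1 sample, 1 channel, 32 slices of e.g. 128x128
model = Simple3DRegressor()
pred = model(torch.randn(1, 1, 32, 128, 128))
```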

I also have segmentation maps for 75 of my samples that highlight the area of interest in the image. So I started training a CNN for segmentation, and the results are pretty good, sometimes even better than the rather noisy ground truth. Next, I want to use a CNN trained on the extracted segmentation features, but I'm not there yet.
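What I have in mind (not implemented yet, so this is just a sketch with made-up `seg_net`/`reg_net` names) is something like feeding the predicted mask to the regression net as an extra input channel:

```python
import torch

def predict_age(volume, seg_net, reg_net):
    """volume: (1, 1, D, H, W) MRI scan; seg_net/reg_net are hypothetical trained models."""
    with torch.no_grad():
        mask = torch.sigmoid(seg_net(volume))  # soft mask in [0, 1]
    x = torch.cat([volume, mask], dim=1)       # stack mask as a second channel
    return reg_net(x)                          # reg_net would expect 2 input channels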

Anyway, I can put a little reasoning behind why segmentation works but the direct age regression doesn't: a single segmentation map is essentially x*y*z predictions (one per voxel), so there is far more training signal per sample than a single age label. Are you aware of any papers that tackle a hard-to-train problem by adding features like coordinates or segmentation maps? It reminded me of Jeremy's video about the Kaggle fish classification competition, and I wondered if there is more research on workflows like this.
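To make it concrete, by "features like coordinates" I mean something along the lines of appending normalized coordinate channels to each input volume (again just a sketch, not something I've tried yet):

```python
import torch

def add_coord_channels(volume):
    """Append normalized z/y/x coordinate channels to a (N, C, D, H, W) volume."""
    n, _, d, h, w = volume.shape
    zs = torch.linspace(-1, 1, d).view(1, 1, d, 1, 1).expand(n, 1, d, h, w)
    ys = torch.linspace(-1, 1, h).view(1, 1, 1, h, 1).expand(n, 1, d, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, 1, w).expand(n, 1, d, h, w)
    return torch.cat([volume, zs, ys, xs], dim=1)  # C + 3 channels

x = add_coord_channels(torch.randn(2, 1, 32, 128, 128))
print(x.shape)  # torch.Size([2, 4, 32, 128, 128])
```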

Cheers