[AI + Impact] Detecting Breast Cancer in Digital Mammograms

My error: I meant 80% sensitivity / 90% specificity, to compare with the DREAM competition goals. I have edited my post to correct it.

There are many studies on the diagnostic accuracy of radiologists in screening mammography. The accuracy depends on many variables: film vs digital mammography, first exam vs comparison exam, prevalence of disease in the population, use of classical CAD or not, type of screening program (annual vs biennial), screening protocol (one view vs two views), years of experience of the radiologist, number of mammograms read per year, and single vs double reading. Almost all the studies are biased with respect to one or more of these variables.

For simplicity (and a dose of US humility), you can look at
http://onlinelibrary.wiley.com/doi/10.1002/ijc.25198/pdf

Lowering the resolution is a pretty good idea. I also thought about the Siamese network idea but ended up with a multi-channel network. This solution is simpler and can more easily reuse pretrained networks (mapping the 3 RGB channels to 3 grayscale views or comparison exams). But I don't know whether spatial correlation between the channels matters for convergence; the features should be extracted independently before being presented to the fully connected (FC) layers.
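For illustration only, here is a minimal sketch of that multi-channel idea, assuming PyTorch/torchvision and an ImageNet-pretrained ResNet; which three grayscale images go into the channels (CC, MLO, a prior exam, etc.) is an assumption here, not the pipeline actually used in the challenge:

```python
import torch
import torchvision.models as models

# ImageNet-pretrained backbone (torchvision >= 0.13 weights API assumed).
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 1)  # single cancer / no-cancer logit

def stack_views(view_a, view_b, view_c):
    """Each input: (batch, H, W) grayscale tensor, already resized to a common shape."""
    return torch.stack([view_a, view_b, view_c], dim=1)  # -> (batch, 3, H, W)

# Dummy data in place of real, preprocessed mammogram views (e.g. CC, MLO, prior CC).
cc, mlo, prior = (torch.rand(2, 224, 224) for _ in range(3))
logits = backbone(stack_views(cc, mlo, prior))  # (2, 1)
```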

Thanks for the references @alexandrecc and @davecg

The Danish paper is particularly helpful since it reports AUC, which makes comparison with the DREAM Challenge easy.

I think some spatial correlation is necessary, since with the multichannel approach you would be comparing a 3x3 window of the MLO view with a 3x3 window of the CC view even though those two regions may have little overlapping tissue.

The dimension from areola to pectoralis/back of film should correspond roughly between the two (that measurement is used for QA and needs to be <1 cm different between the two views). So my thought was to train the same network for both to create features, then take the mean/max along the CC dimension (for MLO) or the ML dimension (for CC).

You could then concat those two vectors along a new dim and there should be spatial correlation between the two.

Never got around to implementing that for this competition though for various reasons. Will try it with the DDSM dataset.
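To make the shared-encoder idea above concrete, here is a rough sketch, assuming PyTorch and assuming both views are oriented so the nipple-to-chest-wall axis runs along the image width; the encoder and shapes are illustrative, not an actual challenge model:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # shared weights for CC and MLO
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)

def pool_view(img):
    """img: (batch, 1, H, W) single view. Collapse the axis that does not
    correspond between views (the CC direction on MLO, the ML direction on CC)."""
    fmap = encoder(img)                        # (batch, 64, H', W')
    return fmap.amax(dim=2)                    # max over H' -> (batch, 64, W')

cc  = torch.rand(2, 1, 256, 256)
mlo = torch.rand(2, 1, 256, 256)

# Positions along the remaining axis (nipple to chest wall) should roughly
# correspond between views, so stack the two feature maps along a new dimension.
joint = torch.stack([pool_view(cc), pool_view(mlo)], dim=1)  # (batch, 2, 64, 64)
```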

I also need to keep working on my carbon-based classifier (need to hit my quota of screening mammograms before I graduate from residency; for some reason training a computer on 340k images doesn't count). 🙂


@davecg, I did something similar in leaderboard round 2 but couldn't get a better AUROC than the max prediction over single views. Overfitting was a problem because training converged very fast on the training set (which, in my opinion, demonstrates the usefulness of the approach).

But exact spatial correlation is probably better than a rough estimate (distance to the nipple). From what I understand of deep CNN theory, the network can extract features and keep (or discard, depending on the weights) the relations between them. So a properly trained network could in theory learn by itself the spatial correlation between multiple features (cancer position vs gland position, cancer position vs nipple position, cancer position vs skin position, etc.) across 2 (or n) different channels.

High-resolution morphology (contours, shape, density), low-resolution 3D spatial correlation between views, and high-resolution temporal correlation with past exams (growth vs no growth) are, in my opinion, the key ingredients for maximizing AUROC (at least up to human AUROC) on this problem.
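Purely as an illustration of how those three ingredients could be wired together (not the model actually trained for the challenge), here is a hedged multi-branch sketch in PyTorch; the branch architectures, input sizes, and fusion are arbitrary choices:

```python
import torch
import torch.nn as nn

def small_cnn(in_ch):
    """Tiny placeholder encoder ending in a 64-dim feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class ThreeBranchModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.morphology = small_cnn(1)   # high-resolution patch, single view
        self.multiview  = small_cnn(2)   # low-resolution CC + MLO stacked as channels
        self.temporal   = small_cnn(2)   # current + prior exam of the same view
        self.head = nn.Linear(3 * 64, 1)

    def forward(self, patch_hr, views_lr, current_and_prior):
        feats = torch.cat([
            self.morphology(patch_hr),
            self.multiview(views_lr),
            self.temporal(current_and_prior),
        ], dim=1)
        return self.head(feats)          # logit for cancer vs no cancer

model = ThreeBranchModel()
logit = model(torch.rand(2, 1, 512, 512),   # high-res patch
              torch.rand(2, 2, 256, 256),   # low-res CC + MLO
              torch.rand(2, 2, 256, 256))   # current + prior
```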

My personal advice, as a general radiologist with 10 years of experience, to a current radiology resident: train in some kind of interventional radiology. Interventions will be harder to automate than image diagnosis in the short/mid term; robotics should logically lag computer vision in radiology over the next 10-20 years. And of course keep up your interest in machine learning, as you already do!


If you can’t beat’em, join’em.


Hi,

You mentioned the source + weights were available open source? Do you know where I can get them?

Thanks
David

@djones
You can access the ArchivedWriteUps from all the participants here:
SC1 (no longitudinal data): https://www.synapse.org/#!Synapse:syn4224222/wiki/434546
SC2 (with longitudinal data): https://www.synapse.org/#!Synapse:syn4224222/wiki/434547

In a selected ArchivedWriteUp, you can access the source code in the Files tab with a Synapse login account. The weights can be found with the code or as an ArchivedModelState in their infrastructure. Their dataset is no longer publicly available for training/inference. Of course, these weights are probably biased toward their dataset (specific mammography machine, reconstruction kernel used before sending to PACS). But trying to train or run inference with this code on a new external dataset could be an interesting task if someone has the time. The challenge will be to set up a labeled (cancer vs no cancer) mammography dataset with the same structure used in the dockerized image. My radiology clinic probably has about 50,000 mammography exams and my community hospital about 30,000, in DICOM format, over the past 5 years, with around 600 cancers found. This is approximately the same size as the competition dataset.
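For anyone who wants to script the download, here is a minimal sketch using the official synapseclient Python package; the entity ID below is a placeholder to replace with the ID of the ArchivedWriteUp or ArchivedModelState file you find under the SC1/SC2 wiki pages linked above:

```python
import synapseclient

syn = synapseclient.Synapse()
syn.login()  # or syn.login(authToken="...") with a personal access token

# Placeholder ID: substitute the Synapse ID of the file you want to fetch.
entity = syn.get("syn0000000", downloadLocation="./dream_writeup")
print(entity.path)  # local path of the downloaded file
```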

Hi everyone,

Thought you might be interested in this conference:

https://www.iscb.org/recomb-regsysgen2017

(NYC, November). Winners of the Digital Mammography challenge will be presenting their solutions.


Hi Jeremy, I'm sorry for reviving an old topic, but I'd be very interested in your current views on what you said here, if possible.