Multiple Images CNN Architecture

Hello everyone!

I’m trying to build a CNN for detecting anomalies in musculoskeletal x-rays - the popular MURA task and dataset. I started from a basic DenseNet-121 and have tried several training methods to improve the network’s accuracy.

The dataset contains studies of patients, each labeled as either normal or abnormal, and some studies include more than one x-ray. For example, a study of the forearm can contain two x-rays: one showing a ‘top down’ view of the forearm and one showing a ‘lateral’ view. This is needed because a fracture might be visible from one angle but not from the other.

I believe that the network can benefit from using more than one image as input, but because the studies contain an arbitrary number of radiographs (as many as 11), I am not sure how to combine them without making the network huge.

Do you know of any ‘standard’ way of doing this, or what CNN architecture would you recommend for handling a larger, arbitrary number of input images?
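Just to make the question concrete, here is a rough PyTorch sketch of the kind of setup I was imagining: one shared DenseNet-121 trunk applied to every view in a study, with the per-image features pooled into a single study-level feature. This is only an untested illustration (torchvision’s `densenet121` is assumed, and the mean-pooling is just a placeholder), not something I have trained:

```python
import torch
import torch.nn as nn
from torchvision import models


class StudyClassifier(nn.Module):
    """Encode each x-ray in a study with one shared DenseNet-121,
    then pool the per-image features so the study size can vary."""

    def __init__(self, num_classes: int = 1):
        super().__init__()
        backbone = models.densenet121(weights="DEFAULT")
        self.features = backbone.features        # shared conv trunk for all views
        self.pool = nn.AdaptiveAvgPool2d(1)       # per-image global average pooling
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, study: torch.Tensor) -> torch.Tensor:
        # study: (n_images, 3, H, W) -- all views from ONE study
        feats = self.pool(torch.relu(self.features(study))).flatten(1)  # (n_images, 1024)
        study_feat = feats.mean(dim=0, keepdim=True)  # combine views; max is another option
        return self.classifier(study_feat)            # (1, num_classes) study-level logit


# Example: a forearm study with 2 views and another study with 5 views
model = StudyClassifier()
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([1, 1])
print(model(torch.randn(5, 3, 224, 224)).shape)  # torch.Size([1, 1])
```

Is something along these lines what people actually do, or is there a better way?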


This might help you further. It is in the context of time-series as images, but it has very good results.

The notebook was not made by me, but I saved it and cannot find the original source.