As you mentioned, handling multi-view images will be one of the key challenges in this competition. A weighted average of the output probabilities would be one idea to try: for example, putting higher weights on probabilities close to 1 (and/or 0), which is a rough way of taking per-view uncertainty into account. Unfortunately I don't have a neat solution to this.
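A minimal sketch of that idea, assuming each view yields a single abnormality probability (the function name and weighting scheme here are just illustrative, not from any paper):

```python
import numpy as np

def confidence_weighted_average(probs):
    """Fuse per-view abnormality probabilities for one study.

    Views whose probability is far from 0.5 (i.e. more confident)
    get a larger weight; a plain mean is the fallback when every
    view sits exactly at 0.5.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.abs(probs - 0.5)          # confidence of each view
    if weights.sum() == 0:                 # all views maximally uncertain
        return float(probs.mean())
    return float(np.average(probs, weights=weights))

# A confident view (0.95) pulls the fused score above the plain mean (0.7).
print(confidence_weighted_average([0.95, 0.55, 0.6]))
```

Whether this beats a plain mean would of course have to be checked on the validation set.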
I would be interested in forming a team.
Although this is the first time I've tried working with X-ray images. (It resonates with me very well; using AI to help people is the reason I got into it.)
I have been thinking about a couple of options; one might perhaps combine them in the end as an ensemble of sorts.
Option 1
Since X-rays are grayscale, you don't need an RGB tensor, so I am thinking one could combine all the views (images) into a single tensor. One problem is that different studies have different numbers of images. Also, some of the X-rays are white-on-black while others are black-on-white, so one might need to normalize these by inverting.
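A rough sketch of what I mean, assuming a (made-up) cap on the number of views per study and same-sized images for brevity:

```python
import numpy as np

MAX_VIEWS = 4          # assumed cap on views per study (hypothetical)

def normalize_polarity(img):
    """Invert images whose background is bright so all views share
    the same polarity (dark background, bright bone)."""
    img = img.astype(np.float32) / 255.0
    if img.mean() > 0.5:               # mostly white -> invert
        img = 1.0 - img
    return img

def stack_views(views):
    """views: list of HxW uint8 arrays (assumed same size here).
    Returns a (MAX_VIEWS, H, W) tensor, zero-padded for missing views."""
    h, w = views[0].shape
    out = np.zeros((MAX_VIEWS, h, w), dtype=np.float32)
    for i, v in enumerate(views[:MAX_VIEWS]):
        out[i] = normalize_polarity(v)
    return out

study = [np.full((8, 8), 230, np.uint8),   # white-background view
         np.full((8, 8), 20, np.uint8)]    # dark-background view
tensor = stack_views(study)
print(tensor.shape)                        # (4, 8, 8)
```

In practice one would also have to decide on a view ordering (or make the model order-invariant), since the channel positions carry no fixed meaning.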
Option 2
Many-to-one classification with an RNN or equivalent.
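To make the many-to-one idea concrete, here is a minimal PyTorch sketch (names and dimensions are my own assumptions): per-view feature vectors, e.g. from a CNN backbone, are fed through a GRU, and only the final hidden state is classified, so a study with any number of views yields one prediction.

```python
import torch
import torch.nn as nn

class StudyClassifier(nn.Module):
    """Many-to-one sketch: a GRU over per-view CNN features."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # abnormal-vs-normal logit

    def forward(self, views):              # views: (batch, n_views, feat_dim)
        _, h = self.rnn(views)             # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, 1)

model = StudyClassifier()
study = torch.randn(1, 3, 512)             # one study with 3 views
print(model(study).shape)                  # torch.Size([1, 1])
```

One caveat: an RNN imposes an ordering on the views, which the studies don't really have, so a permutation-invariant pooling might work just as well.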
Option 3
Averaging the results of each image as in the paper.
Option 4
Adding embeddings for the extremity type (wrist, shoulder, etc.).
Although the network might pick this up anyway, so I am not sure it is needed.
Option 5
Averaging 1-4 to give the final result.
Tell me if this doesn't make sense.
@alexandrecc, by 3D correlation, do you mean that if there is a probable abnormality in, say, the middle joint of the index finger on image 1, and image 2 also shows a probable abnormality in the same place (from a different angle), the model would give that abnormality higher importance? Or do you mean mentally constructing 3D "layers" from the 2D images?
Yes, I've had the dataset since last year. I guess you can contact the Stanford team if the online form isn't working. The research agreement doesn't allow transfer of their dataset between individuals.
Hi. I just published my medium post + jupyter notebook about the MURA competition.
My goal was to assess how far the standard fastai method could go in the search for better accuracy/kappa in the radiology domain, without any knowledge of radiology.
However, to go beyond a kappa of 0.642 (my score with the standard fastai method), I think that I need a more complete understanding of the field of radiology and more DL experiments.
Part 2 of my journey in Deep Learning for medical images with the fastai framework on the MURA dataset.
I got a better kappa score, but I need radiologists to go even further (and fastai specialists too). Please feel free to use (and improve) my notebook (ensemble models, squeezenet models, etc.).
As a radiologist, I think the biggest mistake is to believe that all X-rays are of good quality… it's not just the image quality itself, it's more about deviations from the standard projection, low-occurrence pathology mimicking normal variants, and so on.
Just read your article on Medium and liked it very much, but it seems that the preprocessing is somewhat missing… some X-rays are scaled down, for example, with way too much blank space around the images themselves.
Hi @Sandorkonya,
As written in my medium post, I applied Data Augmentation (DA) as follows:
- horizontal flip and a max rotation of 30°, without warp;
- the default fastai transformations, such as max_zoom=1.1 and max_lighting=0.2, through the get_transforms() fastai function.
I would be happy to apply more accurate preprocessing of the X-rays before DA. Could you suggest some, and give the corresponding fastai code? Many thanks in advance.
Instead of a., the ideal case would be to take b. (cropped and magnified). That would (probably) normalize the image features.
Above is how the original would look with 1.1× magnification and a 30° rotation, resampled to 320 px and 112 px. The "black area" of the frame is huge, and the image features are very small.
I think an object-detection approach could be used: train an object-detection model on the different X-ray categories (using well-captured/cropped examples), and have it "simply" crop and align the region of interest from the original X-ray when the image is rotated or there is too thick a frame around it.
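Before going all the way to an object detector, a much simpler heuristic might already remove most of the blank frame: threshold the image and keep the bounding box of the bright content. A minimal sketch in plain numpy (the function name and threshold are my own choices, and real X-rays with bright frames or labels would need more care):

```python
import numpy as np

def crop_blank_frame(img, thresh=10):
    """img: HxW uint8 grayscale. Returns the tight crop around pixels
    brighter than `thresh` (the whole image if nothing exceeds it)."""
    mask = img > thresh
    if not mask.any():
        return img
    rows = np.flatnonzero(mask.any(axis=1))   # rows with bright content
    cols = np.flatnonzero(mask.any(axis=0))   # columns with bright content
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Toy image: 20x20 black frame with a bright 6x4 patch inside.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:11, 8:12] = 200
print(crop_blank_frame(img).shape)   # (6, 4)
```

This could be run once as a preprocessing pass before the fastai transforms, so the zoom and crop augmentations operate on the actual anatomy rather than on the frame.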