Bone X-Ray Deep Learning Dataset and Competition

(Rikiya Yamashita) #7

Thanks @alexandrecc, I’m very interested in this :wink:

As you mentioned, handling multi-view images would be one of the key challenges in this competition. A weighted average of the output probabilities would be one idea to try: for example, putting higher weights on probabilities close to 1 (and/or 0), as a way of taking uncertainty into account. Unfortunately, I don’t have a neat solution to this.
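To make the weighting idea concrete, here is a toy sketch in plain Python (not a tested solution; the function name and the `eps` constant are my own):

```python
# Hypothetical sketch: aggregate per-view abnormality probabilities for one
# study, weighting each view by its confidence (distance from 0.5).
def aggregate_views(probs, eps=1e-6):
    """probs: list of per-view abnormality probabilities in [0, 1]."""
    weights = [abs(p - 0.5) + eps for p in probs]  # confident views weigh more
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# One confident view (0.95) dominates two uncertain views near 0.5:
print(aggregate_views([0.95, 0.55, 0.48]))  # well above the plain mean of 0.66
```

A plain mean would pull this study toward "uncertain"; the confidence weighting lets the decisive view dominate.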

1 Like

(Phani Srikanth) #8

Hi @alexandrecc and team,

I’d be interested in working on this problem with a team like yours. Are you looking for another helping hand for this challenge?



(Christoffer Björkskog) #9

I would be interested in forming a team,
although this is the first time I have tried working with x-ray images. (It resonates with me very well: using AI to help people is the reason I got into it.)

I have been thinking about a couple of options; one might perhaps combine them in the end as an ensemble of sorts.

Option 1
Since x-rays are grayscale, you would not need an RGB tensor, but I am thinking one could combine all the views (images) into one tensor. One problem is that there are different numbers of images per study. Also, some of the x-rays were white while others were black, so perhaps one would need to normalize these by inverting.

Option 2
Many-to-one classification, using an RNN or equivalent.

Option 3
Averaging the results of each image as in the paper.

Option 4
Adding embeddings for the extremity type (wrist, shoulder, etc.).
Although this might be picked up anyway by the network, I am not sure if it is needed.

Option 5
Averaging 1-4 to give the final result.
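For what it’s worth, Option 3 (per-study averaging of per-image probabilities, as in the MURA paper) is simple enough to sketch in a few lines (the function name and threshold here are just illustrative):

```python
from collections import defaultdict

# Illustrative sketch of Option 3: group per-image abnormality probabilities
# by study ID and classify each study from the arithmetic mean.
def study_predictions(image_probs, threshold=0.5):
    """image_probs: list of (study_id, probability) pairs."""
    grouped = defaultdict(list)
    for study_id, p in image_probs:
        grouped[study_id].append(p)
    return {s: (sum(ps) / len(ps) > threshold) for s, ps in grouped.items()}

preds = study_predictions([("s1", 0.9), ("s1", 0.7), ("s2", 0.2), ("s2", 0.4)])
print(preds)  # s1 averages to 0.8 -> abnormal; s2 averages to 0.3 -> normal
```

This sidesteps the variable-number-of-images problem from Option 1, since the mean works for any study size.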

Tell me if this doesn’t make sense.

@alexandrecc, with 3D correlation, do you mean that if there is a probable abnormality in, let’s say, the middle joint of the index finger on image 1, and image 2 also shows a probable abnormality in the same place (from a different angle), the model would consider that abnormality to have higher importance? Or do you mean that in one’s mind one creates “layers” in 3D from the 2D images?



Still interested, @alexandrecc. I look forward to meeting in person on Friday.


(Anumula Muralidhar) #11

Is there anyone working on this problem?


(Alexandre Cadrin-Chênevert) #12

Yes, we currently have a relatively large group working on this problem. @jeremy


(Anumula Muralidhar) #13

How can I join this group, @alexandrecc?

1 Like

(Pierre Guillou) #14

Hello @alexandrecc. Did you download the MURA dataset? The online form is not working. How can I get the dataset?


(Alexandre Cadrin-Chênevert) #15

Hi @pierreguillou ,

Yes, I have had the dataset since last year. I guess you can contact the Stanford team if the online form isn’t working. The research agreement doesn’t allow transfer of their dataset between individuals.


(Pierre Guillou) #16

Thanks Alexandre. I sent an email to the Stanford team and I’m waiting for their answer.

[EDIT]: I received the email from the Stanford ML Group and downloaded the MURA dataset :slight_smile:


(Pierre Guillou) #17

Hi. I just published my medium post + jupyter notebook about the MURA competition.

My goal was to assess how far the standard fastai method could go in the search for better accuracy/kappa in the radiology domain, without any radiology knowledge.

However, to go beyond a kappa of 0.642 (my score with the standard fastai method), I think that I need a more complete understanding of the field of radiology and more DL experiments.
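As an aside, for anyone new to the metric: the score is Cohen’s kappa, which corrects accuracy for chance agreement. A from-scratch sketch for binary labels (my own helper, just for intuition):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary 0/1 labels (for intuition only)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    p1_true = sum(y_true) / n
    p1_pred = sum(y_pred) / n
    # Chance agreement: both say 1, plus both say 0.
    pe = p1_true * p1_pred + (1 - p1_true) * (1 - p1_pred)
    return (po - pe) / (1 - pe)

print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```

So a kappa of 0.642 means the model agrees with the labels well beyond what guessing the class frequencies would achieve.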

Feedback welcome!


(matej) #18

Excellent work. I would be very interested in experts sharing some advanced techniques and optimizations based on your notebook.

1 Like

(Pierre Guillou) #19

Part 2 of my journey in Deep Learning for medical images with the fastai framework on the MURA dataset.

I got a better kappa score, but I need radiologists to go even further (and fastai specialists too :slight_smile: ).
Please feel free to use (and improve) my notebook (ensemble models, squeezenet models, etc.).


(Pierre Guillou) #20

Thank you @matejthetree. I just posted part 2 of my research on the MURA dataset.
Feedback welcome to go further :slight_smile:

1 Like

(Sandor Konya) #21

As a radiologist, I think the biggest mistake is to believe that all x-rays are of good quality… it’s not just the image quality itself, it’s more about deviations from the standard projection, the low occurrence of pathology-mimicking normal variants, and so on.

I just read your article on Medium and liked it very much, but it seems that preprocessing is somewhat missing… some x-rays are scaled down, for example, with way too much blank space around the images themselves.

Did you do any preprocessing of the x-rays?

1 Like

(negar) #22

I want to find some articles written about the MURA dataset. Can anybody help me?


(Haider Alwasiti) #23

Here is a nice article and code:


(negar) #24

thank you so much


(Pierre Guillou) #25

Hi @Sandorkonya,
As written in my Medium post, I applied data augmentation (DA) as follows:

  • horizontal flip and a max rotation of 30° without warp;
  • the default fastai transformations, such as max_zoom=1.1 and max_lighting=0.2, through the get_transforms() fastai function.
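In fastai v1 code, the setup above corresponds to something like the following call (a sketch from memory, not copied from my notebook, so treat the exact values as illustrative):

```python
from fastai.vision import get_transforms

# Sketch of the augmentation described above (fastai v1 API).
tfms = get_transforms(
    do_flip=True,      # horizontal flip
    max_rotate=30,     # rotate up to 30 degrees
    max_zoom=1.1,      # default zoom
    max_lighting=0.2,  # default lighting variation
    max_warp=0.0,      # "without warp": disable perspective warping
)
```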

I would enjoy applying more accurate preprocessing of the x-rays in DA. Could you suggest some, and give the corresponding fastai code? Many thanks in advance.

1 Like

(Sandor Konya) #26

Hi @pierreguillou,

my comment was about the “raw” x-ray image itself before any transformation, not the data augmentation step.

Let me explain on a pic:

Instead of a., the ideal case would be to take b. (cropped, magnified). It would (probably) normalize the image features.
This is roughly how the original would look with 1.1× magnification and a 30° rotation, resampled to 320px and 112px as above. The “black area” of the frame is huge, and the image features are very small.

I think an object-detection approach could be used: one trains an object-detection model on the different x-ray categories (with well-captured/cropped examples), and it “simply” crops and aligns the region from the original x-ray if it is rotated or has too thick a frame around it.
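As a lightweight alternative to a full object-detection model, one could at least crop away the near-black frame with a simple threshold (a rough sketch; the threshold of 10 is an arbitrary assumption for 8-bit images):

```python
import numpy as np

# Crop an 8-bit grayscale x-ray to the bounding box of its non-background
# pixels, discarding the near-black frame around the anatomy.
def autocrop(img, threshold=10):
    mask = img > threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return img  # entirely black: nothing to crop
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

# Example: a 100x100 black frame with a bright 40x30 region inside.
img = np.zeros((100, 100), dtype=np.uint8)
img[30:70, 20:50] = 200
print(autocrop(img).shape)  # (40, 30)
```

It would not fix rotation or non-standard projections, but it removes the wasted resolution that the blank borders cost after resizing.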