Stanford MURA (X-Ray) Classification Competition

I’m interested too.

Ah, you are most welcome to join! How can we reach you?

I would love to contribute if I can help somehow. I don’t have much domain expertise in this field, but I have some knowledge of deep learning.

I am interested, how can I join you?

So glad to see all the enthusiasm out there! Here is a Google Hangout invitation link.

https://hangouts.google.com/call/ImVgSmbBmlF_QP8J6uwkAEEE

We can use it to start our meeting at 8:30 pm PST on 03/17. Let me know if Google Hangouts does not work for you.

1 Like

I’m interested too. I’ve worked on a generic “multi-image input” DataBunch for the Human Protein Atlas competition.
It would need to be tuned to accept RGB images instead of grayscale, and extended to support “missing” images for studies where not all “views” are present.
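
For illustration, here is a minimal raw-PyTorch sketch of that idea, not the actual DataBunch code. The `studies` argument (one list of image paths per study) and the zero-padding of missing views are my assumptions about how the data could be arranged:

```python
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class MultiViewDataset(Dataset):
    """Returns a fixed-size stack of views per study; missing views are zero-padded."""
    def __init__(self, studies, labels, max_views=3, size=224):
        self.studies, self.labels = studies, labels   # `studies`: list of per-study path lists
        self.max_views, self.size = max_views, size

    def __len__(self):
        return len(self.studies)

    def __getitem__(self, i):
        imgs = torch.zeros(self.max_views, 1, self.size, self.size)
        mask = torch.zeros(self.max_views)            # 1 = view present, 0 = padded slot
        for j, path in enumerate(self.studies[i][:self.max_views]):
            img = Image.open(path).convert('L').resize((self.size, self.size))
            imgs[j, 0] = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
            mask[j] = 1.0
        return imgs, mask, self.labels[i]
```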

3 Likes

Ah, this is great!

Here are some more notes I took for myself while playing around with the competition. I can’t guarantee their readability, as I originally did not intend them to be read by others, but you are still welcome to look through them if you find them interesting.

Stanford MURA Competition.pdf (194.7 KB)

1 Like

I’ve updated the code to work with fast.ai 1.0.50.dev0.

1 Like

I am already here in the Hangout.

As a reminder, the link to join is:

Just in case any of you are interested in building the whole thing from raw PyTorch, here is a repo that might serve as a starting point:
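
To give a flavor of what a from-scratch version could look like, here is a minimal training-loop sketch (not the repo’s code). The `train_loader` is assumed to yield (image batch, 0/1 label) pairs; DenseNet-169 is the backbone the MURA paper itself used:

```python
import torch
import torch.nn as nn
from torchvision import models

def train(train_loader, epochs=5, lr=1e-4):
    """Minimal loop: fine-tune an ImageNet DenseNet-169 for binary abnormality detection."""
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = models.densenet169(pretrained=True)
    model.classifier = nn.Linear(model.classifier.in_features, 1)  # single abnormality logit
    model = model.to(device)

    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.float().to(device)
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(1), labels)
            loss.backward()
            optimizer.step()
    return model
```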

Waiting for your Discord group link here. :wink: @soco_loco

Hello everyone, at our meeting today we discussed forming a subgroup of the Part 2 fast.ai study group that will focus on applying the lessons from each week to a medical theme.

For example, in our prep for Lecture 8 and the beginning of Part 2, we discussed the Stanford MURA X-ray competition and how best to approach that project.

The goal for next week, after Lecture 8, would be to find an appropriate model to implement or problem to solve with a medical theme, in order to practice the new skills as a team.

The hope is that we can find people from all perspectives and disciplines who are enthusiastic about the medical applications of fast.ai to collaborate on learning and creating.

Hope to see you there.

4 Likes

Here is my initial exploration notebook, just for understanding the dataset.
Looking forward to working on this with you all.
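
If it helps anyone get started, a first pass over the labels can be as short as the sketch below. The CSV name and path layout are taken from the MURA-v1.1 release as I understand it; adjust the path to your own download:

```python
import pandas as pd

# The labels file has no header: one study directory path and a 0/1 abnormality label per row.
df = pd.read_csv('MURA-v1.1/train_labeled_studies.csv', names=['study', 'label'])

# The body part is encoded in the path, e.g. MURA-v1.1/train/XR_WRIST/patient00001/study1_positive/
df['part'] = df['study'].str.split('/').str[2]

# Study count and abnormal rate per body part
print(df.groupby('part')['label'].agg(['count', 'mean']))
```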

3 Likes

I’ve uploaded an updated version of the MultiChannelImage data loader, but after taking a closer look at the problem and the samples, I think that component is not useful in this case.

I used it in the Human Protein Atlas competition because in that situation each sample was composed of multiple channels (R, G, B, and yellow) “linked” together as a single image, so the same data augmentation transformations had to be applied to all the channels.

In this situation we have:

  1. multiple images per sample (1…11 AFAIK)
  2. each image is RGB
  3. a different resolution/orientation for each image in a sample

So I think that here it’s important to have independent data augmentation for each image, and support for a variable number of images per sample.
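
One possible way to get both properties in plain PyTorch is to augment each view independently inside `__getitem__` and handle the variable view count in a custom `collate_fn`. A rough sketch, assuming each dataset sample is a `(list_of_view_tensors, label)` pair:

```python
import torch
from torch.utils.data import DataLoader

def collate_views(batch):
    """Pad each sample's view stack to the batch maximum; views were augmented one by one."""
    max_views = max(len(views) for views, _ in batch)
    c, h, w = batch[0][0][0].shape                    # all views share one augmented size
    imgs = torch.zeros(len(batch), max_views, c, h, w)
    mask = torch.zeros(len(batch), max_views)         # 1 = real view, 0 = padding
    for i, (views, _) in enumerate(batch):
        for j, v in enumerate(views):
            imgs[i, j], mask[i, j] = v, 1.0
    labels = torch.tensor([lbl for _, lbl in batch])
    return imgs, mask, labels

# loader = DataLoader(dataset, batch_size=8, collate_fn=collate_views)
```

The mask lets the model ignore the padded slots when pooling predictions across views.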

1 Like

I don’t think merging all the images into a single image will work well for this. Having an RGB image of frontal, lateral, and oblique radiographs overlaid on each other will not be very helpful.

The dataset is also very messy - you will notice some images are inverted due to differences in technique (black on white vs white on black).
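
A simple heuristic you could try for normalizing polarity (just a sketch, with an assumed threshold): x-rays usually have a dark background, so if the border pixels are mostly bright, the image is probably inverted.

```python
import numpy as np
from PIL import Image

def maybe_invert(path, thresh=0.5):
    """Flip an image's polarity if its border (assumed background) is mostly bright."""
    img = np.asarray(Image.open(path).convert('L'), dtype=np.float32) / 255.0
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    return 1.0 - img if border.mean() > thresh else img
```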

There is also a very easy sub-problem that might dominate a classification approach without more labeling: many of the “abnormal” x-rays have metallic hardware or casts, which are much easier to spot (and less clinically relevant) than more interesting abnormalities like fractures, tumors, etc.

5 Likes

Images aren’t really RGB, just saved that way for the competition.

1 Like

Interesting point: converting them to grayscale can reduce memory usage!

Except that when you use a CNN model pre-trained on ImageNet, you have to convert them back to RGB again. :thinking:

You don’t have to: you can sum the weights along the channel dimension and change the first layer to a 1-channel input.
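
For example, with a torchvision ResNet it might look like this sketch:

```python
import torch
import torch.nn as nn
from torchvision import models

# A grayscale image replicated across three channels gives exactly the same conv
# output as a single channel convolved with the RGB weights summed together, so
# summing preserves the pretrained features.
model = models.resnet34(pretrained=True)
w = model.conv1.weight.data                       # shape (64, 3, 7, 7)
conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
conv1.weight.data = w.sum(dim=1, keepdim=True)    # shape (64, 1, 7, 7)
model.conv1 = conv1                               # model now takes 1-channel input
```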

4 Likes