MRNet: Stanford Knee MRI Dataset And Competition

Great to see the activity here! I spent a few more hours this afternoon researching and wanted to throw out a few more ideas and concepts, and hopefully we can get going after that.

Possible additional ideas re: architecture:
1 - I saw two papers using super resolution on MRI images in order to get better classification results (2x and 4x resolution enhancements). I believe we will have an updated FastAI super-resolution portion in one of the next classes here, so that might be an interesting pre-processing step to increase our accuracy. (@nswitanek - since you have the images, are they clear, or would super resolution be worth investigating?)
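For reference, here is the trivial non-learned upscaling baseline that any super-resolution model would need to beat. This 2x nearest-neighbor upscale is just an illustration I'm adding, not the SR method from the papers:

```python
import numpy as np

def upscale_2x_nearest(img):
    """Naive 2x nearest-neighbor upscale of a 2D grayscale slice.

    Each pixel is duplicated into a 2x2 block. A learned
    super-resolution model should look much sharper than this;
    it is only a baseline for comparison.
    """
    return np.kron(img, np.ones((2, 2), dtype=img.dtype))

slice_lowres = np.array([[1.0, 2.0], [3.0, 4.0]])  # toy 2x2 "slice"
slice_highres = upscale_2x_nearest(slice_lowres)
print(slice_highres.shape)  # (4, 4)
```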

2 - There really is not much in terms of knee MRI and deep learning (vs. tons for brain MRI and deep learning). I mostly found Stanford's paper from their project entry and one from earlier using a sort of U-Net. I did want to show one image from that paper so we have some initial examples of what we are trying to find:


(a) cartilage softening on the lateral tibial plateau, (b) cartilage fissuring on the medial femoral condyle, (c) focal cartilage defect on the medial femoral condyle, and (d) diffuse cartilage thinning on the lateral femoral condyle that were correctly identified by the cartilage lesion detection system (arrow).

and their architecture (it scored well, and is surprisingly based on VGG) - this was just for cartilage though, not ACL or general abnormality:


and link:

3 - On the good news front - one paper tested a CNN pretrained on ImageNet and then fine-tuned on a small set of MRI images, vs. a CNN fully trained on only MRI. They found the ImageNet one outperformed the dedicated MRI-trained one, so that's great for us since we've all used transfer learning.

4 - Segmentation first, or direct classification? Several papers used segmentation and then classification… not sure which is better here.

Depending on architecture selection, we could end up with multiple teams/projects, because in the most specialized version we would have:
Super resolution of imagery -> Data augmentation(?) -> Segmentation -> Classification

Or we just have multiple classification systems leveraging the same images but with different priorities, à la @neuradai's excellent proposal:

I believe in tomorrow's class or next week's we'll be building ResNet from scratch with the latest/greatest FastAI 1.2, so that might give us a first chance to make a small dataset and try out the 3 channel model pretty quickly?
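For the "3 channel model", one common trick (sketched below as an assumption on my part, not a confirmed plan) is to stack three adjacent single-channel MRI slices into the RGB channels an ImageNet-pretrained network expects:

```python
import numpy as np

def three_channel_slice(volume, i):
    """Stack slices i-1, i, i+1 of a (num_slices, H, W) volume into an
    (H, W, 3) image, clamping at the volume boundaries.

    This lets a network pretrained on 3-channel ImageNet images consume
    single-channel MRI data while adding some through-plane context.
    """
    lo = max(i - 1, 0)
    hi = min(i + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=-1)

vol = np.zeros((20, 256, 256), dtype=np.float32)  # toy stand-in for one series
print(three_channel_slice(vol, 0).shape)  # (256, 256, 3)
```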

btw - these boards keep blocking me from replying more than 2 or 3 times, but I did want to say thanks a bunch to @neuradai for the domain knowledge posts (very helpful!) and also thanks to @melonkernel for posting about this competition and starting this project, and @rsrivastava and @tcapelle for joining in… and I'm very excited we have a radiologist interested in helping - thanks to @agentili!

I really hope we can beat out the Stanford team and make some waves for FastAI in doing so :slight_smile:

Oh, and since I had to double-check some of these medical terms, here's the layout of what sagittal (or as I would term it, side view), coronal (front view), and axial or transverse (top down) are:


Here is a paper I found on knee analysis using CNNs: https://arxiv.org/pdf/1703.09856.pdf. Note that this paper uses X-ray rather than MRI.

So we have a few questions.

  1. How do we handle MRI images?
  2. How does knee MRI differ from brain MRI?
  3. Do we need to use segmentation to determine the boundaries of the various tissues in the knee data?
  4. How do we handle 3D MRI data?

Really nice. I hope you get the guidance you need.

To learn about knee MRI, this picture is very helpful:


I just found this paper on LiSHT, a possibly better activation than ReLU, Swish, etc., from January. It tested better on a number of datasets, and they used ResNet for it… so I'll do some more testing with it in our class notebooks, but this might give us another edge here.
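For anyone curious, LiSHT is just the input scaled by its own tanh, i.e. f(x) = x · tanh(x). A minimal NumPy sketch (in practice it would be a drop-in replacement for ReLU inside the network):

```python
import numpy as np

def lisht(x):
    """LiSHT activation: x * tanh(x).

    Unlike ReLU it is smooth and, since tanh is odd, the output is
    non-negative and symmetric: f(-x) == f(x), with f(0) == 0.
    """
    return x * np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(lisht(x))  # symmetric around zero, e.g. f(-2) == f(2)
```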

Got a chance to download and briefly review the data this morning. Here's a screenshot from the EDA notebook I'm putting together, with the code used to generate the plot below.

import matplotlib.pyplot as plt
from ipywidgets import interactive
from IPython.display import display

plt.style.use('grayscale')

class KneePlot():
    def __init__(self, x, figsize=(10, 10)):
        # x is a (num_slices, height, width) array holding one MRI series
        self.x = x
        self.slice_range = (0, self.x.shape[0] - 1)
        self.resize(figsize)

    def _plot_slice(self, im_slice):
        # Display a single slice of the volume
        fig, ax = plt.subplots(1, 1, figsize=self.figsize)
        ax.imshow(self.x[im_slice, :, :])
        plt.show()

    def resize(self, figsize):
        # Rebuild the slider-backed interactive plot at the new figure size
        self.figsize = figsize
        self.interactive_plot = interactive(self._plot_slice, im_slice=self.slice_range)
        self.output = self.interactive_plot.children[-1]
        self.output.layout.height = '{}px'.format(60 * self.figsize[1])

    def show(self):
        display(self.interactive_plot)

With this code you can generate the interactive plot shown above, so you can scroll through the slices and get a sense of what we're looking at.
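Alongside the interactive viewer, a quick numeric sanity check of each series is useful too. The real loading call would be something like `np.load('MRNet-v1.0/train/sagittal/0000.npy')` giving a `(num_slices, 256, 256)` array - that path and shape are my assumption about the on-disk layout, so a synthetic volume stands in here:

```python
import numpy as np

# Stand-in for one series; with the real data this would be, e.g.:
#   vol = np.load('MRNet-v1.0/train/sagittal/0000.npy')   # assumed layout
vol = np.random.default_rng(0).integers(0, 255, size=(30, 256, 256)).astype(np.float32)

# Basic per-series stats: slice count and intensity range
print('slices:', vol.shape[0])
print('intensity min/max/mean: %.1f / %.1f / %.1f' % (vol.min(), vol.max(), vol.mean()))
```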

When I have a little more time, I’ll send a link to the full EDA nb.


Hi there,
I work at UCSF in brain tumor imaging prediction, and would love to be a part of this effort. Is there any organized meeting for working on this project so far?


Thanks for sharing, this is very helpful.

Hi @julclu - would love to have you helping, esp with your background!

Re: organized meeting - not yet (nothing beyond this thread). Personally, I was waiting for tonight’s class as I think we will learn to build ResNet using the new framework (FastAI 1.2) and that’s probably what we would want to use to start with for this project.

Maybe tomorrow (4/18) we can start discussion about a more formalized meeting/organization as we should have enough tools to get going if we get to building various nets in class tonight?


Thanks for posting this, it's very useful. I'm also taking part in this competition and was trying to understand what "slice" means here - should we interpret each slice as a separate image?

I’m going to edit the code above shortly with some changes I made.

In MRI, a slice is a single image in a stack. You can use a 2D convnet to perform image-level classification, then derive study-level classification from that. Or you can use a 3D convnet to achieve study-level classification by treating the entire stack of images as a volume. There are also some hybrid approaches using object detection architectures, like this paper.
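A minimal sketch of the 2D route: run a slice-level classifier over every image in the stack, then reduce the per-slice probabilities to one study-level probability. Max-pooling over slices is one simple, common choice (the probabilities below are a stand-in for a 2D convnet's sigmoid outputs):

```python
import numpy as np

def study_probability(slice_probs):
    """Aggregate per-slice abnormality probabilities into a single
    study-level probability by taking the max over slices: if any
    one slice looks abnormal, the whole study is flagged."""
    return float(np.max(slice_probs))

# Stand-in for a 2D convnet's per-slice outputs on a 5-slice series:
probs = np.array([0.05, 0.10, 0.92, 0.30, 0.08])
print(study_probability(probs))  # 0.92
```

Mean-pooling, or a small model over the per-slice features, are the other obvious aggregation choices.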


Awesome work guys!

Awesome, thanks for the notebook! Nice to be able to see the data now.


Do you guys have any preference on an eventual virtual meetup? We could perhaps do a Google Hangout. What timezones are people here in? (me: GMT+2)


Hi. Thanks to @melonkernel for creating this thread. I just read all posts and saw your proposals of models and links to papers. Great :slight_smile:

However, why not start with the model of the original paper “Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet”:

Code for replicating these findings is provided as Supporting Information (S1 Code and S2 Code).

At first, we could port this code to fastai in order to get the same results (or even exceed them, thanks to the great ideas of fastai :wink:).

From there, we could post the fastai notebook on GitHub and put the weights online (in order to help the DL and health community not to train the same models again and again from scratch).

Then, as a team or through different teams (here in Brasília, we are creating a team for this competition), we could train better models to improve our results.

What do you think of that?


The solution is in another thread about MRNet, Stanford MRNet Competition trouble downloading dataset to gcp (I guess this is what @nswitanek used, no?):

  • Use this Google extension to download the 6 GB MRNet dataset (via wget ...) into GCP: it works well :slight_smile:

The structure after unzip is the following one:

- train
-- axial  
-- coronal  
-- sagittal
- train-abnormal.csv
- train-acl.csv
- train-meniscus.csv
- valid
-- axial  
-- coronal  
-- sagittal
- valid-abnormal.csv
- valid-acl.csv
- valid-meniscus.csv
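If the label CSVs follow the usual MRNet layout of two headerless columns, case ID then a 0/1 label - which is my assumption about the files listed above - they can be parsed with just the standard library:

```python
import csv
import io

def load_labels(csv_text):
    """Parse a headerless two-column (case_id, label) CSV, as the
    train-*.csv / valid-*.csv files appear to be, into {case_id: int}."""
    return {row[0]: int(row[1]) for row in csv.reader(io.StringIO(csv_text)) if row}

# Toy stand-in for the contents of, e.g., train-acl.csv:
sample = "0000,0\n0001,1\n0002,0\n"
labels = load_labels(sample)
print(labels)  # {'0000': 0, '0001': 1, '0002': 0}
```

With the real files you would pass `open('MRNet-v1.0/train-acl.csv').read()` (assumed path) instead of the sample string.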

With the command du -h MRNet-v1.0 in my Ubuntu terminal, I get the following folder sizes:

258M    MRNet-v1.0/valid/axial
222M    MRNet-v1.0/valid/coronal
230M    MRNet-v1.0/valid/sagittal
709M    MRNet-v1.0/valid
2.4G    MRNet-v1.0/train/axial
2.1G    MRNet-v1.0/train/coronal
2.2G    MRNet-v1.0/train/sagittal
6.6G    MRNet-v1.0/train
7.3G    MRNet-v1.0

Hi Pierre,
Thanks - my issue was on Windows, but yes, I may have to download to a server since I've had no luck on Windows yet!

I have been able to download the data and run the above code to view the images. But I am using a Mac.

@LessW2020 @rsrivastava: in fact, I did not try to download the MRNet dataset to a Mac or a Windows notebook, but to my instance in GCP. I guess the Google extension given works in all situations.


It is working for me too.
