Medical Imaging in fastai v2

I am very excited that fastai v2 has a Medical Imaging submodule, and I have been reviewing as much documentation as I can to learn more about it. One extremely helpful resource is this notebook: https://github.com/fastai/fastai/blob/master/nbs/61_tutorial.medical_imaging.ipynb
Thank you very much @jeremy, @sgugger, @albertvillanova, @asvcode, and @Moritz for putting it together!
I was trying to apply the same pipeline to 16-bit images but got stuck, probably because of issues with image normalization. Are there any special tricks that can be applied in the DataBlock for 16-bit images?
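
For context, here is the kind of pixel data these files contain, which is what seems to break the usual normalization (a quick sketch; fn stands in for a path to one of the DICOM files):

import pydicom

dcm = pydicom.dcmread(fn)  # fn: path to one of the 16-bit DICOM files
px = dcm.pixel_array
# values span far more than 0-255, so 8-bit normalization assumptions break
print(px.dtype, px.min(), px.max())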

Also,
The Radiological Society of North America (RSNA) has put together a new challenge on the detection of pulmonary embolism - https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection - and it would be phenomenal if someone could demonstrate the flexibility of the new Medical Imaging module with that dataset.

Best regards,

Luciano


Glad you liked the tutorial :)!

Can you elaborate a bit more on the issues with your pipeline?

I had a quick look at the dataset you mentioned, and the images are signed int16 arrays. I uploaded a quick kernel on Kaggle that uses a custom version of PILDicom so that you can create DataBlocks with various window levels.

You can also check out my medical imaging blog, which predominantly uses fastai.

lung window:
[image: lung]

pe window:
[image: lung_pe]

mediastinal window:
[image: lung_me]
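
For reference, windows like these can be applied either through fastai's dicom_windows presets or with explicit w/l values (a quick sketch, assuming im = dcmread(fn) for some DICOM file fn; the pe pair here is just an example):

show_image(im.windowed(*dicom_windows.lung), cmap='bone')         # lung window
show_image(im.windowed(w=700, l=100), cmap='bone')                # a pe window
show_image(im.windowed(*dicom_windows.mediastinum), cmap='bone')  # mediastinal window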

Your custom version of PILDicom is exactly what I was looking for. Thank you very much! But I wonder if something is going wrong with the windows: they don’t match what we see in the clinical setting. Should we use the im.windowed(w=window, l=level) function instead? But even when I do that, I still don’t see it being applied to all images in a consistent way. Do you know if this function accounts for the different Pixel Representation and Rescale Intercept/Slope values that can exist in data from multiple scanners? Any advice on how to leverage the scaled_px attribute that applies rescaling automatically?
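
For reference, these are the tags I mean; they vary from scanner to scanner (a quick sketch, with fn standing in for any of the DICOM paths):

dcm = dcmread(fn)
# per-scanner tags that affect how raw pixel values map to Hounsfield units
print(dcm.PixelRepresentation, dcm.RescaleIntercept, dcm.RescaleSlope)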

Thanks a lot!

Luciano

Here is an example of the issue I’m describing with inconsistent windowing:

scaled = im.scaled_px.numpy() takes care of the intercept and addresses the consistency part, but I am having trouble combining that with windowing. Any help would be much appreciated.
Also, networks with pretrained ImageNet weights are not learning at all. I assume it is because the DICOM data is very different. Do I have to convert to 3-channel 8-bit images to get the benefits of transfer learning, or is there some other trick?
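
The kind of conversion I have in mind is something like this (a rough sketch only; I have not verified this is the right approach, and fn stands in for a DICOM path):

import numpy as np
from PIL import Image

dcm = dcmread(fn)
px = dcm.windowed(w=80, l=40).numpy()  # windowed values scaled to [0, 1]
px8 = (px * 255).astype(np.uint8)      # down to 8-bit
rgb = Image.fromarray(np.stack([px8] * 3, axis=-1))  # replicate to 3 channels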

Thanks!

Luciano

Yes, you are right here; this is the correct way of applying the window. Not exactly sure about the inconsistency at this point, though.

Not that I am aware of

This is because you have to take the .numpy() out of scaled_px before applying the window. This should work:

 g = scaled_px(im)
 scaled = g.windowed(*dicom_windows.brain).numpy()

However, windowed already takes scaled_px into consideration, so it does not have to be called explicitly prior to the window:

def windowed(self:DcmDataset, w, l):
    return self.scaled_px.windowed(w,l)
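
So these two give the same tensor (a quick check, reusing im = dcmread(fn) from above; test_eq is fastcore's equality assertion):

a = scaled_px(im).windowed(*dicom_windows.brain)
b = im.windowed(*dicom_windows.brain)
test_eq(a, b)  # passes: windowed already rescales internally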

I am going to have to experiment to see if the windows are being applied correctly when I get a chance.

Not sure about this at this point

I tried this, but the result is not as expected. The result is essentially the same as the windowed function alone. My guess is that something is wrong with the way the windowed function interacts with scaled_px.

Ok, the issue seems to be the way the lighting transforms are applied by aug_transforms, which is interfering with the windowing. Try this instead and see if you get better results.

Keep the code above the same,

but change this:

batch_tfms=aug_transforms(size=512)

to:

batch_tfms=Resize(512)

so that Resize is the only transform, and see if you get better results.
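
In context, the whole block would look something like this (a minimal sketch; get_dicom_files is from fastai.medical.imaging, and label_func is a placeholder for your own labelling function):

dblock = DataBlock(
    blocks=(ImageBlock(cls=PILDicom), CategoryBlock),
    get_items=get_dicom_files,
    get_y=label_func,
    batch_tfms=Resize(512))  # Resize as the only transform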

Perfect! That dealt with the consistency issue. I wonder why scaled_px alone was not being transformed in the same fashion. Anyway, great catch and thank you! I am still not able to make the window and level reflect what we see clinically, though. I tried the code above and im.windowed(w=80, l=40), but no luck. They still look much darker than they should:

Still working on that (I have to jump off to attend a clinic). The windowing itself seems to be working ok; for example, to confirm that the window is being applied (this is a confirmed PE in the image, per the data):

pe_confirmed = f'{source}/train/4833c9b6a5d0/57e3e3c5f910/f890efd48940.dcm'
pel = dcmread(pe_confirmed)
pel.show()

[image: pe1]

The show function basically uses hist_scaled, so this produces the same result:

normalized = pel.hist_scaled(min_px=-1100,max_px=None)
show_image(normalized, cmap='bone');

[image: pe2]

rescaling:

p = pixels(pel)
re = (p * pel.RescaleSlope + pel.RescaleIntercept)
show_image(re, cmap='bone');

[image: pe3]

window:

peu = re.windowed(l=100, w=700)
show_image(peu, cmap='bone');

[image: pe4]

The above can be reproduced with just the following, giving the same result:

pel = dcmread(pe_confirmed)
peu2 = pel.windowed(l=100, w=700)
show_image(peu2, cmap='bone');

[image: pe5]

Maybe there is another issue in the pipeline somewhere

Lots of examples of working with 16-bit data here, FYI:


Those kernels have always been my go-to notebooks!


@lprevedello, this update should now make the images consistent with the windowing.

class PILDicom2(PILBase):
    _open_args,_tensor_cls,_show_args = {},TensorDicom,TensorDicom._show_args
    @classmethod
    def create(cls, fn:(Path,str,bytes), mode=None)->None:
        if isinstance(fn,bytes): im = pydicom.dcmread(pydicom.filebase.DicomBytesIO(fn))
        if isinstance(fn,(Path,str)): im = dcmread(fn)
        #scaled = np.array(im.hist_scaled(min_px =-1100, max_px=None).numpy())*255 #to view best image visualization
        scaled = np.array(im.windowed(w=40, l=80).numpy())*255
        scaled = scaled.astype(np.uint8)
        return cls(Image.fromarray(scaled))
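
For a quick sanity check you can open a single file with it directly (reusing the pe_confirmed path from earlier in the thread):

img = PILDicom2.create(pe_confirmed)
img.show(cmap='bone')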

Please let me know if your images are now more consistent.

Hi @amritv. Fantastic! You were right that the consistency issue was caused by the random transformations being applied in the DataBlock, and this new code addresses the visualization issue I was seeing before with show_batch.

I just changed the window and level to the brain w/l (w=80, l=40)

Here is what I am getting now:

Which is exactly as I was expecting! Thank you very much!

Luciano


Hi @lprevedello,
Checking in to see whether you have been able to build the inference model for the competition. I am struggling with it, as the output format is very different…

Hi All
I’m working on a pulmonary embolism challenge on Kaggle, for which I want to use both DICOM images and metadata for training and inference (both need to be extracted and transformed while creating batches).
I am finding it difficult to understand how to create a DataBlock for this.
More details here: Medical Imaging | DataBlock for DICOM metadata
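
For what it’s worth, the rough shape I have been trying is something like this (an untested sketch; the metadata column names are made up, and I am not sure this is the right way to combine image and metadata inputs):

dblock = DataBlock(
    blocks=(ImageBlock(cls=PILDicom), RegressionBlock, CategoryBlock),
    getters=[ColReader('path'),          # DICOM file path column
             ColReader(['age', 'kvp']),  # metadata columns (made-up names)
             ColReader('label')],        # target column
    n_inp=2,                             # first two blocks are inputs
    batch_tfms=Resize(512))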

It would be great if someone could help or share relevant notebooks! :slight_smile: