DICOM images not displaying correctly

I am using some DICOM images. When using:

patient = 2
xray_sample = dcmread(items[patient])


There is no problem; however, when I try to display it using:


I get a completely black image. However, if I use matplotlib’s show() like:


I get the expected result:

<matplotlib.axes._subplots.AxesSubplot at 0x7fb962fe0a10>

I am quite sure that the problem is with the cmap parameter in the fastai2 show, but I am unsure how to change it properly. I tried updating the TensorDicom class:

class TensorDicom(TensorImage): _show_args = {'cmap':'gray'}

But I do not see any change.

I am also aware that the images are dtype=uint16, but converting to dtype=uint8 using:

im = Image.fromarray(cv2.convertScaleAbs(xray_sample.pixel_array))

returns a completely burned (overexposed) image.
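To illustrate why the conversion burns the image, here is a NumPy-only sketch (no OpenCV needed) of what cv2.convertScaleAbs effectively does with the default scale, as far as I understand it: it saturates everything above 255 into the uint8 range, so most radiograph pixel values get blown out.

```python
import numpy as np

# Simulated 16-bit X-ray pixel values (a radiograph often uses ~4000 levels)
pixels = np.array([0, 200, 1500, 4000], dtype=np.uint16)

# What convertScaleAbs effectively does at scale=1: clip (saturate) into uint8
clipped = np.clip(pixels, 0, 255).astype(np.uint8)
print(clipped)  # [  0 200 255 255] -- everything above 255 saturates to white
```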

Any ideas?

I could be wrong, but it looks like dcmread returns a DcmDataset, which then has a .show method that delegates to show_image:

PILDicom.create returns a PIL image. The PIL image uses the .show method from PILBase, which in turn uses show_image in torch_core; that function converts the PIL image to a NumPy array before using matplotlib to show it.
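As a rough sketch of that display pipeline (treat the stand-in image below as an assumption; it only mimics the grayscale PIL image that PILDicom.create would return):

```python
import numpy as np
from PIL import Image

# Stand-in for the grayscale PIL image PILDicom.create would return
# (mode 'I' = 32-bit integer pixels, used here in place of a 16-bit image)
pil_img = Image.new('I', (4, 4), 1500)

# show_image essentially does this: PIL image -> NumPy array -> matplotlib
arr = np.asarray(pil_img)
print(arr.shape, arr[0, 0])  # (4, 4) 1500
# plt.imshow(arr, cmap='gray') would then render the array
```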

(I think :slight_smile: )

I think the problem is whether the image is 8- or 16-bit.
The burned image probably appears when the 16-bit image gets clipped to values between 0 and 255: every pixel above 255 saturates to white, so the image looks overexposed.
For the black image, I believe the image is loaded correctly but was not rescaled. A radiograph mostly uses around 4,000 individual pixel values (you can see exactly how many in the image header), but a 16-bit image allows for up to 65,536 values. If you used, let’s say, 100 different colors for display, only the first 6 or so would cover the actual pixel values, and the remaining 94 colors would go unused, so the image appears almost black.
This could be corrected by manually rescaling:

import numpy as np
from fastai.medical.imaging import PILDicom
from fastai.vision.core import PILImage

im = PILDicom.create(fn)              # load the DICOM as a PIL image
im = np.asarray(im)                   # get the pixel array from the image
im = (im / np.max(im)) * 255          # rescale to values between 0 and 255
PILImage.create(im.astype(np.uint8))  # cast to uint8 and build the image again

Now the image is displayed correctly; however, you lose pixel information in this process. The reason matplotlib displays it correctly is that it probably adapts the range of the cmap automatically to the range of pixel values in the image.
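To make that last point concrete: when imshow is given no vmin/vmax, matplotlib normalizes the colormap to the data’s own min and max, which is equivalent to the min-max rescale sketched below (a sketch of the behaviour, not fastai code):

```python
import numpy as np

arr = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)

# What imshow effectively does by default: map [arr.min(), arr.max()] onto
# the full colormap, so even a narrow slice of the 16-bit range uses all colors
normed = (arr - arr.min()) / (arr.max() - arr.min())
print(normed)  # [[0.   0.25]
               #  [0.5  1.  ]]
```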

Hi!
Thanks for your great answer. In my case, I’m loading 16-bit PNG chest X-ray images. When calling dls.show_batch(), all images look white, which I think is because the function does not normalize the 16-bit values [0, 65535] correctly. I don’t care that much about the display; what matters is whether the model is getting the images the way I want (16-bit [0, 65535], but normalized to the range [0, 1]). I’ve read that PyTorch normalizes all images (both 8- and 16-bit) to the range [0, 1].
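In case it helps, if you want to guarantee the [0, 1] range yourself rather than rely on the loader, the normalization is a one-liner; a minimal sketch, assuming the raw values use the full uint16 range:

```python
import numpy as np

arr = np.array([[0, 32768], [49152, 65535]], dtype=np.uint16)

# Explicitly map the full 16-bit range [0, 65535] onto [0, 1]
scaled = arr.astype(np.float32) / 65535.0
print(scaled.min(), scaled.max())  # 0.0 1.0
```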