Show_batch erroring on matplotlib

Hello,
I’m a new user of fastai, and while trying to follow some tutorials I get stuck right after building the DataLoaders.
For example, following the walkwithfastai vision Lesson 1 - PETS, after executing this line:

dls.show_batch(max_n=9, figsize=(6,7))

I get:


/usr/lib/python3/dist-packages/matplotlib/image.py in _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification, unsampled, round_to_pixel_border)
466 A, output, t, interpd[self.get_interpolation()],
467 self.get_resample(), alpha,
→ 468 self.get_filternorm() or 0.0, self.get_filterrad() or 0.0)
469
470 # at this point output is either a 2D array of normed data

ValueError: 3-dimensional arrays must be of dtype unsigned byte, unsigned short, float32 or float64
<matplotlib.figure.Figure at 0x7ef1c954a8>

Am I missing something? Is it a problem with my FastAI installation? I used mamba to install it.
Thank you for your help.

Please see this link; it will probably answer your query.

Thank you so much for your help. So the only solution for now is to build matplotlib from source or wait for a new release?

From the error message you received, I believe you were using the torch.float16 dtype. Try torch.float32 or float64 instead; that should work, or at least should not give this error. You may also want to preformat your error message when posting, as it looks neater and is easier to read: just select the code part and press Ctrl+E.
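
If you want to confirm that, a quick check (just a sketch, assuming dls is the DataLoaders you already built) is to grab one batch and print the dtype of the image tensor:

# Inspect one batch from the DataLoaders; torch.float16 here would explain
# the matplotlib ValueError above.
xb, yb = dls.one_batch()
print(xb.dtype)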

Thank you. That sounds like a more reasonable approach.
Is there a way to force the dtype to float64 in ImageDataLoaders?

dls = ImageDataLoaders.from_name_re(path, fnames, pat, batch_tfms=batch_tfms, 
                                   item_tfms=item_tfms, bs=bs)
dls.show_batch(max_n=9, figsize=(6,7))
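
One possible way (a rough sketch only, not something from the tutorial; CastToFloat32 is a name made up here for illustration) is to append a small batch transform that casts image tensors to float32, so matplotlib receives a dtype it supports:

from fastai.vision.all import *

class CastToFloat32(Transform):
    "Hypothetical batch transform: cast image tensors to float32."
    order = 100  # run after Normalize and the augmentations
    def encodes(self, x: TensorImage): return x.float()

batch_tfms = [*aug_transforms(size=224, max_warp=0),
              Normalize.from_stats(*imagenet_stats),
              CastToFloat32()]
dls = ImageDataLoaders.from_name_re(path, fnames, pat, batch_tfms=batch_tfms,
                                    item_tfms=item_tfms, bs=bs)
dls.show_batch(max_n=9, figsize=(6,7))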

Please post your code from the start up to the point where it shows the error; that way it will be easier to comment on your query. Also, please mention the platform you are using. As a side note, update fastai to the latest version if you haven't done so yet.

Cheers!

Thank you for your time!
Currently the code is:

from fastai.basics import *
from fastai.vision.all import *
from fastai.callback.all import *

path = untar_data(URLs.PETS)
fnames = get_image_files(path/'images')
pat = r'(.+)_\d+.jpg$'

item_tfms = RandomResizedCrop(460, min_scale=0.75, ratio=(1.,1.))
batch_tfms = [*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)]
bs=64

dls = ImageDataLoaders.from_name_re(path, fnames, pat, batch_tfms=batch_tfms, 
                                   item_tfms=item_tfms, bs=bs)

pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 get_y=RegexLabeller(pat = r'/([^/]+)_\d+.*'),
                 item_tfms=item_tfms,
                 batch_tfms=batch_tfms)

path_im = path/'images'
dls = pets.dataloaders(path_im, bs=bs)

dls.show_batch(max_n=9, figsize=(6,7))

It is currently running in a Jupyter notebook within a Docker container on an NVIDIA Jetson Xavier NX (aarch64).

The code seems OK to me. Try running this on Google Colab and see if the error reproduces. Before that, update the libraries and the fastai version. I hope this will run fine.
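
As a side note, comparing library versions between the Jetson container and Colab may help narrow it down; a minimal check would be something like:

# Run this in both environments and compare; aarch64 builds of torch and
# matplotlib can lag behind the versions Colab ships.
import fastai, torch, matplotlib
print('fastai:', fastai.__version__)
print('torch:', torch.__version__)
print('matplotlib:', matplotlib.__version__)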

Indeed, it works fine in Colab.
The problem must be in my setup. I will investigate further.
Thank you very much for your assistance.