V1.0 data augmentation - AssertionError: List len mismatch (0 vs 2)

Hi there,

I’m trying to get a grip on the new fastai version.

I failed trying to show examples of data augmentations for dogs/cats. Any idea what’s going on?

data = image_data_from_folder(PATH, ds_tfms=get_transforms(), tfms=imagenet_norm)
ds = data.train_ds
data_tfms = DatasetTfm(ds, ds.tfms)
x,y = next(iter(data_tfms))

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
      1 ds = data.train_ds
      2 data_tfms = DatasetTfm(ds, ds.tfms)
----> 3 x,y = next(iter(data_tfms))

~/fastai/fastai/vision/data.py in __getitem__(self, idx)
    188     def __getitem__(self,idx:int)->Tuple[ItemBase,Any]:
    189         "Return tfms(x),y."
--> 190         x,y = self.ds[idx]
    191         x = apply_tfms(self.tfms, x, **self.kwargs)
    192         if self.tfm_y: y = apply_tfms(self.tfms, y, **self.y_kwargs)

~/fastai/fastai/vision/data.py in __getitem__(self, idx)
    189         "Return tfms(x),y."
    190         x,y = self.ds[idx]
--> 191         x = apply_tfms(self.tfms, x, **self.kwargs)
    192         if self.tfm_y: y = apply_tfms(self.tfms, y, **self.y_kwargs)
    193         return x, y

~/fastai/fastai/vision/image.py in apply_tfms(tfms, x, do_resolve, xtra, size, mult, do_crop, padding_mode, **kwargs)
    450     for tfm in tfms:
    451         if tfm.tfm in xtra: x = tfm(x, **xtra[tfm.tfm])
--> 452         elif tfm in size_tfms: x = tfm(x, size=size, padding_mode=padding_mode)
    453         else: x = tfm(x)
    454     return x

~/fastai/fastai/vision/image.py in __call__(self, x, *args, **kwargs)
    366     def __call__(self, x:Image, *args, **kwargs)->Image:
    367         "Randomly execute our tfm on `x`."
--> 368         return self.tfm(x, *args, **{**self.resolved, **kwargs}) if self.do_run else x

~/fastai/fastai/vision/image.py in __call__(self, p, is_random, *args, **kwargs)
    310     def __call__(self, *args:Any, p:float=1., is_random:bool=True, **kwargs:Any)->Image:
    311         "Calc now if `args` passed; else create a transform called prob `p` if `random`."
--> 312         if args: return self.calc(*args, **kwargs)
    313         else: return RandTransform(self, kwargs=kwargs, is_random=is_random, p=p)

~/fastai/fastai/vision/image.py in calc(self, x, *args, **kwargs)
    315     def calc(self, x:Image, *args:Any, **kwargs:Any)->Image:
    316         "Apply to image `x`, wrapping it if necessary."
--> 317         if self._wrap: return getattr(x, self._wrap)(self.func, *args, **kwargs)
    318         else: return self.func(x, *args, **kwargs)

~/fastai/fastai/vision/image.py in pixel(self, func, *args, **kwargs)
    152     def pixel(self, func:PixelFunc, *args, **kwargs)->'Image':
    153         "Equivalent to `image.px = func(image.px)`."
--> 154         self.px = func(self.px, *args, **kwargs)
    155         return self

~/fastai/fastai/vision/transform.py in crop_pad(x, size, padding_mode, row_pct, col_pct)
     90     "Crop and pad tfm - `row_pct`,`col_pct` sets focal point."
     91     padding_mode = _pad_mode_convert[padding_mode]
---> 92     size = listify(size,2)
     93     if x.shape[1:] == size: return x
     94     rows,cols = size

~/fastai/fastai/core.py in listify(p, q)
     84     n = q if type(q)==int else len(p) if q is None else len(q)
     85     if len(p)==1: p = p * n
---> 86     assert len(p)==n, f'List len mismatch ({len(p)} vs {n})'
     87     return list(p)

AssertionError: List len mismatch (0 vs 2)

I found out that the images are not cropped because this line:

size_tfms = [o for o in tfms if isinstance(o.tfm,TfmCrop)]

always returns an empty list even though there are instances of TfmCrop in it.

e.g. I printed

print(tfms[-1].tfm.__class__.__name__, isinstance(tfms[-1].tfm, TfmCrop))

which prints `TfmCrop False`. Why is that? Digging further:

This does work:

size_tfms = [o for o in tfms if o.tfm.__class__.__name__ == 'TfmCrop']

instead of

size_tfms = [o for o in tfms if isinstance(o.tfm, TfmCrop)]

but comparing class names like that is hacky Python.
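One way `isinstance` can return `False` while `__class__.__name__` still matches is a module reload (or a duplicate import path): `isinstance` compares class *objects*, and re-executing a module body creates a brand-new class with the same name. The sketch below demonstrates the mechanism with a dummy `TfmCrop`, not the real fastai class:

```python
# Demonstration: after a module's body is re-executed (as importlib.reload
# would do), the name `TfmCrop` is rebound to a fresh class object, so
# isinstance checks against the new class fail for old instances.
import sys, types

mod = types.ModuleType("fake_tfms")
exec("class TfmCrop: pass", mod.__dict__)
sys.modules["fake_tfms"] = mod
obj = mod.TfmCrop()                 # instance of the *first* class object

exec("class TfmCrop: pass", mod.__dict__)  # simulate a reload
TfmCrop = mod.TfmCrop               # name now points at the *second* class

print(obj.__class__.__name__)                       # TfmCrop
print(isinstance(obj, TfmCrop))                     # False
print(obj.__class__.__name__ == TfmCrop.__name__)   # True
```

This matches the symptom above: the name-based comparison succeeds while the identity-based `isinstance` check does not.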

Besides the above, this line is failing on my machine as well:

transform.py, line 92 --> size = listify(size,2)

`size` seems to be an integer at that point, but `listify` is failing; setting `size` to e.g. `size = [224,224]` does the trick.

But why is `size = listify(size,2)` failing?

I think you need to add a `size` param (e.g. `size=224`) in your `get_transforms()`.

And probably in image_data_from_folder too.

@sgugger this is something we really need to fix. At least better error messages - ideally better behavior! Maybe size=224 should be default? …At least if you’re using crop_pad et al transforms…

I checked and get_transforms doesn’t take a size actually (it’s inferred in apply_tfms), so it’s mostly in the image_data method that this is missing.
Will see how we can have better error message popping for this.


I have a similar issue:
I created an ImageDataBunch like so:
data=faiv.ImageDataBunch.create(train_ds, valid_ds, test_ds, bs=4, ds_tfms=faiv.get_transforms()[0])
And I built a Learner from this like so:
learn=fai.Learner(data,m2, metrics=[fai.accuracy])
I, too, get AssertionError: List len mismatch (0 vs 2)
I tried using faiv.apply_tfms, which worked when I pass a size argument as described above. However, if I cannot pass this size argument, how can I resolve the error?
Another related question: what would be the canonical way to create my own tfms? Should I set data.train_ds.tfms=[list of my transforms], or should I use torchvision.transforms.Compose([my list of transforms]) and pass that to the constructor of my ImageDataBunch?
Sorry, I am a little confused.

What version of fastai are you using? faiv.apply_tfms should work even if you don’t specify a size now (since yesterday actually :wink: ), but you would still need to do that if your images aren’t all of the same size, or you won’t be able to put them in batches.

To create your own transform, just define it by following the models in fastai.vision.transform. I promise it’s super easy!
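For illustration, the built-in transforms follow roughly this pattern: a plain function on the pixel data, wrapped so it can be composed into a pipeline. The `TfmPixel` below is a simplified self-contained stand-in for fastai's real wrapper (and the "image" is nested lists rather than a tensor), so treat this as a sketch of the pattern, not fastai's actual API:

```python
# Minimal stand-in for fastai's transform wrapper: it wraps a plain
# function and applies it when the transform is called on an image.
class TfmPixel:
    def __init__(self, func): self.func = func
    def __call__(self, x, *args, **kwargs): return self.func(x, *args, **kwargs)

def _flip_lr(x):
    "Flip each row of a 2-D 'image' left/right."
    return [row[::-1] for row in x]

flip_lr = TfmPixel(_flip_lr)

img = [[1, 2, 3],
       [4, 5, 6]]
print(flip_lr(img))  # [[3, 2, 1], [6, 5, 4]]
```

In real fastai code the underlying function operates on the image tensor and the wrapper additionally handles random parameter resolution, but the define-a-function-then-wrap-it shape is the same.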

OK, I’ll update and try again. Many thanks so far!

I tried. Updated to versions:

fastai: 1.0.17
pytorch: 1.0.0.dev20181103
torchvision: 0.2.1

When using apply_tfms on a fastai Image I still need to provide the size argument.
tim=faiv.apply_tfms(faiv.get_transforms()[0], im);tim.shape where type(im) is fastai.vision.image.Image
Last bit of output.

/opt/conda/lib/python3.7/site-packages/fastai/vision/image.py in pixel(self, func, *args, **kwargs)
    160     def pixel(self, func:PixelFunc, *args, **kwargs)->'Image':
    161         "Equivalent to `image.px = func(image.px)`."
--> 162         self.px = func(self.px, *args, **kwargs)
    163         return self

/opt/conda/lib/python3.7/functools.py in wrapper(*args, **kw)
    819     def wrapper(*args, **kw):
--> 820         return dispatch(args[0].__class__)(*args, **kw)
    822     registry[object] = func

/opt/conda/lib/python3.7/site-packages/fastai/vision/transform.py in crop_pad(x, size, padding_mode, row_pct, col_pct)
    134     "Crop and pad tfm - `row_pct`,`col_pct` sets focal point."
    135     padding_mode = _pad_mode_convert[padding_mode]
--> 136     size = listify(size,2)
    137     if x.shape[1:] == size: return x
    138     rows,cols = size

/opt/conda/lib/python3.7/site-packages/fastai/core.py in listify(p, q)
     89     n = q if type(q)==int else len(p) if q is None else len(q)
     90     if len(p)==1: p = p * n
---> 91     assert len(p)==n, f'List len mismatch ({len(p)} vs {n})'
     92     return list(p)

size is None.

Now, apply_tfms is only a convenience function I used for testing my list of transforms. What I am really after is building a custom Dataset (which I have) and fitting a learner.
I used get_transforms() to be sure I am not causing this by applying transforms wrongly. So I get
(output after the exception in basic_train.py):

/opt/conda/lib/python3.7/site-packages/fastprogress/fastprogress.py in __iter__(self)
     63         self.update(0)
     64         try:
---> 65             for i,o in enumerate(self._gen):
     66                 yield o
     67                 if self.auto_update: self.update(i+1)

/opt/conda/lib/python3.7/site-packages/fastai/basic_data.py in __iter__(self)
     72     def __iter__(self):
     73         "Process and returns items from `DataLoader`."
---> 74         for b in self.dl: yield self.proc_batch(b)
     76     def one_batch(self)->Collection[Tensor]:

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    635                 self.reorder_dict[idx] = batch
    636                 continue
--> 637             return self._process_next_batch(batch)
    639     next = __next__  # Python 2 compatibility

/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    656         self._put_indices()
    657         if isinstance(batch, ExceptionWrapper):
--> 658             raise batch.exc_type(batch.exc_msg)
    659         return batch

data and Learner were built as posted above.

Apologies for the long post. I forgot to mention: learner.fit() also works, if I pass a size argument when I construct an ImageDataBunch. So, not a huge deal, though it left me scratching my head a bit.

I’ve just released a new version - let me know if that helps.

Many thanks. I tested

fastai: 1.0.19
pytorch: 1.0.0.dev20181104
torchvision: 0.2.1

and I could apply_tfms() without a size argument.