Help with an @property bug when using a custom layer in FastAI, possibly due to TensorImageBW?

Hi there,

How are you doing?

I’m currently working on implementing an AutoEncoder in FastAI with a custom layer in between. The model is for a wireless communication system, and the custom layer acts as the channel: its only job is to add Gaussian noise to the bottleneck vector.
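Roughly, the channel layer is something like this (a simplified sketch, not my exact code; the name AWGNChannel and the noise_std argument are just placeholders):

import torch
from torch import nn

class AWGNChannel(nn.Module):
    # Adds zero-mean Gaussian noise to the bottleneck vector
    def __init__(self, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std

    def forward(self, x):
        # randn_like draws standard-normal noise with the same shape/device as x
        return x + self.noise_std * torch.randn_like(x)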

However, I got a weird error when I test-ran this layer with a random image from the validation set, even though it ran normally with a random tensor of the same size.
If possible, could you give me some thoughts about it?

Thank you

The sum() before the mean() is reducing it to a scalar, hence no mean() property is available.
You will need to specify which axis/dim you wish it to sum over, for example sum(dim=1).
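For example, in plain PyTorch (shapes are just illustrative):

import torch

x = torch.randn(64, 784)   # pretend batch of 64 flattened 28x28 images
x.sum().shape              # torch.Size([]) -> reduced to a single 0-dim scalar
x.sum(dim=1).shape         # torch.Size([64]) -> one value per sample
x.sum(dim=1).mean()        # scalar average over the batch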


Thank you for pointing that out. It was unnecessary to include that sum() since I already use mean() after it.

However, when I deleted that sum(), leaving only .abs().pow(2).mean(), it still gives me the same error, only this time it is “abs”. If I also take out the .abs(), the same error happens, but with “pow”.

I tested this same code with a random 28x28 tensor and it works just fine. I suspect this error might be related to the datatype: the image tensor is a TensorImageBW, so maybe it is not compatible somehow?

Just kicking some tyres here, not sure if it helps.

xb, yb = dsets.valid[1]
type(xb)
# fastai.vision.core.PILImageBW
xb.mean()

> AttributeError                            Traceback (most recent call last)
> Cell In[120], line 1
> ----> 1 xb.mean()
> 
> File ~/mambaforge/envs/py38/lib/python3.8/site-packages/PIL/Image.py:529, in Image.__getattr__(self, name)
>     527     deprecate("Image categories", 10, "is_animated", plural=True)
>     528     return self._category
> --> 529 raise AttributeError(name)
> 
> AttributeError: mean

mnist.summary(path) shows the pipeline, which includes IntToFloatTensor in after_batch.
What if we grab xb from a batch (i.e. after those transforms) instead of straight from the dset?

xb = dls.one_batch()[1]
type(xb)
# fastai.torch_core.TensorImageBW
xb.mean()
# TensorImageBW(0.1273, device='cuda:0')

So it looks like mean() works once the item has gone through the pipeline and has been transformed to the right type.
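If you want to poke at a single item without building a batch, you could also try applying the same transforms by hand, something like this (an untested sketch, off the top of my head):

from fastai.vision.all import *

xb, yb = dsets.valid[1]                 # xb is still a PILImageBW here
t = IntToFloatTensor()(ToTensor()(xb))  # same conversions the pipeline applies
type(t)                                 # should now be a TensorImageBW
t.mean()                                # and mean() should work on it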


Thank you so much for your help.

That was indeed the problem. I do have a follow-up question, if you don’t mind.

This is my model structure (an autoencoder with a custom layer in the middle). However, I had to modify my custom layer to copy the input data to a different variable and work with that variable instead. Otherwise, it gives me an error which does not happen when I test the layer with a single image.
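What I mean by copying the input is something along these lines (a simplified sketch; my real layer has a bit more going on):

import torch
from torch import nn

class AWGNChannel(nn.Module):
    # Same channel layer as before, but working on a copy of the input
    def __init__(self, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std

    def forward(self, x):
        z = x.clone()        # copy the input into a separate variable
        return z + self.noise_std * torch.randn_like(z)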

I would appreciate it if you could give me some thoughts on why this is the case.

Thank you