Help! RuntimeError: The size of tensor a (3) must match the size of tensor b (1024) at non-singleton dimension 3

Hey, I’m encountering the following problem:

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py:1051: UserWarning: Using a target size (torch.Size([4, 3, 1024, 1024])) that is different to the input size (torch.Size([4, 3])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  ret = func(*args, **kwargs)

Both my input and target are RGB (3-channel) images of size 1024x1024 (HxW).

The dataset is divided into batches of size 4.

So the batches should have shape [4, 3, 1024, 1024].

Why is the input to the loss function only of shape [4, 3]?
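
One way I figure this can be checked is to compare the batch shapes coming out of the DataLoaders with the model’s output shape, since (per the traceback below) the “input” to the loss is the model’s prediction. A minimal sketch, using the dls and learn objects from my code further down:

xb, yb = dls.one_batch()
print(xb.shape, yb.shape)   # expected: torch.Size([4, 3, 1024, 1024]) for both

import torch
with torch.no_grad():
    pred = learn.model(xb)  # this is what l1_loss receives as "input"
print(pred.shape)           # if this prints [4, 3], the model head is producing the wrong shape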

/usr/local/lib/python3.7/dist-packages/fastai/callback/schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
    159     "Fine tune with `Learner.freeze` for `freeze_epochs`, then with `Learner.unfreeze` for `epochs`, using discriminative LR."
    160     self.freeze()
--> 161     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
    162     base_lr /= 2
    163     self.unfreeze()

/usr/local/lib/python3.7/dist-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    114     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    115               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 116     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    117 
    118 # Cell

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    219             self.opt.set_hypers(lr=self.lr if lr is None else lr)
    220             self.n_epoch = n_epoch
--> 221             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
    222 
    223     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    161 
    162     def _with_events(self, f, event_type, ex, final=noop):
--> 163         try: self(f'before_{event_type}');  f()
    164         except ex: self(f'after_cancel_{event_type}')
    165         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_fit(self)
    210         for epoch in range(self.n_epoch):
    211             self.epoch=epoch
--> 212             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
    213 
    214     def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    161 
    162     def _with_events(self, f, event_type, ex, final=noop):
--> 163         try: self(f'before_{event_type}');  f()
    164         except ex: self(f'after_cancel_{event_type}')
    165         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_epoch(self)
    204 
    205     def _do_epoch(self):
--> 206         self._do_epoch_train()
    207         self._do_epoch_validate()
    208 

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_epoch_train(self)
    196     def _do_epoch_train(self):
    197         self.dl = self.dls.train
--> 198         self._with_events(self.all_batches, 'train', CancelTrainException)
    199 
    200     def _do_epoch_validate(self, ds_idx=1, dl=None):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    161 
    162     def _with_events(self, f, event_type, ex, final=noop):
--> 163         try: self(f'before_{event_type}');  f()
    164         except ex: self(f'after_cancel_{event_type}')
    165         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in all_batches(self)
    167     def all_batches(self):
    168         self.n_iter = len(self.dl)
--> 169         for o in enumerate(self.dl): self.one_batch(*o)
    170 
    171     def _do_one_batch(self):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in one_batch(self, i, b)
    192         b = self._set_device(b)
    193         self._split(b)
--> 194         self._with_events(self._do_one_batch, 'batch', CancelBatchException)
    195 
    196     def _do_epoch_train(self):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    161 
    162     def _with_events(self, f, event_type, ex, final=noop):
--> 163         try: self(f'before_{event_type}');  f()
    164         except ex: self(f'after_cancel_{event_type}')
    165         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_one_batch(self)
    173         self('after_pred')
    174         if len(self.yb):
--> 175             self.loss_grad = self.loss_func(self.pred, *self.yb)
    176             self.loss = self.loss_grad.clone()
    177         self('after_loss')

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in l1_loss(input, target, size_average, reduce, reduction)
   3066     if has_torch_function_variadic(input, target):
   3067         return handle_torch_function(
-> 3068             l1_loss, (input, target), input, target, size_average=size_average, reduce=reduce, reduction=reduction
   3069         )
   3070     if not (target.size() == input.size()):

/usr/local/lib/python3.7/dist-packages/torch/overrides.py in handle_torch_function(public_api, relevant_args, *args, **kwargs)
   1353         # Use `public_api` instead of `implementation` so __torch_function__
   1354         # implementations can do equality/identity comparisons.
-> 1355         result = torch_func_method(public_api, types, args, kwargs)
   1356 
   1357         if result is not NotImplemented:

/usr/local/lib/python3.7/dist-packages/fastai/torch_core.py in __torch_function__(self, func, types, args, kwargs)
    338         convert=False
    339         if _torch_handled(args, self._opt, func): convert,types = type(self),(torch.Tensor,)
--> 340         res = super().__torch_function__(func, types, args=args, kwargs=kwargs)
    341         if convert: res = convert(res)
    342         if isinstance(res, TensorBase): res.set_meta(self, as_copy=True)

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __torch_function__(cls, func, types, args, kwargs)
   1049 
   1050         with _C.DisableTorchFunction():
-> 1051             ret = func(*args, **kwargs)
   1052             if func in get_default_nowrap_functions():
   1053                 return ret

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in l1_loss(input, target, size_average, reduce, reduction)
   3078         reduction = _Reduction.legacy_get_string(size_average, reduce)
   3079 
-> 3080     expanded_input, expanded_target = torch.broadcast_tensors(input, target)
   3081     return torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
   3082 

/usr/local/lib/python3.7/dist-packages/torch/functional.py in broadcast_tensors(*tensors)
     70     if has_torch_function(tensors):
     71         return handle_torch_function(broadcast_tensors, tensors, *tensors)
---> 72     return _VF.broadcast_tensors(tensors)  # type: ignore[attr-defined]
     73 
     74 

RuntimeError: The size of tensor a (3) must match the size of tensor b (1024) at non-singleton dimension 3

I suppose it has something to do with the loss function, since the traceback ends inside l1_loss.

learn = cnn_learner(dls, resnet18, loss_func=F.l1_loss, metrics=[psnr, ssim])
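
If I read the traceback right, l1_loss first broadcasts its two arguments, so the same error should be reproducible with plain tensors of these shapes (a minimal sketch, assuming the prediction really is [4, 3] as the warning says):

import torch
import torch.nn.functional as F

pred = torch.randn(4, 3)                # what the model apparently outputs
target = torch.randn(4, 3, 1024, 1024)  # the [batch, channels, H, W] target
F.l1_loss(pred, target)                 # RuntimeError: The size of tensor a (3) must match
                                        # the size of tensor b (1024) at non-singleton dimension 3

Broadcasting aligns the shapes from the right, so the trailing 3 of [4, 3] lines up against the trailing 1024 of [4, 3, 1024, 1024], which matches the error message exactly.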

Here’s my summary():

Setting-up type transforms pipelines
Collecting items from /content/drive/MyDrive/Dataset9
Found 118 items
2 datasets of sizes 95,23
Setting up Pipeline: PILBase.create
Setting up Pipeline: <lambda> -> PILBase.create

Building one sample
  Pipeline: PILBase.create
    starting from
      /content/drive/MyDrive/Dataset9/Long/40.JPG
    applying PILBase.create gives
      PILImage mode=RGB size=4272x2848
  Pipeline: <lambda> -> PILBase.create
    starting from
      /content/drive/MyDrive/Dataset9/Long/40.JPG
    applying <lambda> gives
      /content/drive/MyDrive/Dataset9/Long/40.JPG
    applying PILBase.create gives
      PILImage mode=RGB size=4272x2848

Final sample: (PILImage mode=RGB size=4272x2848, PILImage mode=RGB size=4272x2848)


Collecting items from /content/drive/MyDrive/Dataset9
Found 118 items
2 datasets of sizes 95,23
Setting up Pipeline: PILBase.create
Setting up Pipeline: <lambda> -> PILBase.create
Setting up after_item: Pipeline: RandomCrop -- {'size': (1024, 1024), 'p': 1.0} -> ToTensor
Setting up before_batch: Pipeline: 
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}

Building one batch
Applying item_tfms to the first sample:
  Pipeline: RandomCrop -- {'size': (1024, 1024), 'p': 1.0} -> ToTensor
    starting from
      (PILImage mode=RGB size=4272x2848, PILImage mode=RGB size=4272x2848)
    applying RandomCrop -- {'size': (1024, 1024), 'p': 1.0} gives
      (PILImage mode=RGB size=1024x1024, PILImage mode=RGB size=1024x1024)
    applying ToTensor gives
      (TensorImage of size 3x1024x1024, TensorImage of size 3x1024x1024)

Adding the next 3 samples

No before_batch transform to apply

Collating items in a batch

Applying batch_tfms to the batch built
  Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}
    starting from
      (TensorImage of size 4x3x1024x1024, TensorImage of size 4x3x1024x1024)
    applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives
      (TensorImage of size 4x3x1024x1024, TensorImage of size 4x3x1024x1024)
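
For completeness, the DataBlock behind this summary looks roughly like the sketch below. The get_y lambda and the splitter are simplified placeholders for my actual code (in the summary trace above, the lambda happens to return the same path, which is why input and target are identical there):

from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, ImageBlock),   # image-to-image: input and target are both RGB images
    get_items=get_image_files,
    get_y=lambda p: p,                 # placeholder for my real target-path mapping
    splitter=RandomSplitter(0.2),      # whatever produced the 95/23 split
    item_tfms=RandomCrop(1024),        # matches "RandomCrop -- {'size': (1024, 1024)}" above
)
dls = dblock.dataloaders('/content/drive/MyDrive/Dataset9', bs=4)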

Thanks