The size of tensor a (3) must match the size of tensor b (7) at non-singleton dimension 1

Hey folks,
I’ve been working on my project for a while, and I’ve finally made some progress.

But I ran into this warning, and I haven’t found a good solution for it:

UserWarning: Using a target size (torch.Size([7])) that is different to the input size (torch.Size([7, 3])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.

How am I supposed to make the tensor sizes match, and which size should I change?
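The warning usually means an elementwise loss (e.g. MSE or BCE) is being given a model output of shape (7, 3) — three values per sample — while the target has one value per sample, shape (7,). A minimal sketch of the shape rules, with made-up tensors (this is an illustration, not your actual model):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(7, 3)             # 7 samples, 3 outputs each (e.g. 3 classes)
target = torch.randint(0, 3, (7,))   # 7 integer class labels, shape (7,)

# For classification these shapes are fine: cross-entropy expects
# (batch, n_classes) logits and a (batch,) tensor of integer targets.
loss = F.cross_entropy(pred, target)

# An elementwise loss such as F.mse_loss would broadcast (7,) against
# (7, 3) and emit exactly the UserWarning above -- it needs matching shapes.
```

So the fix is usually not to reshape tensors by hand but to pick the loss that matches the task (cross-entropy for 3 classes), or to make the model output one value per sample if it is a regression.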

As seen here:

More info and interesting comments can be found here:

Can you confirm what the folder '/content/gdrive/MyDrive/SID/train' is supposed to point to? I would like to run your notebook to debug further. I think it might be an issue with how cnn_learner is set up, but I’d like to confirm. (Thinking maybe a bracket or parenthesis is missing?)

Can you rerun your code and do:

pip install fastdebug

from fastdebug import *

Before running it? That should give us a clearer error showing where it’s stemming from.

Thanks Kevin!
I mounted my Google Drive in that Colab notebook, so I can access the folders in my Drive from Colab (basically it adds the folders to that path; you can see them in the file explorer on the left).
If you can’t see those files there, let me know. Maybe I need to grant permission when sharing.

Hey Man! Thanks.
Yup, just did so.
Btw, you can copy this notebook to your own drive (File->Save a copy …) and make changes and run it.
But maybe I’ll need to give everybody edit permission. Let me know if copying it doesn’t work.

When asking for assistance on the forums, don’t assume that we will run your code and debug it ourselves. You should convey enough information to capture everything relating to the problem; the gist or Colab notebook serves as a record of your code and its outputs. 🙂 Many of us are too busy to run it ourselves, so we just want a quick view of what you’re doing and how you got that behavior.

In this case you’re also hooking up to your Google Drive, which none of us will be able to reproduce, since that is specific to your Drive. We cannot log in to it.

Don’t get me wrong, I’d do the work myself.
I probably misread your request as asking me to add something that would help you run it. My apologies.

Nope, I’m not able to access them

Alright guys, I suspected that the error arose because up to 10 different inputs were attached to the same target path. So I tried to scale the problem down by attaching only a single input to every target object.

The problem was that when loading the data, one line of the CSV went missing. Any ideas how or why?
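One common way a CSV "loses" a line is header handling — this is just a guess, since the loading code isn't shown. A small pandas sketch of the effect:

```python
import io
import pandas as pd

csv_text = "a.jpg,cat\nb.jpg,dog\n"   # a CSV with no header row

# read_csv assumes the first line is a header, so one data row "disappears":
with_header = pd.read_csv(io.StringIO(csv_text))
print(len(with_header))   # 1

# Telling it there is no header keeps every line:
no_header = pd.read_csv(io.StringIO(csv_text), header=None)
print(len(no_header))     # 2
```

If your CSV really has no header row, check that whatever reads it (pandas, or fastai's CSV loaders) is told so explicitly.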

It looks like this:

Also the stack trace:

KeyError                                  Traceback (most recent call last)
<ipython-input-38-f142507f9d90> in <module>()
----> 1 learn.fit_one_cycle(1, 0.01)

13 frames
/usr/local/lib/python3.7/dist-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    110     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    111               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 112     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    113 
    114 # Cell

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    216             self.opt.set_hypers(lr=self.lr if lr is None else lr)
    217             self.n_epoch = n_epoch
--> 218             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
    219 
    220     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    158 
    159     def _with_events(self, f, event_type, ex, final=noop):
--> 160         try: self(f'before_{event_type}');  f()
    161         except ex: self(f'after_cancel_{event_type}')
    162         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_fit(self)
    207         for epoch in range(self.n_epoch):
    208             self.epoch=epoch
--> 209             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
    210 
    211     def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    158 
    159     def _with_events(self, f, event_type, ex, final=noop):
--> 160         try: self(f'before_{event_type}');  f()
    161         except ex: self(f'after_cancel_{event_type}')
    162         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_epoch(self)
    202     def _do_epoch(self):
    203         self._do_epoch_train()
--> 204         self._do_epoch_validate()
    205 
    206     def _do_fit(self):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _do_epoch_validate(self, ds_idx, dl)
    198         if dl is None: dl = self.dls[ds_idx]
    199         self.dl = dl
--> 200         with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
    201 
    202     def _do_epoch(self):

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    158 
    159     def _with_events(self, f, event_type, ex, final=noop):
--> 160         try: self(f'before_{event_type}');  f()
    161         except ex: self(f'after_cancel_{event_type}')
    162         self(f'after_{event_type}');  final()

/usr/local/lib/python3.7/dist-packages/fastai/learner.py in all_batches(self)
    164     def all_batches(self):
    165         self.n_iter = len(self.dl)
--> 166         for o in enumerate(self.dl): self.one_batch(*o)
    167 
    168     def _do_one_batch(self):

/usr/local/lib/python3.7/dist-packages/fastai/data/load.py in __iter__(self)
    107         self.before_iter()
    108         self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 109         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    110             if self.device is not None: b = to_device(b, self.device)
    111             yield self.after_batch(b)

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    515             if self._sampler_iter is None:
    516                 self._reset()
--> 517             data = self._next_data()
    518             self._num_yielded += 1
    519             if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
   1197             else:
   1198                 del self._task_info[idx]
-> 1199                 return self._process_data(data)
   1200 
   1201     def _try_put_index(self):

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
   1223         self._try_put_index()
   1224         if isinstance(data, ExceptionWrapper):
-> 1225             data.reraise()
   1226         return data
   1227 

/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
    427             # have message field
    428             raise self.exc_type(message=msg)
--> 429         raise self.exc_type(msg)
    430 
    431 

KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/transforms.py", line 246, in encodes
    return TensorCategory(self.vocab.o2i[o])
KeyError: 'long/10/7.jpg'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 34, in fetch
    data = next(self.dataset_iter)
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 118, in create_batches
    yield from map(self.do_batch, self.chunkify(res))
  File "/usr/local/lib/python3.7/dist-packages/fastcore/basics.py", line 216, in chunked
    res = list(itertools.islice(it, chunk_sz))
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 133, in do_item
    try: return self.after_item(self.create_item(s))
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 140, in create_item
    if self.indexed: return self.dataset[s or 0]
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/core.py", line 333, in __getitem__
    res = tuple([tl[it] for tl in self.tls])
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/core.py", line 333, in <listcomp>
    res = tuple([tl[it] for tl in self.tls])
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/core.py", line 299, in __getitem__
    return self._after_item(res) if is_indexer(idx) else res.map(self._after_item)
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/core.py", line 261, in _after_item
    def _after_item(self, o): return self.tfms(o)
  File "/usr/local/lib/python3.7/dist-packages/fastcore/transform.py", line 200, in __call__
    def __call__(self, o): return compose_tfms(o, tfms=self.fs, split_idx=self.split_idx)
  File "/usr/local/lib/python3.7/dist-packages/fastcore/transform.py", line 150, in compose_tfms
    x = f(x, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/fastdebug/fastai/transform.py", line 32, in __call__
    transform_error(e, _get_name(self), 'encodes')
  File "/usr/local/lib/python3.7/dist-packages/fastdebug/fastai/transform.py", line 24, in transform_error
    raise e
  File "/usr/local/lib/python3.7/dist-packages/fastdebug/fastai/transform.py", line 30, in __call__
    return self._call('encodes', x, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/fastcore/transform.py", line 83, in _call
    return self._do_call(getattr(self, fn), x, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/fastcore/transform.py", line 89, in _do_call
    return retain_type(f(x, **kwargs), x, ret)
  File "/usr/local/lib/python3.7/dist-packages/fastcore/dispatch.py", line 118, in __call__
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/fastai/data/transforms.py", line 248, in encodes
    raise KeyError(f"Label '{o}' was not included in the training dataset") from e
KeyError: "There was an issue calling the encodes on transform Categorize:\n\nLabel 'long/10/7.jpg' was not included in the training dataset"
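The final KeyError means the Categorize transform met a label at validation time that never appeared in the training split — and 'long/10/7.jpg' looks like a file path rather than a class name, which suggests the label column may actually be pointing at filenames. A quick sanity check (the column names `fname`, `label`, and `is_valid` here are hypothetical — substitute your actual CSV columns):

```python
import pandas as pd

# Hypothetical stand-in for the real CSV -- adjust column names to yours.
df = pd.DataFrame({
    "fname": ["long/10/7.jpg", "short/1/2.jpg", "long/3/4.jpg"],
    "label": ["long", "short", "long"],
    "is_valid": [True, False, False],
})

train_labels = set(df.loc[~df["is_valid"], "label"])
valid_labels = set(df.loc[df["is_valid"], "label"])

# Any label seen only in the validation split raises the same KeyError,
# because Categorize builds its vocab from the training split only.
unseen = valid_labels - train_labels
print(unseen)   # empty set here; non-empty means trouble
```

Also sanity-check that the labels you get back are class names and not paths — if they look like 'long/10/7.jpg', the labelling function or column is wrong.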