RuntimeError: The size of tensor a (2) must match the size of tensor b (96) at non-singleton dimension 1


(Ranjit) #1

Hi, I’m trying to do gender detection on images.

df.head()

full_path                          gender
17/10000217_1981-05-05_2009.jpg    M
48/10000548_1925-04-04_1964.jpg    M
12/100012_1948-07-03_2008.jpg      M
65/10001965_1930-05-23_1961.jpg    M
16/10002116_1971-05-31_2012.jpg    F

tfms = get_transforms(do_flip=False)   # default augmentations, no horizontal flip
np.random.seed(42)
src = (ImageItemList.from_df(df, path)
       .random_split_by_pct(0.2)       # 20% random validation split
       .label_from_df(cols=1))         # labels from the 'gender' column

data = (src.transform(tfms, size=128)
        .databunch()
        .normalize(imagenet_stats))
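The learner creation isn’t shown here, but from the traceback below it was trained with fit_one_cycle and had accuracy_thresh as its metric; presumably something along these lines (a sketch: create_cnn is an assumption, while the fit call and the metric are confirmed by the traceback):

learn = create_cnn(data, models.resnet34, metrics=accuracy_thresh)  # create_cnn assumed; metric confirmed by the traceback
learn.fit_one_cycle(2, slice(lr))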

After running a cycle, it gives me this error:

RuntimeError                              Traceback (most recent call last)
in
----> 1 learn.fit_one_cycle(2, slice(lr))

/notebooks/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, wd, callbacks, **kwargs)
     18     callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor,
     19                                        pct_start=pct_start, **kwargs))
---> 20     learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
     21
     22 def lr_find(learn:Learner, start_lr:Floats=1e-7, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, **kwargs:Any):

/notebooks/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    160         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
    161         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 162             callbacks=self.callbacks+callbacks)
    163
    164     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

/notebooks/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     92     except Exception as e:
     93         exception = e
---> 94         raise e
     95     finally: cb_handler.on_train_end(exception)
     96

/notebooks/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     87         if hasattr(data,'valid_dl') and data.valid_dl is not None:
     88             val_loss = validate(model, data.valid_dl, loss_func=loss_func,
---> 89                                 cb_handler=cb_handler, pbar=pbar)
     90         else: val_loss=None
     91         if cb_handler.on_epoch_end(val_loss): break

/notebooks/fastai/basic_train.py in validate(model, dl, loss_func, cb_handler, pbar, average, n_batch)
     52             if not is_listy(yb): yb = [yb]
     53             nums.append(yb[0].shape[0])
---> 54             if cb_handler and cb_handler.on_batch_end(val_losses[-1]): break
     55             if n_batch and (len(nums)>=n_batch): break
     56         nums = np.array(nums, dtype=np.float32)

/notebooks/fastai/callback.py in on_batch_end(self, loss)
    236         "Handle end of processing one batch with loss."
    237         self.state_dict['last_loss'] = loss
--> 238         stop = np.any(self('batch_end', not self.state_dict['train']))
    239         if self.state_dict['train']:
    240             self.state_dict['iteration'] += 1

/notebooks/fastai/callback.py in __call__(self, cb_name, call_mets, **kwargs)
    184     def __call__(self, cb_name, call_mets=True, **kwargs)->None:
    185         "Call through to all of the `CallbakHandler` functions."
--> 186         if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics]
    187         return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks]
    188

/notebooks/fastai/callback.py in <listcomp>(.0)
    184     def __call__(self, cb_name, call_mets=True, **kwargs)->None:
    185         "Call through to all of the `CallbakHandler` functions."
--> 186     if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics]
    187         return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks]
    188

/notebooks/fastai/callback.py in on_batch_end(self, last_output, last_target, train, **kwargs)
    269         if not is_listy(last_target): last_target=[last_target]
    270         self.count += last_target[0].size(0)
--> 271         self.val += last_target[0].size(0) * self.func(last_output, *last_target).detach().cpu()
    272
    273     def on_epoch_end(self, **kwargs):

/notebooks/fastai/metrics.py in accuracy_thresh(y_pred, y_true, thresh, sigmoid)
     20     "Compute accuracy when `y_pred` and `y_true` are the same size."
     21     if sigmoid: y_pred = y_pred.sigmoid()
---> 22     return ((y_pred>thresh)==y_true.byte()).float().mean()
     23
     24 def dice(input:Tensor, targs:Tensor, iou:bool=False)->Rank0Tensor:

RuntimeError: The size of tensor a (2) must match the size of tensor b (96) at non-singleton dimension 1


(Brad) #2

Not sure if you’re still looking for an answer, but I got a similar error because I changed a hyper-parameter (batch size in my case) and then tried to load data saved from a different run that used different hyper-parameters.


(Jeremy Easterbrook) #3

I’m having the same error for binary classification on tabular data. I have tried the following:

acc_imb = partial(accuracy_thresh, thresh=0.1)
m = tabular_learner(data, layers=[1000,500], metrics=[acc_imb])

m = tabular_learner(data, layers=[500,250], metrics=[accuracy_thresh(thresh=0.1)])

In both cases I get:
RuntimeError: The size of tensor a (2) must match the size of tensor b (64) at non-singleton dimension 1


(Junlin) #4

According to the docs, accuracy_thresh is intended for one-hot-encoded targets (often in a multi-label classification problem). I guess that’s why your tensor sizes don’t match.
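For a single-label target, the model outputs one activation per class while the target is just a class index, so the elementwise comparison inside accuracy_thresh broadcasts (batch, n_classes) against (batch,) and fails. A minimal sketch of the mismatch (sizes illustrative, matching your batch of 64):

import torch

batch_size, n_classes = 64, 2
preds = torch.randn(batch_size, n_classes)          # model output: (64, 2)
targs = torch.randint(0, n_classes, (batch_size,))  # class indices: (64,)

# accuracy reduces predictions to a class index first, so shapes line up:
acc = (preds.argmax(dim=1) == targs).float().mean()

# accuracy_thresh compares (64, 2) > thresh elementwise against (64,), which
# raises "The size of tensor a (2) must match the size of tensor b (64) at
# non-singleton dimension 1":
# ((preds.sigmoid() > 0.1) == targs.byte()).float().mean()

For single-label classification, plain accuracy is the metric to use.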


(Jeremy Easterbrook) #5

Thanks Junlin! I realized we can actually pass a threshold in the loss_func of the model.


(Junlin) #6

You’re welcome. :grinning: Would you like to share how you got around the one-hot-encoded situation?


(Jeremy Easterbrook) #7

I’ve always had a binary one-hot encoded situation, so I did:

learn = tabular_learner(data, layers=[500,250], metrics=[accuracy], loss_func=imbalance_loss_func)


(Junlin) #9

Just out of curiosity, what did you pass as loss_func?


(Jeremy Easterbrook) #10

I passed a CUDA tensor of weights with cross-entropy loss.
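Something along these lines (a sketch; the weight values are illustrative and depend on your class balance):

import torch
import torch.nn as nn

# up-weight the rare class (values here are made up for illustration)
class_weights = torch.tensor([1.0, 9.0]).cuda()
imbalance_loss_func = nn.CrossEntropyLoss(weight=class_weights)

learn = tabular_learner(data, layers=[500,250], metrics=[accuracy],
                        loss_func=imbalance_loss_func)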