Hi Peeps,
I’m having some trouble creating a data block and keep receiving the error ‘RuntimeError: The size of tensor a (115) must match the size of tensor b (64) at non-singleton dimension 1’
I am trying to train a resnet34 model on the fruits-360 dataset from Kaggle.
This is my code:
from fastai.vision import *  # provides np, partial, models, cnn_learner, etc.

np.random.seed(42)
data = (ImageList.from_folder(fruits_path)
        .split_by_rand_pct(0.2)
        .label_from_folder()
        .transform(get_transforms(), size=128)
        .databunch().normalize(imagenet_stats))

architecture = models.resnet34
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, architecture, metrics=[acc_02, f_score])

lr = 0.01
learn.fit_one_cycle(5, slice(lr))
Any help would be really appreciated! Thanks!
Sean
kushaj
(Kushajveer Singh)
July 21, 2019, 5:40pm
2
The problem has something to do with the batch size. Can you try explicitly setting bs=64 in the databunch and see if there is any difference?
Hi Kushajveer,
I changed it to 64 but am still receiving the same error…
kushaj
(Kushajveer Singh)
July 21, 2019, 5:54pm
4
Try the same code with the MNIST sample data and see if the error persists. I tried your code and got no error.
Just tried with the MNIST sample data and I am still receiving a similar error:
RuntimeError: The size of tensor a (3) must match the size of tensor b (64) at non-singleton dimension 1
kushaj
(Kushajveer Singh)
July 21, 2019, 6:06pm
6
It is probably a version issue. Update fastai. My version is 1.0.55 and your code works on my machine.
Strange, mine is the same. Do you mind copying and pasting your MNIST code so I can compare and see where I am going wrong, please?
kushaj
(Kushajveer Singh)
July 21, 2019, 6:11pm
8
path = untar_data(URLs.MNIST)
np.random.seed(42)
data = (ImageList.from_folder(path)
        .split_by_rand_pct(0.2)
        .label_from_folder()
        .transform(get_transforms(), size=128)
        .databunch().normalize(imagenet_stats))
architecture = models.resnet34
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, architecture, metrics=[acc_02, f_score])
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
I ran for 1 epoch and it is working.
kushaj
(Kushajveer Singh)
July 21, 2019, 6:16pm
9
I came to that conclusion too early. This does give an error, but only after the epoch has run. Wait a few minutes, I will try to resolve it.
Yeah I am still getting errors. I will wait.
kushaj
(Kushajveer Singh)
July 21, 2019, 6:28pm
11
Oh, I forgot we are doing single-label classification. These metrics are meant for multi-label problems.
Excuse my ignorance, but why does that matter? Surely there should be an accuracy score and f_score anyway?
kushaj
(Kushajveer Singh)
July 21, 2019, 6:30pm
13
To use these metrics you have to give one-hot encoded targets, but in fastai the labels for single-label problems are not one-hot encoded by default.
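To see the shape mismatch concretely, here is a minimal sketch in plain numpy (not fastai code; the sizes 64 and 115 are taken from the error message above). Integer labels have shape (bs,), while accuracy_thresh needs targets shaped like the predictions, i.e. one-hot encoded (bs, n_classes):

```python
import numpy as np

# Hypothetical batch: 64 samples, 115 classes (the sizes from the error above).
n_samples, n_classes = 64, 115
preds = np.random.rand(n_samples, n_classes)          # model outputs: (64, 115)
labels = np.random.randint(0, n_classes, n_samples)   # single-label targets: (64,)

# accuracy_thresh compares thresholded predictions elementwise against the
# targets, so the targets must have the same shape as preds. Integer labels
# do not, hence the "size of tensor a ... must match ..." error.
print(preds.shape, labels.shape)   # (64, 115) (64,)

# One-hot encoding the labels fixes the shape mismatch:
one_hot = np.eye(n_classes)[labels]
print(one_hot.shape)               # (64, 115)
```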
kushaj
(Kushajveer Singh)
July 21, 2019, 6:31pm
14
You can create a custom callback where you convert your labels to one-hot encoded format and then use the metric there.
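As a rough illustration of that idea, here is a sketch in plain numpy rather than an actual fastai callback (the function name is made up): one-hot encode the integer targets, then apply the accuracy_thresh-style elementwise comparison.

```python
import numpy as np

def onehot_accuracy_thresh(preds, targs, thresh=0.2):
    """Hypothetical metric: one-hot encode integer targets, then compare
    thresholded predictions elementwise (the accuracy_thresh idea)."""
    one_hot = np.eye(preds.shape[1])[targs]        # (bs,) -> (bs, n_classes)
    return ((preds > thresh) == one_hot).mean()    # fraction of matching entries

preds = np.array([[0.9, 0.1, 0.0],
                  [0.3, 0.6, 0.1]])
targs = np.array([0, 1])
print(onehot_accuracy_thresh(preds, targs))  # 5 of 6 entries match
```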
Ah okay, so what is a better metric to use?
kushaj
(Kushajveer Singh)
July 21, 2019, 6:35pm
16
They are for different purposes. If you have a multi-label problem, then with accuracy_thresh you are calculating how much error there is if you select all the classes whose prediction is greater than the thresh.
With the F-score you are measuring how much precision and recall you are able to get after setting a thresh.
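For concreteness, here is a rough numpy sketch of a thresholded F-beta score (not fastai's actual fbeta implementation; targets are assumed already one-hot, and the names are made up):

```python
import numpy as np

def fbeta_thresh(preds, targs_onehot, thresh=0.2, beta=2.0, eps=1e-9):
    """Sketch of a thresholded F-beta on multi-label outputs: select every
    class above thresh, then combine precision and recall."""
    pred_pos = preds > thresh
    tp = (pred_pos & (targs_onehot == 1)).sum()      # correctly selected classes
    precision = tp / (pred_pos.sum() + eps)          # of the selected, how many right
    recall = tp / ((targs_onehot == 1).sum() + eps)  # of the true, how many found
    beta2 = beta ** 2
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)

preds = np.array([[0.9, 0.3, 0.1],
                  [0.1, 0.8, 0.6]])
targs = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(round(float(fbeta_thresh(preds, targs)), 4))  # 0.9375
```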
kushaj
(Kushajveer Singh)
July 21, 2019, 6:36pm
17
This article can help: link.
Great, I changed it to error_rate and the epochs are running successfully. Thank you so much for your help, kushaj!
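For reference, error_rate works on the plain integer labels because it just takes the argmax of the predictions. A minimal numpy sketch of that idea (not fastai's actual implementation):

```python
import numpy as np

def error_rate(preds, targs):
    """Single-label error rate: 1 - accuracy of the argmax prediction.
    No one-hot encoding needed, so integer labels work directly."""
    return (preds.argmax(axis=1) != targs).mean()

preds = np.array([[0.9, 0.1],
                  [0.4, 0.6],
                  [0.7, 0.3]])
targs = np.array([0, 1, 1])
print(error_rate(preds, targs))  # 1 of 3 predictions wrong
```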
Hey I’m doing a project similar to this but for myself and I’m also running into this error.
More like this tho:
RuntimeError: The size of tensor a (128) must match the size of tensor b (4096) at non-singleton dimension 1
I can’t do large batch sizes because my computer is really weak.
I was wondering if you could help
nofreewill
(Iván Jonatán)
January 20, 2020, 5:20pm
20
Thanks. I had this error with batch sizes of 1, 2, 4, and 6. Now with 8 I somehow don't get the error. Feels a bit like magic.