Problem with classification of structured data

I'm using structured data with 50,000 rows and I want to classify it into two classes, so for ColumnarModelData I have set its arguments is_reg=False and is_multi=True. The code runs fine up through the get_learner call but throws an error on m.fit(). I have set use_bn=False.
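Roughly, my setup looks like this (a minimal sketch of what I'm doing; PATH, df, y, val_idxs, cat_vars, cat_sz and lr are placeholders for my actual variables):

    from fastai.column_data import ColumnarModelData

    # df: processed DataFrame, y: binary labels, cat_vars: categorical column names
    md = ColumnarModelData.from_data_frame(PATH, val_idxs, df, y,
                                           cat_flds=cat_vars, bs=64,
                                           is_reg=False, is_multi=True)

    emb_szs = [(c, min(50, (c + 1) // 2)) for _, c in cat_sz]  # cat_sz: (name, cardinality) pairs
    n_cont = len(df.columns) - len(cat_vars)                   # number of continuous columns
    m = md.get_learner(emb_szs, n_cont, emb_drop=0.1, out_sz=2,
                       szs=[100, 50], drops=[0.09, 0.18], use_bn=False)
    m.fit(lr, 3, cycle_len=2)  # <- fails here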

Epoch
0% 0/6 [00:00<?, ?it/s]
0%| | 0/351 [00:00<?, ?it/s]

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 m.fit(lr, 3, cycle_len=2)

~/fastai/courses/dl1/fastai/learner.py in fit(self, lrs, n_cycle, wds, **kwargs)
    300         self.sched = None
    301         layer_opt = self.get_layer_opt(lrs, wds)
--> 302         return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
    303
    304     def warm_up(self, lr, wds=None):

~/fastai/courses/dl1/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    247             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    248             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 249             swa_eval_freq=swa_eval_freq, **kwargs)
    250
    251     def get_layer_groups(self): return self.models.get_layer_groups()

~/fastai/courses/dl1/fastai/model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, visualize, **kwargs)
    139             batch_num += 1
    140             for cb in callbacks: cb.on_batch_begin()
--> 141             loss = model_stepper.step(V(x), V(y), epoch)
    142             avg_loss = avg_loss * avg_mom + loss * (1 - avg_mom)
    143             debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/fastai/courses/dl1/fastai/model.py in step(self, xs, y, epoch)
     48     def step(self, xs, y, epoch):
     49         xtra = []
---> 50         output = self.m(*xs)
     51         if isinstance(output, tuple): output, *xtra = output
     52         if self.fp16: self.m.zero_grad()

~/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/fastai/courses/dl1/fastai/column_data.py in forward(self, x_cat, x_cont)
    119             x = self.emb_drop(x)
    120         if self.n_cont != 0:
--> 121             x2 = self.bn(x_cont)
    122             x = torch.cat([x, x2], 1) if self.n_emb != 0 else x2
    123         for l, d, b in zip(self.lins, self.drops, self.bns):

~/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
     64             input, self.running_mean, self.running_var, self.weight, self.bias,
     65             self.training or not self.track_running_stats,
---> 66             exponential_average_factor, self.eps)
     67
     68     def extra_repr(self):

~/anaconda3/envs/fastai-cpu/lib/python3.6/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   1252     return torch.batch_norm(
   1253         input, weight, bias, running_mean, running_var,
-> 1254         training, momentum, eps, torch.backends.cudnn.enabled
   1255     )
   1256

RuntimeError: running_mean should contain 4 elements not 5

Please help me out… Thanks in advance! :grinning:

I am guessing the dimensions of your batch norm layer and your input x are different.
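The message itself is just BatchNorm complaining that it was constructed for a different number of features than it received. A minimal stand-alone repro (plain PyTorch, the numbers are illustrative):

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm1d(4)   # layer built for 4 features: running_mean has 4 entries
    x = torch.randn(16, 5)   # but each row of the batch has 5 features
    bn(x)                    # RuntimeError: running_mean should contain 4 elements not 5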

But I have set use_bn=False… so the model shouldn't be using batch normalization.

use_bn=False only freezes the bn layers, i.e. they will not update their parameters. However, your input still has to pass through them.

Even if you removed the bn layers entirely, the error would likely persist: the next layer would then fail with the same dimension mismatch.

You can try grabbing a small mini-batch and passing it through the model, using pdb.set_trace() to debug.
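Something like this (a sketch, assuming md and m are your ModelData and learner; it mirrors what the fit loop does internally):

    import pdb
    from fastai.core import V

    # Pull one mini-batch from the training loader; columnar batches
    # come out as (categorical vars, continuous vars, target).
    x_cat, x_cont, y = next(iter(md.trn_dl))

    print(x_cont.shape)   # number of continuous columns in the data...
    print(m.model.bn)     # ...vs. the size this BatchNorm1d was built for

    pdb.set_trace()       # step into the forward pass from here
    out = m.model(V(x_cat), V(x_cont))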


Thank you for your help… my error was resolved.

How was it resolved? I'm having the same error and don't really understand what it means that the dimensions of my BatchNorm and my input don't match…

EDIT: Well, if I print out the model summary, I see the problem:

MixedInputModel(
  (embs): ModuleList(
    (0): Embedding(4, 2)
    (1): Embedding(3, 2)
    (2): Embedding(8, 4)
    (3): Embedding(8, 4)
    (4): Embedding(682, 50)
    (5): Embedding(5, 3)
  )
  (lins): ModuleList(
    (0): Linear(in_features=68, out_features=100)
    (1): Linear(in_features=100, out_features=50)
  )
  (bns): ModuleList(
    (0): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True)
    (1): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True)
  )
  (outp): Linear(in_features=50, out_features=2)
  (emb_drop): Dropout(p=0.1)
  (drops): ModuleList(
    (0): Dropout(p=0.09)
    (1): Dropout(p=0.18)
  )
  (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True)
)

The last layer has an output size of 2 and the BatchNorm1d has a size of 3… but why is the model built like this, and how can I change the BatchNorm size?
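Looking at the traceback above, that standalone (bn) layer is the one applied to the continuous inputs (the self.bn(x_cont) call), so its size comes from the n_cont value the learner was built with, not from the output layer. A sketch of the check (df and cat_vars here are illustrative names for my data):

    # The standalone (bn) layer normalizes the continuous columns, so its
    # size must equal the number of continuous columns actually in the data.
    n_cont = len(df.columns) - len(cat_vars)  # continuous = non-categorical columns
    print(n_cont)        # should match BatchNorm1d(3, ...) in the summary above
    print(m.model.bn)    # the layer the traceback blew up in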

I have the same error/question. Can anyone point me to how to set the model parameters with fit()?