Structured Data Issue: Weights Sum to Zero


Has anyone seen this error when using structured data:

ZeroDivisionError: Weights sum to zero, can't be normalized

I'm applying the regression approach from the Rossmann notebook to a new structured dataset with continuous and categorical variables. Pre-processing completes successfully, but when I call .lr_find and .fit on the model, I run into this error. Lowering the batch size allowed .lr_find to complete, but it didn't eliminate the issue in .fit. Here is the full traceback:

ZeroDivisionError Traceback (most recent call last)
in ()
----> 1, 1, cycle_len=2, use_clr=(3,3))

~/fastai/courses/fastai/ in fit(self, lrs, n_cycle, wds, **kwargs)
300 self.sched = None
301 layer_opt = self.get_layer_opt(lrs, wds)
--> 302 return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
304 def warm_up(self, lr, wds=None):

~/fastai/courses/fastai/ in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
247 metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
248 swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 249 swa_eval_freq=swa_eval_freq, **kwargs)
251 def get_layer_groups(self): return self.models.get_layer_groups()

~/fastai/courses/fastai/ in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, visualize, **kwargs)
161 if not all_val:
--> 162 vals = validate(model_stepper, cur_data.val_dl, metrics, epoch, seq_first=seq_first, validate_skip = validate_skip)
163 stop=False
164 for cb in callbacks: stop = stop or cb.on_epoch_end(vals)

~/fastai/courses/fastai/ in validate(stepper, dl, metrics, epoch, seq_first, validate_skip)
240 loss.append(to_np(l))
241 res.append([f(datafy(preds), datafy(y)) for f in metrics])
--> 242 return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))
244 def get_prediction(x):

~/anaconda3/envs/fastai/lib/python3.6/site-packages/numpy/lib/ in average(a, axis, weights, returned)
384 if np.any(scl == 0.0):
385 raise ZeroDivisionError(
--> 386 "Weights sum to zero, can't be normalized")
388 avg = np.multiply(a, wgt, dtype=result_dtype).sum(axis)/scl

ZeroDivisionError: Weights sum to zero, can't be normalized
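For what it's worth, the error itself comes straight from NumPy: np.average normalizes by the sum of the weights and raises when that sum is zero. A minimal reproduction (the loss and batch-count values here are made up for illustration):

```python
import numpy as np

# Per-batch validation losses and the sample count for each batch.
# If no validation samples were actually counted, the weights are all
# zero and np.average cannot normalize by their sum.
loss = np.array([0.5, 0.7])
batch_cnts = np.array([0, 0])

try:
    np.average(loss, weights=batch_cnts)
except ZeroDivisionError as e:
    print(e)  # Weights sum to zero, can't be normalized
```

So the traceback means the batch_cnts that validate() builds over the validation loader ended up summing to zero, i.e. no validation batches contributed any samples.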


I am facing the same issue while fine-tuning the wikitext103 pretrained language model…

Me too. Have you found the solution?

The validation set was too small relative to the batch size. Decreasing the bs parameter solved the issue.
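To make the arithmetic concrete: if the loader drops the last incomplete batch, a validation set smaller than bs produces zero batches, so the per-batch counts passed to np.average as weights are empty and sum to zero. A rough sketch with made-up sizes (50 validation rows):

```python
val_size, bs = 50, 128   # validation set smaller than the batch size

# With incomplete batches dropped, no validation batch survives:
n_batches = val_size // bs
print(n_batches)         # 0 -> batch_cnts is empty, so its weights sum to zero

# Decreasing bs below the validation-set size restores at least one batch:
bs = 32
print(val_size // bs)    # 1 -> np.average gets a nonzero weight sum
```

Enlarging the validation set works for the same reason.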