RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead

Hi everyone!
I am working through Jeremy Howard's Lecture 5 (Titanic) of Practical Deep Learning 2022 and ran into this problem. My code:

t_dep = tensor(df.Survived)

indep_cols = ["Age", "SibSp", "Parch", "LogFare"] + added_cols
t_indep = tensor(df[indep_cols].values, dtype=torch.float)
t_indep

torch.manual_seed(442)

n_coeff = t_indep.shape[1]
coeffs = torch.rand(n_coeff)-0.5
coeffs

t_indep*coeffs

vals, indices = t_indep.max(dim=0)
t_indep = t_indep / vals

preds = ((t_indep*coeffs).sum(axis=1))

loss = torch.abs(preds-t_dep).mean()

def calc_preds(coeffs, indeps): return (indeps*coeffs).sum(axis=1)

def calc_loss(coeffs, indeps, deps): return torch.abs(calc_preds(coeffs, indeps)-deps.mean())

coeffs.requires_grad_()

loss = calc_loss(coeffs, t_indep, t_dep)

Error

RuntimeError Traceback (most recent call last)
/tmp/ipykernel_27/11996232.py in <module>
----> 1 loss = calc_loss(coeffs, t_indep, t_dep)
2 loss

/tmp/ipykernel_27/3339545613.py in calc_loss(coeffs, indeps, deps)
1 def calc_preds(coeffs, indeps): return (indeps*coeffs).sum(axis=1)
----> 2 def calc_loss(coeffs, indeps, deps): return torch.abs(calc_preds(coeffs, indeps)-deps.mean())

RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.

What I have tried:

In the loss function, coeffs (shape 12) and t_indep (shape 891, 12) are float32, while t_dep (shape 891) is int64. If I convert t_dep to float32, then instead of a single value the loss has 891 values, and loss.backward() fails with the traceback below (a small sketch of the scalar requirement follows it):


RuntimeError Traceback (most recent call last)
/tmp/ipykernel_27/2859123600.py in <module>
----> 1 loss.backward()

/opt/conda/lib/python3.7/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    361                 create_graph=create_graph,
    362                 inputs=inputs)
--> 363         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    364
    365     def register_hook(self, hook):

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    164
    165     grad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))
--> 166     grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
    167     if retain_graph is None:
    168         retain_graph = create_graph

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in _make_grads(outputs, grads, is_grads_batched)
     65             if out.requires_grad:
     66                 if out.numel() != 1:
---> 67                     raise RuntimeError("grad can be implicitly created only for scalar outputs")
     68                 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
     69             else:

RuntimeError: grad can be implicitly created only for scalar outputs
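
To illustrate that second error: backward() with no arguments only works on a scalar (0-dim) tensor. A minimal sketch with made-up values, not the Titanic data:

import torch

x = torch.rand(3, requires_grad=True)
vec = x * 2                # shape (3,), not a scalar
# vec.backward()           # would raise: grad can be implicitly created only for scalar outputs
vec.mean().backward()      # reducing to a scalar first works
print(x.grad)              # tensor([0.6667, 0.6667, 0.6667])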

Can anyone please help me?
Regards!
Salma

Hey @salma1

You have a wandering parenthesis in your calc_loss method.

you have:

return torch.abs(calc_preds(coeffs, indeps)-deps.mean())

and you need to move that closing parenthesis from after mean() to after deps, so that .mean() averages the absolute errors. As written, .mean() is called on deps, which is int64, and that is exactly what triggers the dtype error:

return torch.abs(calc_preds(coeffs, indeps)-deps).mean()
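
With the parenthesis moved, the loss is a single scalar, so loss.backward() works as well. A minimal self-contained check, using random stand-in data with the same shapes as in your post (891 rows, 12 columns):

import torch

def calc_preds(coeffs, indeps): return (indeps*coeffs).sum(axis=1)
def calc_loss(coeffs, indeps, deps): return torch.abs(calc_preds(coeffs, indeps)-deps).mean()

t_indep = torch.rand(891, 12)                  # stand-in for the normalised predictors
t_dep = torch.randint(0, 2, (891,))            # int64 labels; the subtraction promotes to float32
coeffs = (torch.rand(12)-0.5).requires_grad_()

loss = calc_loss(coeffs, t_indep, t_dep)
print(loss.shape)                              # torch.Size([]), a scalar
loss.backward()                                # no error now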

Hope that helps! :open_hands:

Thank you so much, Nickle!
