# RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead

Hi everyone! I am working through lecture 5 (Titanic) of Jeremy Howard's Practical Deep Learning 2022 course and have run into this problem. My code:

```python
t_dep = tensor(df.Survived)

indep_cols = ["Age", "SibSp", "Parch", "LogFare"] + added_cols
t_indep = tensor(df[indep_cols].values, dtype=torch.float)
t_indep

torch.manual_seed(442)

n_coeff = t_indep.shape[1]
coeffs = torch.rand(n_coeff) - 0.5
coeffs

t_indep * coeffs

vals, indices = t_indep.max(dim=0)
t_indep = t_indep / vals

preds = (t_indep * coeffs).sum(axis=1)

loss = torch.abs(preds - t_dep).mean()

def calc_preds(coeffs, indeps): return (indeps*coeffs).sum(axis=1)
def calc_loss(coeffs, indeps, deps): return torch.abs(calc_preds(coeffs, indeps)-deps.mean())

loss = calc_loss(coeffs, t_indep, t_dep)
```

## Error

```
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_27/11996232.py in <module>
----> 1 loss = calc_loss(coeffs, t_indep, t_dep)
      2 loss

/tmp/ipykernel_27/3339545613.py in calc_loss(coeffs, indeps, deps)
      1 def calc_preds(coeffs, indeps): return (indeps*coeffs).sum(axis=1)

RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead.
```
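For reference, `.mean()` is simply not defined for integer (Long) tensors in PyTorch. A minimal standalone reproduction with made-up values:

```python
import torch

deps = torch.tensor([0, 1, 1])  # like df.Survived: int64 ("Long") by default
try:
    deps.mean()                 # mean() rejects integer dtypes
except RuntimeError as e:
    print(e)                    # mean(): input dtype should be ... Got Long instead.

print(deps.float().mean())      # casting to float first works: tensor(0.6667)
```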

## Methods I applied

In the loss function, `coeffs` (shape `[12]`) and `t_indep` (shape `[891, 12]`) are float32, while `t_dep` (shape `[891]`) is int64. If I convert `t_dep` to float32, the loss gives 891 values instead of a single value, and `loss.backward()` then fails with:

```
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_27/2859123600.py in <module>
----> 1 loss.backward()

/opt/conda/lib/python3.7/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    361                 create_graph=create_graph,
    362                 inputs=inputs)
    364
    365     def register_hook(self, hook):

    164
    167     if retain_graph is None:
    168         retain_graph = create_graph

     66     if out.numel() != 1:
---> 67         raise RuntimeError("grad can be implicitly created only for scalar outputs")
     69     else:

RuntimeError: grad can be implicitly created only for scalar outputs
```
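This second error is PyTorch's way of saying the loss must be a single number before `backward()` can be called without arguments. A minimal sketch with made-up tensors showing why a per-sample loss fails but its mean works:

```python
import torch

coeffs = torch.rand(3, requires_grad=True)
x = torch.rand(5, 3)
y = torch.rand(5)

per_sample = torch.abs(x @ coeffs - y)  # shape [5]: one value per row
try:
    per_sample.backward()               # non-scalar: no implicit gradient
except RuntimeError as e:
    print(e)                            # grad can be implicitly created only for scalar outputs

per_sample.mean().backward()            # reducing to a scalar works
print(coeffs.grad.shape)                # torch.Size([3])
```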

Regards!
Salma

Hey @salma1

You have a misplaced closing parenthesis in your `calc_loss()` method.

you have:

`return torch.abs(calc_preds(coeffs, indeps)-deps.mean())`

and you need to move the closing parenthesis that follows `mean()` to just after `deps`, so that `.mean()` is applied to the absolute errors:

`return torch.abs(calc_preds(coeffs, indeps)-deps).mean()`

Hope that helps!
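To make the difference concrete, here is a standalone sketch with random data in the same shapes as the thread (891 rows, 12 columns); only the fixed version returns a scalar that `backward()` accepts:

```python
import torch

def calc_preds(coeffs, indeps): return (indeps * coeffs).sum(axis=1)

# Buggy: .mean() applies to deps, so the result keeps shape [891]
# (and deps.mean() itself raises if deps is still an integer tensor).
def calc_loss_buggy(coeffs, indeps, deps):
    return torch.abs(calc_preds(coeffs, indeps) - deps.mean())

# Fixed: .mean() applies to the absolute errors, giving a scalar loss.
def calc_loss(coeffs, indeps, deps):
    return torch.abs(calc_preds(coeffs, indeps) - deps).mean()

torch.manual_seed(442)
t_indep = torch.rand(891, 12)
t_dep = torch.randint(0, 2, (891,))           # int64, like df.Survived
coeffs = (torch.rand(12) - 0.5).requires_grad_()

loss = calc_loss(coeffs, t_indep, t_dep)
print(loss.shape)   # torch.Size([]) -- a scalar
loss.backward()     # now succeeds
```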

Thank you so much, Nickle!
