Struggling with Titanic competition: tensor type mismatch

Hello, DL geniuses! I’m very new to deep learning and Fast.ai, so I’m sure the issue is on my end. After watching Lesson 3, I wanted to try using Fast.ai on the Titanic dataset challenge. When I run lr_find, I get this error:

RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.cuda.LongTensor for argument #2 'target'

I’ve cast my target “y” variable as a float, but I still get the error. Is there something I can check to start troubleshooting this?

Can you post the code? For debugging, I would look into pdb. Put pdb.set_trace() before the line that raises the error.

You can then check the type of the tensor before it gets passed to the function that’s giving the error.
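For example, something like this (a minimal sketch; where exactly you set the breakpoint depends on where the failing call sits in your code):

import pdb

# just before the call that raises the error:
pdb.set_trace()   # drops into the debugger
# at the (Pdb) prompt, inspect the tensor, e.g.:
#   (Pdb) y.type()
#   'torch.cuda.LongTensor'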

I copied in the code below. Thank you for taking a look!

%matplotlib inline
%reload_ext autoreload
%autoreload 2

from fastai.structured import *
from fastai.column_data import *

import pandas as pd
import numpy as np

np.set_printoptions(threshold=50, edgeitems=20)

PATH='/home/nbuser/titanic/data/'

test = pd.read_csv(f'{PATH}test.csv')
train = pd.read_csv(f'{PATH}train.csv')
train = train.set_index('PassengerId')
test = test.set_index('PassengerId')

n = len(train); n

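# Treat most columns as categorical; Fare is the only continuous variable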
cat_vars=['Pclass','Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Cabin', 'Embarked']
contin_vars = ['Fare']

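# Encode the categoricals as ordered pandas categories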
for v in cat_vars: train[v] = train[v].astype('category').cat.as_ordered()
for v in cat_vars: test[v] = test[v].astype('category').cat.as_ordered()

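# Fill missing continuous values and cast to float32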
for v in contin_vars:
    train[v] = train[v].fillna(0).astype('float32')
    test[v] = test[v].fillna(0).astype('float32')

# Prepare the training set
df, y, nas, mapper = proc_df(train, 'Survived', do_scale=True)

# Prepare the test set
df_test, _, nas, mapper = proc_df(test, do_scale=True, 
                                  mapper=mapper, na_dict=nas)
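
# Hold out the last 25% of the rows as the validation set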
train_ratio = 0.75
samp_size = n
train_size = int(samp_size * train_ratio); train_size
val_idx = list(range(train_size, len(df)))

md = ColumnarModelData.from_data_frame(PATH, val_idx, df, y, cat_flds=cat_vars, bs=128,
                                       test_df=df_test)

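# One embedding per categorical: cardinality+1 rows (room for unknowns), width about half that, capped at 50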
cat_sz = [(c, len(train[c].cat.categories)+1) for c in cat_vars]
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz]

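# Constrain predictions to [0, 1.2 * max(y)]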
max_y = np.max(y)
y_range = (0, max_y*1.2)

# Build the learner and find a good learning rate
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.lr_find()

Here’s what I’ve discovered so far:

  • The error occurs while computing the MSE loss inside the stepper’s step; specifically, it fails in _pointwise_loss in PyTorch’s functional.py (see the minimal repro sketch right after this list).
  • I’ve tried setting the “y” variable to an “int” manually, but it doesn’t seem to help.
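Here’s a minimal sketch that reproduces the same kind of dtype mismatch outside of Fast.ai (the exact error wording varies by PyTorch version):

import torch
import torch.nn.functional as F

preds = torch.randn(4)               # model output: FloatTensor
target = torch.tensor([0, 1, 1, 0])  # integer labels: LongTensor

try:
    F.mse_loss(preds, target)        # dtype mismatch on argument 'target'
except RuntimeError as e:
    print(e)

F.mse_loss(preds, target.float())    # casting the target to float succeeds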

Here’s the error output:

RuntimeError                              Traceback (most recent call last)
<ipython-input-69-9888ade4ac74> in <module>()
----> 1 m.fit(lr, 1)

~/titanic/fastai/learner.py in fit(self, lrs, n_cycle, wds, **kwargs)
    296         self.sched = None
    297         layer_opt = self.get_layer_opt(lrs, wds)
--> 298         return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
    299 
    300     def warm_up(self, lr, wds=None):

~/titanic/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    243             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    244             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 245             swa_eval_freq=swa_eval_freq, **kwargs)
    246 
    247     def get_layer_groups(self): return self.models.get_layer_groups()

~/titanic/fastai/model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, visualize, **kwargs)
    138             batch_num += 1
    139             for cb in callbacks: cb.on_batch_begin()
--> 140             loss = model_stepper.step(V(x),V(y), epoch)
    141             avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
    142             debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/titanic/fastai/model.py in step(self, xs, y, epoch)
     52         if self.fp16: self.m.zero_grad()
     53         else: self.opt.zero_grad()
---> 54         loss = raw_loss = self.crit(output, y)
     55         if self.loss_scale != 1: assert(self.fp16); loss = loss*self.loss_scale
     56         if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce)
   1567     """
   1568     return _pointwise_loss(lambda a, b: (a - b) ** 2, torch._C._nn.mse_loss,
-> 1569                            input, target, size_average, reduce)
   1570 
   1571 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in _pointwise_loss(lambd, lambd_optimized, input, target, size_average, reduce)
   1535         return torch.mean(d) if size_average else torch.sum(d)
   1536     else:
-> 1537         return lambd_optimized(input, target, size_average, reduce)
   1538 
   1539 

RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor for argument #2 'target'

I’ll keep updating this thread as I find out more.

I think my solution might be in the work @tonygentilcore did here:

I’m going to play around with this today and see if I can figure out why his code worked and mine didn’t. @TheShadow29, let me know if I should stop this thread. I’m only posting updates in case I find a solution and someone has a similar problem in the future, but if that’s discouraged, let me know. I’m the new guy 🙂

UPDATE 7/18 1:48 PM CDT:

Yup, it was here. I had:

md = ColumnarModelData.from_data_frame(PATH, val_idx, df, y, cat_flds=cat_vars, bs=128,
                                       test_df=df_test)

but it should have been:

md = ColumnarModelData.from_data_frame(PATH, val_idx, df, y.astype(np.float32), cat_flds=cat_vars, bs=128,
                                       test_df=df_test)
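In hindsight this makes sense: proc_df returns y straight from the integer ‘Survived’ column, and (as far as I can tell) that int64 array ends up as a torch.LongTensor in the data loader, while mse_loss expects a FloatTensor target. A quick sanity check:

print(y.dtype)            # int64 -> becomes a torch.LongTensor
y = y.astype(np.float32)  # float32 -> torch.FloatTensor, which mse_loss expects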

Thanks for letting me use this as a sounding board, and special thanks to @tonygentilcore for the code assist (even if it was given unknowingly)!
