Lesson Two Variable Mismatch Runtime Error

Hello All,
I’m encountering a strange error with the Lesson 2 Image Models notebook. It raises a RuntimeError saying it expected a float tensor but found a long tensor. Is there a good way to diagnose how the library is reading in the data? I’m new to this library but have some experience with Keras.

Every similar issue I’ve found appears to be Windows-related, but I am running on Paperspace using the Ubuntu template. I’ve tried pulling the course repo and updating my environment again, but I don’t understand the workings of fastai well enough to feel comfortable tinkering with the codebase.

Stack Trace:

RuntimeError                              Traceback (most recent call last)
<ipython-input-34-c69896a35d32> in <module>()
----> 1 lrf=learn.lr_find()
      2 learn.sched.plot()

~/fastai/courses/dl1/fastai/learner.py in lr_find(self, start_lr, end_lr, wds, linear)
    256         layer_opt = self.get_layer_opt(start_lr, wds)
    257         self.sched = LR_Finder(layer_opt, len(self.data.trn_dl), end_lr, linear=linear)
--> 258         self.fit_gen(self.model, self.data, layer_opt, 1)
    259         self.load('tmp')

~/fastai/courses/dl1/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, **kwargs)
    160         n_epoch = sum_geom(cycle_len if cycle_len else 1, cycle_mult, n_cycle)
    161         return fit(model, data, n_epoch, layer_opt.opt, self.crit,
--> 162             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, **kwargs)
    164     def get_layer_groups(self): return self.models.get_layer_groups()

~/fastai/courses/dl1/fastai/model.py in fit(model, data, epochs, opt, crit, metrics, callbacks, stepper, **kwargs)
     94             batch_num += 1
     95             for cb in callbacks: cb.on_batch_begin()
---> 96             loss = stepper.step(V(x),V(y))
     97             avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
     98             debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/fastai/courses/dl1/fastai/model.py in step(self, xs, y)
     41         if isinstance(output,tuple): output,*xtra = output
     42         self.opt.zero_grad()
---> 43         loss = raw_loss = self.crit(output, y)
     44         if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
     45         loss.backward()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average)
   1198             weight = Variable(weight)
-> 1200     return torch._C._nn.binary_cross_entropy(input, target, weight, size_average)

RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.cuda.LongTensor] for argument #1 'target'

I’ve now noticed that my class labels are being returned as integers instead of floats, is there a way to cast these back to float?
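For reference, the same mismatch can be reproduced (and fixed with a cast) outside fastai with a couple of lines of plain PyTorch; the tensors here are made up for illustration:

```python
import torch
import torch.nn.functional as F

preds = torch.rand(4)                # model outputs in [0, 1]
labels = torch.tensor([0, 1, 1, 0])  # integer class labels -> LongTensor

try:
    # binary_cross_entropy requires a float target,
    # so passing a LongTensor target raises a RuntimeError
    F.binary_cross_entropy(preds, labels)
except RuntimeError as e:
    print("RuntimeError:", e)

# Casting the target to float makes the call succeed
loss = F.binary_cross_entropy(preds, labels.float())
print(loss.item())
```

So yes, casting the labels to float before they reach the loss function should be enough.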

I’m having the same problem. Perhaps something changed recently. I’m going to look at recent git changes.

I fixed it by editing the library source.

At line 431, change:

    fnames,y,classes = csv_source(folder, csv_fname, skip_header, suffix, continuous=continuous)
    return cls.from_names_and_array(path, fnames, y, classes, val_idxs,
        test_name,num_workers=num_workers, suffix=suffix, tfms=tfms, bs=bs, continuous=continuous)

to:

    fnames,y,classes = csv_source(folder, csv_fname, skip_header, suffix, continuous=continuous)
    y = y.astype(float)
    return cls.from_names_and_array(path, fnames, y, classes, val_idxs, test_name,
        num_workers=num_workers, suffix=suffix, tfms=tfms, bs=bs, continuous=continuous)

You should find that it works now.
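If you’d rather not patch the library file, the same cast can be applied to the label array in your own code before building the data object. A minimal sketch of what `y.astype(float)` does (the array contents here are made up for illustration):

```python
import numpy as np

# Class labels parsed from a CSV typically come back as integers
y = np.array([0, 1, 1, 0])

# astype(float) yields a float64 array, which PyTorch will wrap
# as a FloatTensor instead of a LongTensor
y = y.astype(float)
print(y.dtype)  # float64
```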


Thanks, Steve! I’m gonna give this a shot in a bit.

I posted it on the GitHub repo, so hopefully someone who knows more than I do can vet it and provide a permanent solution. Keep an eye on that too! :) Good luck!

The "y = y.astype(float)" solution worked for me. Thanks.

Thanks, I had the same problem (also on Paperspace) and this fix worked for me, too.

@steveMancero Worked for me as well, thanks!

I believe this problem was due to a buggy PR we received a few days ago. It was fixed a couple of days back, so doing a git pull should resolve it.