PyTorch 0.3 upgrade needed

Our next lesson will be using PyTorch 0.3. If you do a conda env update it will upgrade you automatically. Let me know if you notice any new issues.

Hopefully you’ll find some of your notebooks run somewhat faster, and any memory issues you may have had should be resolved.


Release Notes for anyone interested.


The environment.yml file still pins pytorch at >=0.2.0, so conda env update won’t upgrade to 0.3.

I think you either want to change it to >=0.3.0 or have folks run conda update --all (which apparently may break things for some people).
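In the meantime, anyone who wants to bump the pin locally can edit environment.yml themselves and re-run conda env update. A minimal sketch, assuming the dependency line reads "- pytorch>=0.2.0" (the stand-in file below is just for illustration; edit the real environment.yml in the repo):

```shell
# Sketch: bump the PyTorch pin so `conda env update` resolves to 0.3.
cd "$(mktemp -d)"
printf -- '- pytorch>=0.2.0\n' > environment.yml   # stand-in for the repo's file
sed -i 's/pytorch>=0.2.0/pytorch>=0.3.0/' environment.yml
cat environment.yml   # now reads: - pytorch>=0.3.0
```

After editing the real file, a git pull followed by conda env update should pick up the new constraint.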

Have you tried it? It worked for me.


```
(fastai) $ conda env update
(fastai) $ conda list pytorch
# packages in environment at /development/_tools/anaconda/envs/fastai:
pytorch                   0.2.0                py36_4cu75    soumith
```

Ah I think you haven’t done a git pull yet?

Correction - I haven’t pushed the change yet! Coming right up :slight_smile:

You got me before I could say this. I’ve been bitten by this so often that I’ll run git status twice back-to-back just to make sure there is nothing left to commit or push.


OK just pushed the fix. Sorry about that.

Introduced torch.erf and torch.erfinv that compute the error function and the inverse error function of each element in the Tensor.
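In PyTorch these apply elementwise to tensors. As a quick pure-Python sketch of the math they compute (the bisection-based erfinv below is a hypothetical stand-in for illustration, not how PyTorch implements it):

```python
import math

def erfinv(y, lo=-6.0, hi=6.0, iters=80):
    """Hypothetical stand-in: invert math.erf by bisection.

    Works because erf is monotonically increasing on the reals.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# erf maps the reals into (-1, 1); erfinv undoes it
x = 0.5
roundtrip = erfinv(math.erf(x))
assert abs(roundtrip - x) < 1e-9
```

With PyTorch 0.3 the equivalent on a tensor would be torch.erf(t) and torch.erfinv(t), computed per element.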

This is interesting. I wonder if they added this after seeing it used in the Porto Seguro Kaggle competition’s winning solution :slight_smile:

The predict_with_targs(True) is not working after the upgrade. :worried: … is not working either.

These do not appear related, @Moody. Your second problem appears to be because you have multiple labels for some rows, and the first appears to be because your test set isn’t set up correctly.

Crestle’s PyTorch has also been updated to 0.3.


I upgraded to PyTorch 0.3 (using conda env update) but get this error when running lesson2-image_models (cell 16: lrf=learn.lr_find(); learn.sched.plot()):

```
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 lrf=learn.lr_find()
      2 learn.sched.plot()

~/fastai/courses/dl1/fastai/ in lr_find(self, start_lr, end_lr, wds)
    234     layer_opt = self.get_layer_opt(start_lr, wds)
    235     self.sched = LR_Finder(layer_opt, len(, end_lr)
--> 236     self.fit_gen(self.model,, layer_opt, 1)
    237     self.load('tmp')

~/fastai/courses/dl1/fastai/ in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, metrics, callbacks, use_wd_sched, **kwargs)
    143     n_epoch = sum_geom(cycle_len if cycle_len else 1, cycle_mult, n_cycle)
    144     fit(model, data, n_epoch, layer_opt.opt, self.crit,
--> 145         metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, **kwargs)
    147     def get_layer_groups(self): return self.models.get_layer_groups()

~/fastai/courses/dl1/fastai/ in fit(model, data, epochs, opt, crit, metrics, callbacks, **kwargs)
     84     batch_num += 1
     85     for cb in callbacks: cb.on_batch_begin()
---> 86     loss = stepper.step(V(x),V(y))
     87     avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
     88     debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/fastai/courses/dl1/fastai/ in step(self, xs, y)
     41     if isinstance(output,(tuple,list)): output,*xtra = output
     42     self.opt.zero_grad()
---> 43     loss = raw_loss = self.crit(output, y)
     44     if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
     45     loss.backward()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/ in binary_cross_entropy(input, target, weight, size_average)
   1177     weight = Variable(weight)
-> 1179     return torch._C._nn.binary_cross_entropy(input, target, weight, size_average)

RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.cuda.LongTensor] for argument #1 'target'
```
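For anyone else hitting this: the error means binary_cross_entropy received Long (integer) targets where it needs floats, because targets enter the loss arithmetically rather than as class indices. A minimal pure-Python sketch of the formula (not the fastai or PyTorch code) shows why; in PyTorch the usual workaround is casting the target variable with .float().

```python
import math

def binary_cross_entropy(preds, targets):
    """Mean BCE: targets multiply log-probabilities, so they act as numbers."""
    per_item = [-(y * math.log(p) + (1 - y) * math.log(1 - p))
                for p, y in zip(preds, targets)]
    return sum(per_item) / len(per_item)

# Targets given as floats, matching what PyTorch's type check demands
loss = binary_cross_entropy([0.9, 0.2], [1.0, 0.0])
```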

@layla.tadjpour sorry about that! Should be fixed now.

Yes, it is working now. Thanks!

Could someone test this to see if it’s still working? I’m getting the same error and I think I’ve got everything up to date.

Thanks in advance.

I’m getting the same error after doing a git pull and conda env update yesterday. FYI, I’m using Ubuntu on Google Cloud.

Yes, that was precisely what happened. I’m also using Ubuntu, but with the Paperspace template.