Minor exception in the DL block of the Lesson 3 Rossman notebook, Part 1 (I can continue past it)

Hello fast.ai forum members!
With great respect to the teachers and students!
I hit a minor exception in the DL block of the Lesson 3 Rossman notebook, Part 1, but I can continue past it.

m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
m.summary()

I ran the notebook on a Colab GPU with torch 0.3.1.
I don't know how to fix this, and I'm not sure it even needs fixing.
Before that, I had also run

!pip install -Uq pandas==0.22 pandas_summary

I got this exception:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-104-3163e18abde6> in <module>()
      1 m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
      2                    0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
----> 3 m.summary()

/usr/local/lib/python3.6/dist-packages/fastai/column_data.py in summary(self)
    141     def _get_crit(self, data): return F.mse_loss if data.is_reg else F.binary_cross_entropy if data.is_multi else F.nll_loss
    142 
--> 143     def summary(self): return model_summary(self.model, [(self.data.trn_ds.cats.shape[1], ), (self.data.trn_ds.conts.shape[1], )])
    144 
    145 

/usr/local/lib/python3.6/dist-packages/fastai/model.py in model_summary(m, input_size)
    275         x = [to_gpu(Variable(torch.rand(3,*in_size))) for in_size in input_size]
    276     else: x = [to_gpu(Variable(torch.rand(3,*input_size)))]
--> 277     m(*x)
    278 
    279     for h in hooks: h.remove()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/fastai/column_data.py in forward(self, x_cat, x_cont)
    112     def forward(self, x_cat, x_cont):
    113         if self.n_emb != 0:
--> 114             x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
    115             x = torch.cat(x, 1)
    116             x = self.emb_drop(x)

/usr/local/lib/python3.6/dist-packages/fastai/column_data.py in <listcomp>(.0)
    112     def forward(self, x_cat, x_cont):
    113         if self.n_emb != 0:
--> 114             x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
    115             x = torch.cat(x, 1)
    116             x = self.emb_drop(x)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
    101             input, self.weight,
    102             padding_idx, self.max_norm, self.norm_type,
--> 103             self.scale_grad_by_freq, self.sparse
    104         )
    105 

/usr/local/lib/python3.6/dist-packages/torch/nn/_functions/thnn/sparse.py in forward(cls, ctx, indices, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
     55 
     56         if indices.dim() == 1:
---> 57             output = torch.index_select(weight, 0, indices)
     58         else:
     59             output = torch.index_select(weight, 0, indices.view(-1))

TypeError: torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, !torch.cuda.FloatTensor!), but expected (torch.cuda.FloatTensor source, int dim, torch.cuda.LongTensor index)
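If I read the last line right, the mismatch is that `torch.index_select` wants a `LongTensor` of indices, while the dummy batch `model_summary` builds with `torch.rand` is all floats. A minimal sketch of that call, runnable on a modern PyTorch (the old `Variable` wrapper from 0.3.x is no longer needed):

```python
import torch

weight = torch.rand(10, 4)          # stands in for an embedding weight matrix

bad_idx = torch.rand(3)             # float "indices", like the torch.rand dummies
good_idx = torch.tensor([0, 3, 7])  # integer (LongTensor) indices, as expected

try:
    # same failure mode as in the traceback: float tensor used as an index
    torch.index_select(weight, 0, bad_idx)
except (TypeError, RuntimeError) as e:
    print("rejected:", type(e).__name__)

# with integer indices the lookup works
rows = torch.index_select(weight, 0, good_idx)
print(rows.shape)                   # torch.Size([3, 4])
```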

I have this notebook on my Drive

Me too. I can complete the notebook using Paperspace, but I run into the same problem when I try Google Colab.

NB: Google Colab defaulted to pandas 0.22 for me.

Any thoughts anyone?

I'm facing the same issue. Any solution?

Does anyone have an idea about this issue? I hit it too. I can comment out m.summary() and the notebook runs fine.
I've googled a bit but can't get it fixed. Someone said one of the internal functions isn't passing the correct CUDA tensor type, but that's deep in the stack and I don't know how to fix it properly.
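That matches the traceback above: `model_summary` feeds `torch.rand` dummies into the model, but the `nn.Embedding` layers for the categorical columns require integer (`LongTensor`) indices. A minimal sketch of the same mismatch at the embedding layer, runnable on a modern PyTorch (sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

float_idx = torch.rand(3, 2)        # float dummy input, like model_summary uses
try:
    emb(float_idx)                  # embedding lookup rejects float indices
except (TypeError, RuntimeError) as e:
    print("fails:", type(e).__name__)

long_idx = (float_idx * 10).long()  # cast to valid integer indices in [0, 10)
out = emb(long_idx)                 # now the lookup succeeds
print(out.shape)                    # torch.Size([3, 2, 4])
```

So the bug is only in how the summary helper fakes its input batch, not in the model itself, which is why skipping m.summary() lets training proceed normally.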