AssertionError - PyTorch

I am trying to run the part 1 v2 Lesson-6-rnn notebook, but I get an AssertionError after I create the CharRnn object and move it to the GPU (i.e. call .cuda()). It works fine on the CPU; the error only appears on the GPU.

This happens when I run:

m = CharRnn(vocab_size, n_fac).cuda()

The stack trace is:


AssertionError Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 m = CharRnn(vocab_size, n_fac).cuda()

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in cuda(self, device)
214 Module: self
215 """
--> 216 return self._apply(lambda t: t.cuda(device))
217
218 def cpu(self):

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in _apply(self, fn)
144 def _apply(self, fn):
145 for module in self.children():
--> 146 module._apply(fn)
147
148 for param in self._parameters.values():

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/rnn.py in _apply(self, fn)
121 def _apply(self, fn):
122 ret = super(RNNBase, self)._apply(fn)
--> 123 self.flatten_parameters()
124 return ret
125

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/rnn.py in flatten_parameters(self)
109 # Slice off views into weight_buf
110 all_weights = [[p.data for p in l] for l in self.all_weights]
--> 111 params = rnn.get_parameters(fn, handle, fn.weight_buf)
112
113 # Copy weights and update their storage

/opt/conda/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py in get_parameters(fn, handle, weight_buf)
163 # might as well merge the CUDNN ones into a single tensor as well
164 if linear_id == 0 or linear_id == num_linear_layers / 2:
--> 165 assert filter_dim_a.prod() == filter_dim_a[0]
166 size = (filter_dim_a[0] * num_linear_layers // 2, filter_dim_a[2])
167 param = fn.weight_buf.new().set_(

AssertionError:

Can anyone please guide me on how to solve this?
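For reference, here is a minimal sketch of the kind of module that hits this code path: an embedding feeding an nn.RNN inside an nn.Module, moved to the GPU with .cuda() (which, per the trace above, ends up calling flatten_parameters()). The class name, sizes, and layers below are my assumptions, not the exact CharRnn from the notebook. If even this small example raises the same AssertionError, the problem is probably in the PyTorch/cuDNN setup rather than in the notebook code.

import torch
import torch.nn as nn

# Assumed stand-in for the notebook's CharRnn (names and sizes are guesses):
# a character embedding feeding a single-layer nn.RNN.
class TinyCharRnn(nn.Module):
    def __init__(self, vocab_size=85, n_fac=42, n_hidden=256):
        super().__init__()
        self.e = nn.Embedding(vocab_size, n_fac)      # character embeddings
        self.rnn = nn.RNN(n_fac, n_hidden)            # RNN whose weights get flattened for cuDNN
        self.l_out = nn.Linear(n_hidden, vocab_size)  # project hidden state back to the vocabulary

    def forward(self, x, h):
        outp, h = self.rnn(self.e(x), h)
        return self.l_out(outp), h

if torch.cuda.is_available():
    m = TinyCharRnn().cuda()  # .cuda() triggers flatten_parameters(), where the assertion fires
    print(m)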

Did you find a solution? I’m experiencing a similar problem with lesson 4 IMDB.