Runtime error when using different minibatch sizes

I am on a Windows machine with a GPU. I tried the neural translation model on a dataset of my own. It runs fine when I set the minibatch size to certain values (e.g. 2 or 10), but it produces a runtime error with other minibatch sizes. The runtime error is as follows:

On calling the `learn.lr_find()` method:

```
~\Anaconda3\envs\fastai\lib\site-packages\torch\backends\cudnn\rnn.py in forward(fn, input, hx, weight, output, hy)
    293     fn.cy_desc, ctypes.c_void_p(cy.data_ptr()) if cx is not None else None,
    294     ctypes.c_void_p(workspace.data_ptr()), workspace.size(0),
--> 295     ctypes.c_void_p(fn.reserve.data_ptr()), fn.reserve.size(0)
    296 ))
    297 else:  # inference

RuntimeError: invalid argument 2: out of range at c:\anaconda2\conda-bld\pytorch_1519501749874\work\torch\lib\thc\generic/THCTensor.c:23
```
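One guess on my part (not something stated in the course): since the failure depends on the minibatch size, the last, ragged minibatch might be the trigger when the dataset size isn't divisible by the batch size. A quick pure-Python check of that hypothesis — the function name and numbers here are my own, made up for illustration:

```python
def last_batch_size(n_examples, bs, drop_last=False):
    """Size of the final minibatch when iterating n_examples in chunks of bs.

    With drop_last=True (as PyTorch's DataLoader supports), the ragged
    final batch is discarded, so every batch seen has size bs.
    """
    rem = n_examples % bs
    if rem == 0 or drop_last:
        return bs
    return rem

# e.g. 1003 examples with bs=10 leaves a final batch of only 3 items
print(last_batch_size(1003, 10))          # 3
print(last_batch_size(1003, 10, True))    # 10 (ragged batch dropped)
```

If the ragged batch is indeed the problem, passing `drop_last=True` to the training `DataLoader` would be one way to test it.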

I also noticed that in the course the minibatch size for the validation set is set to 1.6 times that of the training set. Is there a particular reason for that? Why exactly 1.6?
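My own reading of that sizing (an assumption, not an explanation from the course): validation does no backward pass, so no activations or gradients need to be kept around, which frees up GPU memory for a larger batch. The arithmetic would just be:

```python
bs = 64                 # training minibatch size (example value, not from the course)
val_bs = int(bs * 1.6)  # larger validation batch: no gradients stored at eval time
print(val_bs)           # 102
```

Whether 1.6 specifically is an empirically chosen headroom factor or has some other rationale is exactly what I'm asking.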