Language model text generation giving weird results in a consistent manner

Yeah, I’d forgotten that fast.ai by default expects pytorch < 0.4, while pytorch 0.4 (which I’m using) treats scalars differently: indexing into a 1-d tensor used to return a plain Python float, but in 0.4 it returns a 0-dimensional torch.Tensor:

import torch
print(torch.__version__)
print(type(torch.FloatTensor([1])[0]))

0.3.1.post2
<class 'float'>

vs

0.4.1
<class ‘torch.Tensor’>
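
For code that has to behave the same on both versions, a small workaround (just my own sketch, not something fast.ai provides) is to pass the indexed value through float() instead of relying on indexing to give a Python number:

import torch

def to_python_float(t):
    # float() accepts both a plain Python float (what indexing returns
    # on pytorch < 0.4) and a 0-dim torch.Tensor (what it returns on 0.4+),
    # so this should give a Python float on either version.
    return float(t)

x = torch.FloatTensor([1])
scalar = to_python_float(x[0])
print(type(scalar))  # <class 'float'> on both 0.3.x and 0.4.x

On 0.4+ only, x[0].item() does the same thing.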