Tensor is int32: upgrading to int64; for better performance use int64 input

I’m getting this warning message when running fastai v1 on Windows Server 2016. Any ideas?

Tensor is int32: upgrading to int64; for better performance use int64 input

def tensor(x:Any, *rest)->Tensor:
    "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
    if len(rest): x = (x,)+rest
    # XXX: Pytorch bug in dataloader using num_workers>0; TODO: create repro and report
    if is_listy(x) and len(x)==0: return tensor(0)
    res = torch.tensor(x) if is_listy(x) else as_tensor(x)
    if res.dtype is torch.int32:
        warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
        return res.long()
    return res

Thanks

It’s just informing you that it converted the tensor, so there’s nothing to fix. To avoid the warning you should convert the numpy array to int64 with the_array.astype(np.int64) before passing it in (I’m assuming it’s a numpy array; note that numpy’s default integer type on Windows is 32-bit, which is likely why you’re seeing this, while plain Python ints already become 64-bit tensors). Or, if that’s difficult for whatever reason, search for ‘python disable warning’ and you’ll find a way to suppress it (there’s a context manager to selectively disable warnings).
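A minimal sketch of both options (assuming the input is a numpy array; the warning message text matched here is the one fastai emits):

```python
import warnings

import numpy as np
import torch

arr = np.arange(5, dtype=np.int32)  # numpy defaults to 32-bit ints on Windows

# Option 1: convert to int64 up front, so fastai never needs to warn
arr64 = arr.astype(np.int64)

# Option 2: selectively silence just this warning
with warnings.catch_warnings():
    warnings.filterwarnings('ignore', message='Tensor is int32')
    t = torch.as_tensor(arr).long()
```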

I would note, though, that I’d seen this code and wasn’t quite sure about it. I think the reasoning is that CPU operations on 32-bit integer arrays/tensors are slow. But I’m pretty sure this does not apply to CUDA, where 64-bit integer operations are markedly slower: GPUs have no real dedicated 64-bit integer hardware, so at best a 64-bit op costs two 32-bit operations, and at worst I’ve seen numbers suggesting up to something like an 8x slowdown, with double the memory usage on top of that. So I’m not really sure about the reasoning for doing this conversion on tensors without any check on context (I looked at the commit and found no real explanation there, but haven’t followed it up).
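The speed difference needs CUDA hardware to measure, but the memory cost of the conversion is easy to see either way:

```python
import torch

t32 = torch.zeros(1000, dtype=torch.int32)
t64 = t32.long()  # what fastai's tensor() does on int32 input

# int64 uses twice the storage per element
print(t32.element_size())  # 4 bytes
print(t64.element_size())  # 8 bytes
```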
So, if these are tensors to be used on GPU, I’d probably avoid the tensor function, or at least run tests if performance/memory use matter for your use case. The torch.from_numpy function will convert a numpy array without a memory copy (or torch.tensor will copy, if you want to preserve the original). Or you can construct a tensor directly from a Python list (though not from an arbitrary iterable, which the fastai tensor function allows).
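A quick sketch of the copy vs no-copy distinction: torch.from_numpy shares memory with the source array, while torch.tensor makes an independent copy:

```python
import numpy as np
import torch

arr = np.zeros(3, dtype=np.int64)
t_view = torch.from_numpy(arr)  # shares memory with arr, no copy
t_copy = torch.tensor(arr)      # copies the data

arr[0] = 7
print(int(t_view[0]))  # 7 -- the view sees the change
print(int(t_copy[0]))  # 0 -- the copy is unaffected
```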
