In older versions of PyTorch, in order to move everything to the GPU, one had to do the following.
```python
# Define a lambda at the top
cuda = lambda x: x.cuda() if torch.cuda.is_available() else x

x = Variable(cuda(torch.randn(10)))  # When creating variables
model = cuda(Model())                # When creating modules
```
With the release of PyTorch 0.4, this has been slightly simplified as:
```python
# Define the default device at the top
device = 'cuda' if torch.cuda.is_available() else 'cpu'

x = torch.randn(10).to(device)  # When creating tensors
model = Model().to(device)      # When creating modules
```
However, this is still not clean. Ideally, PyTorch would move everything over to the GPU automatically whenever one is available, much like TensorFlow does.
I tried setting the global tensor type to a cuda tensor using torch.set_default_tensor_type().
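For reference, a minimal sketch of that approach using torch.set_default_tensor_type (the availability guard is my addition, so the snippet also runs on CPU-only machines):

```python
import torch

# Make torch.* factory functions (randn, zeros, ...) return
# CUDA float tensors by default, when a GPU is present.
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')

x = torch.randn(10)  # lives on the GPU if one is available, else on the CPU
```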
However, there are some fundamental problems with setting the default tensor type.
- Dataloaders yield ordinary (non-cuda) tensors by default. Each batch has to be moved manually with the Tensor.to() method.
- Many methods are simply not implemented for torch.cuda.*Tensor, so setting the global tensor type to a cuda type breaks them.
- Conversion to numpy via the numpy() method isn't available for cuda tensors. One has to write x.cpu().numpy(). Although this chain is device-agnostic, it defeats the purpose of a global setting.
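To illustrate that last point, here is the conversion chain in practice — a minimal sketch:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(10).to(device)

# x.numpy() raises a TypeError when x lives on the GPU, so every
# conversion site needs the full chain. It works on either device,
# but has to be repeated everywhere a numpy array is needed.
arr = x.cpu().numpy()
```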
Does anyone have any ideas?
Could we somehow have a global device setting that just works?