Why do PyTorch notebooks online use tensor.to(device)?

Most development work happens on machines under your control, so why include an if statement checking for a GPU?
Training without one would take extremely long anyway, so it’s not like we can “manage” on a CPU if a GPU isn’t available.

And inference is almost always done on CPU, so the device check doesn’t make sense there either. Yet almost all the PyTorch notebooks on GitHub have it.
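
To be concrete, this is the pattern I mean (a minimal, self-contained version):

```python
import torch
import torch.nn as nn

# the device-checking idiom that shows up in nearly every notebook
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)  # parameters move to the chosen device
x = torch.randn(4, 10).to(device)    # input tensor moves to the same device
out = model(x)                       # works because everything is on one device
```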

Am I missing the main reason why?

In a programming framework, it’s best to give the user as much control as possible. Of course, you can abstract away and use sensible defaults, which is what higher-level frameworks built on top of PyTorch often do (ex: fastai!).

Sometimes you will be testing something on CPU before moving to the GPU. Sometimes you need to specify which GPUs to use. Sometimes you need to use other devices as well (like TPUs). So you would want to give that option to users.
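
As a rough sketch of what “giving that option to users” can look like (the `train` helper and its signature are just for illustration, not from any library):

```python
import torch
import torch.nn as nn

def train(model: nn.Module, device: str = "cpu") -> nn.Module:
    """Hypothetical helper: the caller decides where training runs."""
    model = model.to(torch.device(device))
    # ... training loop would go here ...
    return model

model = nn.Linear(10, 2)
train(model, device="cpu")       # quick sanity check on the CPU first
# train(model, device="cuda:1")  # pin the run to the second GPU
```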

Also, inference is not always done on CPU. On Kaggle, for example, it’s almost always done on GPUs instead. There are also various inference accelerators available (ex: Edge TPUs) that are not CPUs. So again, the option should be provided.
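
A minimal inference sketch along the same lines (the architecture, checkpoint path, and input are all placeholders):

```python
import torch
import torch.nn as nn

# the same idiom at inference time: the saved weights are mapped onto
# whichever device is chosen, CPU or otherwise
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2)                             # placeholder architecture
torch.save(model.state_dict(), "model.pt")           # stand-in for a real checkpoint
state = torch.load("model.pt", map_location=device)  # map weights to the chosen device
model.load_state_dict(state)
model.to(device).eval()

with torch.no_grad():                                # no autograd needed for inference
    pred = model(torch.randn(4, 10, device=device))
```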


Makes sense. Thanks!