I have an M1 Mac, and the way for PyTorch to take advantage of its GPU is to set the device of your model and tensors in either of the following two ways:
model = TheModel(..., device='mps')
or model.to(device='mps')
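As a concrete sketch of the two patterns above (assuming a recent PyTorch build with MPS support; `TheModel` is the placeholder from the post, replaced here by a small `nn.Linear`), with a CPU fallback so it also runs on machines without an Apple GPU:

```python
import torch
import torch.nn as nn

# Pick MPS when the backend exists and is available, otherwise fall back to CPU
use_mps = getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available()
device = torch.device("mps" if use_mps else "cpu")

model = nn.Linear(4, 2).to(device)      # move the model's parameters onto the device
x = torch.randn(8, 4, device=device)    # create a tensor directly on the device

y = model(x)
print(y.device)  # mps on an M1 Mac, cpu elsewhere
```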
How do I set a fastai learner I create to use the GPU? Is there a specific fastai function that does this, or do I use the above methods/arguments somewhere?
learner.to(device='mps') doesn’t work, which makes sense. But I’m not sure where I would then set the learner’s device…
Just a couple of problems though: I’m not sure what the parameter x expects or how Mapping is defined. I did try passing my learner as x, but I can’t run the function because I’m not sure what Mapping is.
Judging from the isinstance docs though, I have to put a target object. I tried putting the fastai Learner class in place of Mapping, and then figured out that I can’t pass my learner as x, since it has no items attribute.
I’m guessing that the snippet you provided above is for changing the device of objects that store data (e.g., tensors), but that can be easily done through either tensor([1, 2, 3], device='mps') or tensor([1, 2, 3]).to(device='mps').
I see; the default device has to be set when defining the DataLoaders.
@erikpb also figured out that using the default_device() function from the fastai library and passing 'mps' as the argument will make all models and DataLoaders created afterwards use mps as the device.
If you print a model or DataLoaders created this way, it won’t say it’s been set to mps, but you can similarly check in Activity Monitor and confirm the GPU is indeed being used.