I have an M1 Mac, and the way for PyTorch to take advantage of its GPU is to set the device of your model and tensors in either of the two following manners:
model = TheModel(..., device='mps')
model = TheModel(...).to('mps')
How do I set a fastai learner I create to use the GPU? Is there a specific fastai function that does this, or do I use the above methods/arguments somewhere?
`learner.to(device='mps')` doesn’t work, which makes sense. But I’m not sure where I would then set the learner’s device…
I appreciate any input!
I don’t have a mac, but recently saw the following in a private access video. Maybe you or someone else can decipher.
Ooo, yes, that may be onto something.
Just a couple of problems though: I’m not sure what the parameter `x` takes and what `Mapping` is defined as. I did try inputting my learner as `x`, but I can’t execute the function because I’m not sure what `Mapping` is. Judging from the `isinstance` docs though, I have to put a target object. I tried putting the fastai `Learner` class in place of `Mapping`, and then figured out that I can’t pass my learner as `x`, since it has no …
I’m guessing that the snippet you provided above is for changing the device of objects that store data (e.g., tensors), but that can be easily done through either `tensor([1, 2, 3], device='mps')` or `tensor([1, 2, 3]).to(device='mps')`.
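To make those two manners concrete, here is a runnable sketch; it assumes PyTorch ≥ 1.12 (where the MPS backend landed) and falls back to the CPU on machines without Apple Silicon:

```python
import torch

# Pick MPS when it's available, otherwise fall back to the CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Manner 1: create the tensor directly on the target device.
a = torch.tensor([1, 2, 3], device=device)

# Manner 2: create on the default device, then move it over.
b = torch.tensor([1, 2, 3]).to(device)

print(a.device, b.device)
```

The fallback expression is worth keeping around, since the same code then runs unchanged on an Intel Mac or a Linux box.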
You can check your preset quickly with the following:
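A minimal check along those lines (a sketch, assuming PyTorch ≥ 1.12, where the `torch.backends.mps` flags were introduced):

```python
import torch

# Was this PyTorch build compiled with MPS support?
print(torch.backends.mps.is_built())

# Is an MPS device actually available on this machine?
print(torch.backends.mps.is_available())

# With fastai installed, default_device() reports what fastai will use:
# from fastai.torch_core import default_device
# print(default_device())  # e.g. device(type='mps') on an M1 Mac
```

Both flags return plain booleans, so this is a quick way to tell a build problem (not built with MPS) apart from a hardware problem (built, but no MPS device present).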
Also, to use an MPS device, you can pass it as a parameter when initializing a model. In most cases, it’s enough to run training on the M1.
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(224), device=default_device(1))
But there are still a lot of other problems that I found.
Here is my experience with running the first lesson on an M1 Max MacBook: FastAI 2022 on Macbook M1 Pro Max (GPU) | by Ivan T | Feb, 2023 | Medium
P.S. You can check that training is using the GPU in Activity Monitor, and by the training time, of course: the CPU takes about 10× longer than the GPU.
Thanks for the response and findings!
I see; the default device has to be set when defining the DataLoaders.
@erikpb also figured out that using the `default_device()` function from the fastai library and passing `'mps'` as the argument will make all models and DataLoaders that are created use `mps` as the device.
If you output a model or DataLoaders created this way, it won’t specify that it’s been set to `mps`, but it can similarly be checked in Activity Monitor, and the GPU is indeed being used.
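Besides Activity Monitor, you can also confirm programmatically which device a model actually landed on, since a module lives wherever its parameters live. A sketch in plain PyTorch (the `nn.Linear` model here is just a stand-in):

```python
import torch
from torch import nn

# Any nn.Module works; its device is wherever its parameters live.
model = nn.Linear(4, 2)
print(next(model.parameters()).device)  # cpu by default

# Move it, falling back to CPU when MPS isn't available.
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = model.to(device)
print(next(model.parameters()).device)
```

For a fastai learner, the same check should work on its underlying model, i.e. `next(learn.model.parameters()).device`.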