Is there a bug in torch_core._has_mps()?

Hi,

I’m new to using the fastai library, and after running through the “Is it a bird?” notebook on Kaggle, I tried to run it locally on a 10-year-old Mac without a GPU. I didn’t get very far – the step to create the DataBlock raised the following error:

RuntimeError: The MPS backend is supported on MacOS 12.3+. Current OS version can be queried using sw_vers

After a bit of digging, it looks like the torch_core._has_mps() function is returning True instead of False for me. For reference, here is the implementation of _has_mps:

def _has_mps():
    if nested_attr(torch, 'backends.mps.is_available', noop)(): return True
    return nested_attr(torch, 'backends.mps.is_built', False)()
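As I understand it, torch.backends.mps.is_built() only reports whether MPS support was compiled into the PyTorch binary, while is_available() also checks that the current machine can actually use it. Checking the two flags directly in a Python session (the values in the comments are what I see on my machine):

import torch

print(torch.backends.mps.is_available())  # False
print(torch.backends.mps.is_built())      # True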

In my case, mps.is_available() returns False, but mps.is_built() returns True, so _has_mps() returns True. Should it be the following instead?

def _has_mps():
    return (nested_attr(torch, 'backends.mps.is_available', noop)()
            and nested_attr(torch, 'backends.mps.is_built', False)())

With this change I can now run the first tutorial successfully on my Mac.

Does the above change sound reasonable? I’m happy to submit a PR with this change and a unit test if so.
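For the test, I was thinking of something along these lines – just a rough sketch, using unittest.mock to simulate the is_available()=False / is_built()=True combination I’m seeing:

from unittest.mock import patch

from fastai.torch_core import _has_mps

# MPS support compiled into the wheel, but not usable at runtime
with patch('torch.backends.mps.is_available', return_value=False), \
     patch('torch.backends.mps.is_built', return_value=True):
    # fails against the current implementation, passes with the proposed change
    assert not _has_mps()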

Cheers,
Darren.