Fastai on Apple M1

With:

torch==1.13.0
fastai==2.7.10

my tests show GPU acceleration being turned off on Mac. Can anybody confirm?

edit: They reverted initial MPS support: revert auto-enable of mac mps due to pytorch limitations · Issue #3769 · fastai/fastai · GitHub
Turned it on manually and doing some more testing.
edit2: Did initial testing, still not ready.


I think this is out of scope but what do you think about this? https://twitter.com/svpino/status/1578354467572838402?t=IBbMYmtj6epC0bB-9To71Q&s=19

@Fahim could you also share with us your fastai version? I’m getting this error installing the pytorch nightly build with pip:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.10 requires torch<1.14,>=1.7, but you have torch 1.14.0.dev20221206 which is incompatible.

Doesn’t seem like they’re compatible?

I believe the latest FastAI is not compatible with Pytorch versions greater than 1.13 … I haven’t used FastAI in a while. So my input’s probably not very relevant at this point, sorry.


Adding default_device(torch.device("mps")) after importing fastai should do the trick. Works fine for me in chapter 1 of the book at least, on a base MacBook Pro 14" M1.
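For reference, a minimal sketch of that call (the availability check and the fallback message are my own additions, not part of the original tip; torch >= 1.12 assumed):

# Minimal sketch: route fastai's default device to the Apple GPU (MPS),
# falling back to the existing default if the backend isn't available.
import torch
from fastai.vision.all import default_device

if torch.backends.mps.is_available():
    default_device(torch.device("mps"))  # new tensors/learners now go to the M1 GPU
else:
    print("MPS backend not available; staying on the default device")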


Hi all!
I added my answer in the following thread: How do I set a fastai learner to use the GPU on an M-Series Mac? - #4 by JTaurus

And here are more details: FastAI 2022 on Macbook M1 Pro Max (GPU) | by Ivan T | Feb, 2023 | Medium

Let us know if someone has solved the Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend issue :slight_smile:
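One workaround worth trying (an assumption on my part, not a confirmed fix for this exact error) is PyTorch's CPU fallback for ops the MPS backend doesn't support yet; the environment variable has to be set before torch is imported:

# Sketch: ask PyTorch to fall back to the CPU for unsupported MPS ops.
# PYTORCH_ENABLE_MPS_FALLBACK must be set before the first `import torch`.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
from fastai.vision.all import *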


I have successfully run the notebook just like the Medium post. But I am facing issues when I try to augment the data; it shows an error. I also get an error when using learn.lr_find(). I used the same code on Kaggle and it runs just fine on CUDA. I was running the pets data. Can anyone help me with these two issues? A rough sketch of what I mean is below.
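A hypothetical repro of the pets + augmentation + lr_find workflow described above (the label function, sizes, and architecture are just assumptions to make it self-contained):

# Pets data with augmentation, then lr_find; errors reported on MPS, fine on CUDA.
from fastai.vision.all import *

def is_cat(fname):
    return fname[0].isupper()  # in the pets dataset, cat breeds have capitalised names

path = untar_data(URLs.PETS) / "images"
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42, label_func=is_cat,
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=224),  # the augmentation step that errors on MPS
)
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.lr_find()  # also errors on MPS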

I just wanted to report that as of today, an M1 MacBook Pro seems to work with GPU acceleration out of the box.

Seriously, the setup couldn’t be easier.

pip install -r .devcontainer/requirements.txt

is all I needed to do in a pyenv-installed Python 3.10.

Here are the training times from an unmodified 02-saving-a-basic-fastai-model.ipynb.

And here is the same on a Kaggle P100 GPU.

I don’t know how it will be with later examples, but it seems that at least this example works out of the box as of today.


Can you please clarify this statement?

Is this .devcontainer/requirements.txt part of the fast.ai install?

pip install -r .devcontainer/requirements.txt

Thanks in advance for your reply!

Yes, it’s this file:

But is this in a Docker container?
Or natively on macOS with a conda install?

Can you provide more environment information for your successful case?

I am trying with Homebrew / conda (Miniforge) on an M2, and the kernel crashes when the Jupyter Lab notebook reaches the learner stage.

Sorry, can you please clarify?

Are you running this in a local Python env directly, or via a Docker container?

Local virtualenv, like this:

python -m venv venv
source venv/bin/activate

pip install -U pip wheel setuptools
pip install -r requirements.txt

Thanks for your helpful reply.
I have tried this, and yes, it does indeed build, in a Python env.

However, learn = cnn_learner(dls, resnet18, metrics=error_rate)

… kills the kernel. I have tried multiple times.

Configuration: macOS Sonoma, MacBook Pro with M2 Max.

So I guess I have to wait until something improves with the fastai distro.

I believe FastAI is incompatible with Pytorch versions above 1.13. I haven’t used FastAI recently, so my input may not be relevant.


Yes, it is pretty clear to me now. Thanks for the reply.

I know Jeremy is a Windows guy (not that there is anything wrong with that … LOL).

We really need to get fastai working well on M1/M2 Apple silicon.
Lots of people do ML on this platform.

‘cnn_learner’ is deprecated. Have you tried ‘vision_learner’?
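For reference, a minimal sketch of the renamed call (dls, resnet18 and the metric are assumed to come from the earlier notebook code):

# vision_learner is the current name for the old cnn_learner call;
# everything else stays the same.
from fastai.vision.all import vision_learner, resnet18, error_rate

learn = vision_learner(dls, resnet18, metrics=error_rate)  # dls built earlier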

It doesn’t matter; I tried each method.
It’s OK. If M1/M2 is a priority, it will get updated.
Until then, I’ll just review fastai but use PyTorch independently on Apple silicon.

I managed to get both my M2 MacBook Air and M1 Mac mini to run the chapter 1 code with instructions from this address:

with one exception: instead of the nightly channel, I used the pytorch channel to install. The M1 is much slower than the M2, about 10x, but the M2 is quite decent.

conda install pytorch torchvision torchaudio -c pytorch
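If it helps anyone checking their own setup, a quick sketch (assuming torch >= 1.12) to verify the MPS backend before pointing fastai at it:

# Verify that PyTorch was built with MPS support and that it is usable here.
import torch

print(torch.backends.mps.is_built())      # compiled with MPS support?
print(torch.backends.mps.is_available())  # macOS version + hardware usable?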

Yes, fastai has been used successfully on Apple M1 chips; users in this thread report it running with GPU acceleration for machine learning tasks, though some setup is still needed.