Fastai on Apple M1

Well that’s a 650w Xeon 8c/64GB workstation with a 1070ti card … so I’d take it as a compliment :wink:

As pytorch becomes more optimized for M1/M2 chips, I’m pretty sure they’ll start giving Nvidia a run for their money!

cheers!


Regarding the NotImplementedError: The operator ‘aten::adaptive_max_pool2d.out’ is not currently implemented for the MPS device.

The missing operator was released in pytorch 1.12.0: MPS: Add adaptive max pool2d op by kulinseth · Pull Request #78410 · pytorch/pytorch · GitHub

Also, fastai added initial support for the MPS backend in v2.7.6.
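
A quick way to confirm you have both pieces is to check the versions and the MPS backend from a notebook cell, something like:

import torch
import fastai

# need torch >= 1.12.0 for the adaptive_max_pool2d MPS kernel,
# and fastai >= 2.7.6 for the initial MPS support
print(torch.__version__, fastai.__version__)
print(torch.backends.mps.is_available())  # True on Apple Silicon with macOS 12.3+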

Now, fine_tune takes ages for me; it doesn’t seem to use either the GPU or the CPU, and I can’t move forward.

Anyone?

EDIT: OK, it didn’t appear to do anything, so I played around a few times with a fresh conda environment, and I can confirm that 02-saving-a-basic-fastai-model.ipynb works well using the GPU (I checked; the python process uses 99% GPU). It’s fast and working! Happy to see some benchmarks versus beefy Nvidia GPUs!

EDIT2: OK, false positive: exporting the model fails, and it seems to be a pytorch issue. Presumably the nightly version of pytorch has this resolved, but I couldn’t make it work together with fastai and got back to the stage where neither the CPU nor the GPU did anything during training. Let’s hope pytorch 1.12.2 has the fix merged and we can get rolling.
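
One workaround that gets suggested for MPS save/export problems is to move everything back to the CPU before exporting. Roughly (assuming a trained learn from the notebook; untested against this particular bug):

# assumption: moving the DataLoaders and model off the MPS device before export
# sidesteps MPS serialization issues; not verified for this specific pytorch bug
learn.dls.cpu()
learn.model.cpu()
learn.export('export.pkl')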


Did you get any solution?

import torch
import math
# this checks that MPS is available (requires macOS 12.3+ on Apple Silicon)
print(torch.backends.mps.is_available())
# this checks that the current PyTorch installation was built with MPS activated
print(torch.backends.mps.is_built())
torch.device("mps")

from transformers import TrainingArguments, Trainer

device_type = "mps"
device = torch.device(device_type)

# older transformers releases don't know about MPS, so override the device property
class TrainingArgumentsWithMPSSupport(TrainingArguments):
    @property
    def device(self) -> torch.device:
        if device_type == "mps":
            return torch.device("mps")
        else:
            return torch.device("cpu")

# lr, bs, epochs, dds, tokz, corr_d and model come from earlier cells in the notebook
args = TrainingArgumentsWithMPSSupport('outputs',
    learning_rate=lr, warmup_ratio=0.1, lr_scheduler_type='cosine',
    evaluation_strategy="epoch", per_device_train_batch_size=bs, per_device_eval_batch_size=bs*2,
    num_train_epochs=epochs, weight_decay=0.01, report_to='none')

trainer = Trainer(model, args,
        train_dataset=dds['train'],
        eval_dataset=dds['test'],
        tokenizer=tokz,
        compute_metrics=corr_d)
trainer.train()

Getting this error for lesson 4 on a Mac Studio (M1):


File ~/.pyenv/versions/3.9.7/envs/learning/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:845, in DisentangledSelfAttention.disentangled_attention_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
    842 else:
    843     r_pos = relative_pos
--> 845 p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)
    846 p2c_att = torch.bmm(key_layer, pos_query_layer.transpose(-1, -2))
    847 p2c_att = torch.gather(
    848     p2c_att,
    849     dim=-1,
    850     index=p2c_pos.squeeze(0).expand([query_layer.size(0), key_layer.size(-2), key_layer.size(-2)]),
    851 ).transpose(-1, -2)

TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend.
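
One thing that may be worth trying for MPS operator errors like this is PyTorch's CPU-fallback switch. It has to be set before torch is first imported (so put it at the very top of the notebook), and I can't confirm it covers this particular int64 case:

import os

# ask PyTorch to fall back to the CPU for ops the MPS backend can't handle;
# must be set before the first `import torch`
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch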

Hi all,
I’m new to fastai and managed to get Lesson 1 working in a Jupyter Notebook locally on my M1 Max. I thought I’d post the exact steps I took for other new folks like myself. Big thanks to @iTapAndroid for his post (would not have succeeded without it). Lesson 1 ran/trained locally in less than 8 seconds for me.

Versions:
macOS Monterey 12.6
32 GB RAM
32-core GPU
python 3.10.4
pytorch 1.12.1
fastai 2.7.9

  • Install Anaconda for M1 Anaconda Installer

  • (default install is $HOME/opt/anaconda3)

  • (Be sure to restart your terminal after installing)

  • Anaconda updates .bashrc so it may just work; if you want, add
    export PATH=$HOME/opt/anaconda3/bin:$PATH to your .profile

  • conda update -n base -c defaults conda

  • conda create -n fastai python=3.10.4

  • conda activate fastai

  • conda install -c fastchan fastai fastbook jupyterlab sentencepiece

  • (fastchan installs pytorch 1.10 by default so you’ll need to upgrade it)

  • conda install pytorch torchvision torchaudio -c pytorch

Make sure that you now have pytorch 1.12.1

You can use grep for this:


conda list | grep pytorch
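
Or check it from inside a notebook cell:

import torch
print(torch.__version__)  # should print 1.12.1 (or newer)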

After this I took the exact first lesson in Kaggle and replicated it in my local Jupyter Notebook.

Per @iTapAndroid
You must add the following at the top of your notebook:


import os

os.environ["OMP_NUM_THREADS"] = "1"
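
As a quick smoke test before running the lesson, you can push a tensor through the Apple GPU; if something like this runs without errors, MPS is wired up correctly:

import torch

# allocate a matrix on the MPS device and do some work on it
x = torch.rand(2048, 2048, device="mps")
print((x @ x).sum().item())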


@Fahim Really sorry to keep bugging you…I just know that you’ve been helpful on the forums and I’ve been having a nightmare of a time trying to get lesson two working now.

I think it’s to do with my setup. I’ve tried many different things mentioned here and on the PyTorch forum, but something always breaks: the widgets for Jupyter Notebook not working in JupyterLab, or, now, a new error…

I wonder if you might be able to share the output of your mamba list so I can compare your versions with what I’ve got, or, if you have steps to recreate your working environment, I’d really appreciate that, given we’re on the same 32GB M1 Mac.

For comparison (I definitely don’t expect you to read it all; it’s just in case it’s helpful for someone else), mine is below…

Thank you thank you

# packages in environment at /Users/samgreen/mambaforge/envs/fastai:
#
# Name                    Version                   Build  Channel
aom                       3.5.0                h7ea286d_0    fastchan
appnope                   0.1.3              pyhd8ed1ab_0    conda-forge
asttokens                 2.1.0              pyhd8ed1ab_0    fastchan
astunparse                1.6.3              pyhd8ed1ab_0    fastchan
backcall                  0.2.0              pyh9f0ad1d_0    fastchan
backports                 1.0                        py_2    fastchan
backports.functools_lru_cache 1.6.4              pyhd8ed1ab_0    fastchan
brotli                    1.0.9                h1c322ee_7    fastchan
brotli-bin                1.0.9                h1c322ee_7    fastchan
brotlipy                  0.7.0           py310hf8d0d8f_1004    fastchan
bzip2                     1.0.8                h3422bc3_4    fastchan
ca-certificates           2022.9.24            h4653dfc_0    conda-forge
catalogue                 2.0.8           py310hbe9552e_0    fastchan
certifi                   2022.9.24          pyhd8ed1ab_0    fastchan
cffi                      1.15.1          py310he00a5c5_0    fastchan
charset-normalizer        2.1.1              pyhd8ed1ab_0    fastchan
click                     8.1.3           py310hbe9552e_0    fastchan
colorama                  0.4.6              pyhd8ed1ab_0    fastchan
confection                0.0.3           py310hc47352e_0    fastchan
contourpy                 1.0.6           py310h2887b22_0    fastchan
cryptography              38.0.3          py310hfc83b78_0    fastchan
cycler                    0.11.0             pyhd8ed1ab_0    fastchan
cymem                     2.0.7           py310h0f1eb42_0    fastchan
cython-blis               0.7.8           py310h611a7d1_0    fastchan
dataclasses               0.8                pyhc8e2a94_3    fastchan
decorator                 5.1.1              pyhd8ed1ab_0    fastchan
execnb                    0.1.4                      py_0    fastchan
executing                 1.2.0              pyhd8ed1ab_0    fastchan
expat                     2.5.0                hb7217d7_0    fastchan
fastai                    2.7.10                     py_0    fastchan
fastcore                  1.5.27                     py_0    fastchan
fastdownload              0.0.7                      py_0    fastchan
fastprogress              1.0.3                      py_0    fastchan
ffmpeg                    5.1.2           gpl_hf4c414c_103    fastchan
font-ttf-dejavu-sans-mono 2.37                 hab24e00_0    fastchan
font-ttf-inconsolata      3.000                h77eed37_0    fastchan
font-ttf-source-code-pro  2.038                h77eed37_0    fastchan
font-ttf-ubuntu           0.83                 hab24e00_0    fastchan
fontconfig                2.14.1               h82840c6_0    fastchan
fonts-conda-ecosystem     1                             0    fastchan
fonts-conda-forge         1                             0    fastchan
fonttools                 4.38.0          py310h8e9501a_0    fastchan
freetype                  2.12.1               hd633e50_0    fastchan
gettext                   0.21.1               h0186832_0    fastchan
ghapi                     1.0.3                      py_0    fastchan
giflib                    5.2.1                h27ca646_2    fastchan
gmp                       6.2.1                h9f76cd9_0    fastchan
gnutls                    3.7.8                h9f1a10d_0    fastchan
icu                       70.1                 h6b3803e_0    fastchan
idna                      3.4                pyhd8ed1ab_0    fastchan
ipython                   8.6.0              pyhd1c38e8_1    fastchan
jedi                      0.18.1          py310hbe9552e_1    fastchan
jinja2                    3.1.2              pyhd8ed1ab_0    fastchan
joblib                    1.2.0              pyhd8ed1ab_0    fastchan
jpeg                      9e                   h1c322ee_1    fastchan
kiwisolver                1.4.4           py310hd23d0e8_0    fastchan
lame                      3.100             h27ca646_1001    fastchan
langcodes                 3.3.0              pyhd8ed1ab_0    fastchan
lcms2                     2.14                 h8193b64_0    fastchan
lerc                      3.0                  hbdafb3b_0    fastchan
libblas                   3.9.0           14_osxarm64_openblas    fastchan
libbrotlicommon           1.0.9                h1c322ee_7    fastchan
libbrotlidec              1.0.9                h1c322ee_7    fastchan
libbrotlienc              1.0.9                h1c322ee_7    fastchan
libcblas                  3.9.0           14_osxarm64_openblas    fastchan
libcxx                    14.0.4               h6a5c8ee_0    fastchan
libdeflate                1.10                 h3422bc3_0    fastchan
libffi                    3.4.2                h3422bc3_5    conda-forge
libgfortran               5.0.0           11_3_0_hd922786_25    conda-forge
libgfortran5              11.3.0              hdaf2cc0_25    conda-forge
libiconv                  1.17                 he4db4b2_0    fastchan
libidn2                   2.3.4                h1a8c8d9_0    fastchan
liblapack                 3.9.0           14_osxarm64_openblas    fastchan
libopenblas               0.3.20          openmp_h130de29_1    conda-forge
libpng                    1.6.38               h76d750c_0    fastchan
libsqlite                 3.39.4               h76d750c_0    conda-forge
libtasn1                  4.19.0               h1a8c8d9_0    fastchan
libtiff                   4.4.0                h2810ee2_0    fastchan
libunistring              0.9.10               h3422bc3_0    fastchan
libvpx                    1.11.0               hc470f4d_3    fastchan
libwebp                   1.2.4                h328b37c_0    fastchan
libwebp-base              1.2.4                h57fd34a_0    fastchan
libxcb                    1.13              h9b22ae9_1004    fastchan
libxml2                   2.10.3               h87b0503_0    fastchan
libzlib                   1.2.13               h03a7124_4    conda-forge
llvm-openmp               14.0.4               hd125106_0    fastchan
lz4-c                     1.9.3                hbdafb3b_1    fastchan
markupsafe                2.1.1           py310hf8d0d8f_1    fastchan
matplotlib                3.6.1           py310hb6292c7_0    fastchan
matplotlib-base           3.6.1           py310h78c5c2f_0    fastchan
matplotlib-inline         0.1.6              pyhd8ed1ab_0    fastchan
munkres                   1.1.4              pyh9f0ad1d_0    fastchan
murmurhash                1.0.9           py310h0f1eb42_0    fastchan
nbdev                     2.3.8                      py_0    fastchan
ncurses                   6.3                  h07bb92c_1    conda-forge
nettle                    3.8.1                h63371fa_1    fastchan
numpy                     1.23.4          py310h5d7c261_0    fastchan
openh264                  2.3.1                hb7217d7_0    fastchan
openjpeg                  2.4.0                h062765e_1    fastchan
openssl                   3.0.7                h03a7124_0    conda-forge
p11-kit                   0.24.1               h29577a5_0    fastchan
packaging                 21.3               pyhd8ed1ab_0    fastchan
pandas                    1.5.1           py310h2b830bf_0    fastchan
parso                     0.8.3              pyhd8ed1ab_0    fastchan
pathy                     0.6.2              pyhd8ed1ab_0    fastchan
pexpect                   4.8.0              pyh9f0ad1d_2    fastchan
pickleshare               0.7.5                   py_1003    fastchan
pillow                    9.2.0           py310hc9df86f_0    fastchan
pip                       22.3               pyhd8ed1ab_0    conda-forge
preshed                   3.0.8           py310h0f1eb42_0    fastchan
prompt-toolkit            3.0.31             pyha770c72_0    fastchan
pthread-stubs             0.4               h27ca646_1001    fastchan
ptyprocess                0.7.0              pyhd3deb0d_0    fastchan
pure_eval                 0.2.2              pyhd8ed1ab_0    fastchan
pycparser                 2.21               pyhd8ed1ab_0    fastchan
pydantic                  1.10.2          py310h8e9501a_0    fastchan
pygments                  2.13.0             pyhd8ed1ab_0    fastchan
pyopenssl                 22.1.0             pyhd8ed1ab_0    fastchan
pyparsing                 3.0.9              pyhd8ed1ab_0    fastchan
pysocks                   1.7.1           py310hbe9552e_5    fastchan
python                    3.10.4          h14b404e_0_cpython    fastchan
python-dateutil           2.8.2              pyhd8ed1ab_0    fastchan
python_abi                3.10                    2_cp310    fastchan
pytorch                   1.13.0                 py3.10_0    fastchan
pytz                      2022.6             pyhd8ed1ab_0    fastchan
pyyaml                    6.0             py310hf8d0d8f_4    fastchan
readline                  8.1.2                h46ed386_0    conda-forge
requests                  2.28.1             pyhd8ed1ab_0    fastchan
scikit-learn              1.1.3           py310ha00a7cd_0    fastchan
scipy                     1.9.3           py310ha0d8a01_0    fastchan
setuptools                65.5.0             pyhd8ed1ab_0    conda-forge
shellingham               1.5.0              pyhd8ed1ab_0    fastchan
six                       1.16.0             pyh6c4a22f_0    fastchan
smart_open                5.2.1              pyhd8ed1ab_0    fastchan
spacy                     3.4.2           py310h629746b_0    fastchan
spacy-legacy              3.0.10             pyhd8ed1ab_0    fastchan
spacy-loggers             1.0.3              pyhd8ed1ab_0    fastchan
sqlite                    3.39.4               h2229b38_0    conda-forge
srsly                     2.4.5           py310h0f1eb42_0    fastchan
stack_data                0.6.0              pyhd8ed1ab_0    fastchan
svt-av1                   1.3.0                h7ea286d_0    fastchan
thinc                     8.1.5           py310h629746b_0    fastchan
threadpoolctl             3.1.0              pyh8a188c0_0    fastchan
tk                        8.6.12               he1e0b03_0    conda-forge
torchaudio                0.13.0                py310_cpu    pytorch
torchvision               0.14.0                py310_cpu    fastchan
tornado                   6.2             py310h02f21da_0    fastchan
tqdm                      4.64.1             pyhd8ed1ab_0    fastchan
traitlets                 5.5.0              pyhd8ed1ab_0    fastchan
typer                     0.4.2              pyhd8ed1ab_0    fastchan
typing-extensions         4.4.0                hd8ed1ab_0    fastchan
typing_extensions         4.4.0              pyha770c72_0    fastchan
tzdata                    2022f                h191b570_0    conda-forge
unicodedata2              15.0.0          py310h8e9501a_0    fastchan
urllib3                   1.26.11            pyhd8ed1ab_0    fastchan
wasabi                    0.10.0             pyhd8ed1ab_0    fastchan
watchdog                  2.1.9           py310h02f21da_0    fastchan
wcwidth                   0.2.5              pyh9f0ad1d_2    fastchan
wheel                     0.37.1             pyhd8ed1ab_0    conda-forge
x264                      1!164.3095           h57fd34a_2    fastchan
x265                      3.5                  hbc6ce65_3    fastchan
xorg-libxau               1.0.9                h27ca646_0    fastchan
xorg-libxdmcp             1.1.3                h27ca646_0    fastchan
xz                        5.2.6                h57fd34a_0    conda-forge
yaml                      0.2.5                h3422bc3_2    fastchan
zlib                      1.2.13               h03a7124_4    fastchan
zstd                      1.5.2                hd705a24_1    fastchan

No worries about asking for help, Sam :slight_smile: Always happy to help.

However, the trouble with sharing my conda setup is that I don’t have a specific environment set up just for FastAI work. Mine’s got a lot of other packages since I also do Stable Diffusion stuff and other development work …

The error you see might be from torchvision. I have a vague recollection of that causing issues for me at one point or another. However, most of the time, as long as your code isn’t using torchvision, you should be fine and you can disregard the error. Do you actually get any issues running any of the code for the Jupyter Notebook other than for that cell?

Do note that I run the latest nightlies for PyTorch and torchvision generally since that’s how you can get the latest changes for PyTorch for M1 … Here’s what I’ve got:

torch                             1.14.0.dev20221021
torchvision                       0.15.0.dev20221021

Some success on 00-is-it-a-bird-creating-a-model-from-your-own-data on M1


With:

torch==1.13.0
fastai==2.7.10

my tests show GPU acceleration being turned off on Mac; can anybody confirm?

edit: They reverted the initial MPS support: revert auto-enable of mac mps due to pytorch limitations · Issue #3769 · fastai/fastai · GitHub
I turned it on manually and am doing some more testing.
edit2: Did initial testing; still not ready.
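
For anyone comparing notes, a quick way to see which device fastai will actually default to (whether you've enabled MPS manually or not):

import torch
from fastai.torch_core import default_device

print(torch.backends.mps.is_available())  # can PyTorch see the Apple GPU at all?
print(default_device())                   # the device fastai will place new models and batches on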


I think this is out of scope but what do you think about this? https://twitter.com/svpino/status/1578354467572838402?t=IBbMYmtj6epC0bB-9To71Q&s=19

@Fahim could you also share with us your fastai version? I’m getting this error installing the pytorch nightly build with pip:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.10 requires torch<1.14,>=1.7, but you have torch 1.14.0.dev20221206 which is incompatible.

Doesn’t seem like they’re compatible?

I believe the latest FastAI is not compatible with PyTorch versions greater than 1.13 … I haven’t used FastAI in a while, so my input’s probably not very relevant at this point, sorry.


Adding default_device(torch.device("mps")) after importing fastai should do the trick. Works fine for me in chapter 1 of the book at least, on a base MacBook Pro 14" M1.
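
For anyone following along, here's that setup in full (fastai 2.7.x with an MPS-enabled PyTorch build assumed):

import torch
from fastai.vision.all import *   # brings in default_device along with everything else

# make the Apple GPU the default device for DataLoaders and Learners created afterwards
default_device(torch.device("mps"))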


Hi all!
I added my answer in the following thread: How do I set a fastai learner to use the GPU on an M-Series Mac? - #4 by JTaurus

And here are more details: FastAI 2022 on Macbook M1 Pro Max (GPU) | by Ivan T | Feb, 2023 | Medium

Let us know if someone has solved the Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend issue :slight_smile:


I have successfully run it just like the Medium post. But I am facing issues when I try to augment the data; it shows an error. I also get an error when using learn.lr_find(). I used the same code on Kaggle, and it runs just fine on CUDA. I was running the Pets data. Can anyone help me with these two issues?

I just wanted to report that as of today, an M1 MacBook Pro seems to work with GPU acceleration out of the box.

Seriously, the setup couldn’t be easier.

pip install -r .devcontainer/requirements.txt

is all I needed to do, in a pyenv-installed Python 3.10.

Here are the training times from an unmodified 02-saving-a-basic-fastai-model.ipynb.

And here is the same on a Kaggle P100 GPU

I don’t know how it will be with later examples, but it seems that at least this one works out of the box as of today.


Can you please clarify this statement?

Is this .devcontainer/requirements.txt part of the fast.ai install?

pip install -r .devcontainer/requirements.txt

Thanks in advance for your reply!

Yes, it’s this file:

But is this in a Docker container, or natively on macOS with a conda install?

Can you provide more environment information for your successful case?

I am trying with Homebrew / conda Miniforge on an M2, and the kernel crashes when the JupyterLab notebook reaches the learner stage.

Sorry, can you please clarify:

Are you running this in a local Python environment directly, or via a Docker container?