Also, fastai added initial support for the mps backend in v2.7.6.
Now, fine_tune takes ages for me and doesn't use either the GPU or the CPU, so I can't move forward.
Anyone?
EDIT: OK, it didn't appear to do anything, so I played around a few times with a new conda environment and, well, I can confirm that 02-saving-a-basic-fastai-model.ipynb works well using the GPU (I checked: the Python process uses 99% GPU). It's fast and working! Happy to see some benchmarks versus beefy NVIDIA GPUs!
EDIT2: OK, false positive: exporting the model fails, and it seems to be a PyTorch issue. Presumably the nightly version of PyTorch has this resolved, but I couldn't make it work together with fastai and got back to the stage where neither the CPU nor the GPU did anything during training. Let's hope PyTorch 1.12.2 has the fix merged and we can get rolling.
import torch

# True only if the current macOS version is at least 12.3
print(torch.backends.mps.is_available())
# True only if the current PyTorch installation was built with MPS support
print(torch.backends.mps.is_built())

device_type = "mps"
device = torch.device(device_type)
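The two checks above can be folded into a small fallback sketch so the same script runs on machines without MPS. This is just one way to do it, and it assumes PyTorch 1.12+, where the torch.backends.mps attributes exist:

```python
import torch

# Prefer MPS when both the OS and the PyTorch build support it;
# otherwise fall back to CPU so the script still runs anywhere.
if torch.backends.mps.is_available() and torch.backends.mps.is_built():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Move a tensor to the chosen device and run a quick sanity computation.
x = torch.ones(3, device=device)
print(device, (x * 2).sum().item())  # sum is 6.0 on either device
```

On a non-Apple machine this prints the CPU device, which makes it easy to share one notebook across platforms.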
from transformers import Trainer, TrainingArguments

class TrainingArgumentsWithMPSSupport(TrainingArguments):
    @property
    def device(self) -> torch.device:
        if device_type == "mps":
            return torch.device("mps")
        return torch.device("cpu")

args = TrainingArgumentsWithMPSSupport(
    'outputs',
    learning_rate=lr, warmup_ratio=0.1, lr_scheduler_type='cosine',
    evaluation_strategy="epoch", per_device_train_batch_size=bs,
    per_device_eval_batch_size=bs*2, num_train_epochs=epochs,
    weight_decay=0.01, report_to='none')

trainer = Trainer(model, args,
                  train_dataset=dds['train'],
                  eval_dataset=dds['test'],
                  tokenizer=tokz,
                  compute_metrics=corr_d)
trainer.train()
Hi all,
I’m new to fastai and managed to get Lesson 1 working in a Jupyter Notebook locally on my M1 Max. I thought I’d post the exact steps I took for other new folks like myself. Big thanks to @iTapAndroid for his post (would not have succeeded without it). Lesson 1 ran/trained locally in less than 8 seconds for me.
@Fahim Really sorry to keep bugging you… I just know you've been helpful on the forums, and I've been having a nightmare of a time trying to get Lesson 2 working now.
I think it's to do with my setup. I've tried many of the things mentioned here and on the PyTorch forum, but something always breaks: the Jupyter Notebook widgets don't work in JupyterLab, or now I'm getting a new error…
I wonder if you might be able to share the output of your mamba list so I can compare your versions to mine, and if you have steps to recreate your working environment I'd really appreciate it, given we're on the same M1 Mac 32GB machine.
For comparison (although I definitely don't expect you to read it, just in case it's helpful for someone else), mine is below…
No worries about asking for help, Sam! Always happy to help.
However, the trouble with sharing my conda setup is that I don't have a specific environment set up just for fastai work. Mine's got a lot of other packages, since I also do Stable Diffusion stuff and other development work…
The error you see might be from torchvision. I have a vague recollection of that causing issues for me at one point or another. However, most of the time, as long as your code isn’t using torchvision, you should be fine and you can disregard the error. Do you actually get any issues running any of the code for the Jupyter Notebook other than for that cell?
Do note that I run the latest nightlies for PyTorch and torchvision generally since that’s how you can get the latest changes for PyTorch for M1 … Here’s what I’ve got:
@Fahim could you also share with us your fastai version? I’m getting this error installing the pytorch nightly build with pip:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.10 requires torch<1.14,>=1.7, but you have torch 1.14.0.dev20221206 which is incompatible.
I believe the latest fastai is not compatible with PyTorch versions greater than 1.13… I haven't used fastai in a while, so my input's probably not very relevant at this point, sorry.
Adding default_device(torch.device("mps")) after importing fastai should do the trick. Works fine for me in chapter 1 of the book at least, on a base MacBook Pro 14" M1.
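A minimal sketch of that tip, assuming fastai 2.7.6+ (where MPS support landed) and PyTorch 1.12+; the try/except just keeps the snippet runnable on machines where fastai isn't installed:

```python
import torch

# Pick MPS when available, otherwise CPU, then hand it to fastai.
dev = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

try:
    from fastai.torch_core import default_device
    default_device(dev)  # fastai will now place tensors and models on this device
except ImportError:
    # fastai not installed; plain PyTorch code can still use `dev` directly.
    pass
```

With that set, the notebook code from chapter 1 shouldn't need any other device-related changes.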
I have successfully run the version from the Medium post, but I'm facing issues when I try to augment the data: it shows an error. I also get an error when using learn.lr_find(). The same code runs just fine on CUDA in Kaggle. I was running the pets data. Can anyone help me with these two issues?