As I understand it, for fastai to make use of these GPUs, the underlying PyTorch framework needs to support them. The PyTorch team seems to be working on it, but I haven't heard of any PyTorch builds that can leverage the M1 architecture (yet).
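Once such a build lands, the usual pattern would be to probe for the accelerated backend at runtime and fall back to CPU otherwise. A minimal sketch, assuming PyTorch's eventual Apple-GPU backend is exposed under `torch.backends.mps` (every check is guarded, since older builds won't have it):

```python
# Hedged sketch: pick the best available device for tensors/models.
# Assumption: the Apple-silicon backend, if present, is named "mps"
# (Metal Performance Shaders); older or CPU-only builds simply fall through.

def pick_device() -> str:
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed at all
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple-silicon GPU build
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU build
    return "cpu"

print(pick_device())
```

With fastai, the returned string could then be passed wherever a `device` is accepted (e.g. `tensor.to(pick_device())`).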
EDIT: This issue on the PyTorch GitHub has some discussion of what's been going on in this regard:
(Issue opened 10:05 PM, 10 Nov 2020 UTC; labels: *module: performance*, *triaged*)
## 🚀 Feature
Hi,
I was wondering if we could evaluate PyTorch's performance… on Apple's new M1 chip. I'm also wondering how we could optimize PyTorch's capabilities on M1 GPUs/neural engines.
I know the issue of supporting acceleration frameworks outside of CUDA has been discussed in previous issues like #488, but I think this is worth a revisit. In Apple's big reveal today, we learned that Apple is on a roll, with 50% of product usage growth this year coming from new users. Given that Apple is moving to these in-house designed chips, enhanced support for them could make deep learning on personal laptops a better experience for many researchers and engineers. I think this aligns well with PyTorch's theme of facilitating deep learning from research to production.
I'm not quite sure how this should go down. But these could be important:
1. A study on M1 chips
2. Evaluation of PyTorch's performance on M1 chips
3. Assessment of M1's compatibility with acceleration frameworks that work with PyTorch (the best bet would be CUDA transpilation, from what I see at #488)
4. Investigating enhancements to PyTorch that can take advantage of M1's ML features.
cc @VitalyFedyunin @ngimel
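Point 2 in the list above (a performance evaluation) could start with something as simple as a matmul timing harness. A sketch under stated assumptions: the backend name `mps` and the `torch.mps.synchronize()` call are what PyTorch eventually shipped for Apple GPUs, and the function degrades gracefully to CPU timing (or `None`) when they are absent:

```python
import time

def bench_matmul(n: int = 512, repeats: int = 10):
    """Time an n x n float32 matmul; return mean seconds per call,
    or None if PyTorch is not installed. The 'mps' device name is an
    assumption about the Apple-GPU backend, guarded at every step."""
    try:
        import torch
    except ImportError:
        return None
    mps = getattr(torch.backends, "mps", None)
    device = "mps" if (mps is not None and mps.is_available()) else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b  # the work being timed
    if device == "mps":
        # GPU work is queued asynchronously; flush it before stopping the clock
        torch.mps.synchronize()
    return (time.perf_counter() - start) / repeats

print(bench_matmul())
```

Comparing the number against the same call forced onto `"cpu"` would give a first, rough answer to "is the M1 GPU path actually faster?".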