It’s tricky to get CUDA to run on Macs, and impossible if your Mac does not have an NVIDIA GPU (which most modern Macs don’t). It might be possible to run fastai without CUDA but doing deep learning on the CPU is not recommended…
@bluesky314 - Can you verify that your Mac has a CUDA-capable GPU?
Try running nvidia-smi in your Terminal to see if you get details on your GPU and CUDA driver.
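As a rough sketch, you can wrap that check so it tells you something either way (on most Macs the command simply won't exist, which is itself the answer):

```shell
# Quick check: does this machine have an NVIDIA driver installed?
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi   # prints GPU model, driver version and CUDA version
else
  echo "nvidia-smi not found: no NVIDIA driver (and most likely no NVIDIA GPU) on this machine"
fi
```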
Also, if you go to the Apple icon at the top left: About This Mac -> Overview -> System Report -> Graphics/Displays. In all likelihood you'll see an Intel GPU listed there. Intel GPUs are no good for deep learning; CUDA is currently only available on NVIDIA GPUs.
Did you get this error when you ran conda env update -f environment-cpu.yml?
Also, as pointed out in the reply above, running deep learning without a GPU is not fun. So your choices are:
Get a GPU machine
Use your Mac to log in to a cloud GPU machine like Paperspace
Try Google Colab or Crestle or other alternatives.
There are a number of threads out there discussing these topics if you want to explore more.
I work on a Mac and use the environment-cpu.yml file to set up my Anaconda environment. If needed, you can remove pytorch and then install it from source, following the excellent instructions on the pytorch site as well as those described here. You want to install the 0.3.1 version with CPU-only support.
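A sketch of what that looks like, assuming git is installed and your conda environment is already activated (the NO_CUDA=1 flag was how CPU-only builds were selected in the 0.3.x era; check the pytorch README for your exact version):

```shell
# Sketch: replace the conda-installed torch with a CPU-only 0.3.1 built from source.
install_pytorch_cpu() {
  pip uninstall -y torch || true            # drop any torch pulled in by the yml
  git clone --recursive https://github.com/pytorch/pytorch
  cd pytorch
  git checkout v0.3.1
  git submodule update --init --recursive
  NO_CUDA=1 python setup.py install         # NO_CUDA=1 skips the CUDA build entirely
}
# install_pytorch_cpu   # uncomment to run; the build takes a while
```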
You won’t get GPU support, but you can still do a lot of coding and make sure things run on a small sample before needing to push your code to AWS or whatever.
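One simple way to do the small-sample workflow (this helper and its names are my own illustration, not part of fastai): pick a reproducible subset of your inputs so each CPU smoke-test run is fast and repeatable.

```python
import random

def sample_paths(paths, n=200, seed=42):
    """Pick a small, reproducible subset for quick CPU-only smoke tests."""
    rng = random.Random(seed)  # fixed seed -> same subset every run
    return rng.sample(paths, min(n, len(paths)))

# e.g. smoke-test your pipeline on 200 images before renting a GPU:
# small = sample_paths(all_image_paths)
```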
Believe me when I tell you getting things to run on a Mac is a pain and not worth it. I’ve gone down that road and it was maddening.
To expand on @wgpubs a bit: I’m running through the course on a Mac, and the setup is not trivial, but it’s also not so much work that it’s never worth it. Depending on your personal circumstances, it could be worth the hour or so of setup for you – I happen to have a beefy CUDA-capable card and only get to work on the course in short bursts of time, so leaving a GPU instance running somewhere just so I don’t have to start from scratch each time doesn’t thrill me.
For reference, I’m on a Mac Pro 4,1 (firmware upgraded to 5,1) with a GTX 1080 Ti GPU. My CPUs are only occasionally the bottleneck.
Here are the steps I took to get everything working, in case someone else comes across this forum and needs help:
Follow the instructions posted by @wgpubs above to compile pyTorch from source: pyTorch not working with an old NVidia card – if you’ve followed steps 2 and 3 above, it should in fact install pyTorch with GPU support.
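After the build finishes, a quick sanity check from inside your environment confirms whether the GPU build actually took (this is generic torch usage, not anything fastai-specific):

```python
# Verify the from-source build: version string and whether CUDA is usable.
try:
    import torch
    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```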