I have been trying to get the fast.ai library running on my Mac and followed the instructions on the GitHub page. This is what I get in the terminal:
Using Anaconda API: https://api.anaconda.org
Solving environment: failed
@jeremy said Macs are not supported. Is this still true? How can I get the library running locally?
It’s tricky to get CUDA running on Macs, and impossible if your Mac does not have an NVIDIA GPU (which most modern Macs don’t). It might be possible to run fastai without CUDA, but doing deep learning on the CPU is not recommended…
@bluesky314 - Can you verify that your Mac has a CUDA-capable GPU?
Run nvidia-smi in your Terminal to see if you get details on CUDA.
Also, if you go to the Apple icon at the top left: About This Mac -> Overview -> System Report -> Graphics/Displays. In all likelihood, you will see an Intel GPU displayed there. Intel GPUs are no good for deep learning; CUDA is currently only available on NVIDIA GPUs.
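A quick way to check this from the Terminal (a minimal sketch: nvidia-smi is only on your PATH where an NVIDIA driver is installed, so its mere presence is a rough proxy for a CUDA-capable setup):

```shell
# Rough CUDA check: nvidia-smi only exists where an NVIDIA driver is
# installed (not the case on most modern Macs).
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_status="nvidia-smi found: run it to see CUDA driver details"
else
  gpu_status="no nvidia-smi: no NVIDIA driver, so no CUDA"
fi
echo "$gpu_status"
```

If the first branch fires, running nvidia-smi itself will print the driver version and GPU details.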
Did you get this error when you ran this:
conda env update -f environment-cpu.yml
Also, as pointed out in the reply above, running deep learning without a GPU is not fun. So your choices are:
- Get a GPU machine
- Use your Mac to log in to a cloud GPU machine like Paperspace
- Try Google Colab or Crestle or other alternatives.
There are a number of threads out there discussing these topics if you want to explore more.
I believe an external GPU could be another option, but it seems expensive.
I work on a Mac and use the environment-cpu.yml file to set up my Anaconda environment. If needed, you can remove pytorch and then install it from source, following the excellent instructions on the pytorch site as well as those described here. You want to install the 0.3.1 version with CPU-only support.
You won’t get GPU support, but you can still do a lot of coding and verify that things run on a small sample before needing to push your code to AWS or wherever.
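For example, you can cut a small sample tree out of the full training set so CPU-only runs stay fast (a hypothetical sketch, assuming a data/train/&lt;class&gt; layout; the directory names, stand-in files, and counts here are placeholders, not part of the fastai setup):

```shell
# Hypothetical layout: data/train/<class>/*.jpg. Copy a couple of files
# per class into data/sample/ so CPU-only runs finish quickly.
cd "$(mktemp -d)"
mkdir -p data/train/cats data/sample/train/cats
touch data/train/cats/1.jpg data/train/cats/2.jpg data/train/cats/3.jpg  # stand-ins
for f in $(ls data/train/cats | head -2); do
  cp "data/train/cats/$f" "data/sample/train/cats/$f"
done
n_sample=$(ls data/sample/train/cats | wc -l | tr -d ' ')
echo "sample files: $n_sample"
```

Once the sample run works end to end, point the same code at data/train on the cloud machine.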
Believe me when I tell you getting things to run on a Mac is a pain and not worth it. I’ve gone down that road and it was maddening.
Another option: you could use Crestle without the GPU (about 4 cents an hour) to run experiments and then push them onto a GPU machine.
To expand on @wgpubs a bit: I’m running through the course on a Mac, and the setup is not trivial, but it also isn’t so much work that it’s never worth it. Depending on your personal circumstances, it could be worth the hour or so of setup. I happen to have a beefy CUDA-capable card and only get to work on the course in short bursts of time, and leaving a GPU instance running somewhere so I don’t have to start from scratch each time doesn’t thrill me.
For reference, I’m on a Mac Pro 4,1 (firmware upgraded to 5,1) with a GTX 1080 Ti GPU. My CPUs are only occasionally the bottleneck.
Here are the steps I took to get everything working, in case someone else comes across this forum and needs help:
1. Follow the instructions here, using environment-cpu.yml instead of environment.yml in the appropriate place: https://github.com/reshamas/fastai_deeplearn_part1/blob/master/tools/setup_personal_dl_box.md
2. Install the CUDA + cuDNN developer tools from Nvidia.
3. Download and install Xcode v. 9.2 (anything newer won’t work with nvcc, the Nvidia compiler). Use the xcode-select command line tool to make sure this is the default Xcode installation (needed for step 4, but it can be switched back afterwards). You may find this link useful: https://devtalk.nvidia.com/default/topic/1032646/cuda-setup-and-installation/macos-10-13-4-and-xcode-9-3-compatibility-broken-with-cuda-toolkit-9-1/post/5260119/#5260119
4. Follow the instructions posted by @wgpubs above to compile pyTorch from source: pyTorch not working with an old NVidia card – if you’ve followed steps 2 and 3 above, it should in fact install pyTorch with GPU support.
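The xcode-select switch for the Xcode step can look like this (a sketch: the Xcode 9.2 install path below is an assumption, adjust it to wherever you put the older Xcode; the actual switch commands are shown as comments because they need sudo):

```shell
# Assumed install path for the older Xcode; adjust if yours differs.
XCODE92="/Applications/Xcode9.2.app"
# Make Xcode 9.2 the active toolchain before building CUDA code
# (needs sudo; switch back the same way afterwards):
#   sudo xcode-select -s "$XCODE92/Contents/Developer"
#   xcode-select -p   # verify the active developer directory
echo "toolchain to select: $XCODE92/Contents/Developer"
```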
I installed CUDA 9.2 externally and mapped it into the fastai environment, and tested that it is working fine. Here is the detailed link for your reference. Hope this will help you. The only thing is that you have to satisfy the prerequisite of CUDA 9.2.
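Once everything is installed, a quick way to confirm the environment actually sees CUDA (a minimal sketch; it falls back to a message if python or torch isn’t available rather than erroring out):

```shell
# Verify the active environment's torch build sees the GPU.
PY=$(command -v python3 || command -v python || true)
if [ -n "$PY" ] && "$PY" -c "import torch" >/dev/null 2>&1; then
  torch_status=$("$PY" -c "import torch; print('CUDA available:', torch.cuda.is_available())")
else
  torch_status="torch is not installed in this environment"
fi
echo "$torch_status"
```

If this prints "CUDA available: True", the compiled build is using the GPU; "False" means you got a CPU-only build.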