I had the same issue in September 2017. I wanted to keep using my macOS workflows for deep learning, but unfortunately that isn't easy (or even possible) right now. Here is why:
- you need an Nvidia GPU, and Macs ship with AMD GPUs, which are either unsupported or very slow (OpenCL); for something that may eventually match CUDA, see ROCm.
- Thunderbolt external GPUs had driver issues in 2017; this is supposed to be fixed this year.
- Even if you manage to get a GPU connected, you might have issues compiling the deep learning frameworks. For example, TensorFlow dropped macOS GPU support as of version 1.2; PyTorch seems to have better support (a quick way to check is sketched after this list).
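If you do get a GPU attached, a minimal sanity check like the one below tells you whether each framework actually sees it. This is a sketch assuming 2017-era releases (PyTorch 0.2/0.3, TensorFlow 1.x); the TF device listing goes through an internal module, which was the usual way to enumerate devices at the time:

```python
# Quick check: does each framework see a usable GPU?
import torch
print("PyTorch sees CUDA:", torch.cuda.is_available())

import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TF can use; look for entries with device_type == "GPU".
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print("TensorFlow GPU devices:", gpus or "none")
```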
So I was left with the following options:
- build a headless GPU rig - this is what I ended up doing
- use AWS or a service like Floydhub. Floydhub takes no time to set up but is a bit more expensive than AWS
- rent a dedicated server with a GPU (you can get a GTX 1080) on hetzner.com for 99 USD/month + a 99 USD setup fee
- help get eGPUs working well on macOS.
I went for option 1 for the following reasons:
- I'd rather invest once and then worry about whether I'm using the PC enough than weigh the cost before each experiment.
- The cost of running on AWS is huge. My PC can run 4 models at once, each about 3 times faster than a K80 on AWS, so one rig-hour replaces roughly 12 K80-hours. That puts the break-even point at about 600 h (25 days), after which running models on AWS becomes more expensive than building your own PC, assuming electricity is not a major cost factor. Hetzner looks a bit better: the same budget buys about nine months of similar computing power. A rough break-even calculation is sketched below.
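To make the break-even arithmetic explicit, here is a rough sketch. Only the 4-models-at-once and 3x-per-model speedup come from above; the rig cost is a hypothetical placeholder, and the 0.90 USD/h rate is my assumption for a p2.xlarge (one K80) on-demand instance in 2017:

```python
# Rough AWS-vs-own-rig break-even sketch.
# Assumptions (not from the post itself): rig cost and AWS hourly rate.
rig_cost_usd = 6500.0        # hypothetical price of a 4-GPU rig
aws_usd_per_k80_hour = 0.90  # assumed p2.xlarge (1x K80) on-demand rate, 2017

# From the post: 4 models at once, each ~3x faster than a K80,
# so one rig-hour does the work of ~12 K80-hours.
k80_equivalents = 4 * 3

aws_cost_per_rig_hour = k80_equivalents * aws_usd_per_k80_hour
break_even_hours = rig_cost_usd / aws_cost_per_rig_hour

print(f"AWS cost per rig-hour: ${aws_cost_per_rig_hour:.2f}")
print(f"Break-even after {break_even_hours:.0f} h "
      f"(~{break_even_hours / 24:.0f} days of continuous use)")
# -> roughly 600 h (~25 days), matching the figure above
```

With these placeholder numbers the break-even lands near the 600 h mentioned above; plug in your own rig price and instance rate to see how the trade-off shifts.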