[Hardware] AMD GPU support

The AMD Radeon VII GPU has 16 GB of memory and is priced, and performs (in games), at the level of the Nvidia RTX 2080 (8 GB memory).
Has anyone set it up for deep learning, ideally fastai/PyTorch, and is willing to share how to do it and how it performs? Thanks!

Have you tried to Google it? BTW, take a look :wink:


Thank you fabris. Very interesting blog post. Yeah, I found some material online, including the ROCm GitHub repo, showing that you can set it up. But I'm a bit skeptical, because if these GPUs really worked well with Keras and PyTorch, why would anyone buy Nvidia, when with AMD you get more for the same money? Even AMD is not advertising this fact, only a quiet GitHub repo. That's why I'm asking here on the forum whether anyone has run the latest version of fastai with it and how it performed.
Thanks!


I was wondering the same. I suppose AMD is only trying to test the market; the Radeon VII is very expensive to produce.
Anyway, we know that with Keras/TF it is fully supported.
I too have been too hesitant to invest in one just to test it.
Hope someone has done it already…

UPDATE 2:
Maybe this works for you too; it seems to work for my AMD GPU, see here:


UPDATE:
The check described below only confirmed the installation of ROCm; so far I can only run Jupyter notebooks on my CPU, so it seems that my AMD RX 570 8GB won't accelerate fastai computations. OS: Ubuntu MATE 18.04.3.

Hello all,
I tried it on Ubuntu MATE 18.04.3 with an AMD RX 570 8GB. The video card install test of ROCm was successful, meaning that

/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/x86_64/clinfo

in
https://rocm.github.io/ROCmInstall.html
ran successfully. However, I have not gotten around to running Jupyter notebooks or anything from fast.ai with it yet, so I'm currently still learning the ropes in Google Colab…


I got fastai running on my desktop with an AMD RX 6900 XT inside a Docker container with ROCm installed. I followed AMD's instructions here: AMD Documentation - Portal
I started the container with a --net=host argument to pass the notebook server through to my host machine:
docker run --net=host -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 8G rocm/pytorch:latest

That got it up. Once inside, I cloned the fastbook repo and pip installed the requirements. I opened a Python interpreter, imported torch, and checked that torch.cuda.is_available() returned True, so that was a success.
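The availability check above can be sketched as a small helper. This is a generic sketch, not fastai code; it assumes a ROCm (or CUDA) build of PyTorch, where AMD GPUs are reported through the regular torch.cuda API:

```python
import importlib.util


def gpu_status():
    """Report whether PyTorch can see a GPU.

    ROCm builds of PyTorch reuse the CUDA API, so
    torch.cuda.is_available() returns True for AMD GPUs too.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if not torch.cuda.is_available():
        return "no GPU visible to torch"
    # Name of the first visible device, e.g. an RX 6900 XT under ROCm
    return "GPU: " + torch.cuda.get_device_name(0)


print(gpu_status())
```

Running this inside the container should print the device name; on a CPU-only install it degrades to a plain status message instead of raising.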
I ran into a weird issue with the Python installed in the container and the fastai site-packages. When I ran any notebook, it reported a formatting issue in a progress.py function. After some googling, it looks like those f-strings weren't supported in that environment at some point, so I changed the f-strings in /opt/conda/lib/python3.8/site-packages/fastai/callback/progress.py and in fastai's core.py to '{}'.format() calls, and that seemed to work. I don't know if that's an issue other people have run into, though. I was just up late tonight tinkering and thought I could get ROCm PyTorch running on my 6900 XT. It's not nearly as fast as the Kaggle GPUs, but it was a fun project to try.
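The workaround described above, rewriting an f-string as an equivalent str.format() call, looks like this in isolation. The function names and message are hypothetical illustrations, not fastai's actual progress.py code:

```python
def progress_fstring(epoch, loss):
    # Original style: an f-string with an inline format spec
    return f"epoch {epoch}: loss {loss:.4f}"


def progress_format(epoch, loss):
    # '{}'.format() rewrite: same format specs, same output,
    # but works on interpreters that choke on the f-string syntax
    return "epoch {}: loss {:.4f}".format(epoch, loss)


print(progress_format(3, 0.5))  # → epoch 3: loss 0.5000
```

Format specs like :.4f carry over unchanged between the two styles, so the rewrite is mechanical: move each braced expression into positional arguments of format().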
