I wanted to ask a few questions of users who have multiple GPUs in their local servers about how they utilize them. I also wanted to see what the fastai v1/v2 libraries are capable of. I am hoping to keep the “commentary” to a minimum and really just stick to the survey. So, if you are a single-GPU user or a user of cloud options, this thread is probably for your information only.
1. How many GPUs do you have?
2. Are all the GPUs the same make/model?
3. Do you use all the GPUs in a concurrent fashion (multiple experiments/notebooks running on different cards)?
4. Do you use all the GPUs in a “parallel” fashion (one notebook utilizing all the GPUs)?
   4a. Anyone able to do this with different GPU makes/models?
5. Are there any fastai modules which cannot be used in a parallel fashion?
6. Are your GPUs connected via NVLink or some other physical hardware?
7. What OS are you using for your multiple-GPU setup?
   7a. Anyone running fastai within UNRAID via a VM or Docker?
8. What version of the NVIDIA drivers are you currently using?
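For context on the concurrent question above, here is a minimal sketch of how I understand per-notebook GPU pinning to work. It assumes the standard `CUDA_VISIBLE_DEVICES` environment variable, which (as far as I know) must be set before any CUDA library is imported in the process; the `pin_gpu` helper name is just mine for illustration.

```python
import os

def pin_gpu(index: int) -> str:
    # Restrict this process to a single physical GPU. With only one card
    # visible, two notebooks pinned to "0" and "1" can run separate
    # experiments concurrently without stepping on each other.
    # NOTE: set this before importing torch/fastai, or it has no effect.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    return os.environ["CUDA_VISIBLE_DEVICES"]

# e.g. at the top of a notebook intended for the second card:
pin_gpu(1)
```

Curious whether those of you running mixed makes/models do something like this, or whether you rely on per-framework device selection instead.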
Your survey answers will help me with my current and future server configurations.
Thanks in advance to all who participate.