Questions for users of multiple GPUs

I wanted to ask a few questions of users who have multiple GPUs in their local servers about how they utilize them. I also wanted to see what the fastai v1/v2 libraries are capable of. I am hoping to keep the “commentary” to a minimum and really just stick to the survey. So, if you are a single-GPU user or a user of cloud options, this thread is probably for information only.

  1. How many GPUs do you have?
  2. Are all the GPUs the same make/model?
  3. Do you use all the GPUs in a concurrent fashion (multiple experiments/notebooks running on different cards)? (A sketch of pinning one notebook to one card follows this list.)
  4. Do you use all the GPUs in a “parallel” fashion (one notebook utilizing all the GPUs)? (Also sketched below.)
    4a. Has anyone been able to do this with different GPU makes/models?
  5. Are there any fastai modules which cannot be used in a parallel fashion?
  6. Are your GPUs connected via NVLink or some other physical interconnect?
  7. What OS are you using for your multiple GPU setup?
    7a. Is anyone running fastai within UNRAID via a VM or Docker?
  8. What version of the NVIDIA drivers are you currently using?
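
For question 3, here is a minimal sketch of how a notebook can be pinned to a single card so the remaining GPUs stay free for other experiments. The GPU index and the fastai v2 import are just placeholders for illustration:

```python
# Pin this notebook to one GPU before torch/fastai are imported.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # example index; pick any free card

import torch
from fastai.vision.all import *  # fastai v2 style import (assumed)

print(torch.cuda.device_count())  # should now report a single visible device
# Any Learner created in this notebook will train on that card only,
# leaving the other GPUs available to separate notebooks.
```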
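
For question 4, here is a rough sketch of one notebook driving all visible GPUs by wrapping the Learner's model in PyTorch's nn.DataParallel. The MNIST_SAMPLE dataset and resnet18 are stand-ins, and fastai also ships its own helpers in fastai.distributed, so treat this as one possible approach rather than the recommended one:

```python
import torch
import torch.nn as nn
from fastai.vision.all import *

# Small placeholder dataset/model purely for illustration.
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet18, metrics=accuracy)

# Replicate the model across all visible GPUs for data-parallel training.
if torch.cuda.device_count() > 1:
    learn.model = nn.DataParallel(learn.model)

learn.fit_one_cycle(1)
# Note: some callbacks (e.g. model saving/export) may expect the unwrapped
# model, reachable via learn.model.module when DataParallel is in use.
```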

Your survey answers will help me with my current and future server configurations.

Thanks in advance to all who participate.
-FMB