Due to the large size of my dataset, I need to run my model on multiple GPUs (2-4).
I have tried:
model = torch.nn.DataParallel(model, device_ids=[0, 3])
without success.
What should I do?
You can follow this tutorial to run your model on multiple GPUs: https://docs.fast.ai/distributed.html
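One thing worth checking first: `device_ids=[0, 3]` only works if your machine actually exposes a GPU with index 3 (i.e. at least 4 GPUs), and the wrapped model must be moved to the first device in the list. A minimal sketch, using a toy `nn.Linear` model as a stand-in for yours and falling back to CPU when no GPUs are available:

```python
import torch
import torch.nn as nn

# Toy model for illustration; substitute your own model here.
model = nn.Linear(10, 2)

n_gpus = torch.cuda.device_count()
if n_gpus > 1:
    # Use the GPUs actually present; device_ids=[0, 3] raises an error
    # unless GPU index 3 exists on this machine.
    model = nn.DataParallel(model, device_ids=list(range(n_gpus)))
    model = model.cuda()  # parameters must live on device_ids[0]
    batch = torch.randn(4, 10).cuda()
else:
    # Single-GPU or CPU-only machine: run the model unwrapped.
    batch = torch.randn(4, 10)

out = model(batch)
print(tuple(out.shape))
```

Note that `DataParallel` splits each batch across the listed devices, so very small batches gain little; for true multi-process scaling, `DistributedDataParallel` (covered by the fastai tutorial above) is the recommended route.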
Thank you!!