Distributed CPU: Has anyone used torch.distributed?

Hi Jeremy/All

I’m working on using PyTorch in a cluster/multi-core environment with no GPU access. I’ll be testing the code on all the CPU cores of my machine. The PyTorch tutorials only have one or two examples of this. Can anyone point me to some best practices or further examples? Thank you

Link: http://pytorch.org/tutorials/intermediate/dist_tuto.html
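Not a full answer, but here is a minimal CPU-only sketch along the lines of that tutorial: it spawns two processes on one machine, initializes the process group with the `gloo` backend (the usual choice when there are no GPUs), and does an all-reduce. The address/port values and the `world_size` of 2 are arbitrary choices for this example, not anything required by the API.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def run(rank, world_size):
    # Each process contributes rank + 1; all_reduce sums the tensors in place
    # across every process, so all ranks end up with the same total.
    t = torch.ones(1) * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    expected = world_size * (world_size + 1) / 2  # 1 + 2 + ... + world_size
    assert t.item() == expected


def init_process(rank, world_size, fn):
    # Rendezvous info for the default (env://) init method; localhost and
    # port 29500 are just example values.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    fn(rank, world_size)
    dist.destroy_process_group()


def main(world_size=2):
    # mp.spawn calls init_process(rank, world_size, run) once per process
    # and raises if any child fails, so reaching the return means success.
    mp.spawn(init_process, args=(world_size, run), nprocs=world_size, join=True)
    return True


if __name__ == "__main__":
    main()
```

On a real cluster you’d point `MASTER_ADDR`/`MASTER_PORT` at one node and launch one process per node (or per core) with the right `rank`; within a single process, `torch.set_num_threads` controls how many CPU cores each worker uses.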