Thank you @ilovescience for the link.
I’ve published an example of code that runs perfectly with DDP (Distributed Data Parallel) in a terminal on my machine with 2 NVIDIA V100 GPUs (32 GB each), thanks to the fastai v2 distributed training code.
See it in my guide about Data Parallel (DP) and Distributed Data Parallel (DDP) training in PyTorch and fastai v2.
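For reference, here is a minimal sketch of what that kind of DDP setup looks like with fastai v2’s distributed API (the script name and the PETS dataset are just placeholders for illustration, not the actual code from my guide):

```python
# train_ddp.py -- minimal sketch of fastai v2 DDP training (illustrative only)
from fastai.vision.all import *
from fastai.distributed import *

# rank0_first makes only the master process download the data; the others wait
path = rank0_first(lambda: untar_data(URLs.PETS))

dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/'images'),
    pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224), bs=64)

learn = cnn_learner(dls, resnet34, metrics=accuracy)

# distrib_ctx wraps the model in PyTorch's DistributedDataParallel
with learn.distrib_ctx():
    learn.fine_tune(1)
```

You then launch it from a terminal with `python -m fastai.launch train_ddp.py`, which spawns one training process per visible GPU.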
However, when I tried to do the same thing with the Transformers tutorial (notebook 39_tutorial.transformers.ipynb) from @sgugger, it has not worked so far. Any suggestions to make it work? (see my post)