The current state of multi-GPU training in PyTorch and TF

My question is quite simple:

Is there anything in TF and PyTorch that still necessarily has to be done on a single GPU?

This question arises because I am replacing a single 1080 Ti with two 2070s. I'm worried about memory-hungry models that would have to be trained on a single GPU and would not fit into the smaller memory of a 2070 (8 GB, vs. the 1080 Ti's 11 GB).
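
As I understand it, the standard multi-GPU setup in PyTorch is data parallelism, which replicates the whole model on every GPU, so per-GPU memory is the binding constraint. A minimal sketch of what I mean (the toy model and shapes here are made up purely for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical toy model, just for illustration; any nn.Module behaves the same way.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)

if torch.cuda.device_count() > 1:
    # Data parallelism: each forward pass splits the batch across GPUs,
    # but a FULL replica of the model (weights, gradients, and the
    # activations for its shard of the batch) lives on every GPU.
    # So each 2070 must fit the entire model, just as the 1080 Ti did.
    model = nn.DataParallel(model)

model = model.cuda()
x = torch.randn(64, 1024).cuda()  # a batch of 64 is split 32/32 across two GPUs
out = model(x)
```

If that picture is right, anything that barely fit in 11 GB would now be off the table, unless some form of model parallelism or other memory-saving trick applies.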

Furthermore, do you foresee any other potential drawbacks?

Thanks.