How hard is it to parallelize model training across a Titan X Pascal (2016) and a Titan XP (2017)?

My home deep learning box currently has a 2016 Titan X Pascal. I’m debating getting a Titan XP (2017 release), which has the same onboard memory but a few feature improvements. However, since they’re not exactly the same model - just very similar - I’m not sure how easy they would be to parallelize compared to just buying another 2016 Titan X Pascal.

Any advice on this?

Related question: same as above, but for parallelizing a Titan X Pascal (2016) and a 1080 Ti. According to this blog post (possibly outdated), it looks like CNTK, Torch, and PyTorch have built-in support for multiple GPUs, but I’m not sure whether that extends to working across different models with different memory capacities.
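One practical wrinkle with mixed cards: data-parallel training in frameworks like PyTorch typically splits each batch roughly evenly across GPUs, so the 11 GB 1080 Ti would cap the per-GPU batch size even though the Titan X Pascal has 12 GB. As a rough sketch (the helper name and memory figures are my assumptions, not from any framework API), you could split a global batch proportionally to memory instead:

```python
# Hypothetical helper: split a global batch across GPUs in proportion to
# their memory, so the smaller card isn't forced to take an even share.
# Memory figures assumed: Titan X Pascal = 12 GB, GTX 1080 Ti = 11 GB.

def split_batch(global_batch, mem_per_gpu):
    """Return per-GPU batch sizes roughly proportional to GPU memory."""
    total = sum(mem_per_gpu)
    sizes = [global_batch * m // total for m in mem_per_gpu]
    # Hand any rounding remainder to the GPU with the most memory.
    sizes[mem_per_gpu.index(max(mem_per_gpu))] += global_batch - sum(sizes)
    return sizes

print(split_batch(128, [12, 11]))  # Titan X Pascal + 1080 Ti -> [67, 61]
```

This is only a sketch of the bookkeeping; whether a given framework lets you feed uneven chunks to each replica is a separate question worth checking in its docs.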

You can see spec comparisons here:


MXNet might be an option if you’re using Keras.