A promising-looking new paper came out today on arXiv.
This article is about improving the training process of WGANs. After multiple experiments, the authors noticed that the main instabilities in WGAN training were due to weight clipping. In the original WGAN paper, weight clipping is used to enforce the Lipschitz constraint that the theoretical results require. In this new paper, the authors propose an alternative way to do it (penalizing the norm of the gradient of the critic with respect to its input), and it appears to work much better.
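To give a rough idea of the gradient-penalty term, here is a minimal numerical sketch. It is not the paper's implementation: I use a hypothetical *linear* critic (so its input gradient is just its weight vector and no autodiff is needed), and the names `critic`, `gradient_penalty`, and the coefficient `lam` are my own. The sampling of points on straight lines between real and fake samples follows the general scheme described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear critic f(x) = w . x; its gradient w.r.t. x is w.
w = rng.normal(size=4)

def critic(x):
    return x @ w

def gradient_penalty(real, fake, lam=10.0):
    # Sample points on straight lines between real and fake samples.
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    # For this linear critic, grad_x f(x_hat) = w at every x_hat;
    # a real critic would need autodiff here.
    grads = np.tile(w, (x_hat.shape[0], 1))
    norms = np.linalg.norm(grads, axis=1)
    # Penalize deviation of the gradient norm from 1 (the Lipschitz target).
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 4))
fake = rng.normal(size=(8, 4))
gp = gradient_penalty(real, fake)
```

In an actual WGAN training loop this penalty would be added to the critic's loss, replacing the weight-clipping step entirely.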