The idea is inspired by the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation" (https://arxiv.org/abs/1710.10196).
The idea is as follows. Take any network (say ResNet-18) and train it until it starts to overfit. Then add a few more convolution layers (padded so the spatial size is preserved) connected through residual mappings, and train again, repeating the process. Since each new block is residual, the grown network can represent everything the previous one could, so at any point the model should be at least as good as the previous model. I'd expect this to keep increasing accuracy, though I'm not sure.
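To make the "at least as good" property concrete, here is a minimal PyTorch sketch of the growing step. It is my own toy illustration, not the paper's method: `GrowingNet`, `grow`, and all hyperparameters are made up for this example. One assumption worth flagging: I zero-initialize the last conv of each new block, so the freshly added block is exactly an identity mapping at insertion time and the network's function is unchanged until further training updates it.

```python
import torch
import torch.nn as nn

class GrowingNet(nn.Module):
    """Toy sketch: a conv stem plus a growable stack of residual blocks.
    Each new block's final conv is zero-initialized (an assumption of this
    sketch), so adding it leaves the network's function unchanged."""

    def __init__(self, channels=16, num_classes=10):
        super().__init__()
        self.channels = channels
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList()              # grows over time
        self.head = nn.Linear(channels, num_classes)

    def grow(self):
        """Append one residual block; padding keeps the spatial size."""
        block = nn.Sequential(
            nn.Conv2d(self.channels, self.channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(self.channels, self.channels, 3, padding=1),
        )
        nn.init.zeros_(block[-1].weight)           # identity at insertion
        nn.init.zeros_(block[-1].bias)
        self.blocks.append(block)

    def forward(self, x):
        x = self.stem(x)
        for block in self.blocks:
            x = x + block(x)                       # residual mapping
        x = x.mean(dim=(2, 3))                     # global average pool
        return self.head(x)

model = GrowingNet()
x = torch.randn(2, 3, 8, 8)
before = model(x)
model.grow()                                       # "train, then grow" step
after = model(x)
# With zero-init, growing preserves the function exactly
assert torch.allclose(before, after, atol=1e-6)
```

Without the zero (or otherwise identity-preserving) initialization, the new block would add noise to the features at insertion, so the grown model could transiently be worse than the previous one; training would have to recover first.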
I wanted to know whether this has already been addressed somewhere, or if I am re-inventing the wheel.