A new idea for regularization


Some time ago I read an excellent article, How to trick a neural network into thinking a panda is a vulture, by Julia Evans.

I have been thinking about pushing the idea a little further to help the network generalize better to unseen data. Namely, if there are N categories, we could use the technique described in the article to generate new (adversarially perturbed) images, keep their original labels, and add them to the training set to prevent the network from overfitting.
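The perturbation step the article relies on is essentially the fast gradient sign method (FGSM): nudge each input in the direction of the sign of the loss gradient with respect to that input, while keeping the true label. As a minimal self-contained sketch (using a hypothetical linear softmax classifier in place of a trained network, so the gradient can be written out by hand in NumPy):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_augment(x, y, W, b, eps=0.1):
    """Return an adversarially perturbed copy of input x (FGSM).

    x      : flattened input vector with values in [0, 1]
    y      : true class index (the label we keep for training)
    W, b   : weights of a linear softmax classifier, standing in
             for a trained network (illustrative assumption)
    eps    : perturbation size

    For a deep network, grad_x would come from an extra backward
    pass through the whole model; here it is computed analytically.
    """
    p = softmax(W @ x + b)       # predicted class probabilities
    dz = p.copy()
    dz[y] -= 1.0                 # d(cross-entropy loss)/d(logits)
    grad_x = W.T @ dz            # d(loss)/d(input)
    # Step in the direction that increases the loss, then clip back
    # to the valid pixel range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

The augmented training set would then be the original pairs (x, y) plus the perturbed pairs (fgsm_augment(x, y, W, b), y); the label is deliberately unchanged, since the point is to teach the network that the perturbed image still belongs to the same category.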

I know this is fairly similar to standard augmentations such as changing brightness, hue, or saturation; still, I am curious whether anyone has used it before. It also has an additional advantage: the two approaches compose, so we could first apply the standard augmentations and then generate adversarial versions of the augmented images.

The main drawback of this technique is that producing each new image requires an extra backward pass through the network (to get the gradient of the loss with respect to the input), which is computationally expensive.

I am curious about your opinions on this idea. Has anyone ever tried it before?