Interesting Paper: ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness


In this paper, the authors train a ResNet-50 on ImageNet, but alongside the original images they also train on style-transferred versions and a few other variants. With this quite extensive data augmentation they seem to achieve state-of-the-art results.
More impressively, the network seems to learn shapes rather than relying on texture alone.

I am very curious to see what the consequences are for transfer learning!
We all know that a pretrained ResNet can fairly easily be used to learn image segmentation, even though it has a very weak understanding of shapes. Intuitively, a ResNet with a more solid understanding of shape should be able to produce better results.
VGG is still the go-to solution when it comes to style transfer. What happens if we train a VGG with a more robust understanding of shapes? Is it still usable for style transfer, or does that break it?

This is where I learned about the paper:
Two Minute Papers:


Yes, this paper is very interesting; it would probably be helpful to use networks pretrained in this way as a starting point for transfer learning.

I also wonder if style transfer could then be used as a type of data augmentation. Maybe an opportunity to contribute to fastai? :slightly_smiling_face:
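The augmentation idea could be wired up roughly like this: with some probability, swap each training image for a version rendered in a random painting style, much as the paper does with Stylized-ImageNet. Everything below is a hypothetical sketch (not fastai API), and `stylize` is a placeholder that just blends pixel values so the snippet runs without a real style-transfer model.

```python
import random

def stylize(image, style):
    # Placeholder for a real style-transfer model: blend
    # pixel values 50/50 so the sketch stays self-contained.
    return [(a + b) / 2 for a, b in zip(image, style)]

def style_augment(image, styles, p=0.5, rng=random):
    # With probability p, replace the image with a version
    # rendered in a randomly chosen style; otherwise keep it.
    if rng.random() < p:
        return stylize(image, rng.choice(styles))
    return image

# Toy "images" as flat pixel lists, just to exercise the logic.
rng = random.Random(0)
image = [0.0, 0.2, 0.4, 0.6]
styles = [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]]
augmented = style_augment(image, styles, p=1.0, rng=rng)
print(augmented)
```

In a real pipeline, `stylize` would be an AdaIN-style network and this would plug in as an item transform, but the control flow would look much the same.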

That’s for sure an opportunity, though an extremely time-consuming one. My computer is clearly going to be very busy with this sort of problem for the foreseeable future :slight_smile: