Does anybody know whether using the Fourier-transformed version of images (i.e. Fourier images) is "a thing" in any kind of image-based deep learning application, for example image classification?
I was randomly wondering whether training on Fourier-transformed images is perhaps easier for a neural network than training on the original images. For humans it's certainly not easy to recognise anything in Fourier images, but perhaps for neural networks that's different.
I couldn’t find a whole lot of information on it, perhaps because it’s a dumb idea …
Some initial thoughts:
- transfer learning probably doesn't work well from a network trained on ordinary (non-Fourier) images to an application using Fourier images, since the two representations are so different
- Fourier images consist of an amplitude and a phase component, which roughly doubles the size of the input (although for real-valued images the spectrum is conjugate-symmetric, so no extra information is actually added)
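For concreteness, here is a minimal sketch (using numpy, which I'm assuming for illustration) of how an image could be converted into a two-channel amplitude/phase representation before feeding it to a network. The function name `fourier_features` and the log-scaling of the amplitude are my own choices, not an established recipe:

```python
import numpy as np

def fourier_features(image):
    """Turn a grayscale image of shape (H, W) into a 2-channel
    Fourier representation: log-amplitude and phase."""
    f = np.fft.fftshift(np.fft.fft2(image))   # center the zero-frequency component
    amplitude = np.log1p(np.abs(f))           # log scale tames the huge dynamic range
    phase = np.angle(f)                       # angles in [-pi, pi]
    return np.stack([amplitude, phase], axis=0)  # shape (2, H, W), like a 2-channel image

# Toy usage with a random 32x32 "image"
rng = np.random.default_rng(0)
img = rng.random((32, 32)).astype(np.float32)
feats = fourier_features(img)
print(feats.shape)  # (2, 32, 32)
```

Since the transform is invertible (the original image can be recovered from amplitude and phase), no information is lost, only re-encoded, which is exactly why it's unclear a priori whether a network finds this representation easier or harder.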