Will applying image filters improve the results?

Hey guys,
thank you for such a great course…

I’m wondering, will it make any improvement if I apply some image filters (such as a black-and-white filter, Sobel, etc.) to the data, then train the model on those photos?


It probably won’t, because a convolutional layer in a CNN is basically a bunch of filters, and those filters are optimized for the task you train on, while a Sobel filter may not be. If a Sobel-like feature is important, the network will learn it on its own.

Using a black-and-white filter can have the benefit of reducing the number of channels (1 instead of 3), but whether that helps will depend on the task, I think.
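To make the point concrete: a Sobel filter is just one fixed 3×3 kernel, exactly the shape of kernel a conv layer learns — the only difference is that the conv layer’s values come from training. A rough NumPy sketch (the `conv2d` helper is just for illustration):

```python
import numpy as np

# Sobel-x kernel: a hand-designed edge detector. A conv layer's kernels
# have exactly this shape, but their values are learned from data.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, as a conv layer computes it."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A vertical edge: left half dark, right half bright.
img = np.zeros((5, 6), dtype=np.float32)
img[:, 3:] = 1.0

edges = conv2d(img, sobel_x)  # strong response at the edge, zero elsewhere
```

So if an edge map like this is useful for the task, the network can learn these weights (or better ones) by itself.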

thank you for your reply…

please have a look here: https://www.sciencedirect.com/science/article/pii/S1319157810000054#b0105

On this subject I’ve looked into using the PyTorch image transformations (https://pytorch.org/docs/stable/torchvision/transforms.html) during image augmentation in fastai. The issue I’ve run into is that fastai transformations work with OpenCV images, while the PyTorch transformations work with PIL images. Is there a simple way to unify these?
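For what it’s worth, the workaround I had in mind was converting back and forth between the two image types — a rough sketch, assuming Pillow and NumPy are available (the helper names are made up):

```python
import numpy as np
from PIL import Image

def cv2_to_pil(img_bgr):
    """OpenCV images are BGR numpy arrays; PIL expects RGB."""
    rgb = np.ascontiguousarray(img_bgr[:, :, ::-1])
    return Image.fromarray(rgb)

def pil_to_cv2(img_pil):
    """Back to a BGR numpy array in OpenCV's channel order."""
    return np.asarray(img_pil)[:, :, ::-1].copy()

def wrap_pil_transform(tfm):
    """Wrap a PIL-based (e.g. torchvision) transform so it can be
    applied to an OpenCV-style image."""
    def wrapped(img_bgr):
        return pil_to_cv2(tfm(cv2_to_pil(img_bgr)))
    return wrapped
```

The round trip copies the array twice, so it wouldn’t be free, but it would at least let the two transform families coexist.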

Hi Karl !

Jeremy initially used PIL for image-based operations, but he found that OpenCV was faster than PIL and PIL-SIMD (if I recollect correctly). Since OpenCV was more efficient, he moved the code base over to OpenCV-based transforms.
Reason: fastai uses image transforms extensively, and he was focused on using the faster alternative.

You can find more info on the computational efficiency discussion here: Link

BTW, is there any reason you’re looking to unify them? Just curious, because OpenCV by itself seems comprehensive and can handle all image-based operations.

Hope it helps!
~Gokkul

Thanks for the reply Gokkul.

The specific use case I was looking at was image classification on a dataset where some images were black and white while others were color. I wanted to remove that variable by making everything black and white, but still keep the images as 3-channel images to use with a pretrained ImageNet model. The PyTorch grayscale transformation does exactly that, so I tried to use it as a transformation in a fastai model. That’s when I found out that PyTorch uses PIL images while fastai works with OpenCV. I tried to write a similar function in OpenCV, but the images came out distorted in ways I didn’t intend (it also doesn’t help that I have zero experience using OpenCV).
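For reference, here’s roughly what I was trying to reproduce — a sketch in plain NumPy, using the standard BT.601 luminance weights (the same ones OpenCV’s `COLOR_BGR2GRAY` uses; `grayscale_3ch` is just an illustrative name):

```python
import numpy as np

# ITU-R BT.601 luminance weights, in OpenCV's B, G, R channel order.
BGR_WEIGHTS = np.array([0.114, 0.587, 0.299])

def grayscale_3ch(img_bgr):
    """Grayscale the image but keep 3 identical channels, so a
    pretrained 3-channel ImageNet model can still consume it."""
    gray = (img_bgr.astype(np.float32) @ BGR_WEIGHTS).round()
    gray = gray.clip(0, 255).astype(np.uint8)
    return np.repeat(gray[:, :, None], 3, axis=2)
```

With OpenCV itself, the equivalent would be `cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)` followed by `cv2.merge` of three copies of the result.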

In more general terms I thought it would be a nice option to have the flexibility to use Pytorch transformations in a Fastai context.

@Asim This article is from the pre-deep-learning era. It’s actually a good example of the difference between the two paradigms: all the work they do to hand-craft features from the images is nowadays done automatically by deep neural nets.