Cats & dogs based on colours: what about the human brain?

Lesson 1 mentions that we will differentiate between cats and dogs based on the colour of each pixel that represents a cat or a dog. I know this might sound far-fetched, but what if we have a cat and a dog with exactly the same colours? In essence, two pictures, one of a cat and one of a dog, that have the same RGB pixel values. How will the algorithm work then?

The reason I am asking is that I believe (I am not an expert by any means) the human brain uses not only colours but also the features of a cat/dog to differentiate between them.

Can someone please elaborate?

Sure! You could build an algorithm that uses only color information. That would be like computing color histograms and feeding them to your model.

And you are correct. You could have two pictures of different things with very similar histograms, so using histograms alone probably wouldn’t work very well. Convolutional neural nets do use color to classify images, but they are also very good at detecting edges, as well as higher-order features built from combinations of edges or other higher-order features.
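To make the histogram point concrete, here is a minimal sketch (using only NumPy; the function names are my own, not from the course). It builds a per-channel color histogram for two synthetic "images" that contain exactly the same pixels, just spatially shuffled. The histograms come out identical, which is exactly why a histogram-only classifier can't see shape:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Turn an H x W x 3 RGB image into one normalized histogram
    per channel, concatenated into a single feature vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means the histograms are identical."""
    return np.minimum(h1, h2).sum()

# Two synthetic "images": the same pixels, randomly shuffled.
# The spatial layout (i.e. the shape content) is completely different,
# but the color distribution is identical.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(32, 32, 3))
img_b = img_a.reshape(-1, 3)[rng.permutation(32 * 32)].reshape(32, 32, 3)

sim = histogram_intersection(color_histogram(img_a), color_histogram(img_b))
print(sim)  # 1.0 -- same colors, totally different spatial layout
```

So on histograms alone, these two "images" are indistinguishable, even though one could depict a cat-shaped region and the other pure noise.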

You are also correct about the human brain. Here is a seminal paper on receptive fields in the cat’s visual cortex.

But it is unlikely that the cat and the dog look the same in terms of size, edges, shape, color, etc.

@Partha: You are right, and that is the point I am trying to make. If the implementation used only colours, then there would be some likelihood, however low, that the classification would fail in some cases. I reckon that would happen for images that are very close in colour, for example a Bengal cat vs. a tiger.

@erikg Thanks for the contribution. What are your thoughts on our classification failing for “Bengal cats” vs. “tigers” based on colours alone? Maybe I will try it.

But it does not only use colors. It’s true that each individual pixel only represents a color, but these kinds of neural networks look at the relationships between pixels, and it’s those relationships that contain information about the shapes of the things in the image.

(Naturally, if the only input to the neural network is the pixels from the image, it can only draw conclusions based on the image. If you have an image of a cat and one of a dog that are nearly identical at the pixel level, to the extent that even a human would find it hard to tell which is which without extra information, then it will also be hard for the neural network to say whether the animal in the picture is a cat or a dog.)
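The "relationships between pixels" idea can be sketched with a single hand-written convolution (a toy illustration, not what a CNN actually learns; in a real network the kernel weights are learned from data). A Sobel-style kernel responds only to left-right *differences* between neighboring pixels, so it fires at an edge even though every pixel on its own is just a plain color value:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A tiny image containing a vertical edge: left half dark, right half bright.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel-style kernel: large response where pixel values change left-to-right,
# zero response in flat regions.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(img, sobel_x)
print(edges[0])  # [0. 4. 4. 0.] -- peaks exactly at the dark/bright boundary
```

Flat dark and flat bright regions both give 0; only the boundary between them lights up. Shape information lives in those neighborhoods, not in any individual pixel.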


@machinethink Thanks for the explanation, it helps. Probably when I advance and learn more, I will be able to delve deeper into the relationships between pixels that determine shapes.

This is true. Thanks for the post!


Two buildings can be made of the same kinds of atoms, differing only in amount, yet you can still tell them apart. Why? Because there are layers of structure between atoms and buildings.