Hi guys, I came across this remark: "Weights*pixels is always equal to 0 when the pixels are equal to 0; because of this situation we need a bias." I don't understand the proper context of this remark. Can anybody help me by elaborating on the need for a bias in the prediction function?
We assign a weight to each pixel so that the model can learn with gradient descent, and we multiply each pixel by its weight to get the activation.
Now suppose a pixel in the image is zero. Since any number multiplied by zero is zero, weight * pixel gives an activation of zero no matter what the weight is. This means every zero-valued pixel always contributes an activation of zero, which is undesirable: gradient descent cannot change the output for those pixels by adjusting the weight alone.
Therefore, each neuron also has a bias that it adds:
weight * pixel becomes
weight * pixel + bias
So that even for a zero pixel, we still get a nonzero activation that gradient descent can adjust through the bias.
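To make this concrete, here's a minimal NumPy sketch (the pixel values, weights, and bias below are made up just for illustration). It shows that an all-zero input always produces an activation of exactly zero without a bias, while the bias term gives the neuron an output it can still shift:

```python
import numpy as np

# Hypothetical values, just for illustration
pixels = np.array([0.0, 0.0, 0.5, 1.0])  # some pixels are zero
weights = np.array([0.2, -0.4, 0.3, 0.1])
bias = 0.1

# Without a bias: zero pixels contribute nothing to the sum,
# and an all-zero image always yields an activation of exactly 0,
# no matter what the weights are.
activation_no_bias = (weights * pixels).sum()

# With a bias: the activation is shifted by a learnable constant,
# so even an all-zero image produces a nonzero, trainable output.
activation_with_bias = (weights * pixels).sum() + bias

all_zero = np.zeros(4)
print((weights * all_zero).sum())         # prints 0.0
print((weights * all_zero).sum() + bias)  # prints 0.1
```

In frameworks like PyTorch this is why layers such as a linear layer include a bias term by default: the bias lets the neuron's output float away from zero even when the weighted sum of inputs is zero.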