Can somebody explain convolution and max pooling?

Hey there guys. I think this question might be a little annoying for more experienced people, but here goes:

I came here after watching @jeremy’s video for Lesson #0. I had some doubts about convolution and max pooling, though I did understand optimisation.

I understood optimisation in this way:
Multiplication -> a*b. Squaring -> a**2.
Optimisation -> a is the input, and a randomly initialised variable x (the weight :thinking:??) is operated on with it to give an output b. Over some training iterations, we get close enough to b, as shown in the animation in the video mentioned.
(I could be wrong though.)
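
If I had to write what I mean as code, I imagine something like this (a toy sketch with made-up numbers, just my mental model):

```python
# Toy version of the optimisation loop I have in mind
# (hypothetical numbers; the real lesson uses more data and SGD).
a = 3.0   # input
b = 12.0  # target output we want a * x to reach
x = 0.5   # randomly initialised weight

lr = 0.01  # learning rate
for step in range(200):
    pred = a * x               # current prediction
    loss = (pred - b) ** 2     # squared error between prediction and target
    grad = 2 * (pred - b) * a  # d(loss)/dx, worked out by hand
    x -= lr * grad             # nudge the weight against the gradient

print(x)  # ends up near 4.0, since 3.0 * 4.0 == 12.0
```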

Can somebody explain convolution and max pooling in this way? It’s easier for me to understand.

Both convolutions and max pooling are explained in great detail (using Excel!) at the beginning of part 1 (lec #2 or maybe lec #3, can’t recall).
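
In the meantime, here is a rough NumPy sketch of what the two operations do (toy 4x4 input and 2x2 filter with made-up values; the lecture walks through the same kind of arithmetic cell by cell):

```python
import numpy as np

# Toy 4x4 "image" and a 2x2 filter (values are made up)
img = np.array([[1., 2., 0., 1.],
                [0., 1., 3., 1.],
                [2., 1., 0., 0.],
                [1., 0., 1., 2.]])
kernel = np.array([[ 1., 0.],
                   [-1., 1.]])

# Convolution: slide the filter over the image; at each position,
# multiply element-wise and sum. With no padding and stride 1,
# a 4x4 input and a 2x2 filter give a 3x3 output.
conv = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        conv[i, j] = (img[i:i+2, j:j+2] * kernel).sum()

# Max pooling: cut the input into non-overlapping 2x2 windows
# and keep only the largest value in each, giving a 2x2 output.
pool = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        pool[i, j] = img[2*i:2*i+2, 2*j:2*j+2].max()

print(conv)
print(pool)
```

(Strictly speaking this is cross-correlation, the filter is not flipped, but that is what deep learning libraries call convolution anyway.)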

Optimization is a process in which we alter the weights, the parameters of our model, to bring the predictions closer to the ground truth labels in our training set. If our model is linear, of the form y = ax + b, then x is the training data, y are the predictions, and a and b are the parameters we can optimize.
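
A quick sketch of that, with toy data I made up and plain gradient descent:

```python
import numpy as np

# Hypothetical training set generated from y = 2x + 1
x = np.array([0., 1., 2., 3., 4.])
y = 2 * x + 1

a, b = 0.0, 0.0  # the parameters we optimize
lr = 0.05        # learning rate

for step in range(500):
    pred = a * x + b               # model predictions
    err = pred - y
    grad_a = 2 * (err * x).mean()  # d(MSE)/da
    grad_b = 2 * err.mean()        # d(MSE)/db
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # close to 2 and 1
```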


@radek thanks for the insights! Off to watch lectures #2 and #3.