Hey there. I think this question might be a little annoying for more experienced people, but here goes:

I came here after watching @jeremy’s video for Lesson #0. I have some doubts about convolution and max pooling, but I did understand optimisation.

I understood optimisation in this way:

Multiplication -> a*b. Squaring -> a**2 (a**b would be raising a to the power b).

Optimisation -> a is the input, and there is a randomly initialised variable x (the weight??). The two are operated on to produce an output, and over some training iterations the output gets close enough to the target b, as shown in the animation in the video mentioned.

(I could be wrong though.)
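To check my own understanding, here is a minimal sketch of that mental model (the numbers and names are my own, not from the lesson): start with a random-ish weight x and nudge it with gradient descent until a * x lands on b.

```python
# Learn a weight x so that a * x gets close to b (toy gradient descent).
a, b = 3.0, 12.0
x = 0.5            # arbitrary starting weight
lr = 0.01          # learning rate

for _ in range(200):
    pred = a * x
    grad = 2 * a * (pred - b)   # derivative of the squared error (pred - b)**2
    x -= lr * grad              # step the weight against the gradient

print(round(x, 3))  # ends up close to b / a = 4.0
```

Is this roughly what the animation in the video is showing?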

Can somebody explain convolution and max pooling in the same way? That kind of framing is much easier for me to understand.