Hey there guys. I think this question might be a little annoying for more experienced people, but here goes:
This is how I understand optimisation:
Multiplication -> a*b. Squaring -> a**2.
Optimisation -> a is the input and x is a randomly initialised variable (the weight??); they are combined to produce an output. Over some training steps, that output gets close enough to b, as shown in the animation in the video mentioned.
(I could be wrong though.)
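If it helps, here's my understanding sketched as code (the numbers, learning rate, and update rule are just my own made-up example, not from the video):

```python
# Sketch of my mental model: learn a weight x so that a * x ≈ b,
# nudging x a little each training step (gradient descent on squared error).
a, b = 3.0, 12.0      # input and target output (made-up numbers)
x = 0.5               # the weight, starting from a rough initial guess
lr = 0.01             # learning rate: how big each nudge is

for step in range(200):
    pred = a * x                  # forward pass: current output
    grad = 2 * (pred - b) * a     # gradient of (pred - b)**2 with respect to x
    x -= lr * grad                # nudge x to reduce the error

print(x)  # ends up close to 4.0, since 3 * 4 = 12
```

Is that roughly the right picture?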
Can somebody explain convolution and max pooling in the same way? It's a bit easier for me to understand like that.