Lesson 3 In-Class Discussion

Hi Stathis. I think we’ve answered all these before, so have a search for the previous threads, and if you still have questions, feel free to follow up there.


Hi all.

FWIW, I found an equivalent extension for Firefox, cliget. It does the same thing for Firefox that CurlWget does for Chrome.


Hi all: I have a very fundamental question about how CNNs work.

I fully understand the training process: take a bunch of images, start with random filters, convolve, activate, calculate the loss, backpropagate, and learn the weights.
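In rough PyTorch terms (just my own sketch with illustrative layer sizes, not the lesson notebooks), my mental model of that loop is:

```python
import torch
import torch.nn as nn

# Minimal sketch of the loop described above (illustrative sizes):
# convolve, activate, calculate loss, backpropagate, learn the weights.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # filters start random
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # fully connected classifier
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    logits = model(images)          # convolve + activate through every layer
    loss = loss_fn(logits, labels)  # calculate loss
    loss.backward()                 # backpropagate
    optimizer.step()                # learn (update) the weights
    return loss.item()
```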

But once training is done, the last convolution layer has the most complex and complete filters, ones that get activated by full features like faces, ears, and wheels.

If that is so, during testing do we need to pass our images through all the layers again? Why don't we convolve our images against just the last convolution layer, see how many of these complex feature filters get activated, and then pass that on to the fully connected layer for classification?

For one thing, the sizes will differ between your input test image and what the last layer expects. Also, the test image needs to go through all the layers again so that the network can gradually build up its feature representation, starting from Gabor-like filters, before the final layer can detect anything.
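To make the size point concrete, here's a toy sketch (made-up layer sizes, not the course model): the last conv layer expects deep feature maps with many channels, not a raw 3-channel image, so it simply cannot consume your test image directly.

```python
import torch
import torch.nn as nn

# Toy example with made-up sizes: the "last" conv layer expects 256-channel
# feature maps produced by the earlier layers, not a raw RGB image.
last_conv = nn.Conv2d(256, 512, kernel_size=3, padding=1)

image = torch.randn(1, 3, 224, 224)      # a raw 3-channel test image
try:
    last_conv(image)                     # channel mismatch: 3 != 256
except RuntimeError as e:
    print("Cannot skip the earlier layers:", e)

features = torch.randn(1, 256, 14, 14)   # the kind of input the earlier layers would produce
print(last_conv(features).shape)         # torch.Size([1, 512, 14, 14])
```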

Thanks. I intuitively get the size issue, but can't we handle it in other ways than going through tens of layers? The Gabor filters are a different story, though. I agree there.

@Renga

Posted it on Stack…

Here’s the main gist (I meant that, though):

Each layer is a function of the previous layer. Ignoring the details of convolution, a neural network is essentially a composition of multiple functions (let’s call them f, g, h, i, j for example) so that:

y = j(i(h(g(f(x)))))

You are essentially asking: can you just do y = j(x) instead of running all those functions in sequence? And the answer is no.
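As a toy sketch (hypothetical layer shapes, just to illustrate), the weights of j are only defined on the output of i, so j(x) isn't even a well-formed operation:

```python
import torch
import torch.nn as nn

# y = j(i(h(g(f(x))))): each layer's weights are shaped for the previous
# layer's output, so the last one cannot be applied directly to x.
f = nn.Conv2d(3, 16, kernel_size=3, padding=1)
g = nn.Conv2d(16, 32, kernel_size=3, padding=1)
h = nn.Conv2d(32, 64, kernel_size=3, padding=1)
i = nn.Conv2d(64, 128, kernel_size=3, padding=1)
j = nn.Linear(128, 10)

x = torch.randn(1, 3, 32, 32)
features = i(h(g(f(x)))).mean(dim=(2, 3))  # run the full composition, then pool to a vector
y = j(features)                            # only now does j's input make sense
print(y.shape)                             # torch.Size([1, 10])
# j(x) would fail: j expects a 128-dimensional vector, not a raw image tensor.
```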

Basically, whatever we do in ML/DL is, after all, nothing but a mathematical operation…

Thanks. I now understand the visualisation thing better.