He can reply here.
Do you need dropout if you use batchnorm?
They do different things. Dropout is a regularization technique. BatchNorm has some regularization component but mostly allows for faster convergence.
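(If it helps to see it concretely, here's a minimal PyTorch sketch of both in one block; the layer sizes are just made up:)
```python
import torch
import torch.nn as nn

# BatchNorm normalizes activations per mini-batch, which stabilizes and speeds
# up training; Dropout randomly zeroes activations to regularize. They are
# often used together in the same block:
block = nn.Sequential(
    nn.Linear(512, 256),
    nn.BatchNorm1d(256),   # mainly faster/stabler convergence
    nn.ReLU(),
    nn.Dropout(p=0.5),     # mainly regularization
)

x = torch.randn(32, 512)   # batch of 32
block.train()              # both layers behave differently in train vs eval mode
print(block(x).shape)      # torch.Size([32, 256])
```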
Did Jeremy answer this?
He said to normalize anyway.
Why are they called bottleneck layers?
For high-resolution images, what sort of ResNet architecture is useful: a higher or a lower number of blocks?
It is more about how much data you have. More layers means more parameters.
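(Quick way to see the parameter cost of going deeper, assuming a recent torchvision where `weights=None` replaced `pretrained=False`; weights are random since we only count parameters:)
```python
import torchvision.models as models

# More layers means more parameters: compare shallow vs deep ResNets.
for ctor in (models.resnet18, models.resnet34, models.resnet50):
    model = ctor(weights=None)  # random init; we only need the shapes
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{ctor.__name__}: {n_params / 1e6:.1f}M parameters")
```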
@jeremy can you please end part 1 by spending 5 minutes talking about your journey into deep learning? It will give us a learning model to follow.
Also, can you please talk about how you keep up with the large volume of research coming out? How do you sift through it to find what really matters to practitioners? Any workflow you would recommend?
What’s CAM again?
Class Activation Maps
Is that like looking for something on an X-Ray?
This is exactly what I wanted to know like 1 week ago! lol
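(Since CAM came up: a minimal sketch of the idea, assuming a classifier that ends in global average pooling like ResNet. You weight the last conv feature maps by the classifier weights for one class; the model and input here are stand-ins, not a trained setup:)
```python
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()  # pretend it's trained

feats = {}
def hook(module, inp, out):
    feats["maps"] = out  # (1, 2048, 7, 7) feature maps before pooling
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # stand-in image
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM: weighted sum of feature maps, weights from the final linear layer
w = model.fc.weight[cls]                              # (2048,)
cam = (w[:, None, None] * feats["maps"][0]).sum(0)    # (7, 7) heatmap
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # upsample to image size and overlay to see where the model looked
```
So yes, for an X-ray it would highlight which regions drove the prediction.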
how do we attend part 2? will there be an application process again?
Everybody!
Yes. Like p1v2
yeah, I think we’re all in LOL
I want to attend Part 2 v2