@zaoyang It's not so much loss as that the data is compressed. Deep learning is about reducing the noise in the data and extracting its essential characteristics.
There is a lot of loss happening in the max pooling layer. This is something people don't like, especially Hinton (one of the godfathers of the field). New research is coming out to address that.
But we keep using them because it just works in practice.
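To make the "loss" concrete: a minimal NumPy sketch of 2x2 max pooling (toy example, not fast.ai library code) shows that three of every four activations are simply thrown away:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2: keeps only the largest value
    in each 2x2 window, discarding the other three values."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]], dtype=float)
print(max_pool_2x2(x))  # [[4. 8.] [9. 7.]]
```

The 4x4 input becomes 2x2: only the max of each window survives, which is exactly the information loss being discussed.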
Can we please pick our own seats next time? I can't see from where my group is placed.
It's actually weighted. It's the 3x3 subset coming from your layer, weighted by your filter. It's the weighted sum. (Was that your question?)
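A quick sketch of that weighted sum, assuming a single-channel input and a 3x3 filter (the specific kernel below is just an illustrative edge filter, not one from the spreadsheet):

```python
import numpy as np

def conv_at(image, kernel, i, j):
    """One output activation: the 3x3 patch of the input starting at
    (i, j), elementwise-multiplied by the 3x3 filter and summed --
    i.e. a weighted sum of the patch, with the filter as the weights."""
    patch = image[i:i+3, j:j+3]
    return float((patch * kernel).sum())

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)  # Laplacian-style edge filter
print(conv_at(image, kernel, 1, 1))  # 0.0 (a linear ramp has no edges)
```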
Regarding the spreadsheet - I have a different set of values (see below) in filters 1/1 and 2/1, and then in the conv2 layer, I guess the spreadsheet might be out of sync?
Any good resources for the fully convolutional layers?
@jeremy you've briefly mentioned that you used a network pre-trained on 3-channel input on a 4-channel input (RGB+NIR), by adding an additional dimension to all pre-trained filters (initialized to 0 or perhaps random). Did you then "unfreeze" these additional new filter weights along with the ones for the RGB channels and train them?
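The weight-expansion step described above can be sketched in NumPy (the 64-filter, 7x7 shapes are assumptions for illustration, not the actual network from the lecture):

```python
import numpy as np

# Hypothetical pretrained first conv layer: 64 filters, 3 input
# channels (RGB), 7x7 kernels.
pretrained = np.random.randn(64, 3, 7, 7).astype(np.float32)

# Add a 4th input-channel slice (e.g. NIR), initialized to zeros so
# the expanded layer initially behaves exactly like the RGB-only one.
nir_slice = np.zeros((64, 1, 7, 7), dtype=np.float32)
expanded = np.concatenate([pretrained, nir_slice], axis=1)
print(expanded.shape)  # (64, 4, 7, 7)
```

With zero init, the NIR channel contributes nothing until training updates those weights, which is presumably why unfreezing them is the interesting question.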
Still muted… @jeremy
…Is it just me? Who is the TA I should @?
Weâre getting sound
perhaps just you @loldja, I can hear him
Hm ok thanks guys.
Got it. I feel like my video breaks every time we go on break, lol
Okay, but there are a lot of layers and they all "learn" different things. If you are pruning out non-relevant data for one layer, it could be essential for another layer. So I guess the question is: how can each layer learn "essential data" independently when the data flows through the layers in a dependent way?
That's what SGD is for; the model is learning what is important to produce a good result.
Where is the planet data?
If you're like me and can't remember what SGD means, it's stochastic gradient descent.
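For anyone who wants to see the "stochastic" part, a minimal toy sketch (not fast.ai code): fit y = w*x by stepping the weight against the gradient of the squared error on one random sample at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x                       # true weight is 3
w, lr = 0.0, 0.1
for _ in range(200):
    i = rng.integers(len(x))      # one random sample -> "stochastic"
    grad = 2 * (w * x[i] - y[i]) * x[i]   # d/dw of (w*x - y)^2
    w -= lr * grad
print(w)  # close to 3.0
```

Each step uses a single example's gradient rather than the whole dataset's, which is what distinguishes SGD from plain gradient descent.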
Is there a way to specify that the cloudy/clear labels use a softmax, while the cover labels use a different activation?
Does anyone remember (or can we get a reminder of) the commands within IPython to see a function's input parameters and to see its source? I was looking for that part in the previous lectures and couldn't find it.
Shift + Tab will give the input parameters, ??command_name will give you the source, and Shift + Tab three times will give you the documentation.
what is the activation function used for multi label classification?
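For reference, the standard choice for multi-label classification is a per-label sigmoid (each label gets an independent probability), rather than a softmax, which forces labels to compete. A minimal sketch:

```python
import numpy as np

def sigmoid(z):
    """Per-label sigmoid: each output is an independent probability,
    so several labels can be 'on' at once (unlike softmax)."""
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.0, -1.0, 0.5])   # one score per label
probs = sigmoid(logits)
preds = probs > 0.5                   # threshold each label independently
print(preds)  # [ True False  True]
```

Here two labels are predicted at once, which a softmax (whose outputs sum to 1) would discourage.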
My image_models notebook is a little different from what Jeremy is explaining now. I did a git pull but it shows the same as I had before.