The last 10 minutes were a break, so you weren’t missing stuff during that time.
Yes, it's back up for me.
I don't have any echo.
Everything you say repeats twice. I think it's everyone in the room.
It's clear now … video and audio are good for me.
Is anyone else hearing an echo? Is it of Jeremy?
no echo for me
OK, just me then, no biggie.
Can you explain what a deconv layer is doing and how it works?
In the case of fast style transfer, at the end of training we will end up with a single set of weights which is "good" at stylizing an image. Should we expect that single set of weights to be "worse" than optimizing a single image (à la Gatys et al.)?
Do you have the youtube video running in multiple browser windows?
yea, that was the problem. I figured it out. Thanks!
Deconv paper here:
http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf
We will see a deconvolution in an excel spreadsheet in a moment 
Regarding the notebooks on GitHub: I personally would love to keep them there. Things will change frequently, but at least with GitHub we have a chance of keeping up. Students with bug fixes can update them for all to benefit. Otherwise we will all be on different notebooks all over the place.
Where do these architectures come from? Is it just experimentation?
Is this TensorFlow backend?
Shouldn't BatchNormalization use axis=-1 (i.e., channels)?
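For context on the question above: in Keras, BatchNormalization's `axis` argument is the feature axis that is kept, so with a channels-last (TensorFlow) backend, `axis=-1` computes statistics per channel over the batch and spatial dimensions. A minimal numpy sketch of that computation (function name is illustrative, not from the course notebooks):

```python
import numpy as np

def batchnorm_channels_last(x, eps=1e-5):
    # x: (batch, height, width, channels)
    # Per-channel mean/variance over all other axes — this is what
    # Keras BatchNormalization(axis=-1) computes with a channels-last backend
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

After this, each channel of the output has roughly zero mean and unit variance (ignoring the learned scale and shift that the real layer also applies).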
What would the impact be if the resultant matrix from the convolution were placed somewhere other than the center, and we then did the same steps?
So a deconvolution is exactly the same operation as a convolution, just with added zero padding?
In electronics, deconvolution means "get back what you did using convolution."
@sakiran, if it's not in the center, the filter would be multiplied by all zeros for some entries, which can't give the correct result.
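To make the "deconvolution is a convolution with padding" point concrete: a transposed convolution (the upsampling "deconv" layer) over a 1-D signal is the same multiply-and-add as a regular convolution, just applied to a zero-padded input with the flipped kernel. A minimal numpy sketch (function names are illustrative, not from the lecture notebooks):

```python
import numpy as np

def conv_valid(x, k):
    # "valid" cross-correlation: slide k over x with no padding
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def conv_transpose(y, k):
    # transposed convolution = "full" convolution with the flipped kernel:
    # pad y with len(k)-1 zeros on each side, then valid conv with k reversed
    p = len(k) - 1
    return conv_valid(np.pad(y, p), k[::-1])

x = np.array([1., 2., 3., 4., 5.])
k = np.array([1., 0., -1.])
y = conv_valid(x, k)        # length 3
x_up = conv_transpose(y, k)  # back to length 5
```

Writing the valid convolution as a matrix W, `conv_transpose` is exactly multiplication by W.T, which is why the output returns to the original length.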