Is this “keep the last 10 or 20 approximate hessians”?
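That sounds like the limited-memory part of L-BFGS: only the most recent curvature pairs are kept to build the approximate Hessian. A minimal sketch, assuming PyTorch's torch.optim.LBFGS, whose history_size argument controls how many pairs are stored (the loss below is just a placeholder for a real style/content loss):

```python
import torch

# Hypothetical setup: optimize an image tensor with L-BFGS,
# keeping only the last 20 curvature pairs for the Hessian approximation.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.LBFGS([img], history_size=20, max_iter=20)

def closure():
    optimizer.zero_grad()
    loss = (img ** 2).mean()  # stand-in for the actual loss
    loss.backward()
    return loss

optimizer.step(closure)
```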
Is there any work on pre-training networks for generic style learning?
Just thinking of how some of this semi-relates to Feature Visualization from distill.pub
Is @ matrix multiplication or dot product?
Did we skip the part where the VGG model is actually trained on the painting? I feel like we skipped straight to extracting the layers of interest.
This uses the pretrained VGG model. Probably no fine-tuning needed, since we’re using the features extracted by some layer in the middle of the network.
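A minimal sketch of that idea, assuming torchvision's pretrained VGG16: freeze the weights (no fine-tuning on the painting) and just read activations off a middle layer with a forward hook. The layer index and input image here are made up for illustration:

```python
import torch
from torchvision import models

# Load the pretrained VGG feature extractor and freeze it -- no fine-tuning.
vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Capture activations from a mid-network layer via a forward hook.
activations = {}
def hook(module, inp, out):
    activations['feat'] = out

layer_idx = 15                      # assumed: some layer in the middle of the network
vgg[layer_idx].register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224)   # placeholder image batch
vgg(img)
print(activations['feat'].shape)    # the features used for the style/content loss
```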
@ is matrix multiplication (new in Python 3.5, via PEP 465)
I am enjoying today’s lesson a lot
I’m really glad about this too. For anyone who might have missed it, I started a thread in the forums a couple of days back specifically for an open discussion on this subject, so welcome to anyone who wants to join in or share insights there.
I first came to know about it in your computational algebra course.
Please ask him to talk about multi-GPU training.
What was the name of the paper that style-transferred Captain America’s shield?
It’s matrix multiplication. It’s a pure Python operator. But dot product and matrix multiply are the same, no?
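A quick check of where they agree and where they don’t, using NumPy (the arrays are just illustrative):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(6).reshape(3, 2)

# For 2-D arrays, @ (matrix multiplication) and np.dot give the same result.
assert np.array_equal(A @ B, np.dot(A, B))

# For 1-D arrays, @ is the inner (dot) product, returning a scalar.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
assert (v @ w) == np.dot(v, w) == 32.0

# They diverge for arrays with more than 2 dimensions:
# @ (np.matmul) broadcasts over batch dimensions, np.dot does not.
X = np.ones((4, 2, 3))
Y = np.ones((4, 3, 5))
print((X @ Y).shape)        # (4, 2, 5)
print(np.dot(X, Y).shape)   # (4, 2, 4, 5)
```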
Deep Painterly Harmonization: https://arxiv.org/abs/1804.03189
I CANNOT get enough of Jeremy’s frustration with math notation. I would watch a friggin’ Netflix special about that.
Can we make this happen?
Is there a name for the type of operation where you just multiply each pair without adding, like this:
[[a, b], [c, d]] * [[e, f], [g, h]] = [[a*e, b*f], [c*g, d*h]]
There is a convention (especially loved by physicists) called Einstein summation. NumPy can do it: https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html
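For example (illustrative numbers only), both the element-wise product from the question above and ordinary matrix multiplication can be written as einsum expressions:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# Element-wise (Hadamard) product: repeat the indices, nothing is summed.
hadamard = np.einsum('ij,ij->ij', A, B)
assert np.array_equal(hadamard, A * B)

# Matrix multiplication: the repeated index j is summed over.
matmul = np.einsum('ij,jk->ik', A, B)
assert np.array_equal(matmul, A @ B)
```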
I guess it is the Hadamard product.
Element-wise multiplication. And yeah, that’s just * in PyTorch/NumPy/TensorFlow.
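Reproducing the example above in PyTorch (arbitrary numbers), to show the same convention: * is element-wise, @ is matrix multiplication.

```python
import torch

M = torch.tensor([[1., 2.], [3., 4.]])
N = torch.tensor([[5., 6.], [7., 8.]])

print(M * N)   # element-wise (Hadamard) product: [[ 5., 12.], [21., 32.]]
print(M @ N)   # matrix multiplication:           [[19., 22.], [43., 50.]]
```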