Yes, in the first post of this thread.
Ah, found it at the bottom. I guess I was expecting it to be appended.
@jeremy For those who are enrolled for the in-person course but who weren’t able to make the first class in person (i.e. me), will we be able to join a group still, or are there enough of us who were absent the first class to make a new group? Thanks!
Something small that I noticed is that the style image and the content image should be the same size. So you can crop your style image to match the dimensions of your content image before you get to the style transfer section of the notebook.
I don't think the style image and the content image always have to be the same size.
You can use any style image you want (no matter its size) to style the desired content image (subject to your machine's capacity).
Yes, of course. You can use the style and content images of your choice. While running the style transfer notebook, I noticed that the content image is cropped to the style image's dimensions before calling the content model. Specifically, this line of code:
content_targ = K.variable(content_model.predict(src))
This gave me an input_shape mismatch error when I ran the notebook in default fashion. I eventually fixed it by first resizing the style image to match my content image and then following the steps. The resizing done in the notebook is perhaps hardcoded for the images that Jeremy had used earlier.
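The fix described above can be sketched roughly like this. The file names and the helper are hypothetical, and I'm using a minimal nearest-neighbour resize in plain NumPy just to keep the example self-contained (in a real notebook you'd use PIL or similar for better interpolation):

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) array.
    Minimal sketch; real code should use a proper image library."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows][:, cols]

# Stand-ins for real photos loaded from disk:
content = np.zeros((300, 400, 3), dtype=np.uint8)  # hypothetical content image
style = np.zeros((512, 512, 3), dtype=np.uint8)    # hypothetical style image

# Resize style to the content image's dimensions before building
# the style/content targets, so the input shapes agree.
style = resize_nn(style, *content.shape[:2])
print(style.shape)  # (300, 400, 3)
```

After this, both images have the same spatial dimensions, so `content_model.predict(src)` and the style targets no longer disagree on `input_shape`.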
I’m sure any group would be delighted to have you! There was at least one group with just 5 members, IIRC; we’ll figure it out on Monday.
@jeremy Will there be groups for international fellows? My understanding is that you said you would let us know, but just making sure.
Yes, I provided a spreadsheet for group allocation in the international fellows thread. Please add your details there.
Thanks, great stuff!
I find it a little difficult to understand the numpy broadcasting used in the notebook. Starting to read the link shared by @kelvin: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
The way I like to approach this is:
- read the method description
- hypothesize what will happen when you do the operation
- then write code to validate the hypothesis
and go through the above loop until you understand the mechanics.
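The loop above might look something like this for broadcasting (a made-up example, not from the notebook):

```python
import numpy as np

# Hypothesis: adding a (3,) vector to a (2, 3) matrix broadcasts
# the vector across every row of the matrix.
a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
b = np.array([10, 20, 30])

result = a + b
print(result)
# [[10 21 32]
#  [13 24 35]]

# Validate the hypothesis explicitly:
expected = np.array([[10, 21, 32], [13, 24, 35]])
assert (result == expected).all()
```

If the assertion fails, the hypothesis was wrong; go back to the docs and refine it.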
Great! Thanks so much! Excited for Monday!
Ugh … I think I got this weird notion that expanding the dimensions had something to do with aligning for broadcasting. Tried a few examples and it's clear, at least for now.
Your “weird notion” sounds right to me… If one tensor has fewer dims than the other, unit axes are prepended to the one with fewer dims.
back to testing
+1, I was absent in the first class as well.
I tried to write a blog. It was more of a hello world attempt. https://medium.com/@renjithmadhavan/deep-learning-spicy-indian-chicken-curry-and-a-little-vangogh-4bd8751c4e68#.s3bwz27w1
But then again, a start is a start
- posting here as I initially posted it in the assignment thread.
@renjithmadhavan For the example in the medium article, how many iterations did you run it for? And did you make any changes to the loss function?