Semantic Style Transfer

There is a Python implementation of the Matting Laplacian: https://github.com/martinbenson/deep-photo-styletransfer/blob/master/gen_laplacian/mattinglaplacian.py
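For anyone who wants a feel for what that file computes, here's a minimal NumPy/SciPy sketch of the closed-form matting Laplacian from Levin et al. (the construction the Deep Photo Style Transfer regularizer is built on). This is my own simplified reading of the standard formula, not that repo's exact code:

```python
import numpy as np
import scipy.sparse


def matting_laplacian(img, eps=1e-7, r=1):
    """Closed-form matting Laplacian (Levin et al.'s colour-line model).

    img: float array in [0, 1] with shape (H, W, 3).
    Returns a sparse (H*W, H*W) matrix L; alpha.T @ L @ alpha penalises
    mattes that break the local colour-line assumption.
    """
    h, w, c = img.shape
    win_size = (2 * r + 1) ** 2
    indices = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            win_idx = indices[y - r:y + r + 1, x - r:x + r + 1].ravel()
            win = img[y - r:y + r + 1, x - r:x + r + 1].reshape(win_size, c)
            mu = win.mean(axis=0)
            cov = win.T @ win / win_size - np.outer(mu, mu)
            inv = np.linalg.inv(cov + (eps / win_size) * np.eye(c))
            d = win - mu
            # w_ij = delta_ij - (1 + d_i^T inv d_j) / |window|
            vij = np.eye(win_size) - (1.0 + d @ inv @ d.T) / win_size
            rows.append(np.repeat(win_idx, win_size))
            cols.append(np.tile(win_idx, win_size))
            vals.append(vij.ravel())
    return scipy.sparse.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w)).tocsr()
```

By construction each row sums to zero (a constant matte costs nothing) and the matrix is symmetric, which is a quick sanity check if you port it elsewhere.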

Or are you asking about a faster implementation?


Thank you @ibarinov, this is fantastic!

I honestly don’t understand what they’re doing there yet. If you think you’ve got a handle on it, maybe we can do a Skype call or some other communication and you can walk me through your understanding of it. Is two-stage optimization just iterating back and forth between the two loss functions? That seems like a funny way to do it.
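In case it helps the discussion: one plausible reading of "two-stage" (and this may not be what the paper actually means) is not alternating back and forth, but optimizing the first loss alone, then restarting from that result with both losses combined. A toy sketch with made-up quadratic losses standing in for the real style loss and regularizer:

```python
# Toy stand-ins for the two loss gradients; in the real setting these
# would be the style-transfer loss and the photorealism regularizer.
def loss_a_grad(x):
    return 2 * (x - 2.0)  # pulls x toward 2


def loss_b_grad(x):
    return 2 * x          # pulls x toward 0


def two_stage(x0, lr=0.1, steps=200, lam=0.5):
    # Stage 1: minimise loss A by itself.
    x = x0
    for _ in range(steps):
        x -= lr * loss_a_grad(x)
    # Stage 2: re-optimise from the stage-1 result with both terms.
    for _ in range(steps):
        x -= lr * (loss_a_grad(x) + lam * loss_b_grad(x))
    return x
```

The point of the first stage would just be to give the second, regularized problem a good initialization.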

That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. Probably more work than we can pull off on Friday, especially since I’ll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.

I’m going to look at the code and see if I can figure out what they’re doing from that perspective and then map it into the paper to build my understanding. Hopefully that’ll solve the questions I have about two stage optimization as well.

Thank you, I just released a new version today with speed improvements and 4 new geometric primitives. The app is based on an algorithm published in 2008, https://rogerjohansson.blog/2008/12/07/genetic-programming-evolution-of-mona-lisa/, and reproduced on the desktop by https://github.com/fogleman/primitive in 2016.

I’d like to try running a NN on a mobile device, and I’m excited by Deep Photo Style Transfer. It reminds me of a paper, Style Transfer for Headshot Portraits (https://people.csail.mit.edu/yichangshih/portrait_web/), also by Adobe people.


Do you plan to work on Deep Style tomorrow?

I’m working on segmentation tomorrow.

I want to get to the style work. However, there are more interesting things to look at wrt segmentation right now.


@xinxin.li.seattle and I will be looking into it, although we’re both distance students (Seattle and Vancouver, Canada respectively). Are you on the Slack channel that @brendan set up?

[quote="Even, post:44, topic:2179, full:true"]
That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. Probably more work than we can pull off on Friday, especially since I’ll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.
[/quote]I’ve taken @kelvin’s advice to focus on getting the code working today, but I’m still debugging; hopefully the code owner can help resolve the issue sometime tomorrow.

I think it’s a great idea to study the paper and start thinking about implementing the photorealism regularizer. If the implementation is more work than we can pull off, at least we’ll get a sense of the scope of the work.
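For when we get to it: as I read the paper, the photorealism regularizer penalises the output image with the matting Laplacian of the *input* photo (sum over channels of V_c^T M V_c), and its gradient is closed-form. A hedged sketch of just that term, assuming you already have a sparse or dense (H*W, H*W) Laplacian from somewhere:

```python
import numpy as np


def photorealism_penalty(output, L):
    """Sum_c V_c^T L V_c, where V_c is channel c of `output` flattened
    to a vector and L is the matting Laplacian of the input photo.

    Returns the scalar penalty and its gradient per channel,
    which is simply 2 * L @ V_c.
    """
    hw = L.shape[0]
    n_channels = output.shape[-1]
    loss = 0.0
    grads = np.empty((n_channels, hw))
    for c in range(n_channels):
        v = output[..., c].reshape(hw)
        Lv = L @ v
        loss += v @ Lv
        grads[c] = 2.0 * Lv
    return loss, grads
```

In a gradient-based optimizer you’d reshape `grads` back to the image and add it (scaled by the regularizer weight) to the style-loss gradient. The matrix `L` here is whatever Laplacian you plug in; I’m not claiming this matches the authors’ exact code.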

The other thing is that the paper mentions supplementary material; the author told me that they are working on the release, but there is no set date. They seem quite responsive and friendly, so they might be a resource if we have trouble understanding the paper.

Deep Image Matting, paper by Adobe, 10 Mar 2017.

NVIDIA’s post about it.

Do you have any ideas on how to make automatic trimaps from the source and style pictures? There are many matting implementations, but all I found require a trimap.
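One cheap option, assuming you already have a binary segmentation mask (e.g. from a segmentation net): erode the mask for definite foreground, dilate it for definite background, and mark the band in between as unknown. A sketch with scipy.ndimage (the function name and the 0/128/255 encoding are my own choices, not from any of those matting repos):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion


def mask_to_trimap(mask, unknown_width=10):
    """Turn a binary foreground mask into a trimap.

    Pixels well inside the mask  -> definite foreground (255),
    pixels well outside the mask -> definite background (0),
    a band around the boundary   -> unknown (128).
    """
    mask = mask.astype(bool)
    struct = np.ones((3, 3), dtype=bool)
    fg = binary_erosion(mask, structure=struct, iterations=unknown_width)
    bg = ~binary_dilation(mask, structure=struct, iterations=unknown_width)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255
    trimap[bg] = 0
    return trimap
```

`unknown_width` controls how wide the uncertain band is; you’d tune it to how sloppy the segmentation boundary tends to be.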


Good article describing FB’s new AR app. Looks like we were on the right track!

"The AML team created a green-screen effect that could pick out a person’s body and put all sorts of backgrounds behind it live in camera. It built filters that automatically identified common objects that might appear in images and created specialized effects for more than 100 of them."

They must have seen your hackathon presentation!

Awesome blog post on a company that implemented image segmentation with SqueezeNet in their iOS photo editing app:

"Deep Learning for Photo Editing" @codingdivision https://blog.photoeditorsdk.com/deep-learning-for-photo-editing-943bdf9765e1


@brendan et al. Have you guys been successful in porting Deep Photo Style Transfer to Keras? Would love to know if I can join you guys and contribute to it.


Yeah, everyone is using the trimap approach. Correct me if I’m wrong, but with trimap approaches we still need some user input for a very clean segmentation. Is there any way we can automate the trimap generation?