Semantic Style Transfer


(Igor Barinov) #42

There is a Python implementation of the Matting Laplacian: https://github.com/martinbenson/deep-photo-styletransfer/blob/master/gen_laplacian/mattinglaplacian.py

Or are you asking about a faster implementation?
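
For anyone who prefers to read it in NumPy, here is a rough sketch of the closed-form matting Laplacian (Levin et al.) that the linked file computes; this is my own simplification with the usual 3×3 window and epsilon defaults, not the repo’s code, so treat it as illustrative:

```python
# Sketch of the closed-form matting Laplacian (Levin et al. 2008).
# Simplified: loops in pure Python over interior 3x3 windows only.
import numpy as np
import scipy.sparse as sp

def matting_laplacian(img, eps=1e-7, win_rad=1):
    """img: HxWx3 float image in [0, 1]. Returns a sparse (H*W, H*W) Laplacian."""
    h, w, c = img.shape
    win_size = (2 * win_rad + 1) ** 2
    indices = np.arange(h * w).reshape(h, w)

    rows, cols, vals = [], [], []
    for y in range(win_rad, h - win_rad):
        for x in range(win_rad, w - win_rad):
            # colours and flat indices of the pixels in this local window
            win = img[y - win_rad:y + win_rad + 1,
                      x - win_rad:x + win_rad + 1].reshape(-1, c)
            win_idx = indices[y - win_rad:y + win_rad + 1,
                              x - win_rad:x + win_rad + 1].ravel()

            mu = win.mean(axis=0)
            cov = win.T @ win / win_size - np.outer(mu, mu)
            inv = np.linalg.inv(cov + (eps / win_size) * np.eye(c))

            d = win - mu
            # pairwise affinities between pixels inside the window
            A = (1.0 + d @ inv @ d.T) / win_size
            L_win = np.eye(win_size) - A           # this window's Laplacian block

            rows.append(np.repeat(win_idx, win_size))
            cols.append(np.tile(win_idx, win_size))
            vals.append(L_win.ravel())

    rows, cols, vals = map(np.concatenate, (rows, cols, vals))
    # duplicate (row, col) entries are summed when converting to CSR
    return sp.coo_matrix((vals, (rows, cols)), shape=(h * w, h * w)).tocsr()
```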


(Xinxin) #43

Thank you @ibarinov, this is fantastic!


(Even Oldridge) #44

I honestly don’t understand what they’re doing there yet. If you think you’ve got a handle on it, maybe we can do a Skype call or some other communication and you can walk me through your understanding of it. Is two-stage optimization just iterating back and forth between the two loss functions? That seems like a funny way to do it.

That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. It’s probably more work than we can pull off on Friday, especially since I’ll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.

I’m going to look at the code and see if I can figure out what they’re doing from that perspective, and then map it back into the paper to build my understanding. Hopefully that’ll also answer the questions I have about two-stage optimization.
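
For what it’s worth, my current reading is that the paper’s photorealism regularizer just penalizes the stylized output through the matting Laplacian built from the input photo, summed over the colour channels. A rough NumPy sketch of that term and its gradient (my own names, reusing a `matting_laplacian` like the one linked above, not the authors’ code):

```python
# Hedged sketch: quadratic penalty v^T M v per colour channel, where M is the
# matting Laplacian of the *input* photo and v is a flattened channel of the
# stylized output O.
import numpy as np

def photorealism_loss_and_grad(output, M):
    """output: HxWx3 stylized image, M: sparse (H*W, H*W) matting Laplacian.
    Returns the scalar penalty and its gradient w.r.t. the output pixels."""
    h, w, c = output.shape
    loss = 0.0
    grad = np.zeros_like(output)
    for ch in range(c):
        v = output[..., ch].reshape(-1)            # flatten one colour channel
        Mv = M @ v
        loss += v @ Mv                             # quadratic form v^T M v
        grad[..., ch] = (2.0 * Mv).reshape(h, w)   # d(v^T M v)/dv = 2 M v (M symmetric)
    return loss, grad
```

If that reading is right, locally affine colour transforms of the input are cheap and anything else gets penalized, which would explain why it keeps results photographic.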


(Igor Barinov) #45

Thank you! I just released a new version today with speed improvements and 4 new geometric primitives. The app is based on an algorithm published in 2008 (https://rogerjohansson.blog/2008/12/07/genetic-programming-evolution-of-mona-lisa/) and reproduced on the desktop by https://github.com/fogleman/primitive in 2016.

I’d like to try running a NN on a mobile device and am excited by Deep Photo Style Transfer. It reminds me of the paper Style Transfer for Headshot Portraits (https://people.csail.mit.edu/yichangshih/portrait_web/), also by Adobe people.


(Igor Barinov) #46

Do you plan to work on Deep Style tomorrow?


(kelvin) #47

I’m working on segmentation tomorrow.

I want to get to the style work. However, there are more interesting things to look at wrt segmentation right now.


(Even Oldridge) #48

@xinxin.li.seattle and I will be looking into it, although we’re both distance students (Seattle and Vancouver, Canada respectively). Are you on the Slack channel that @brendan set up?


(Xinxin) #49

[quote=“Even, post:44, topic:2179, full:true”]
That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. It’s probably more work than we can pull off on Friday, especially since I’ll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.
[/quote]

I’ve taken @kelvin’s advice to focus on getting the code working today, but I’m still debugging; hopefully the code owner can help resolve the issue sometime tomorrow.

I think it’s a great idea to study the paper and start thinking about implementing the photorealism regularizer. Even if the implementation is more work than we can pull off, at least we’ll get a sense of the scope of the work.

The other thing is that the paper mentions supplementary material; the authors told me they are working on the release, but there is no set date. They seem quite responsive and friendly, so they might be a resource if we have trouble understanding the paper.


(Matthew Kleinsmith) #50

Deep Image Matting, paper by Adobe, 10 Mar 2017.

NVIDIA’s post about it.


(Igor Barinov) #51

Do you have any ideas on how to generate trimaps automatically from the source and style pictures? There are many matting implementations, but all of the ones I found require a trimap.
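
Not a full answer, but one common heuristic (my suggestion, not something from the matting repos I’ve seen) is to start from a binary segmentation mask and erode/dilate it so that only a band around the boundary is marked unknown:

```python
# Heuristic trimap from a binary segmentation mask: erode for confident
# foreground, dilate for confident background, leave the boundary band unknown.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def mask_to_trimap(mask, band=10):
    """mask: HxW bool array (True = foreground). Returns a uint8 trimap:
    0 = background, 128 = unknown, 255 = foreground."""
    fg = binary_erosion(mask, iterations=band)       # shrink mask -> sure foreground
    bg = ~binary_dilation(mask, iterations=band)     # grow mask, invert -> sure background
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255
    trimap[bg] = 0
    return trimap
```

The mask itself could come from whatever segmentation network we end up using; the band width would need tuning per image size.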


(Brendan Fortuner) #52

Good article describing FB’s new AR app. Looks like we were on the right track!

“The AML team created a green-screen effect that could pick out a person’s body and put all sorts of backgrounds behind it live in camera. It built filters that automatically identified common objects that might appear in images and created specialized effects for more than 100 of them.”


(Jeremy Howard) #53

They must have seen your hackathon presentation!


(Brendan Fortuner) #54

Awesome blog post from a company that implemented image segmentation with SqueezeNet in their iOS photo editing app:

“Deep Learning for Photo Editing” @codingdivision https://blog.photoeditorsdk.com/deep-learning-for-photo-editing-943bdf9765e1


(Karthik Kannan) #55

@brendan et al. Have you guys been successful in porting Deep Photo Style Transfer to Keras? Would love to know if I can join you guys and contribute to it.


(Aakash Nain) #56

Yeah, everyone is using the trimap approach. Correct me if I am wrong, but with trimap approaches we still need some user input for a very clean segmentation. Is there any way we can automate trimap generation?