There is a Python implementation of the Matting Laplacian: https://github.com/martinbenson/deep-photo-styletransfer/blob/master/gen_laplacian/mattinglaplacian.py
Or are you asking about a faster implementation?
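For anyone who wants the gist without digging into that repo: here is a rough NumPy/SciPy sketch of the closed-form Matting Laplacian (Levin et al. 2008). The window radius and `eps` values are common defaults, not necessarily what the linked code uses, and this naive loop is slow on full-size images.

```python
import numpy as np
from scipy.sparse import coo_matrix

def matting_laplacian(img, eps=1e-7, win_rad=1):
    """Closed-form Matting Laplacian (Levin et al. 2008), naive sketch.

    img: H x W x 3 float image in [0, 1].
    Returns a sparse (H*W) x (H*W) Laplacian matrix.
    """
    h, w, c = img.shape
    win_size = (2 * win_rad + 1) ** 2
    indices = np.arange(h * w).reshape(h, w)

    rows, cols, vals = [], [], []
    for y in range(win_rad, h - win_rad):
        for x in range(win_rad, w - win_rad):
            # flat indices and colors of pixels in the local window
            win_idx = indices[y - win_rad:y + win_rad + 1,
                              x - win_rad:x + win_rad + 1].ravel()
            win = img[y - win_rad:y + win_rad + 1,
                      x - win_rad:x + win_rad + 1].reshape(win_size, c)
            mu = win.mean(axis=0)
            cov = win.T @ win / win_size - np.outer(mu, mu)
            inv = np.linalg.inv(cov + (eps / win_size) * np.eye(c))
            d = win - mu
            # pairwise affinities among all pixels in the window
            A = (1.0 + d @ inv @ d.T) / win_size
            rows.append(np.repeat(win_idx, win_size))
            cols.append(np.tile(win_idx, win_size))
            vals.append((np.eye(win_size) - A).ravel())

    # coo_matrix sums duplicate entries from overlapping windows
    return coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w)).tocsr()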
Thank you @ibarinov, this is fantastic!
I honestly don't understand what they're doing there yet. If you think you've got a handle on it, maybe we can do a Skype call or some other communication and you can walk me through your understanding of it. Is two-stage optimization just iterating back and forth between the two loss functions? That seems like a funny way to do it.
That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. Probably more work than we can pull off on Friday, especially since I'll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.
I'm going to look at the code and see if I can figure out what they're doing from that perspective, and then map it back to the paper to build my understanding. Hopefully that'll answer my questions about two-stage optimization as well.
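For what it's worth, here's the reading I'm currently leaning toward, as a toy sketch (the gradient functions here are stand-ins, not the paper's actual losses): stage 1 minimizes the style-transfer loss alone, and stage 2 restarts from that result with the photorealism regularizer added, rather than alternating between the two.

```python
import numpy as np

def two_stage(x0, grad_style, grad_reg, lam=1e-4, steps=200, lr=0.1):
    """Toy sketch of one reading of 'two-stage' optimization.

    grad_style / grad_reg: callables returning gradients of the style
    loss and the regularizer (placeholders, not the paper's losses).
    """
    x = x0.copy()
    for _ in range(steps):                      # stage 1: style loss only
        x -= lr * grad_style(x)
    for _ in range(steps):                      # stage 2: regularizer added
        x -= lr * (grad_style(x) + lam * grad_reg(x))
    return x
```

With quadratic stand-in losses (style minimizer at 1.0, regularizer pulling toward 0), stage 2 barely moves the stage-1 result when `lam` is small, which matches the intuition that the regularizer is a gentle correction rather than a co-equal objective.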
Thank you! I just released a new version today with speed improvements and 4 new geometric primitives. The app is based on an algorithm published in 2008 (https://rogerjohansson.blog/2008/12/07/genetic-programming-evolution-of-mona-lisa/) and reproduced on desktop by https://github.com/fogleman/primitive in 2016.
I'd like to try running a NN on a mobile device and am excited by Deep Photo Style Transfer. It reminds me of the paper Style Transfer for Headshot Portraits (https://people.csail.mit.edu/yichangshih/portrait_web/), also by Adobe people.
Do you plan to work on Deep Style tomorrow?
I'm working on segmentation tomorrow.
I want to get to the style work. However, there are more interesting things to look at wrt segmentation right now.
@xinxin.li.seattle and I will be looking into it, although we're both distance students (in Seattle and Vancouver, Canada, respectively). Are you on the Slack channel that @brendan set up?
[quote="Even, post:44, topic:2179, full:true"]
That said, I think I have a sense of how to implement another photorealism regularizer that might be worth trying in addition to the one described in the paper. Probably more work than we can pull off on Friday, especially since I'll be looking after my son in the middle of the day (11-1) while my wife works. Still, we can get a start.
[/quote]I've taken @kelvin's advice to focus on getting the code working today, but I'm still debugging; hopefully the code owner can help resolve the issue some time tomorrow.
I think it's a great idea to study the paper and start thinking about implementing the photorealism regularizer. If the implementation is more work than we can pull off, at least we'll get a sense of the scope.
The other thing is that the paper mentions supplementary material; the author told me that they are working on the release, but there is no set date. They seem quite responsive and friendly, so they might be a resource if we have trouble understanding the paper.
Do you have any ideas on how to generate trimaps automatically from the source and style pictures? There are many matting implementations, but all the ones I found require a trimap.
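One common trick (my own suggestion, not from the paper): take a binary foreground mask from any segmentation model and erode both the foreground and the background, so the band around the boundary becomes the trimap's "unknown" region. A rough sketch with SciPy:

```python
import numpy as np
from scipy import ndimage

def trimap_from_mask(mask, band=10):
    """Build a trimap from a binary foreground mask.

    mask: H x W boolean/0-1 array (e.g., from a segmentation network).
    band: erosion depth in pixels; controls the width of the unknown band.
    Returns uint8 trimap: 0 = background, 128 = unknown, 255 = foreground.
    """
    mask = mask.astype(bool)
    # confident foreground: mask shrunk away from the boundary
    fg = ndimage.binary_erosion(mask, iterations=band)
    # confident background: inverted mask shrunk likewise
    # (border_value=1 keeps image edges as background)
    bg = ndimage.binary_erosion(~mask, iterations=band, border_value=1)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255
    trimap[bg] = 0
    return trimap
```

The quality obviously depends on the segmentation mask, and `band` needs tuning per image resolution, but it removes the manual trimap step entirely.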
Good article describing FB's new AR app. Looks like we were on the right track!
"The AML team created a green-screen effect that could pick out a person's body and put all sorts of backgrounds behind it live in camera. It built filters that automatically identified common objects that might appear in images and created specialized effects for more than 100 of them."
They must have seen your hackathon presentation!
Awesome blog post on a company that implemented image segmentation with SqueezeNet in their iOS photo editing app:
"Deep Learning for Photo Editing" by @codingdivision: https://blog.photoeditorsdk.com/deep-learning-for-photo-editing-943bdf9765e1
@brendan et al. Have you guys been successful in porting Deep Photo Style Transfer to Keras? Would love to know if I can join you guys and contribute to it.
Yeah, everyone is using the trimap thing. Correct me if I'm wrong: with trimap approaches we still need some user input for a very clean segmentation. Is there any way we can automate the trimap generation?
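One way to avoid user input, assuming your segmentation network outputs soft foreground probabilities (an assumption on my part, not something from the paper): mark only the pixels the network is confident about as definite foreground/background and hand everything in between to the matting step as "unknown".

```python
import numpy as np

def trimap_from_probs(prob, lo=0.05, hi=0.95):
    """Derive a trimap from soft segmentation output.

    prob: H x W array of foreground probabilities in [0, 1].
    lo/hi: confidence thresholds (hypothetical defaults; tune per model).
    Returns uint8 trimap: 0 = background, 128 = unknown, 255 = foreground.
    """
    trimap = np.full(prob.shape, 128, dtype=np.uint8)
    trimap[prob >= hi] = 255  # confident foreground
    trimap[prob <= lo] = 0    # confident background
    return trimap
```

This only works as well as the network's calibration, but it turns "user draws a trimap" into "user picks two thresholds", which is much easier to automate.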