CycleGAN vs style transfer

At the beginning of this week’s lesson, Jeremy shared someone’s post about using CycleGAN to do style transfer. What is the benefit of doing that, given that classical style transfer works with just a single style image and a single content image? Does anyone have thoughts?

IMO, GANs are more universal and flexible: roughly the same architecture can be applied to a broader range of problems, but they are less stable and slower to train.
Other artistic style transfer approaches are narrower, but can deliver better-quality, more stable results.
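
For what it’s worth, the contrast shows up clearly in the losses. Here is a minimal sketch of the classical example-guided (Gatys-style) objective, matching a single feature layer for brevity (the actual method matches several layers); the names `gen_feats`, `content_feats`, and `style_feats` are hypothetical placeholders for feature maps from a fixed CNN such as VGG:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-to-channel correlations of a (B, C, H, W) feature map."""
    b, c, h, w = feats.shape
    flat = feats.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats, style_weight=1e5):
    # Content term: stay close to the features of ONE content image.
    content_loss = F.mse_loss(gen_feats, content_feats)
    # Style term: match the Gram statistics of ONE style image.
    style_loss = F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
    return content_loss + style_weight * style_loss
```

The style is pinned to whatever statistics that single example happens to have, which is where the stability and quality come from, but also the limitation.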

It’s a good question - the person in question is @helena, so perhaps she can tell us! :slight_smile:

It’s a great question indeed, and the best answer I know comes from a recent paper on MUNIT,
which is sort of NVIDIA’s equivalent to CycleGAN. It frames this as classical style transfer vs. collection style transfer:

Style transfer. Style transfer aims at modifying the style of an image while preserving its content, which is closely related to image-to-image translation. Here, we make a distinction between example-guided style transfer, in which the target style comes from a single example, and collection style transfer, in which the target style is defined by a collection of images. Classical style transfer approaches [5, 50–55] typically tackle the former problem, whereas image-to-image translation methods have been demonstrated to perform well in the latter [8]. We will show that our model is able to address both problems, thanks to its disentangled representation of content and style.
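
To make the “collection” side of that distinction concrete, here is a minimal sketch of the generator objective in a CycleGAN-style setup (only the A→B half, omitting the symmetric B→A terms of the full method). `G_AB`, `G_BA`, and `D_B` are hypothetical generator/discriminator modules; the point is that the target style is defined implicitly by a discriminator trained on a whole collection of domain-B images, not by a single style example:

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_AB, G_BA, D_B, real_A, lambda_cyc=10.0):
    fake_B = G_AB(real_A)
    # Adversarial term (least-squares GAN): D_B has seen the whole domain-B
    # collection, so "style" here means the aggregate look of that collection.
    pred = D_B(fake_B)
    adv_loss = F.mse_loss(pred, torch.ones_like(pred))
    # Cycle-consistency term: translating back should recover the input,
    # which is what preserves the content of real_A.
    cycle_loss = F.l1_loss(G_BA(fake_B), real_A)
    return adv_loss + lambda_cyc * cycle_loss
```

MUNIT goes a step further by factoring images into a content code and a style code, so the style can either be encoded from a single reference image or sampled, which is how it covers both settings.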

Thanks for the paper! Indeed, that is a good point: a single style image may not capture the general theme of a whole collection of style images.