Hi everyone! I am working on an application that requires translating an image from one domain (A) into a different domain (say B). Models from the image-to-image translation family (CycleGAN, pix2pix, etc.) usually help transfer style between domains whose underlying structure is similar, e.g., converting horses to zebras.
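For context, the unpaired models I mentioned (CycleGAN-style) rely on a cycle-consistency loss that forces translations to be invertible, which is partly why they preserve structure so strongly. A toy sketch of that loss, with trivial stand-in functions `G` and `F` in place of the real generator networks (both are my own illustrative placeholders, not actual CycleGAN code):

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators:
# G maps domain A -> B, F maps domain B -> A.
# Real generators are CNNs; these are perfectly inverse toy maps.
def G(x):
    return x + 1.0

def F(y):
    return y - 1.0

def cycle_consistency_loss(x_a, x_b):
    """L1 cycle loss: |F(G(a)) - a| + |G(F(b)) - b|, averaged over pixels."""
    loss_a = np.abs(F(G(x_a)) - x_a).mean()
    loss_b = np.abs(G(F(x_b)) - x_b).mean()
    return loss_a + loss_b

a = np.random.rand(4, 4)  # toy "image" from domain A
b = np.random.rand(4, 4)  # toy "image" from domain B
print(cycle_consistency_loss(a, b))  # ~0 here, since the toy maps invert each other
```

Because this term penalizes any change that cannot be undone, large content rearrangements (like changing a pose) tend to be suppressed, which is exactly the limitation behind my question.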
But my need is to perform this translation where most of the styling stays similar while the content is restructured (reordered?). As an example, the task would be similar to transforming an image of a stationary horse into one that is (seemingly) galloping. As you can see, the content is the same, but transformed into a different setting (posture, pose, etc.).
Can someone suggest whether it is feasible to achieve this with neural networks, preferably in an unpaired setting (unordered image collections)? Thanks in advance!