I found a project that used Inception v1 for style transfer:
If you look at the source, you can see that the author tried to choose the best content and style layers for each of the supported models:
    # weights for the individual models
    # assume that corresponding layers' top blob matches its name
    VGG19_WEIGHTS = {"content": {"conv4_2": 1},
                     "style": {"conv1_1": 0.2,
                               "conv2_1": 0.2,
                               "conv3_1": 0.2,
                               "conv4_1": 0.2,
                               "conv5_1": 0.2}}
    VGG16_WEIGHTS = {"content": {"conv4_2": 1},
                     "style": {"conv1_1": 0.2,
                               "conv2_1": 0.2,
                               "conv3_1": 0.2,
                               "conv4_1": 0.2,
                               "conv5_1": 0.2}}
    GOOGLENET_WEIGHTS = {"content": {"conv2/3x3": 2e-4,
                                     "inception_3a/output": 1 - 2e-4},
                         "style": {"conv1/7x7_s2": 0.2,
                                   "conv2/3x3": 0.2,
                                   "inception_3a/output": 0.2,
                                   "inception_4a/output": 0.2,
                                   "inception_5a/output": 0.2}}
    CAFFENET_WEIGHTS = {"content": {"conv4": 1},
                        "style": {"conv1": 0.2,
                                  "conv2": 0.2,
                                  "conv3": 0.2,
                                  "conv4": 0.2,
                                  "conv5": 0.2}}
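For context, dictionaries like these are typically folded into the overall objective as a weighted sum of per-layer losses, with separate global multipliers for the content and style terms. Here is a minimal sketch of that combination step; the function name, the `alpha`/`beta` parameters, and the idea that per-layer losses have already been computed as scalars are my assumptions, not code from the project:

```python
# Hedged sketch: combine per-layer content/style weights (like VGG19_WEIGHTS
# above) with precomputed scalar losses per layer. The layer names and the
# alpha/beta split are illustrative, not taken from the project's source.

def total_loss(weights, layer_losses, alpha=1.0, beta=1e3):
    """Return alpha * (weighted content loss) + beta * (weighted style loss)."""
    content = sum(w * layer_losses[layer]
                  for layer, w in weights["content"].items())
    style = sum(w * layer_losses[layer]
                for layer, w in weights["style"].items())
    return alpha * content + beta * style

# Toy example with made-up layer losses:
weights = {"content": {"conv4_2": 1},
           "style": {"conv1_1": 0.5, "conv2_1": 0.5}}
layer_losses = {"conv4_2": 2.0, "conv1_1": 4.0, "conv2_1": 0.0}
print(total_loss(weights, layer_losses, alpha=1.0, beta=10.0))  # 22.0
```

Because each model's style weights sum to 1, the per-model dictionaries mostly control *which* layers contribute, while the overall content/style trade-off is set by the global multipliers.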
I’ve run this project in the past, and the images generated with VGG are visibly distinct from those generated with Inception v1.