Do you know how the weights and some ‘magic constants’ used in the FeatureLoss for the super resolution are determined?

Hi there,

Do you know how the weights and some ‘magic constants’ used in the FeatureLoss for the super resolution are determined?

feat_loss = FeatureLoss(vgg_m, blocks[2:5], [5,15,2])

In particular, why exactly those values for the [5,15,2] weights applied to the layers?

The other question is about the ‘5e3’ in the FeatureLoss definition and the squared ‘w’ (w**2), which are applied to the ‘gram_matrix’ contribution to the loss:

self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3
                     for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
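
For reference, here is a minimal sketch of gram_matrix, roughly as defined in the lesson notebook; it builds the (n, c, c) matrix of channel correlations that the style loss compares:

import torch

def gram_matrix(x):
    # x: (batch, channels, height, width) activations from a VGG block
    n, c, h, w = x.size()
    x = x.view(n, c, -1)                      # flatten the spatial dimensions
    return (x @ x.transpose(1, 2)) / (c*h*w)  # (n, c, c) channel correlations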

I think those weights help fine-tune the ‘style’ loss in this case, don’t they?

I would greatly appreciate it if someone could shed some light on those value choices. Do they come from a paper, or were they determined empirically, e.g. via a grid search?

Thanks!


The layer weights for style losses are set by hand, based on quick experiments: run the training a few times and print all the losses. Rule of thumb: try to make the magnitudes of these losses close to each other.

Source: https://mc.ai/neural-style-transfer-with-deep-vgg-model/
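
As a concrete illustration of that rule of thumb, here is a minimal sketch using fastai v1, assuming `data` is an already-prepared ImageDataBunch and `feat_loss` is the FeatureLoss instance from the question (the notebook’s FeatureLoss exposes each loss component through a metrics attribute, which the LossMetrics callback prints every epoch):

from fastai.vision import *               # fastai v1, as in the lesson notebook
from fastai.callbacks import LossMetrics

# Hypothetical setup: `data` is an ImageDataBunch and `feat_loss` is the
# FeatureLoss from the question. LossMetrics reports each named component
# (pixel, feat_0..feat_2, gram_0..gram_2) per epoch, so you can check that
# their magnitudes are roughly comparable and adjust the [5,15,2] weights
# and the 5e3 factor until they are.
learn = unet_learner(data, models.resnet34, loss_func=feat_loss,
                     callback_fns=LossMetrics)
learn.fit_one_cycle(1)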

If you take a look at the definition of FeatureLoss, in its forward method you will find the line

self.feat_losses += [base_loss(f_in, f_out)*w for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]

where the magic constants in self.wgts define the importance of each layer’s loss. So if you are doing style transfer, these weights have a big impact on the resulting images.
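
Putting it together, here is a condensed, lightly commented sketch of the class, roughly following the lesson7-superres notebook (where base_loss = F.l1_loss); treat it as an illustration rather than the canonical source:

import torch.nn.functional as F
from torch import nn
from fastai.callbacks import hook_outputs  # fastai v1

base_loss = F.l1_loss  # the notebook uses L1 as the underlying distance

class FeatureLoss(nn.Module):
    def __init__(self, m_feat, layer_ids, layer_wgts):
        super().__init__()
        self.m_feat = m_feat
        self.loss_features = [self.m_feat[i] for i in layer_ids]
        self.hooks = hook_outputs(self.loss_features, detach=False)
        self.wgts = layer_wgts
        self.metric_names = (['pixel'] + [f'feat_{i}' for i in range(len(layer_ids))]
                                       + [f'gram_{i}' for i in range(len(layer_ids))])

    def make_features(self, x, clone=False):
        self.m_feat(x)  # run VGG; the hooks capture the chosen layer activations
        return [(o.clone() if clone else o) for o in self.hooks.stored]

    def forward(self, input, target):
        out_feat = self.make_features(target, clone=True)
        in_feat = self.make_features(input)
        # 1) pixel loss, 2) per-layer feature losses scaled by w,
        # 3) per-layer Gram ('style') losses scaled by w**2 * 5e3
        self.feat_losses = [base_loss(input, target)]
        self.feat_losses += [base_loss(f_in, f_out)*w
                             for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
        self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3
                             for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
        self.metrics = dict(zip(self.metric_names, self.feat_losses))  # for LossMetrics
        return sum(self.feat_losses)

    def __del__(self): self.hooks.remove()

A plausible reading: since gram_matrix is quadratic in the activations, w**2 keeps the layer weighting consistent with the linear feature terms, and 5e3 rescales the style terms into a comparable magnitude; both look like empirical choices in the spirit of the rule of thumb above.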

PS: I’m not an expert.


An old thread, but in case someone drops by: your reference pointed to this article, which sheds some light.

In essence, one has to set those weights experimentally. Thanks!