In particular, why exactly those values for the [5,15,2] weights applied to the layers?
The other question is about the ‘5e3’ constant in the FeatureLoss definition and the squared weight ( w**2 ), both of which are applied to the ‘gram_matrix’ contribution to the loss:
self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3
for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
I think those weights help fine-tune the ‘style’ loss in this case, don’t they?
I would greatly appreciate it if someone could shed some light on those value choices. Do they come from a paper, or were they determined empirically, e.g. with a grid search?
The layer weights for the style losses are human input, based on quick experiments: run the training a few times and print all the losses. Rule of thumb: try to make the magnitudes of these losses close to each other.
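That rule of thumb can be sketched in a few lines. The layer names and raw loss values below are invented for illustration, not taken from the notebook:

```python
# Hypothetical raw per-layer L1 losses observed after a few short training runs
# (layer names and values are made up for illustration):
raw_losses = {"relu2_2": 0.20, "relu3_3": 0.05, "relu4_3": 0.40}

# Choose weights so that weight * loss lands at roughly the same magnitude
# for every layer, so that no single layer dominates the total loss:
target = 1.0
wgts = {name: target / loss for name, loss in raw_losses.items()}

for name, loss in raw_losses.items():
    print(f"{name}: weighted loss = {loss * wgts[name]:.2f}")
```

In practice you would not balance the losses exactly, just round to convenient values (hence tuples like [5,15,2]) once the magnitudes are in the same ballpark.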
If you take a look at the definition of FeatureLoss, you will find this line in the forward pass:
self.feat_losses += [base_loss(f_in, f_out)*w for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
where the magic constants for the layer weights, self.wgts, define the importance of each layer's loss. So, if you are doing style transfer, these weights have a big impact on the resulting images.
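Putting the two quoted lines together, here is a minimal NumPy sketch of that weighting scheme. It is a stand-in for the fastai class, not the real implementation: base_loss is taken to be L1, and the VGG feature extraction is replaced by plain arrays passed in by the caller.

```python
import numpy as np

def gram_matrix(x):
    """Normalized Gram matrix of a (batch, channels, h, w) activation tensor."""
    b, c, h, w = x.shape
    flat = x.reshape(b, c, h * w)
    # Batched matmul: (b, c, hw) @ (b, hw, c) -> (b, c, c)
    return flat @ flat.transpose(0, 2, 1) / (c * h * w)

def feature_loss(in_feats, out_feats, wgts=(5, 15, 2), style_scale=5e3):
    """Weighted per-layer feature (L1) loss plus a Gram-matrix 'style' term.

    The style term is scaled by w**2 * style_scale to bring its typically
    tiny magnitude up to the same order as the plain feature term.
    """
    total = 0.0
    for f_in, f_out, w in zip(in_feats, out_feats, wgts):
        total += np.abs(f_in - f_out).mean() * w  # feature term
        total += (np.abs(gram_matrix(f_in) - gram_matrix(f_out)).mean()
                  * w**2 * style_scale)           # style term
    return total
```

With identical inputs both terms vanish, and the wgts tuple makes the middle layer count three times as much as the first, which is exactly the knob being asked about.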