I’m trying to follow a guide that builds a GAN for image super resolution.
It contains a line like this:
return unet_learner(dls_gen, arch, loss_func=loss_gen, config=unet_config(blur=True, norm_type=NormType.Weight, self_attention=True, y_range=y_range))
I don’t understand what those arguments mean, and how can I properly choose their values?
Because your output is an image, I think your y_range=(0, 1). y_range is the minimum and maximum value of your output; given that hint, the model knows how to tune its parameters and will give you a reasonable result in less time.
norm_type=NormType.Weight: it is a normalization technique. With normalization, your model can generalize better (for example: if your training data contains mostly Persian cats, a model that generalizes well can still predict correctly for other breeds, such as Siamese cats).
unet_config: I think this parameter is deprecated; just pass the parameters you want directly to unet_learner.
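So, assuming a recent fastai version where unet_learner forwards these keyword arguments itself (I haven’t checked the exact version this changed in), the call would look something like:

```python
learn = unet_learner(
    dls_gen, arch, loss_func=loss_gen,
    blur=True, norm_type=NormType.Weight,
    self_attention=True, y_range=y_range,
)
```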
I’m still trying to understand y_range.
In some tutorials that I’ve seen, the values were between -3.0 and 3.0.
How can that be, if pixel values are always between 0 and 1?