Super-resolution, but for turning dark images into bright images

Hey guys,

I’ve been trying to build a sort of Super-Resolution model that restores dark images to their original, brighter versions.

  1. It seems that most training sessions take no more than 30-50 epochs. How can I know whether mine needs 5,000 or more?
  2. I tried to use the SSIM index/measure as a loss function (instead of L1). Both are bounded between 0 and 1. Is that a bad idea or not? I don’t want to confuse the loss function with the metric (see the sketch after this list).
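
In case it helps, here is a minimal sketch of what I mean by using SSIM as a loss, assuming PyTorch tensors of shape `(N, C, H, W)` scaled to [0, 1]. The uniform window via average pooling is a simplification of the Gaussian window in the original SSIM formulation:

```python
import torch
import torch.nn.functional as F

def ssim_loss(pred: torch.Tensor, target: torch.Tensor,
              window_size: int = 11, data_range: float = 1.0) -> torch.Tensor:
    """1 - SSIM, computed with a uniform window (average pooling).

    The original SSIM paper uses a Gaussian window; this is a simplification."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    pad = window_size // 2
    # Local means over the window
    mu_x = F.avg_pool2d(pred, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(target, window_size, stride=1, padding=pad)
    # Local variances and covariance
    sigma_x = F.avg_pool2d(pred * pred, window_size, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(target * target, window_size, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, window_size, 1, pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    # SSIM is 1 for identical images, so 1 - mean(SSIM) is 0 at a perfect match
    return 1.0 - ssim_map.mean()
```

Since SSIM is a *similarity* (1 means a perfect match), `1 - SSIM` is what gets minimized; that’s the part I’m unsure about mixing up with the metric.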

You’re more than welcome to look at my code and help me fix any problems.


As for the loss function, I’m not sure whether using SSIM is good or bad. For example, Loss Functions for Image Restoration with Neural Networks (https://research.nvidia.com/sites/default/files/pubs/2017-03_Loss-Functions-for/NN_ImgProc.pdf) compares different losses. In Understanding SSIM (https://arxiv.org/pdf/2006.13846.pdf), SSIM comes in for quite a bit of criticism.
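
For what it’s worth, the NVIDIA paper ends up recommending a *blend* of MS-SSIM and L1 rather than either alone. A rough sketch of that mix, assuming the third-party `pytorch-msssim` package (`pip install pytorch-msssim`); the α = 0.84 weighting follows the paper, though I’m leaving out their Gaussian weighting of the L1 term:

```python
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party package, not core PyTorch

def ms_ssim_l1_loss(pred, target, alpha=0.84):
    # Blend of an MS-SSIM term and an L1 term, in the spirit of the paper's
    # "Mix" loss. Inputs are assumed scaled to [0, 1]; note MS-SSIM needs
    # images larger than ~160 px per side with the default 5 scales.
    ms = 1.0 - ms_ssim(pred, target, data_range=1.0)
    return alpha * ms + (1 - alpha) * F.l1_loss(pred, target)
```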

Wow, thanks… I just read the article about SSIM. Very interesting to see that this index isn’t as perfect as it seems. In the second article they trained for up to 2,500 epochs. I just wonder how they knew to go that far. What if they had gone to 5,000-10,000 epochs and reached better results?
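
Maybe one way to answer my own question is to let the validation loss decide when to stop, instead of guessing an epoch count up front. A sketch using fastai’s `EarlyStoppingCallback`, assuming a fastai v2 `Learner` named `learn` already exists:

```python
from fastai.callback.tracker import EarlyStoppingCallback

# Stop once valid_loss hasn't improved by at least min_delta for `patience`
# epochs; the "right" number of epochs is then whatever the callback allows.
learn.fit_one_cycle(
    100,  # generous upper bound; training usually stops much earlier
    cbs=EarlyStoppingCallback(monitor='valid_loss', min_delta=0.001, patience=5),
)
```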

Well, they used 1,250 epochs and then changed the error metric used as the loss function. Still, just look at who ran those experiments: NVIDIA, a company that makes GPUs. If they have a spare server, they don’t mind running long experiments. In the lessons, though, they say only a few epochs (or cycles?) are needed, and that after running for 4-8 you can run `lr_find` again to start new learning cycles, as sketched below. I used about 20 epochs (because I was training on my laptop overnight). There was a similar question, I think: How much epochs to train for using OneCycle Policy?, but I haven’t read it through.
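
For reference, that lesson workflow looks roughly like this (a sketch with fastai v2 parameter names, assuming a pre-built `Learner` called `learn`; the learning rates are illustrative, not prescribed):

```python
# Short cycles with a fresh LR search in between, as described in the lessons.
learn.lr_find()                        # inspect the plot, pick a learning rate
learn.fit_one_cycle(8, lr_max=1e-3)    # one short cycle (4-8 epochs)

learn.lr_find()                        # re-run the finder on the partly trained model
learn.fit_one_cycle(8, lr_max=1e-4)    # another cycle, typically at a lower rate
```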