Hey,
Here’s another experiment on video enhancement / super-resolution that I’ve been working on recently (and really enjoyed doing!).
The idea: since a video is itself a small dataset, if we start from a good image enhancement model (for example the fastai lesson 7 model) and fine-tune it on the video’s own frames, the model can hopefully learn scene-specific details when the camera gets close, then reintroduce them when the camera moves further away (does this make sense?).
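In case it’s clearer in code, here’s a minimal sketch of the pipeline I mean, in fastai v1 style as in the lesson 7 superres notebook. `crappify`, `get_data` and `feat_loss` refer to the helpers defined in that notebook; the video filename, frame folder and weights name below are just placeholders, not a polished implementation:

```python
import cv2
from fastai.vision import *

# 1. Split the video into frames: this becomes our small per-video dataset.
frames = Path('frames/hi-res'); frames.mkdir(parents=True, exist_ok=True)
cap, i = cv2.VideoCapture('my_video.mp4'), 0
while True:
    ok, frame = cap.read()
    if not ok: break
    cv2.imwrite(str(frames/f'{i:05d}.png'), frame)
    i += 1
cap.release()

# 2. Degrade each frame with the notebook's crappify function to get
#    (low-res input, original frame) training pairs -- same as lesson 7,
#    omitted here.

# 3. Rebuild the lesson 7 learner, load its pretrained weights, and
#    fine-tune on this one video with a low learning rate.
data = get_data(bs=8, size=256)               # lesson 7 helper
learn = unet_learner(data, models.resnet34, loss_func=feat_loss,
                     blur=True, norm_type=NormType.Weight)
learn.load('lesson7-superres-weights')        # placeholder weights name
learn.unfreeze()
learn.fit_one_cycle(10, slice(1e-6, 1e-4))

# 4. Run the fine-tuned model over every frame and reassemble the video.
```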
Here is a screenshot of the first results I got (more result screenshots and videos can be found in my GitHub repository):
In my experiments, the fine-tuned model achieved better image quality than the original lesson 7 pets model, which seems logical since it’s specialised for each specific video.
I actually posted this work in the Deep Learning section first, because I feel it’s not finished yet and I’m looking for help on how to move forward. So far I haven’t found much work on transfer learning for video enhancement (did I miss something?), although it looks like an interesting research direction to me. Do you think this kind of transfer learning for video enhancement has potential? If so, what would you do to improve on this work?
Thanks!
Sebastien