I’ve tried to build an autoencoder as a U-Net for the MNIST dataset, just to re-create the image by compressing it down to a lower-dimensional representation and then reconstructing it again. I guess you would also consider that an autoencoder?
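To be concrete, here is roughly the kind of model I mean: a minimal sketch (PyTorch assumed, not my exact code) of a small U-Net-style autoencoder for 28x28 MNIST images, with a downsampling path to a bottleneck, an upsampling path back, and skip connections in between.

```python
import torch
import torch.nn as nn

class TinyUNetAE(nn.Module):
    """Small U-Net-style autoencoder for 28x28 grayscale images (sketch only)."""
    def __init__(self):
        super().__init__()
        # Encoder: 28x28 -> 14x14 -> 7x7 bottleneck
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())                # 28x28
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())     # 14x14
        self.bottleneck = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 7x7
        # Decoder: upsample and concatenate the matching encoder features (skip connections)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)                                   # 7x7 -> 14x14
        self.dec2 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)                                   # 14x14 -> 28x28
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        b = self.bottleneck(e2)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from enc1
        return torch.sigmoid(self.out(d1))

# Trained with a plain reconstruction loss, e.g.:
# loss = nn.functional.mse_loss(model(x), x)
```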
The result is very soft images with a lot of noise removed.
Isn’t the super-resolution U-Net architecture almost the same, with the difference being some leakage that makes it able to create crisp corners from soft images?
Are both of these models generative?