Unable to repurpose Super Resolution for Image Compression

Inspired by Professor @jeremy’s talks during the class and at Facebook F8 2019, I thought of using lesson7-superres.ipynb for a seemingly simpler task: image compression.

  • I changed resize_one() in the notebook to generate compressed images instead (first sketch below).
  • Then I created a DataBunch with the /small-96 images as x and /small-96-compressed as y (second sketch below).
  • Without changing any of the losses, I trained the model, and the predicted images actually come out larger in bytes than the originals.
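
A minimal sketch of the first change (the notebook's resize_one resizes and re-saves; here I only re-encode at a low JPEG quality — the quality=10 value and the folder names are just what I used, adjust as needed):

```python
from pathlib import Path
from PIL import Image

path = Path.home()/'.fastai/data/oxford-iiit-pet'
src  = path/'small-96'              # originals, used as x
dest = path/'small-96-compressed'   # compressed copies, used as y

def compress_one(fn):
    "Save a heavily re-compressed JPEG copy of `fn` under `dest`."
    out = dest/fn.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    Image.open(fn).convert('RGB').save(out, format='JPEG', quality=10)

for fn in src.rglob('*.jpg'):
    compress_one(fn)
```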
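And the DataBunch, adapted from the notebook's get_data (the batch size, validation split, and transforms here are illustrative):

```python
from fastai.vision import *

path = Path.home()/'.fastai/data/oxford-iiit-pet'

data = (ImageImageList.from_folder(path/'small-96')
        .split_by_rand_pct(0.1, seed=42)
        # map each original image to its compressed counterpart as the target
        .label_from_func(lambda fn: path/'small-96-compressed'/fn.name)
        .transform(get_transforms(), tfm_y=True)
        .databunch(bs=32)
        .normalize(imagenet_stats, do_y=True))
```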

Is this because none of the pixel, gram, and feature losses tells the model that the byte size of the y_hat image should also match that of y? If so, any tips on how to add such a signal to the loss?
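For context, here is a condensed, approximate view of what the notebook's FeatureLoss computes (in the real notebook the VGG activations come from forward hooks; I pass them in as arguments to keep the sketch short). Every term compares pixels or activations of y_hat against y — none of them ever sees the encoded byte size, which in any case is not differentiable:

```python
import torch
import torch.nn.functional as F

def gram_matrix(x):
    "Channel-by-channel correlation of activations, as in the notebook."
    n, c, h, w = x.size()
    x = x.view(n, c, -1)
    return (x @ x.transpose(1, 2)) / (c * h * w)

def feature_loss(y_hat, y, feats_hat, feats_y, wgts):
    loss = F.l1_loss(y_hat, y)                           # pixel loss
    for fh, fy, w in zip(feats_hat, feats_y, wgts):
        loss += F.l1_loss(fh, fy) * w                    # feature loss
        loss += F.l1_loss(gram_matrix(fh),
                          gram_matrix(fy)) * w**2 * 5e3  # gram (style) loss
    return loss
```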

Another small confusion: the string representation of the DataBunch instance shows x and y as below, and a Path is also printed for the train and validation sets. Is that the path for x or for y? Ideally, in my case, the path for x should be small-96 and the path for y should be small-96-compressed (see the snippet after the output).

ImageDataBunch;

Train: LabelList (6651 items)
x: ImageImageList
Image (3, 96, 128),Image (3, 96, 141),Image (3, 96, 128),Image (3, 96, 136),Image (3, 96, 128)
y: ImageList
Image (3, 96, 128),Image (3, 96, 141),Image (3, 96, 128),Image (3, 96, 136),Image (3, 96, 128)
Path: /home/username/.fastai/data/oxford-iiit-pet/small-96;

Valid: LabelList (739 items)
x: ImageImageList
Image (3, 96, 128),Image (3, 144, 96),Image (3, 96, 128),Image (3, 96, 112),Image (3, 96, 128)
y: ImageList
Image (3, 96, 128),Image (3, 144, 96),Image (3, 96, 128),Image (3, 96, 112),Image (3, 96, 128)
Path: /home/username/.fastai/data/oxford-iiit-pet/small-96;

Test: None
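
One way I tried to check this myself, assuming fastai v1's LabelList exposes its x and y ItemLists, each with its own path attribute:

```python
# Inspect where each side of the training set was loaded from.
print(data.train_ds.x.path)  # expected: .../small-96
print(data.train_ds.y.path)  # expected: .../small-96-compressed
```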

Thank you