Introducing DeOldify: A Progressive, Self-Attention GAN-based image colorization/restoration project

Hello! I’ve just launched the public repo for DeOldify, the project I’ve been obsessing over for the last two months. Yes, the name’s stupid, but the tech, I think, is really cool.

Gist is this:

This is a deep learning based model. More specifically, what I’ve done is combine the following approaches:

  • Self-Attention Generative Adversarial Network (https://arxiv.org/abs/1805.08318). Except the generator is a pretrained U-Net, and I’ve just modified it to have spectral normalization and self-attention. It’s a pretty straightforward translation (there’s a rough sketch of that generator tweak right after this list). I’ll tell you what though- it made all the difference when I switched to this after trying desperately to get a Wasserstein GAN version to work. I liked the theory of Wasserstein GANs, but it just didn’t pan out in practice. But I’m in love with Self-Attention GANs.
  • Training structure inspired by (but not the same as) Progressive Growing of GANs (https://arxiv.org/abs/1710.10196). The difference here is that the number of layers remains constant- I just increased the size of the input progressively and adjusted learning rates to make sure the transitions between sizes happened successfully. It seems to have the same basic end result: training is faster, more stable, and generalizes better. (The training sketch after this list shows the idea.)
  • Two Time-Scale Update Rule (https://arxiv.org/abs/1706.08500). This is also very straightforward- it’s just one-to-one generator/critic iterations and a higher critic learning rate.
  • Generator loss is two parts: one is a basic perceptual loss (or feature loss) based on VGG16, which basically just biases the generator model to replicate the input image. The second, of course, is the loss score from the critic. For the curious: perceptual loss isn’t sufficient by itself to produce good results. It tends to just encourage a bunch of brown/green/blue- you know, cheating to the test, basically, which neural networks are really good at doing! The key thing to realize here is that GANs are essentially learning the loss function for you- which is really one big step closer to the ideal we’re shooting for in machine learning. And of course you generally get much better results when you get the machine to learn something you were previously hand coding. That’s certainly the case here. (The training sketch below shows how the two loss terms and the TTUR setup fit together.)
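
To make that first bullet concrete, here’s a simplified sketch of the idea in PyTorch- not the actual code from the repo, just illustrative module names- showing how spectral normalization and SAGAN-style self-attention get bolted onto the conv layers of a U-Net-style generator:

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class SelfAttention(nn.Module):
    """SAGAN-style self-attention layer (arXiv:1805.08318)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = spectral_norm(nn.Conv1d(channels, channels // 8, 1))
        self.key   = spectral_norm(nn.Conv1d(channels, channels // 8, 1))
        self.value = spectral_norm(nn.Conv1d(channels, channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))  # starts at 0, so attention is phased in

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                                     # flatten spatial dims
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, h*w, h*w)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                                    # residual connection


def sn_conv_block(in_c, out_c):
    """A spectrally normalized conv block of the kind you’d swap into the U-Net or critic."""
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_c, out_c, kernel_size=3, padding=1)),
        nn.BatchNorm2d(out_c),
        nn.ReLU(inplace=True),
    )
```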

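And here’s a rough, simplified sketch of how the training pieces above fit together: the VGG16 feature loss plus the critic score for the generator, TTUR-style optimizers with a higher critic learning rate, and the “progressive” part done by stepping up the input size. This isn’t the actual code in the repo (which is built on fastai); `generator`, `critic`, and `make_dataloader` are placeholders, and the sizes/learning rates are just illustrative:

```python
import torch
import torch.nn.functional as F
from torch import optim
from torchvision.models import vgg16

# `generator`, `critic`, and `make_dataloader` are placeholders assumed to exist
# elsewhere (the pretrained U-Net generator, the critic, and a data loading
# function that can be re-created at different image sizes).

# --- perceptual (feature) loss based on VGG16 activations --------------------
vgg_features = vgg16(pretrained=True).features[:23].eval()  # up to a mid-level conv block
for p in vgg_features.parameters():
    p.requires_grad = False

def perceptual_loss(fake, target):
    return F.l1_loss(vgg_features(fake), vgg_features(target))

# --- TTUR: one-to-one generator/critic steps, higher critic learning rate ----
opt_g = optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = optim.Adam(critic.parameters(),    lr=4e-4, betas=(0.0, 0.9))

def train_one_epoch(loader):
    for bw, color in loader:                 # grayscale input, color target
        # critic step (hinge loss, as in the SAGAN paper)
        opt_d.zero_grad()
        with torch.no_grad():
            fake = generator(bw)
        d_loss = (F.relu(1 - critic(color)).mean() +
                  F.relu(1 + critic(fake)).mean())
        d_loss.backward()
        opt_d.step()

        # generator step: feature loss keeps it faithful, critic score keeps it colorful
        opt_g.zero_grad()
        fake = generator(bw)
        g_loss = perceptual_loss(fake, color) - critic(fake).mean()
        g_loss.backward()
        opt_g.step()

# --- "progressive" sizing: same network throughout, progressively larger inputs
for size, lr in [(64, 1e-4), (128, 5e-5), (192, 3e-5)]:    # illustrative schedule
    for group in opt_g.param_groups:
        group['lr'] = lr
    for group in opt_d.param_groups:
        group['lr'] = lr * 4                                # keep the critic LR higher
    train_one_epoch(make_dataloader(image_size=size))
```
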
Sample result images:

There are more examples and details at the repo. I plan on continuing work on this project for the foreseeable future, with the goal of making this super easy to use, more memory efficient, and adding more models as needed to make photos even better (a DeFade model I’m working on concurrently, for example).

Basically what I aimed to do with this whole project was to take what I learned in Parts I and II of the Fast.AI course and see just how far I could go with it. And this is the result! It’s funny- quite a few times I did think “Oh maybe Jeremy wasn’t quite right” so I’d try doing something a bit different, but I wound up learning the hard way that no- he was definitely right.

@jeremy and @rachel- You guys rock! What a life changing course.

Great work

Thanks!

Very nice.

Nice work and well done on the podcast interview

Seems like the Singapore government finds your DeOldify useful too. Read this:

Just amazing work! Well done

Jason, Jeremy,

I took “Aaja Re”, a classic 1958 Hindi film song from Madhumati (known for its music, visuals, dance, and intrigue), and colorized it using DeOldify.

The result is amazing.

No human retouching. I used Jason’s pretrained model with render factor 21, unchanged.

This is my favorite shot showcasing DeOldify’s raw power.

How ‘true color’ it is!

Photorealistic! Fog, sunshine, the tree, the silhouette, shades of green on the tree. Wow!

There were a few places where the algorithm flip-flopped a tiny bit on a shirt color, and it interpreted waterfall mist as jet engine exhaust at times :slight_smile: but I am totally impressed by the continuity of the coloring, in spite of the fact that the algorithm works frame by frame, if I understand my reading of the blog correctly. Which means there must be enough cues in gradation AND shape to encode color!
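
Just to illustrate the frame-by-frame idea, here is a rough sketch of that kind of pipeline (not DeOldify’s actual video code- `colorize_frame` here is a hypothetical stand-in for the model call): split the clip into frames, colorize each frame independently, and stitch them back together.

```python
# Illustrative only: each frame is colorized on its own, with no temporal information shared.
import cv2

def colorize_video(src_path, dst_path, colorize_frame):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        colored = colorize_frame(frame)          # hypothetical per-frame model call
        if writer is None:
            h, w = colored.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*'mp4v')
            writer = cv2.VideoWriter(dst_path, fourcc, fps, (w, h))
        writer.write(colored)
    cap.release()
    if writer is not None:
        writer.release()
```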

I cannot stop admiring the ingenuity, persistence AND generosity of Jason and Jeremy. And I thank Shemaroo Filmigaane on YouTube for the high-quality clip, and my musician/composer friend Srikanth Devarajan for the mixing, mastering, and audio beauty. We did not apply any filter on the video, so that the DeOldify result can be seen unaltered.

Thank you
Ravi

I have created a not-for-pay open arts YouTube channel, DISHA, to showcase the use of AI (and human intelligence too) to make humans more humane through art and humanities enrichment.

DISHA AI (Dedicated Intelligence in Service of Humanity and Art)
I strongly believe that instead of worrying about how to make AI humane using human ideas, we should work on using AI to make humans more human through cross-cultural, cross-lingual, cross-timezone, cross-age-zone translations.

The real problem is not that AI is not humane; the real problem is that humans are not humane. The long-tested solution for making humans humane is art, music, and storytelling, which entice humans to see through the eyes of another, even if only briefly, and shed prejudices.

I welcome others who create art with the aim of uplifting empathy and wisdom- please feel free to send it to me and I can upload it there.

DISHA is a Hindi/Sanskrit word (திசை in my mother tongue, Tamil) that means direction. I believe that AI, if applied in a good direction, can amplify the purpose of art and the humanities (which is to educate, unite, and humanize humans beyond their tribes). Leaders such as Jeremy and Rachel, who have both the intelligence and the heart (merging analytic, artistic, ethical, and altruistic sensibilities) and are not bound to specific political or business organizations, are the true leaders/mentors of the next generation- which already seems to have the right compass and bearings, as seen in open source, for instance.

Long-time (I guess about a year) admirer of DeOldify here- amazing work, @jsa169! I was wondering whether you are collecting user examples that may or may not help you discover where the model sometimes still fails. If so, I haven’t found a place for them yet. Keep it going!

Sure, I’ll happily take samples. Just email me at jsa169@gmail.com.
