A failed project: recoloring Japanese animation from black and white

I’m not sure what the point of uploading a failed example is; I suppose it means I need to learn more before I can solve this problem.
Anyway, I am here to share my failed project. I used the lesson 7 notebook to try it.
I suspect I should not use ImageNet stats for Japanese animation images.
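On that stats point: fastai’s `imagenet_stats` are the per-channel mean/std of ImageNet photos, and anime frames have a very different color distribution (flat fills, strong saturation), so computing your own stats is a reasonable thing to try. A minimal sketch of computing per-channel stats with NumPy and PIL (the helper name is mine, not from the notebook):

```python
import numpy as np
from PIL import Image

def channel_stats(image_paths):
    """Per-channel mean/std over a set of RGB images, scaled to [0, 1]."""
    pixels = []
    for p in image_paths:
        arr = np.asarray(Image.open(p).convert('RGB'), dtype=np.float64) / 255.0
        pixels.append(arr.reshape(-1, 3))   # flatten H*W pixels, keep 3 channels
    allpix = np.concatenate(pixels)         # (total_pixels, 3) over every image
    return allpix.mean(axis=0), allpix.std(axis=0)
```

In fastai v1 you could then pass these as `data.normalize((mean, std))` in place of `data.normalize(imagenet_stats)`.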


I used lesson 7 to recolor a cat from a real photo, and it worked; the only problem was that the cat’s eyes came out green. However, I do not know why it does not work on Japanese animation.


Colab link is not publicly shared.

Sorry, it is now public.

It looks like you’re attempting to do something similar to DeOldify. I’ve discussed it with Jason once or twice and here’s what I can tell you:

DeOldify with a GAN is especially tricky and took many, many attempts to get right. He’s also released some training notebooks you could look at here:

Also, I wouldn’t call that a failure at all! It looks like you’re starting to get some colorization :slight_smile: (especially look at the hair vs. the body tone; there is a noticeable pigment difference)

Go through his many notebooks and see what you find; perhaps they may help you :slight_smile:

Also, colorization is very hard to get perfect. Jason’s model is great, but it’s far from complete. Most noticeably, he’s talked about the purple issue (when the model is unsure, it goes with purple).
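There’s a plausible intuition behind that purple issue: with a pixel-wise loss, a model that can’t decide between two likely colors is penalized least by predicting their average, and the RGB average of red and blue is purple. A tiny NumPy illustration (my own sketch, not DeOldify code):

```python
import numpy as np

red  = np.array([255.0, 0.0,   0.0])
blue = np.array([  0.0, 0.0, 255.0])

# A model unsure whether a region is red or blue minimizes a
# pixel-wise loss by predicting the mean of the two candidates,
# which lands on a muted purple rather than either real color.
avg = (red + blue) / 2   # [127.5, 0.0, 127.5]
```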

I hope this helps you keep up your motivation, you’re doing cool work! Keep at it!


Thank you, I will try reading his repo to see if it helps. I just don’t know where to start. :sparkling_heart:

I’d start possibly with this one, as it’s geared towards the colorization in general:


Thanks @muellerzr! I’d add too: perhaps starting by just fine-tuning DeOldify might work. I can tell you that even without training on art explicitly, DeOldify tends to pick up correctly on the abstractions depicted (depending on the style) and does some cool things with them. So there’s good reason to believe you could fine-tune it and get a decent result out of it.

I also noticed that you’re training it as a deblurring/super-resolution problem on top of a colorization problem. That’s probably making training tougher than it needs to be for this use case (or do you really want that…?).
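For anyone following along, here is the difference between the two setups as a rough PIL sketch: the lesson 7 superres “crappify” degrades the input (downscale, then upscale, plus JPEG compression in the original notebook) before the model sees it, while pure colorization only removes color. The function names and the exact degradation here are mine, not the notebook’s:

```python
from PIL import Image

def crappify_gray(img, low_size=96):
    """Superres-style input: shrink, blur by upscaling back, then grayscale.
    The model must restore detail AND color (the harder combined task)."""
    w, h = img.size
    small = img.resize((low_size, low_size))       # lose high-frequency detail
    return small.resize((w, h)).convert('L')       # restore size, drop color

def gray_only(img):
    """Colorization-only input: keep resolution, just drop color."""
    return img.convert('L')
```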


Hi, I tried training it without the "deblurring/superres" step before, and that did not work either.
I was just using the resize_one_black_white function to make the images black and white, without resizing.

def resize_one_black_white(fn, i, path):
    # mirror the source file's location under `path`
    # (path_hr is the high-resolution source folder defined earlier in the notebook)
    dest = path/fn.relative_to(path_hr)
    dest.parent.mkdir(parents=True, exist_ok=True)
    # img = PIL.Image.open(fn).convert('1')  # '1' would binarize to pure black/white
    img = PIL.Image.open(fn).convert('L')    # 'L' keeps 8-bit grayscale
    img.save(dest)  # save the grayscale copy
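For reference, the whole preprocessing step can be written as one standalone function (lesson 7 speeds this up with fastai’s `parallel`; the folder names here are hypothetical):

```python
from pathlib import Path
from PIL import Image

def to_grayscale_tree(src_root, dst_root):
    """Mirror src_root under dst_root, converting every PNG to 8-bit grayscale."""
    src_root, dst_root = Path(src_root), Path(dst_root)
    for fn in src_root.rglob('*.png'):
        dest = dst_root / fn.relative_to(src_root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        # 'L' keeps grayscale shading; '1' would binarize to pure black/white
        Image.open(fn).convert('L').save(dest)
```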

You can see the same notebook, but only trying to recolor black and white (using the resize_one_black_white function only):
colab without resize but still black and white

As a result, I tried again to train it as a deblurring/superres problem on top of a colorization problem, and it failed again.

DeOldify does not work either.
It can only output this for the comic:

For this input image:

Should I use Japanese animation stats instead of ImageNet stats to DeOldify Japanese images?
Using the DeOldify notebook does not work either.

It gets a bit better using F.l1_loss directly


Thank you for taking the time to make it better.
How did you use F.l1_loss directly?

learn = unet_learner(data, arch, loss_func=F.l1_loss)
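For intuition: `F.l1_loss` is just the mean absolute difference between prediction and target (with the default `reduction='mean'`), which penalizes large errors less harshly than MSE and is a common choice for image-to-image tasks. A NumPy equivalent of the two losses, for comparison only (not the fastai call itself):

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: what F.l1_loss computes by default."""
    return np.abs(pred - target).mean()

def mse_loss(pred, target):
    """Mean squared error, for comparison: large errors dominate."""
    return ((pred - target) ** 2).mean()

# With errors of 1 and 2: L1 gives 1.5, MSE gives 2.5 --
# MSE weights the bigger error more heavily.
pred, target = np.array([0.0, 2.0]), np.array([1.0, 0.0])
```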


With a different approach, the model gets a lot better in just a few epochs of training, and I will keep trying. Maybe I will try it with ResNet-50 and more.
My goal is coloring these images:


I have just trained a model using DeOldify and gotten some good results. If you want to try it, you can find the Colab here: https://colab.research.google.com/github/Dakini/AnimeColorDeOldify/blob/master/ImageColorizerColab.ipynb


I tried to use your model, and it does not work for me.
Maybe when I have time, I will take a look at DeOldify. I have tried to train a model with DeOldify on Colab, but it seems DeOldify does not support Colab.

Do you mind sharing how to train on your own data with DeOldify? In fact, I don’t even know how to train it with DeOldify. There is a readme, but it is not clear at all. In addition, it seems to use the ImageNet dataset as an example, but I don’t even know where to download it. Could you tell me its file structure, so I can train it with my own data?

So I can color this “As long As I Can Be” image? This anime is why I started this project.

In fact, I built my own colorization pipeline almost from scratch to train on my own data and get a better result, because I was told that I should not train it with Colab.

Do you mind telling me what dataset you used to train your model?

These are the results from your model on these examples (click me to see my test images for your model!):


You can check out my model, but I don’t feel it is really great; however, I will still try to improve it in my free time.