I don’t know if there’s much point in uploading a failed example. I guess it means I need to learn more to find a way to solve this problem.
Anyway, I am here to share my failed project. I used lesson 7 to try it.
I guess I should not use ImageNet stats for Japanese animation images.
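For what it’s worth, one alternative to ImageNet stats is to normalize with statistics computed from your own dataset. Here is a minimal sketch of how per-channel stats could be computed with NumPy; the function name `channel_stats` and the array layout are my own assumptions, not anything from the lesson 7 notebook:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of images.

    images: array of shape (N, H, W, 3), values in [0, 1].
    Returns (mean, std), each of shape (3,).
    """
    pixels = images.reshape(-1, 3)  # flatten to (N*H*W, 3)
    return pixels.mean(axis=0), pixels.std(axis=0)

# Example with a fake batch of 4 RGB images
batch = np.random.rand(4, 64, 64, 3)
mean, std = channel_stats(batch)
print(mean.shape, std.shape)
```

You would then pass those stats to your normalization step instead of the ImageNet ones, so the model sees inputs centered on the anime images’ actual color distribution.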
I used lesson 7 to recolor a cat from a real image, and it worked; the only problem is that the cat’s eyes became green. However, I do not know why it does not work on Japanese animation. dataset export.pkl
It looks like you’re attempting to do something similar to DeOldify. I’ve discussed it with Jason once or twice and here’s what I can tell you:
DeOldify with a GAN is especially tricky and took many, many attempts to get right. He’s also released some training notebooks you could look at here:
Also, I wouldn’t call that a failure at all! It looks like you’re starting to get some colorization (compare the hair vs. the body tone; there is a noticeable pigment difference).
Go through his many notebooks and see what you find; perhaps they may help you.
Also, colorization is very hard to get perfect. Jason’s model is great, but it’s far from complete. Most noticeably, he’s talked about the purple issue (when the model is unsure, it goes with purple).
I hope this helps you keep up your motivation, you’re doing cool work! Keep at it!
Thanks @muellerzr! I’d add too: perhaps starting with just trying to finetune DeOldify might work. I can tell you that even without being trained on art explicitly, DeOldify tends to pick up correctly on the abstractions depicted (depending on the style), and does some cool things with them. So there’s good reason to believe you could finetune it and get a decent result.
I also noticed that you’re training it as a deblurring/super-resolution problem on top of a colorization problem. That’s probably making training tougher than it needs to be for this use case (or do you really want that…?).
Hi, I tried training it without the deblurring/super-resolution part before, and it did not work either.
I just used the resize_one_black_white function to make the images black and white, without resizing:
import PIL.Image

def resize_one_black_white(fn, i, path):
    dest = path/fn.relative_to(path_hr)
    dest.parent.mkdir(parents=True, exist_ok=True)
    # img = PIL.Image.open(fn).convert('1')  # '1' would give 1-bit black/white
    img = PIL.Image.open(fn).convert('L')    # 'L' gives 8-bit grayscale
    # img.show()
    img.save(dest)
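To show what the grayscale conversion does end to end, here is a small standalone variant you can run outside the notebook. The function name `to_black_white` and the directory names `demo_hr`/`demo_bw` are mine, not from the lesson; it passes `path_hr` explicitly instead of relying on a global:

```python
from pathlib import Path
from PIL import Image

def to_black_white(fn, path, path_hr):
    # Mirror the source tree under `path`, saving an 8-bit grayscale copy.
    dest = path / fn.relative_to(path_hr)
    dest.parent.mkdir(parents=True, exist_ok=True)
    Image.open(fn).convert('L').save(dest)

# Tiny demo with a generated image; directory names are made up.
path_hr = Path('demo_hr')
path_bw = Path('demo_bw')
path_hr.mkdir(exist_ok=True)
Image.new('RGB', (32, 32), (200, 50, 50)).save(path_hr / 'sample.png')

for fn in path_hr.glob('*.png'):
    to_black_white(fn, path_bw, path_hr)

print(Image.open(path_bw / 'sample.png').mode)  # 'L'
```

The converted file keeps the original size, so the model only has to learn colorization, not upscaling.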
With a different approach, the model gets a lot better in just a few epochs of training, and I will keep trying. Maybe I will try it with ResNet-50 and more.
My Goal is coloring these images:
I tried to use your model, and it did not work.
Maybe when I have time, I will take a look at DeOldify. I have tried to train a DeOldify model on Colab, but it seems DeOldify does not support Colab.
Do you mind sharing how to train on your own data with DeOldify? In fact, I don’t even know how to train it at all. There is a README, but it is not clear. In addition, it seems to use the ImageNet dataset as an example, but I don’t even know where to download that. Could you tell me the file structure it expects, so I can train it with my data?
So I can color this “As long As I Can Be” image? This anime is why I started this project.
In fact, I freaking built my own colorization pipeline almost from scratch to train on my own data and get a better result, because I was told that I should not train it on Colab.
Do you mind telling me which dataset you used to train your model?