StackGANs Video Project?

re: Registration
By that I mean a way of aligning images to one another, often used in time series.
See the two animated GIFs at the top of TurboReg. It is outdated now, but shows the idea visually.
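To make the idea concrete, here is a minimal NumPy sketch of translation-only registration via phase correlation (the `estimate_shift` helper is hypothetical; real tools like TurboReg also handle rotation and scaling):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (dy, dx) shift that aligns img to ref via phase correlation."""
    # Cross-power spectrum; normalizing keeps only the phase information.
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(ref, (3, 5), axis=(0, 1))   # ref shifted down 3, right 5
print(estimate_shift(ref, img))           # (-3, -5): roll img back to match ref
```

Rolling `img` by the returned shift recovers `ref`, which is the "alignment" step in a time series.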
However, registration may not be enough, so I thought of doing something in 3D, as in face2face (warning: scariest thing ever!) - thanks to @Dario for the link.

Besides, I can tell from the screenshot that you are using TensorBoard. I got it to run the other day, but still felt at a loss navigating a reasonably deep DenseNet architecture. The guy in the Summit video mentions giving layers descriptive names when creating them. What is your experience with graph visualization in TensorBoard?

@iNLyze I don’t think we need to align images together at all. In fact, one of the recommendations was to augment the input data with some rotation and translation so that the model becomes more robust. The orientation should be a property of the latent z vector.
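That augmentation can be sketched in a few lines with SciPy (the `augment` helper and its parameter values are made up for illustration):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(img, max_angle=15.0, max_shift=4, rng=None):
    """Apply a random rotation and translation so the model sees
    many orientations of the same image."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = rotate(img, angle, reshape=False, mode="nearest")
    return shift(out, (dy, dx), mode="nearest")

face = np.random.rand(64, 64)
batch = np.stack([augment(face) for _ in range(8)])  # 8 jittered copies
```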

Thanks for the face2face demo link. I just wrote about doing that, albeit in reverse, in the previous post. @brendan @xinxin.li.seattle… check face2face out. [quote=“Surya501, post:40, topic:2365”]
This blog post talks about how to get the facial landmarks and how to morph the faces from one angle to another.
[/quote]

Re: TensorBoard - this particular project uses TensorFlow, so I didn’t have to do anything special. However, I was able to use TensorBoard to visualize embeddings, and it was very useful for me on that front. You need to deal with some voodoo magic to name the layers and create a mapping via a metadata file. I think I posted about it on the forum before, IIRC. I can help you if you run into issues with that part. Trying to visualize the graph via TensorBoard is a mess for me, though.
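For what it's worth, the "voodoo" metadata file is just a plain TSV with one label per row, in the same order as the embedding rows; the projector config then points `embedding.metadata_path` at it. A minimal sketch (the labels here are made up):

```python
import os
import tempfile

def write_metadata(labels, path):
    """Write the one-label-per-line metadata file the TensorBoard
    embedding projector uses to annotate points."""
    with open(path, "w") as f:
        f.write("\n".join(str(label) for label in labels) + "\n")

log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "metadata.tsv")
write_metadata(["cat", "dog", "bird"], path)
# In the projector config you would then set:
#   embedding.metadata_path = "metadata.tsv"
```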

Add colorization to that list. https://twitter.com/colorizebot?lang=en

Cool.
Nice work on BEGAN, by the way - did you try it out on human face data (besides shoes)? There is some debate about replicating the face results, because the authors didn’t use the public celebrity face database, and people are having a hard time replicating their results. I’m curious about your experience.

I did try it on faces and it did work, but due to some bug it gave a NaN error at 50k steps. Even at 50k steps the results are not pretty, though better than most. This is at 64x64 input, so quality could be better.
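A cheap guard against that failure mode is to check the loss for NaN/Inf every step and bail out (or restore the last checkpoint) instead of training on - a generic sketch, not BEGAN-specific, with a hypothetical `check_finite` helper:

```python
import math

def check_finite(loss, step):
    """Fail fast when the loss diverges, rather than letting NaNs
    propagate through the weights for thousands of steps."""
    if not math.isfinite(loss):
        raise RuntimeError(
            f"loss became {loss} at step {step}; "
            "restore the last checkpoint or lower the learning rate"
        )
    return loss

check_finite(0.42, step=1)  # finite loss passes through unchanged
```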

Have you guys had any luck training DenseNet from scratch on a dataset like ImageNet? I’m tempted to use that low-parameter BC variant for a competition instead of a pretrained network like Xception.

I am planning to use DenseNet on the Kaggle diabetic retinopathy dataset, but my GPUs are currently busy with BEGAN training. What GitHub issues are you referring to? I originally wanted to rewrite DenseNet from scratch, as Jeremy suggested it is a good candidate for a first network implementation…

I thought I saw a discussion about difficulty training on ImageNet, but I can’t find it now, so I may have been mistaken.

You could use the pretrained ImageNet weights for the PyTorch DenseNet here and lop off the layers you don’t want. The authors published great results on ImageNet, so I imagine you can replicate them with these weights.

Have you come across any ImageNet-pretrained models for DenseNet-BC-100? (The 800k-parameter one.)

I have not, sorry.

But on a related note, FB just released something that looks very similar to what we were trying to do at our hackathon with style transfer and semantic segmentation.

AR Studio
https://www.facebook.com/fbcameraeffects/home

I’m going to request access.
