Part 2 Lesson 12 wiki

RNNs work better for sequential data than GANs.

There is also WaveNet, which also works great for sequential data generation.

4 Likes

@Interogativ See Project Magenta from Google https://magenta.tensorflow.org/

2 Likes

All of this sounds an awful lot like translating from one language to another and then back to the original. Have GANs or any equivalent been tried for translation?

5 Likes

I was just thinking the same thing. It’s an interesting idea for sure.

How would you implement a GAN for a time-series model? I’m not able to wrap my mind around it.

This is almost exactly what you mentioned:

Unsupervised Machine Translation Using Monolingual Corpora Only

(shameless plug)
my summary of that paper:

12 Likes

Thanks!

Can anyone share the deeplearning.net link for the GIF/interactive diagram Jeremy showed earlier? (Couldn’t find it.)

Could someone briefly explain VAEs compared with GANs?

http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html

2 Likes

That paper looks awesome. Thanks for sharing.

Thanks, I forgot to scroll, lol.

Instead of taking a random vector, does it make sense to load a pretrained vector from another model? I think this is the key part to implementing transfer learning in GANs.

2 Likes

The idea of the random vector is that you can create a random image based on that vector.
So every time you generate a new random vector, you are assured of getting a new image.
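
Here’s a minimal PyTorch sketch of that idea (a DCGAN-style generator with illustrative layer sizes, not the notebook’s actual code):

```python
import torch
import torch.nn as nn

# DCGAN-style generator: maps a random latent vector z to a 64x64 RGB image.
# Every new z sampled from the prior gives a different generated image.
class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # 1x1  -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 8x8  -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

G = Generator()
z = torch.randn(16, 100, 1, 1)   # 16 random latent vectors
fake_images = G(z)               # -> torch.Size([16, 3, 64, 64])
```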

The CycleGAN code that Jeremy’s talking about: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

7 Likes

So that means the weights in the deconvolution filters turn this randomly generated noise into something close to the actual input. Do you think that substituting these weights from another model would allow us to learn faster/better?

The weights of the deconv filters are trained to convert random noise into realistic images.

The problem is that there aren’t really pretrained models that are trained for this.

One thing that might work would be the decoder networks of autoencoders.
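
A rough sketch of what warm-starting from an autoencoder could look like, assuming the autoencoder’s decoder and the GAN generator share exactly the same architecture (purely hypothetical, not something from the lesson):

```python
import torch.nn as nn

def deconv_block(cin, cout):
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 4, 2, 1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(True),
    )

# Decoder half of a convolutional autoencoder, previously trained with a reconstruction loss.
decoder = nn.Sequential(deconv_block(128, 64), deconv_block(64, 32),
                        nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

# Generator with the identical architecture: copy the decoder's weights as a starting point,
# then fine-tune adversarially instead of training from a random init.
generator = nn.Sequential(deconv_block(128, 64), deconv_block(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())
generator.load_state_dict(decoder.state_dict())
```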

2 Likes

Another paper that does Unsupervised Machine Translation using Cycle Consistency Loss: https://arxiv.org/pdf/1710.11041.pdf

5 Likes

WaveNet doesn’t actually use RNNs. RNNs (from what I saw in a WaveNet presentation) tend to max out their “memory” at around 50+ steps (remember we used 70 steps for language data). But WaveNet generates from raw audio samples, which means thousands, or many thousands, of steps. Beating that challenge was part of what made it cool, and they actually use convolutions! You can see more here: https://www.youtube.com/watch?v=YyUXG-BfDbE&t=523s
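
For a rough idea of how that works, here is a tiny sketch of stacked dilated causal convolutions (not the real WaveNet code; the channel counts and depth are just for illustration):

```python
import torch
import torch.nn as nn

# Stack of dilated causal 1-D convolutions: with dilations 1, 2, 4, ..., 512 and
# kernel size 2, ten layers give a receptive field of ~1024 samples, far beyond
# the ~50-70 steps an RNN typically remembers.
class DilatedStack(nn.Module):
    def __init__(self, channels=32, n_layers=10):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(n_layers)
        ])

    def forward(self, x):
        for conv in self.layers:
            # left-pad by the dilation so the convolution stays causal
            # (no output depends on future samples)
            x = conv(nn.functional.pad(x, (conv.dilation[0], 0)))
        return x

net = DilatedStack()
audio = torch.randn(1, 32, 16000)   # one second of toy features at 16 kHz
out = net(audio)                     # same length out; receptive field ~1024 samples
```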

7 Likes

Why isn’t the discriminator trained for longer than the generator in this case, like in WGANs?
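
For context, the WGAN training pattern being referred to looks roughly like this (a bare-bones sketch with toy stand-in models, not the lesson’s code):

```python
import torch
import torch.nn as nn

nz, n_critic, clip_value = 100, 5, 0.01   # WGAN paper defaults: 5 critic steps per generator step, clip at 0.01

# Toy stand-ins so the loop below runs; real models would be conv nets.
netG = nn.Sequential(nn.Linear(nz, 784), nn.Tanh())
critic = nn.Linear(784, 1)
opt_G = torch.optim.RMSprop(netG.parameters(), lr=5e-5)
opt_D = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

for step in range(100):
    real = torch.randn(64, 784)             # stand-in for a batch of real data

    # --- the critic is trained n_critic times per generator step ---
    for _ in range(n_critic):
        z = torch.randn(64, nz)
        loss_D = -(critic(real).mean() - critic(netG(z).detach()).mean())
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
        for p in critic.parameters():        # weight clipping keeps the critic roughly Lipschitz
            p.data.clamp_(-clip_value, clip_value)

    # --- one generator step ---
    z = torch.randn(64, nz)
    loss_G = -critic(netG(z)).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```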