RNNs work better for sequential data than GANs:
There are also WaveNets, which also work well for sequential data generation.
All of this sounds awfully like translating from one language to another and then back to the original. Have GANs or anything equivalent been tried for translation?
I was just thinking the same thing. It’s an interesting idea for sure.
How would you implement a GAN for a time-series model? I’m not able to wrap my mind around it.
almost exactly what you mentioned:
(shameless plug)
my summary on that paper:
Thanks!
Can anyone share the deeplearning.net link for the GIF/interactive diagram Jeremy showed earlier? (Couldn’t find it.)
Could someone briefly explain VAEs compared with GANs?
That paper looks awesome. Thanks for sharing.
Thx I forgot to scroll lol.
Instead of taking a random vector, does it make sense to load a pretrained vector from another model? I think this is the key part to implementing transfer learning in GANs.
The idea of the random vector is that you can create a random image based on it.
So every time you generate a new random vector, you’re assured of getting a new image.
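To make the random-vector idea concrete, here’s a minimal sketch (my own toy architecture, not the course code) of a DCGAN-style generator: transposed convolutions progressively upsample a latent vector into an image, so each new vector yields a new image.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy generator: maps a latent vector to a 3x32x32 image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # treat the latent vector as a 1x1 "image" with latent_dim channels
            nn.ConvTranspose2d(latent_dim, 128, kernel_size=4, stride=1),     # -> 4x4
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # -> 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),    # -> 32x32
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = TinyGenerator()
z = torch.randn(2, 100)  # two different random vectors
imgs = g(z)
print(imgs.shape)        # torch.Size([2, 3, 32, 32])
```

Each random `z` deterministically maps through the (trained) deconv weights to a different image, which is what the transfer-learning question above is poking at: those weights are what you’d hope to reuse.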
cgan code that Jeremy’s talking about: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
So that means the weights in the deconvolution filter make this randomly generated noise close to the actual input. Do you think substituting these weights from another model would let us learn faster/better?
The weights of the deconv filter are trained to convert random noise into realistic images.
The problem is that there aren’t really other models that train on this task.
One thing that might work is the decoder network of an autoencoder.
Another paper that does Unsupervised Machine Translation using Cycle Consistency Loss: https://arxiv.org/pdf/1710.11041.pdf
WaveNet doesn’t actually use RNNs. RNNs (from what I saw in a WaveNet presentation) tend to max out their “memory” around 50+ steps (remember we used 70 steps for language data). But WaveNet generates from raw audio samples, which means thousands, or many thousands, of steps. Beating this challenge was part of what made it cool. And they actually use convolutions! You can see more here: https://www.youtube.com/watch?v=YyUXG-BfDbE&t=523s
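The trick that lets convolutions cover thousands of steps is dilation: each layer doubles the gaps between the taps of its filter, so the receptive field grows exponentially with depth. A rough sketch (toy channel counts, not DeepMind’s actual configuration) of a dilated causal stack:

```python
import torch
import torch.nn as nn

class DilatedCausalStack(nn.Module):
    """Stack of dilated causal 1-D convolutions, WaveNet-style.

    With kernel size 2 and dilations 1, 2, 4, ..., 512, ten layers
    already see 1024 past samples -- far beyond a typical RNN's
    effective memory.
    """
    def __init__(self, channels=16, layers=10):
        super().__init__()
        self.convs = nn.ModuleList()
        self.receptive_field = 1
        for i in range(layers):
            d = 2 ** i
            self.convs.append(nn.Sequential(
                nn.ConstantPad1d((d, 0), 0.0),  # left-pad: causal, no peeking ahead
                nn.Conv1d(channels, channels, kernel_size=2, dilation=d),
                nn.ReLU(inplace=True),
            ))
            self.receptive_field += d

    def forward(self, x):  # x: (batch, channels, time)
        for conv in self.convs:
            x = conv(x)
        return x

stack = DilatedCausalStack()
print(stack.receptive_field)      # 1024 samples of context
audio = torch.randn(1, 16, 2000)  # stand-in for waveform features
out = stack(audio)
print(out.shape)                  # same length as the input: (1, 16, 2000)
```

The causal left-padding keeps each output sample dependent only on past samples, which is what makes the stack usable for autoregressive audio generation.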
Why aren’t the discriminators trained for longer than the generators in this case, like in WGANs?