Why don’t we train a GAN to generate ‘fake’ news in the style of a favorite politician?
To generate fake news you would need the GAN not to identify…
Both `nn.ConvTranspose2d` and `nn.Upsample` seem to do the same thing, i.e. expand the grid size (height and width) from the previous layer. Can we say `nn.ConvTranspose2d` is always better than `nn.Upsample`, since `nn.Upsample` merely resizes and fills in the unknowns with zeros or interpolation?
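One concrete difference: `nn.Upsample` has no learned weights (it's a fixed nearest-neighbor or interpolated resize), while `nn.ConvTranspose2d` learns its kernel. A minimal sketch of the shape arithmetic in plain Python (the kernel/stride/padding values below are just an illustrative "doubling" configuration, not anything from the lesson):

```python
def conv_transpose_out(n, kernel, stride, padding):
    # Output size of nn.ConvTranspose2d (dilation=1, output_padding=0):
    # out = (n - 1) * stride - 2 * padding + kernel
    return (n - 1) * stride - 2 * padding + kernel

def upsample_out(n, scale_factor):
    # nn.Upsample simply multiplies the grid size by scale_factor
    return n * scale_factor

# kernel=4, stride=2, padding=1 doubles the grid, like scale_factor=2
print(conv_transpose_out(7, 4, 2, 1))  # 14
print(upsample_out(7, 2))              # 14
```

So both can produce the same grid size; the transposed conv can additionally learn *what* to put in the new cells, at the cost of possible checkerboard artifacts. `nn.Upsample` followed by a regular convolution is a common alternative.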
Somebody indeed already built a fake news detector: https://towardsdatascience.com/i-trained-fake-news-detection-ai-with-95-accuracy-and-almost-went-crazy-d10589aa57c
Though really, they just built an “Associated Press writing style detector”. That’s all it does, which may or may not be that useful in practice…
Is it “Giff” or “Jiff”? Jeremy has answered “Giff”. Thus endeth the debate.
But then… “tanh” is “thann”?
Shouldn’t we use a sigmoid if we want values between 0 and 1?
Can anyone recommend any papers or blog posts which apply GANs to text generation?
Yes, but when people talk about “the” sigmoid function, they usually mean the logistic sigmoid function specifically: https://en.wikipedia.org/wiki/Logistic_function
@AdrienLE in traditional stats, when I was taught the stuff, they were used interchangeably or for convenience. The logistic is easier to compute than the normal, so it was the default. It also has fatter tails than the normal (kurtosis). tanh… well… it’s the long-lost cousin of the bunch.
I remember reading on reddit that Ian Goodfellow said GANs don’t work well for text. I forget why. Maybe Jeremy can confirm? @rachel
Found a dataset for Fake vs Real News: https://github.com/KaiDMML/FakeNewsNet from researchers at Arizona State University.
It looked like Jeremy uses tanh when he wants outputs in the range −1 to 1, and sigmoid when he wants outputs in the range 0 to 1.
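Those two ranges are easy to check, and the two functions are in fact the same curve up to shifting and scaling. A quick stdlib-only sketch:

```python
import math

def sigmoid(x):
    # logistic sigmoid: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# tanh: output in (-1, 1)
print(math.tanh(2.0))   # ~0.964
print(sigmoid(2.0))     # ~0.881

# tanh is a shifted/scaled logistic: tanh(x) = 2*sigmoid(2x) - 1
x = 1.5
print(math.isclose(math.tanh(x), 2 * sigmoid(2 * x) - 1))  # True
```

So the choice is really about which output range matches your data (e.g. images normalized to −1..1 vs 0..1), not about one function being fundamentally different from the other.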
Is there any reason for using RMSProp specifically as the optimizer as opposed to Adam etc.?
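For anyone unsure what RMSProp actually does: it keeps a moving average of squared gradients and divides each step by its square root; Adam adds a momentum-style average of the gradient itself on top of that. A toy one-parameter sketch (the hyperparameters here are illustrative defaults, not what the lesson used):

```python
import math

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # moving average of squared gradients
    cache = decay * cache + (1 - decay) * grad ** 2
    # scale the step by the RMS of recent gradients
    w = w - lr * grad / (math.sqrt(cache) + eps)
    return w, cache

# Minimize f(w) = w**2 (gradient 2w), starting from w = 2
w, cache = 2.0, 0.0
for _ in range(500):
    w, cache = rmsprop_step(w, 2 * w, cache)
print(abs(w) < 0.1)  # True: w has converged near the minimum at 0
```

The per-coordinate rescaling is why these adaptive optimizers were popular for the notoriously touchy GAN training loops, though which one works best in practice seems largely empirical.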
Is there a link to EM-style algorithms, where we alternately train one thing and then the other?
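The connection is the alternating structure: freeze one set of parameters, optimize the other, then swap, just as GAN training alternates discriminator and generator updates. A toy coordinate-descent sketch of that idea (the objective here is just an illustrative stand-in, not a GAN loss):

```python
def alternate_minimize(steps=100):
    # Minimize f(x, y) = (x - y)**2 + (y - 3)**2 by alternating exact
    # updates, holding one variable fixed at a time -- the same
    # "train one thing, then the other" pattern as EM or GAN training.
    x, y = 0.0, 0.0
    for _ in range(steps):
        x = y                  # argmin over x with y held fixed
        y = (x + 3.0) / 2.0    # argmin over y with x held fixed
    return x, y

x, y = alternate_minimize()
print(round(x, 3), round(y, 3))  # 3.0 3.0 — the joint minimum
```

One caveat: in this toy problem both players minimize the same objective, so alternation converges; in a GAN the two players pull in opposite directions, which is why the training dynamics are so much harder.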
Hi!
What would be a reasonable way of detecting overfitting while training? And how should we evaluate the performance of one of these GAN models once training is done?
In other words, how does the notion of train/val/test sets translate to GANs, and how do we handle them?
Thanks!
I have that question about unsupervised methods in general.