Part 2 Lesson 12 wiki

(Mike Kunz ) #57

Why don’t we create a GAN to create ‘fake’ news in the style of a favorite politician?

(Debashish Panigrahi) #58

To generate fake news you would need the GAN to generate it, not to identify it… :smiley:


Both nn.ConvTranspose2d and nn.Upsample seem to do the same thing, i.e. expand the grid size (height and width) from the previous layer.

Can we say nn.ConvTranspose2d is always better than nn.Upsample, since nn.Upsample merely resizes and fills in the unknown values with zeros or by interpolation?
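A quick way to see the difference is that both double the spatial size, but only the transposed convolution has weights to train. A minimal sketch in PyTorch (layer arguments here are illustrative, not from the lesson notebook):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 4, 4)  # (batch, channels, height, width)

# Learned upsampling: a transposed convolution with its own weights.
deconv = nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1)

# Fixed upsampling: interpolation only, no parameters to train.
upsample = nn.Upsample(scale_factor=2, mode='nearest')

print(deconv(x).shape)    # torch.Size([1, 16, 8, 8])
print(upsample(x).shape)  # torch.Size([1, 16, 8, 8])
print(sum(p.numel() for p in deconv.parameters()))   # > 0, learnable
print(sum(p.numel() for p in upsample.parameters())) # 0, nothing to learn
```

So "always better" is not obvious: the transposed conv can learn its filter, but it is known to produce checkerboard artifacts, which is why `nn.Upsample` followed by a regular `nn.Conv2d` is a common alternative.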

(blake west) #60

Somebody indeed already built a fake news detector:

Though really, they just built an “Associated Press writing style detector”. That’s all it does, which may or may not be that useful in practice…

(Rachel Thomas) #61

You might also be interested in the Fake News Challenge from last year.

(Mike Kunz ) #62

Is it “Giff” or “Jiff”? Jeremy has answered “Giff”. Thus endeth the debate.

(Mike Kunz ) #63

But … "tanh" is "thann"


Shouldn’t we use a sigmoid if we want values between 0 and 1?

(Mike Kunz ) #65

tanh is a sigmoid…

(Keita Broadwater) #66

Can anyone recommend any papers or blog posts which apply GANs to text generation?

(Adrien Lucas Ecoffet) #67

Yes but when people talk about “the” sigmoid function, they usually mean the logistic sigmoid function specifically:
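The formula in question (standard definitions, added here for completeness since the image didn't survive):

```latex
\sigma(x) = \frac{1}{1 + e^{-x}} \in (0, 1),
\qquad
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \in (-1, 1),
\qquad
\tanh(x) = 2\,\sigma(2x) - 1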

(Mike Kunz ) #68

@AdrienLE in traditional stats, when I was taught the stuff, they were used interchangeably, or for convenience. The logistic is easier to compute than the normal, so it was the default. It has fatter tails than the normal (higher kurtosis). tanh… well… it's the long-lost cousin of the bunch.

(Arnav) #69

I remember reading on Reddit that Ian Goodfellow said GANs don’t work well for text. I forget why. Maybe Jeremy can confirm? @rachel

(nirant) #70

Found a dataset for Fake vs. Real News, from researchers at the University of Arizona.

(Kevin Bird) #71

It looked like Jeremy uses tanh when the outputs should lie in -1 to 1, and sigmoid when they should lie in 0 to 1.
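A quick check of both ranges, and of the "tanh is a sigmoid" claim from earlier in the thread, in plain Python (stdlib only, nothing from the notebook):

```python
import math

def logistic(x):
    """Logistic sigmoid: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    s, t = logistic(x), math.tanh(x)
    assert 0.0 < s < 1.0    # sigmoid output lies in (0, 1)
    assert -1.0 < t < 1.0   # tanh output lies in (-1, 1)
    # tanh is a shifted and scaled logistic: tanh(x) = 2*sigmoid(2x) - 1
    assert abs(t - (2 * logistic(2 * x) - 1)) < 1e-12
```

So picking one or the other is just a matter of which output range the layer needs.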

(Adrien Lucas Ecoffet) #72

Is there any reason for using RMSProp specifically as the optimizer as opposed to Adam etc.?
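For what it's worth, the WGAN paper used RMSProp and reported that momentum-based optimizers like Adam made training less stable, which may be the reason here. RMSProp divides each gradient by a running RMS of recent gradients and has no momentum term. A single-scalar sketch of the update rule (my own toy illustration, not the lesson's code; in PyTorch you would just use `torch.optim.RMSprop`):

```python
import math

def rmsprop_step(w, grad, state, lr=0.001, alpha=0.99, eps=1e-8):
    """One RMSProp update for a single scalar weight.

    `state` is the exponential moving average of squared gradients;
    dividing by its square root gives a per-parameter step size,
    without Adam's extra momentum (first-moment) term.
    """
    state = alpha * state + (1 - alpha) * grad ** 2
    w = w - lr * grad / (math.sqrt(state) + eps)
    return w, state

w, state = 1.0, 0.0
for _ in range(3):
    grad = 2 * w  # gradient of the toy loss w**2
    w, state = rmsprop_step(w, grad, state)
print(w)  # slowly stepping toward the minimum at 0
```

In practice both RMSProp and Adam are worth trying; the choice seems mostly empirical.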

(Sneha Nagpaul) #73

Is there a link to EM algorithms where we train one thing and then the other?

(Adrian Galdran) #74


What would be a reasonable way of detecting overfitting while training? And how should we evaluate the performance of one of these GAN models once we are done training?

In other words, how does the notion of train/val/test sets translate to GANs, and how do we handle them?


(Sneha Nagpaul) #75

I have that question about unsupervised methods in general.

(Emil) #76

Are we supposed to get “good” bedrooms? My bedrooms are awful after 2 iterations of the notebook.