Lesson 10 in class

I believe the picture of the person touching a Manta Ray is from Quebec City.

They have an exhibit where you get to touch Manta Rays. :slight_smile:

@davecg I’m interested in Dask but haven’t tried it together with Keras yet. That’s great!

If we can’t train DeViSE ourselves, it seems like the model itself could be used as a pretrained model for a lot of tasks (just like a pretrained VGG). Is such a model publicly available?

Can you please elaborate on the role of the weights? What do they do here exactly? Thanks!

Can you talk about what’s happening in that ‘K.eval’ line? (when we’re creating the GPU style targets) Also, can you talk about the distinction between a ‘Variable’ and a ‘Tensor’ in Keras?

@karthik_k314 We are pulling the activations from 3 or 4 different layers, and weighting how important each layer is
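A minimal numpy sketch of that idea (the activation arrays and per-layer weights below are hypothetical, just to show how the weighting combines the layers):

```python
import numpy as np

def weighted_layer_loss(target_acts, gen_acts, weights):
    """Sum per-layer mean-squared losses, weighting how important each layer is.

    target_acts / gen_acts: lists of activation arrays, one per layer.
    weights: one scalar importance weight per layer.
    """
    total = 0.0
    for t, g, w in zip(target_acts, gen_acts, weights):
        total += w * np.mean((t - g) ** 2)
    return total

# Hypothetical activations from 3 layers of different sizes
rng = np.random.default_rng(0)
t_acts = [rng.normal(size=(8, 8, 64)),
          rng.normal(size=(4, 4, 128)),
          rng.normal(size=(2, 2, 256))]
g_acts = [a + 0.1 for a in t_acts]  # generated activations, offset by 0.1

loss = weighted_layer_loss(t_acts, g_acts, [0.3, 0.5, 0.2])
```

Raising one layer's weight makes the optimizer match that layer's activations more closely than the others.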

Could this detailed accuracy be because of overfitting?

There seem to be artifacts on the stylized dog image. If these are artifacts, how would you remove them?

Can we see some more examples of Jeremy’s super resolution results? I’m in complete disbelief!


Can we feed the output of super resolution again and again repeatedly to get images of higher and higher resolution?


Can you repeat/rephrase what happens after the image leaves the discriminator? It’s labelled true/false, and then how is the generator updated?

Where could we get the image data used in imagenet_process.ipynb, e.g. path = '/data/jhoward/imagenet/full/', dpath = '/data/jhoward/fast/imagenet/full/'?

I recently saw a paper where the generative network was trained to create both images designed to fool the discriminator but also images that were deliberately bad. Have you seen anything like that and can you comment on it?

When training the generator, the images that enter the discriminator come from the generator. The better the discriminator can tell an image is fake, the higher the loss. This loss is used to update the generator.
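That update can be sketched in PyTorch with the standard non-saturating GAN loss; the tiny G and D below are hypothetical stand-ins, just to show where the loss comes from and which weights move:

```python
import torch
import torch.nn as nn

# Tiny hypothetical generator and discriminator, for illustration only
G = nn.Linear(4, 8)                          # noise -> fake "image"
D = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())
opt_G = torch.optim.SGD(G.parameters(), lr=0.1)
bce = nn.BCELoss()

z = torch.randn(16, 4)
fake = G(z)
pred = D(fake)                               # D's probability that the input is real
# The generator wants D to call its fakes real, so the targets are 1.
# The more confidently D spots the fake (pred near 0), the higher this loss.
loss = bce(pred, torch.ones_like(pred))
opt_G.zero_grad()
loss.backward()                              # gradients flow back through D into G
opt_G.step()                                 # only G's parameters are updated here
```

Note that D's output is needed to compute the loss, but only G's parameters are stepped in this phase; D gets its own separate update on real/fake batches.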


How would you define blocks (e.g. conv blocks, res blocks) in PyTorch using object-oriented programming?
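One common pattern, as a hedged sketch: subclass nn.Module for each block so it can be reused and composed like any other layer (the channel sizes below are arbitrary examples):

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """conv -> batchnorm -> ReLU, bundled as one reusable module."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

class ResBlock(nn.Module):
    """Two conv blocks plus an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.c1 = ConvBlock(ch, ch)
        self.c2 = ConvBlock(ch, ch)

    def forward(self, x):
        return x + self.c2(self.c1(x))

x = torch.randn(2, 16, 8, 8)
y = ResBlock(16)(x)   # residual block preserves the input shape
```

Because each block is an nn.Module, its parameters are registered automatically and it can be dropped into nn.Sequential or a larger model.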


Will the GPU function be a problem if I run the notebook file (PyTorch GAN) on a CPU?

Are WGANs currently being used in production anywhere? In what applications are people using them?


When adding WGAN to any other generator, would you just add a WGAN-style D, or D and G but feeding the other generator output into G? (instead of noise)

What does the E notation in the paper mean?
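If this refers to the blackboard-bold E in the WGAN paper, it usually denotes an expectation, i.e. the average value of the expression over samples from the stated distribution. For example, the critic objective is typically written as

```latex
\max_{D} \; \mathbb{E}_{x \sim P_r}\,[D(x)] \;-\; \mathbb{E}_{z \sim p(z)}\,[D(G(z))]
```

which in practice is just the mean of $D(x)$ over a batch of real images minus the mean of $D(G(z))$ over a batch of generated ones.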
