Lesson 10 wiki

Please add any useful information you come across, or notes you create. :slight_smile:


Supplementary reading for the Wasserstein GAN paper



In utils2.py, I got the error “layer_from_config not found”. Adding `from keras.utils.layer_utils import layer_from_config` at the top of the file fixed it.


I just added the video link to the wiki FYI.

@jeremy The imagenet torrent you provided shows a size of 147.90 GB.
However, the official page lists a different size: 138 GB for the training images.
Is the torrent the same as the training images (tasks 1 and 2) below?

I assume so, but I haven’t tried it (I can’t seem to get torrents working from USF) - if you download it, let me know, and we can compare.

Please, @jeremy, is it possible to use the imagenet-sample-train dataset you provided for Lesson 8 to perform this task?



I downloaded the 138 GB version.

Absolutely! I got pretty good results from that sample :slight_smile: Won’t be as good as the whole thing of course…


OK, thank you @jeremy. I’m glad to know that, since I currently work locally on my computer.
Later I’ll try the whole 147.90 GB of data to see the difference in terms of accuracy.

@jeremy & @rachel - where is the pytorch-tut notebook that’s being used around 1:50:00 in the Lesson 10 video? It doesn’t seem to be in either http://www.platform.ai/part2/lesson1/ or http://www.platform.ai/part2/lesson2/



apologies … posted in wrong place

I made a transcript of the lecture video. The text (other than the questions, since I usually couldn’t understand what the person asking was saying) is shared here:


It seems the lesson 10 notebooks have been lost on the platform: there is no directory http://www.platform.ai/part2/lesson2/ nor, for example, http://www.platform.ai/part2/lesson10/ (only lesson8, lesson9, and lesson11).

UPDATE: Rachel informed me that we continued with the same notebooks from lesson 9, so there aren’t any new notebooks for lesson 10.


I think it would be worth expanding a little on the Curse of Dimensionality point you highlighted in Lesson 10 (around the 48th minute, when justifying the use of cosine rather than Euclidean distance for k-NN).

I was a little confused:

Let’s say that the probability it sits right on the edge is 1/10, then if you go 1 dimension, you’ve got a probability of 1/10 that it’s on the edge in 1 dimension. In 3 dimensions, it’s basically multiplicatively decreasing the probability that that happens. So in a few hundred dimensional space, everything is on the edge.

You probably meant “In 3 dimensions, it’s basically multiplicatively increasing”?

I found a good resource here, which even uses Cats vs Dogs as an example, yay :wink:
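For intuition, here’s a quick simulation I put together (my own sketch, not from the lecture): sample points uniformly in a unit hypercube and count how many land within 0.1 of some face. Each coordinate avoids the edge with probability 0.8, so the probability that all d coordinates do is 0.8**d. That is, the chance of being “interior” decreases multiplicatively with dimension, which is the same as saying the fraction of points on the edge shoots up toward 1.

```python
# Illustrative sketch: in high dimensions, almost every uniformly
# sampled point lies near the boundary of the unit hypercube.
import random

def frac_near_edge(d, n=10_000, eps=0.1, seed=0):
    """Fraction of n uniform points in [0,1]^d with some coordinate
    within eps of 0 or 1 (i.e. the point is 'on the edge')."""
    rng = random.Random(seed)
    near = 0
    for _ in range(n):
        point = [rng.random() for _ in range(d)]
        if any(x < eps or x > 1 - eps for x in point):
            near += 1
    return near / n

for d in (1, 3, 10, 100):
    print(d, frac_near_edge(d))
```

In 1 dimension roughly 20% of points are near an edge, in 3 dimensions about 1 - 0.8**3 ≈ 49%, and by 100 dimensions essentially every point is. This is also why Euclidean distances become less informative there and cosine similarity tends to work better for k-NN.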


Oops! Thanks for clarifying - and great link :slight_smile:

Link for Super Resolution paper:

Image Super-Resolution Using Deep Convolutional Networks

Thanks! You can edit the wiki post directly to add the link :slight_smile: