Beautiful, approachable explanation of distributed representations

I wanted to share with you a paper by Geoffrey Hinton that accounts for much of my early fascination with neural networks.

The paper is beautifully written and very approachable. It introduces the concept of distributed representations and demonstrates it on a small yet rich dataset (just over 100 rows of data!). It then discusses the distributed representations that emerge and some of the ideas they potentially embody. To make this even more interesting… the paper is from 1986!

Learning Distributed Representations of Concepts by Geoffrey E. Hinton


@radek Thanks for sharing. The paper is so old, it looks like it has been scanned in. :wink:


Thanks! Sharing things like this really helps one get into the ‘Whys’ of DL.


This paper led me to another one that looks at Deep Learning in the broader context of making an intelligent machine. While this material may seem tangential now, as this course is mainly concerned with the practical aspects of Deep Learning, I think that going through a work like this will help us obtain a wider perspective on what we ultimately aim to achieve with AI.

Here is the link: Building Machines That Learn and Think Like People.

The paper you cited is the one that introduced the idea of embeddings… and the backpropagation algorithm used to train the model became quite popular as a result of this paper.

Embeddings got their due share when Yoshua Bengio and team applied the idea to English text and learned distributed representations of words.

Tomas Mikolov later made it work a lot better when he created the Word2Vec embeddings.
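The core idea behind all of these is the same: each word gets a dense vector, and meaning is spread across the vector's dimensions, so related words end up close together. A minimal sketch of that idea (the vectors below are hand-picked toy values, not learned embeddings; real Word2Vec vectors are trained on a corpus and typically have 100–300 dimensions):

```python
import numpy as np

# Hand-picked 4-dimensional vectors standing in for learned embeddings.
# Each dimension can be thought of as a latent feature the model would
# discover during training (e.g. royalty, gender, ...).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Because meaning is distributed across dimensions, related words score
# higher similarity than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine(embeddings["king"], embeddings["woman"]))  # relatively low
```

This is what "distributed representation" means in practice: no single dimension encodes a word on its own; the pattern across all dimensions does.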

Hinton interview:

Mikolov interview: