I’m sure Jeremy will get to this in subsequent lectures, but here is the high-level idea:
There is a class of deep nets that consists of two parts: an encoder and a decoder. You take the input and compress it into an intermediate state; the decoder then takes that state and produces the output.
This is quite useful in NLP tasks like language translation, where you take an English input sequence, encode it into an intermediate representation (a tensor), and then decode that into a German or Hindi sequence.
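If it helps, here’s a toy sketch of that encoder/decoder shape in plain PyTorch (the names and sizes are made up for illustration, not taken from the lesson):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_size, hidden_size):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)
        self.rnn = nn.GRU(emb_size, hidden_size, batch_first=True)

    def forward(self, src):
        # src: (batch, seq_len) of source token ids
        _, state = self.rnn(self.emb(src))
        return state  # the compressed intermediate representation

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_size, hidden_size):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)
        self.rnn = nn.GRU(emb_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, state):
        # tgt: (batch, seq_len) of target token ids; state comes from the encoder
        output, _ = self.rnn(self.emb(tgt), state)
        return self.out(output)  # per-step scores over the target vocabulary

# e.g. encode an English batch, then decode toward a German vocabulary
enc = Encoder(vocab_size=10000, emb_size=64, hidden_size=128)
dec = Decoder(vocab_size=12000, emb_size=64, hidden_size=128)
src = torch.randint(0, 10000, (2, 15))   # fake English token ids
tgt = torch.randint(0, 12000, (2, 12))   # fake German token ids
logits = dec(tgt, enc(src))              # (2, 12, 12000)
```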
In the lesson 4 notebook, we see that the IMDb review text is the input and we use an encoder-decoder model that learns to predict words (i.e. write new words given some seed text). We then throw away the decoder and keep only the encoder. This is quite easy in the fastai library.
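With the fastai v1 text API that step looks roughly like the sketch below; the path, filename, and encoder name are placeholders, and the exact calls may differ depending on which version of the library you’re on:

```python
from fastai.text import *

# build a language-model DataBunch from the review texts
# (path and filename are placeholders)
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')

# fine-tune a pretrained language model: it learns to predict the next word
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.fit_one_cycle(1, 1e-2)

# keep only the encoder; the word-prediction head (the decoder) is discarded
learn.save_encoder('ft_enc')
```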
We can then take that learned encoder representation and reuse it for a new task (sentiment analysis) via transfer learning. Transfer learning is very common in computer vision but still quite new in NLP.
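Continuing the sketch above (same placeholder path and the vocab from `data_lm`), reusing the saved encoder for the sentiment task would look roughly like:

```python
# build a classification DataBunch that shares the language model's vocab
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv',
                                       vocab=data_lm.train_ds.vocab)

# a text classifier that loads the fine-tuned encoder, then trains its new head
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ft_enc')
learn.fit_one_cycle(1, 1e-2)
```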
Hope this clarifies.