Better Summarization

An interesting addition to standard seq2seq + attention models for summarization.

The main additions are a pointer mechanism that gives the decoder a learned probability of copying a word directly from the source text (which lets it produce out-of-vocabulary words), and a coverage penalty that minimizes the overlap between the current attention distribution and the sum of all previous attention distributions, to discourage repetition (rough sketch below).
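If it helps to see those two pieces concretely, here is a minimal PyTorch sketch of the copy/generate mixing and the coverage term. The variable names and shapes are my own, not the authors' released TensorFlow code, so treat it as an illustration rather than the reference implementation:

```python
import torch

def final_distribution(p_gen, vocab_dist, attn_dist, src_ids, max_oovs):
    """Mix the generator's softmax over the fixed vocabulary with the copy
    (attention) distribution over the source tokens.

    p_gen:      (batch, 1)        probability of generating from the vocabulary
    vocab_dist: (batch, vocab)    softmax over the fixed vocabulary
    attn_dist:  (batch, src_len)  attention weights over source positions
    src_ids:    (batch, src_len)  long tensor of source token ids; source-side
                                  OOV words get temporary ids vocab..vocab+max_oovs-1
    """
    vocab_dist = p_gen * vocab_dist          # "generate" part
    attn_dist = (1.0 - p_gen) * attn_dist    # "copy" part

    # Extend the vocabulary with slots for source-side OOV words, so an
    # out-of-vocabulary word can still receive probability mass by being copied.
    extra = torch.zeros(vocab_dist.size(0), max_oovs)
    extended = torch.cat([vocab_dist, extra], dim=1)

    # Add each attention weight onto the slot of its corresponding source token.
    return extended.scatter_add(1, src_ids, attn_dist)


def coverage_loss(attn_dist, coverage):
    """Coverage penalty: sum_i min(a_i, c_i), where the coverage vector c is the
    sum of the attention distributions from all previous decoder steps.
    Re-attending to an already-covered position is penalized, reducing repetition."""
    return torch.sum(torch.min(attn_dist, coverage), dim=1)
```

During decoding the coverage vector is just the running sum of past attention distributions (updated with `coverage = coverage + attn_dist` after each step), and the coverage term is added to the usual negative log-likelihood loss with a weighting hyperparameter.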

Looks very promising, and there's a nice write-up (http://www.abigailsee.com/2017/04/16/taming-rnns-for-better-summarization.html) in addition to the paper.

Really cool, thanks for sharing.

I really like the approach, and I think the write-up is one of the clearest I’ve read 🙂

Thanks for sharing!!