SOTA Text Matching, 6 Times Faster

I would like to introduce the ACL 2019 paper "Simple and Effective Text Matching with Richer Alignment Features" (source code: https://github.com/hitvoice/RE2).

The task it tackles is text matching: given a pair of sentences, a model predicts the relationship between them. Entailment classification, paraphrase identification, and answer selection all fall under this paradigm. The paper proposes a general approach for all of these tasks, achieving performance on par with the state of the art with few or no task-specific adaptations, while reducing inference time to less than 15% of that of previous SOTA models.
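To make the shared setup concrete, here is a toy illustration of the pair-to-label format these tasks have in common. The sentences and labels below are made up for exposition and are not taken from any of the paper's datasets:

```python
# Toy illustration of text matching: every example is a pair of sentences
# plus a label describing their relationship. Each task defines its own
# label set; these sentences and labels are invented for exposition.
examples = [
    # Entailment classification (natural language inference)
    ("A man is playing a guitar.", "A person is playing an instrument.", "entailment"),
    # Paraphrase identification
    ("How do I learn Python?", "What is the best way to learn Python?", "paraphrase"),
    # Answer selection (does the candidate sentence answer the question?)
    ("Who wrote Hamlet?", "Hamlet is a tragedy written by Shakespeare.", "relevant"),
]

for text_a, text_b, label in examples:
    print(f"{label:>10}: {text_a!r} <-> {text_b!r}")
```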

Previous competitive methods in this field depend heavily on external syntactic features, carefully designed multi-way matching operations, or dense connections when stacking multiple blocks. The key idea behind this paper is to find a simpler and equally effective way to do the same. It turns out that keeping Residual vectors, initial Embeddings, and Encoder outputs (hence the name RE2) directly available for inter-sequence alignment is enough to reach SOTA, with all other components greatly simplified.
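To make this more concrete, here is a minimal PyTorch sketch of what keeping the block input (initial embeddings plus residual vectors) alongside the fresh encoder output can look like. The module names, dimensions, the linear encoder, and the plain dot-product alignment are my own illustrative assumptions; the actual blocks in the paper use convolutional encoders and a richer fusion step, so treat this as a sketch rather than the paper's implementation:

```python
import torch
import torch.nn as nn


class RE2BlockSketch(nn.Module):
    """Illustrative block: a cheap stand-in for the paper's conv encoder."""

    def __init__(self, embed_dim=300, hidden_dim=150):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(embed_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, embeddings, residual):
        # Block input: initial embeddings concatenated with residual
        # vectors carried over from previous blocks.
        x = torch.cat([embeddings, residual], dim=-1)
        encoded = self.encoder(x)
        # Key point of RE2: the alignment layer sees the block input
        # (embeddings + residual) together with the encoder output, so
        # earlier features stay directly available for alignment.
        return torch.cat([x, encoded], dim=-1)


def align(a, b):
    # Plain dot-product inter-sequence attention: each position in `a`
    # attends over all positions in `b`, and vice versa.
    scores = torch.bmm(a, b.transpose(1, 2))                 # [B, La, Lb]
    a_aligned = torch.bmm(torch.softmax(scores, dim=2), b)   # [B, La, D]
    b_aligned = torch.bmm(torch.softmax(scores, dim=1).transpose(1, 2), a)
    return a_aligned, b_aligned


block = RE2BlockSketch()
emb_a, emb_b = torch.randn(2, 7, 300), torch.randn(2, 9, 300)
res_a, res_b = torch.zeros(2, 7, 150), torch.zeros(2, 9, 150)  # first block
a_aligned, b_aligned = align(block(emb_a, res_a), block(emb_b, res_b))
```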

Here are some personal thoughts. I'm always fascinated by effective methods built on conceptually simple ideas. ULMFiT, by Jeremy Howard and Sebastian Ruder of the fast.ai community, is essentially just a better way of fine-tuning, yet it outperforms a number of much more sophisticated transfer learning methods. DrQA, for machine reading comprehension, is simpler than most other methods on the SQuAD leaderboard, and two years later it remains quite popular. RE2 follows the same philosophy.

If you like this work as well, you are welcome to star the source code, try some variations, or apply it to more text matching datasets! Questions can be sent to the authors (who include me :p), and we are glad to help.

Update: a PyTorch implementation is now available: https://github.com/alibaba-edu/simple-effective-text-matching-pytorch