I’ve been doing some research on the BLEU metric and I just wanted to hear everybody else’s thoughts on it. For anybody interested who hasn’t read about BLEU yet, here is the paper that proposes it as a way to judge machine translation: http://aclweb.org/anthology/P/P02/P02-1040.pdf. A few issues that I have with the BLEU metric:
- It requires a corpus to already be translated by humans.
- It doesn’t work well on a single sentence; it’s designed to be computed across a large corpus.
- It assumes that the closer a candidate is to a human translation, the better it is.
Things I like about BLEU:
- It is a nice paper
- It was first to market, and many later papers use it as their evaluation metric.
- It does a good job of taking the judging of machine translation models out of human hands, which appears to be how evaluation was done before this metric was introduced.
- It is not computationally intensive and seems pretty easy to calculate (I’m still figuring out exactly how, but it doesn’t look that difficult).
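For anyone else still working out the calculation, here is a minimal sketch of what the paper describes: clipped (modified) n-gram precision for n = 1..4, combined by geometric mean, times a brevity penalty. The function name, the tokens-as-lists interface, and returning 0.0 when any precision is zero are my own simplifications; real toolkits apply smoothing and aggregate counts over the whole corpus rather than per sentence.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch: candidate and each reference
    are lists of tokens. Returns a score in [0, 1]."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its maximum count
        # in any single reference ("modified precision").
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if total == 0 or clipped == 0:
            return 0.0  # real implementations smooth instead of zeroing out
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: penalize candidates shorter than the
    # closest-length reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0 (e.g. `bleu("the cat is on the mat".split(), ["the cat is on the mat".split()])`), and any missing or reordered n-grams pull the score down, which is exactly why it only behaves sensibly when averaged over many sentences.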
Questions about BLEU:
- What other alternatives are there to BLEU today?
- Are there any fields that still require a person to validate whether a model’s results are good or not?
Hopefully this is an interesting topic to somebody. I definitely think this was a game-changer of a metric. However, it also seems to be running out of steam: it requires a corpus that already has human translations, and newer papers that work in a shared latent space without direct translations make the metric impossible to calculate accurately. I’m also wondering whether the assumption that a human translation is the best reference will make this metric hard to apply to lesser-known languages, and possibly even to languages that no longer have any translators. Anything from this list would probably cause issues for the metric: http://www.endangeredlanguages.com/.