BLEU

BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine’s output and that of a human: “the closer a machine translation is to a professional human translation, the better it is”.[1] BLEU was one of the first metrics to achieve a high correlation with human judgements of quality,[2][3] and remains one of the most popular.

Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation’s overall quality. Intelligibility and grammatical correctness are not taken into account.
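
The segment scoring rests on modified n-gram precision: each candidate n-gram is credited at most as many times as it appears in any single reference, the clipped counts are pooled over all segments, and a brevity penalty discourages overly short candidates. The sketch below is a minimal Python illustration of this standard formulation (uniform weights over 1- to 4-grams, no smoothing); the function and variable names are illustrative, not a reference implementation.

from collections import Counter
import math

def ngram_counts(tokens, n):
    # Count all n-grams of order n in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references_list, max_n=4):
    # candidates: list of tokenized candidate sentences
    # references_list: list of lists of tokenized reference sentences
    clipped = [0] * max_n   # clipped n-gram matches, per order
    total = [0] * max_n     # candidate n-gram totals, per order
    cand_len = 0
    ref_len = 0
    for cand, refs in zip(candidates, references_list):
        cand_len += len(cand)
        # Effective reference length: the reference length closest to the candidate's.
        ref_len += min((len(r) for r in refs),
                       key=lambda rl: (abs(rl - len(cand)), rl))
        for n in range(1, max_n + 1):
            cand_counts = ngram_counts(cand, n)
            # Clip each n-gram count by its maximum count in any single reference.
            max_ref = Counter()
            for ref in refs:
                for gram, c in ngram_counts(ref, n).items():
                    max_ref[gram] = max(max_ref[gram], c)
            clipped[n - 1] += sum(min(c, max_ref[gram]) for gram, c in cand_counts.items())
            total[n - 1] += sum(cand_counts.values())
    # Without smoothing, BLEU is 0 if any n-gram order has no matches.
    if any(c == 0 for c in clipped) or any(t == 0 for t in total):
        return 0.0
    # Geometric mean of the modified precisions (uniform weights).
    log_precision = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    # Brevity penalty for candidates shorter than the effective reference length.
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / cand_len)
    return bp * math.exp(log_precision)

# Toy example: one candidate scored against two references.
cand = "there is a cat on the mat today".split()
refs = ["the cat is on the mat".split(),
        "there is a cat on the mat".split()]
print(round(corpus_bleu([cand], [refs]), 4))  # about 0.8409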

BLEU is designed to approximate human judgement at a corpus level, and performs badly if used to evaluate the quality of individual sentences.

BLEU’s output is always a number between 0 and 1. This value indicates how similar the candidate and reference texts are, with values closer to 1 representing more similar texts.
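
In the usual formulation (following Papineni et al., 2002), the score is the geometric mean of the modified n-gram precisions p_n scaled by a brevity penalty BP; since each p_n and BP lie in [0, 1], the product cannot exceed 1. In LaTeX notation:

\mathrm{BLEU} = \mathrm{BP} \cdot \exp\left( \sum_{n=1}^{N} w_n \log p_n \right),
\qquad
\mathrm{BP} =
\begin{cases}
1 & \text{if } c > r \\
e^{1 - r/c} & \text{if } c \le r
\end{cases}

where c is the total length of the candidate corpus, r is the effective reference length, and the weights w_n are typically uniform (w_n = 1/N with N = 4).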

Notes

  1. ^ Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.-J. (2002). “BLEU: a Method for Automatic Evaluation of Machine Translation”. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002).
  2. ^ Papineni, K., et al. (2002).
  3. ^ Coughlin, D. (2003). “Correlating Automated and Human Assessments of Machine Translation Quality”. Proceedings of MT Summit IX.
