
Machine translation software usability

The article below gives objective criteria for evaluating the usability of machine translation software output.

Stationarity or Canonical Form

Do repeated translations converge on a single expression in both languages? That is, does the translation method show stationarity, i.e., produce a canonical form? In the English–French example under Semantics preservation below, the translation does become stationary, although the original meaning is lost. This metric has been criticized as not being well correlated with Bilingual Evaluation Understudy (BLEU) scores.[1]
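The stationarity test can be sketched as a fixed-point check: round-trip the text repeatedly and stop when the back-translation no longer changes. The `toy_translate` lookup table below is a hypothetical stand-in for a real MT system, hard-coded with the English–French round trip quoted later in this article.

```python
def find_canonical_form(text, translate, src, tgt, max_iters=10):
    """Round-trip text src -> tgt -> src until the back-translation
    stops changing; such a fixed point is the canonical form."""
    for _ in range(max_iters):
        back = translate(translate(text, src, tgt), tgt, src)
        if back == text:
            return text, True   # stationary: canonical form reached
        text = back
    return text, False          # no convergence within max_iters

# Hypothetical stand-in for a real MT system, reproducing the
# English-French example from the Semantics preservation section.
TABLE = {
    ("Better a day earlier than a day late.", "en", "fr"):
        "Améliorer un jour plus tôt qu'un jour tard.",
    ("Améliorer un jour plus tôt qu'un jour tard.", "fr", "en"):
        "To improve one day earlier than a day late.",
    ("To improve one day earlier than a day late.", "en", "fr"):
        "Pour améliorer un jour plus tôt qu'un jour tard.",
    ("Pour améliorer un jour plus tôt qu'un jour tard.", "fr", "en"):
        "To improve one day earlier than a day late.",
}

def toy_translate(text, src, tgt):
    return TABLE[(text, src, tgt)]

canonical, stationary = find_canonical_form(
    "Better a day earlier than a day late.", toy_translate, "en", "fr")
# stationary is True, but the canonical form has drifted from the
# original meaning: "To improve one day earlier than a day late."
```

Note that stationarity alone says nothing about meaning: here a fixed point is reached even though "Better" has become "To improve".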

Well-formed output

Is the output grammatical or well-formed in the target language? Using an interlingua should be helpful in this regard, because with a fixed interlingua one should be able to write a grammatical mapping from the interlingua to the target language. Consider the following Arabic language input and English language translation result from the Google translator as of 27 December 2006 [1]. This Google translator output does not parse under any reasonable English grammar:

وعن حوادث التدافع عند شعيرة رمي الجمرات -التي كثيرا ما يسقط فيها العديد من الضحايا- أشار الأمير نايف إلى إدخال “تحسينات كثيرة في جسر الجمرات ستمنع بإذن الله حدوث أي تزاحم”. ==> And incidents at the push Carbuncles-throwing ritual, which often fall where many of the victims – Prince Nayef pointed to the introduction of “many improvements in bridge Carbuncles God would stop the occurrence of any competing.”
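A rough well-formedness check can be automated by attempting to parse the output with a target-language grammar and flagging sentences that yield no parse tree. The sketch below uses CYK parsing over a deliberately tiny toy grammar in Chomsky normal form; the grammar is an illustrative assumption, not a realistic English grammar.

```python
# Lexical rules: word -> possible nonterminals (toy grammar, for illustration)
LEX = {
    "the": {"Det"},
    "victims": {"N"},
    "bridge": {"N"},
    "fall": {"V", "VP"},
}
# Binary rules in Chomsky normal form: (B, C) -> A
BIN = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}

def parses(sentence):
    """Return True if the sentence derives the start symbol S (CYK)."""
    words = sentence.lower().split()
    n = len(words)
    # table[i][j] = nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(LEX.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for b in table[i][k]:
                    for c in table[k + 1][j]:
                        a = BIN.get((b, c))
                        if a:
                            table[i][j].add(a)
    return "S" in table[0][n - 1]

assert parses("the victims fall")          # grammatical under the toy grammar
assert not parses("fall the victims the")  # word salad: no parse tree
```

A production-quality check would substitute a broad-coverage parser for the toy grammar, but the decision rule is the same: no parse, not well-formed.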

Semantics preservation

Do repeated re-translations preserve the semantics of the original sentence? For example, consider the following English input passed multiple times into and out of French using the Google translator as of 27 December 2006:

Better a day earlier than a day late. ==> Améliorer un jour plus tôt qu’un jour tard. ==> To improve one day earlier than a day late. ==> Pour améliorer un jour plus tôt qu’un jour tard. ==> To improve one day earlier than a day late.
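One way to quantify the semantic drift in the example above is clipped unigram precision, the 1-gram core of the BLEU metric (Papineni et al., 2002, listed under References). This is a minimal sketch, not the full BLEU computation, which combines n-grams up to length 4 with a brevity penalty.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Fraction of candidate tokens that appear in the reference,
    with each reference token usable at most as often as it occurs
    (the 'clipping' rule from BLEU)."""
    cand = candidate.lower().rstrip(".").split()
    ref = Counter(reference.lower().rstrip(".").split())
    matched = sum(min(count, ref[word])
                  for word, count in Counter(cand).items())
    return matched / len(cand)

original = "Better a day earlier than a day late."
round_trip = "To improve one day earlier than a day late."
score = unigram_precision(round_trip, original)  # 6 of 9 tokens match: 6/9
```

The score of 6/9 reflects that "to", "improve", and "one" have no counterpart in the original, capturing numerically what a human reader sees at a glance: the round trip preserved the second half of the sentence but mangled the first.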

Notes

  1. ^ Somers, H. (2005) “Round-trip Translation: What Is It Good For?”

References

  • Giménez, Jesús and Enrique Amigó. (2005) IQmt: A framework for machine translation evaluation.
  • NIST. Annual machine translation system evaluations and evaluation plan.
  • Papineni, Kishore, Salim Roukos, Todd Ward and Wei-Jing Zhu. (2002) BLEU: A method for automatic evaluation of machine translation. Proc. 40th Annual Meeting of the ACL, July 2002, pp. 311–318.

This guide is licensed under the GNU Free Documentation License. It uses material from Wikipedia.
