Bilingual Evaluation Understudy (BLEU)


Artificial Intelligence Accomplishment | 2002

IBM researchers: Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu

Where the work was done: IBM T.J. Watson Research Center

What we accomplished: Human evaluations of machine translation can take months to finish and involve human labor that cannot be reused. IBM researchers proposed a method of automatic machine translation evaluation that is quick, inexpensive, language-independent, correlates highly with human evaluation, and has little marginal cost per run. The research paper, cited below, garnered more than 5,200 Google Scholar citations within 15 years.
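BLEU scores a candidate translation by its modified (clipped) n-gram precision against one or more reference translations, combines the precisions for n = 1..4 with a geometric mean, and multiplies by a brevity penalty so that overly short candidates do not win. A minimal sentence-level sketch of that idea (the paper aggregates statistics at the corpus level; function names here are illustrative):

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch: clipped n-gram precision,
    geometric mean over orders 1..max_n, brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        if not cand:
            return 0.0
        # Clip each candidate n-gram count by its maximum count
        # in any single reference ("modified precision").
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
        precisions.append(clipped / sum(cand.values()))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty against the reference length closest to the candidate's.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An exact match to a reference scores 1.0, while a candidate that shares no higher-order n-grams with any reference scores 0.0; real scores fall in between.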

Related links: Wikipedia; BLEU: A Method for Automatic Evaluation of Machine Translation, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July 2002, pp. 311–318.
