Translation Edit Rate plus (TERp)
Bonnie Dorr and her students, Matthew Snover and Nitin Madnani (in collaboration with Rich Schwartz of BBN Technologies), participated in the first NIST MetricsMATR workshop, which evaluated and compared automatic machine translation evaluation metrics. Their submission, TERp (Translation Edit Rate plus), was noted for its ability to predict the quality of a translation automatically.
TERp was one of the top-performing metrics at the workshop. It had the highest Pearson correlation with human judgments in 9 of the 45 test conditions -- more than any other metric. In addition, in 33 of the 45 test conditions, TERp was statistically indistinguishable from the top metric -- again more than any other. Overall, TERp was consistently among the best-performing metrics in the workshop.
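The Pearson correlation used in the evaluation measures how closely a metric's scores track human judgments. The sketch below illustrates the idea with made-up numbers (the scores and judgments are hypothetical, not workshop data): since TERp is an edit rate, lower scores mean better translations, so a good metric shows a strong *negative* correlation with human adequacy ratings.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical segment-level scores: TERp edit rates (lower = better)
# paired with human adequacy judgments (higher = better).
terp_scores = [0.25, 0.40, 0.10, 0.55, 0.30]
human_judgments = [4.5, 3.0, 5.0, 2.0, 4.0]

r = pearson(terp_scores, human_judgments)
print(f"correlation: {r:.3f}")  # strongly negative for this toy data
```

In a workshop-style evaluation, the metric whose scores yield the correlation of greatest magnitude with the human judgments across a test condition is the best performer for that condition.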