PET is a stand-alone, open-source (LGPL-licensed) tool written in Java that helps you post-edit and assess machine or human translations while gathering detailed statistics about post-editing time, among other effort indicators.
Tool, documentation and examples:
Source code on GitHub
If you are interested in evaluating translations through post-editing, this is an easy and cheap solution: to set up an experiment, you only need to provide source and translation segments (from one or more MT systems; the tool does not depend on any particular MT system). Translators then post-edit the translations while implicit quality indicators such as post-editing time, keystrokes, edit operations, and edit distance, among others, are stored for each segment. Explicit quality assessments can also be collected, and monolingual and bilingual dictionaries can be provided.
The tool also supports monolingual revision, can display reference translations, can render HTML for special markup, and allows constraints to be set for jobs on a per-segment basis (for example, a maximum time or length for a given post-edited segment).
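To give a concrete idea of the input side of an experiment, here is a minimal sketch of how one might generate a job file from parallel source/translation segments. The element names used here (`job`, `unit`, `S`, `MT`) are illustrative assumptions, not PET's documented schema; consult the tool's documentation and examples for the exact format.

```python
# Hypothetical sketch: build a post-editing job file from parallel segments.
# The element names (job/unit/S/MT) are illustrative assumptions, not PET's
# documented schema -- see the PET documentation for the real input format.
import xml.etree.ElementTree as ET

def build_job(pairs, job_id="demo"):
    """pairs: list of (source, machine_translation) segment tuples."""
    job = ET.Element("job", id=job_id)
    for i, (src, mt) in enumerate(pairs, start=1):
        unit = ET.SubElement(job, "unit", id=str(i))
        ET.SubElement(unit, "S").text = src   # source segment (shown read-only)
        ET.SubElement(unit, "MT").text = mt   # translation to be post-edited
    return ET.tostring(job, encoding="unicode")

xml = build_job([("Bonjour le monde.", "Hello the world.")])
```

The point of the sketch is only that an experiment reduces to pairing each source segment with the translation(s) to be post-edited; the tool then records the effort indicators per unit as the translator works.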
Our plan is to maintain and further develop the tool, so if you have comments or suggestions on how to improve it, or ideas for interesting experiments, let us know!
Wilker Aziz (University of Wolverhampton)
Lucia Specia (University of Sheffield)
Aziz, W.; Sousa, S. C. M.; Specia, L. (2012). PET: a Tool for Post-editing and Assessing Machine Translation. In The Eighth International Conference on Language Resources and Evaluation, LREC '12, Istanbul, Turkey, May 2012. (pdf, bibtex)
Aziz, W.; Koponen, M.; Specia, L. (2014). Sub-sentence Level Analysis of Machine Translation Post-editing Effort. In Post-editing of Machine Translation: Processes and Applications, Chapter 8.
Aziz, W.; Mitkov, R.; Specia, L. (2013). Ranking Machine Translation Systems via Post-Editing. In Proceedings of Text, Speech and Dialogue (TSD 2013). Lecture Notes in Computer Science, pages 410-418, Pilsen, Czech Republic. Springer Verlag.
Koponen, M.; Aziz, W.; Ramos, L.; Specia, L. (2012). Post-editing Time as a Measure of Cognitive Effort. In the AMTA 2012 Workshop on Post-Editing Technology and Practice (WPTP 2012), pages 11-20, San Diego, USA.
Aziz, W.; Specia, L. (2012). PET: a Tool for Post-editing and Assessing Machine Translation. In The 16th Annual Conference of the European Association for Machine Translation, EAMT ’12, page 99, Trento, Italy.
Vieira, L.; Specia, L. (2011). A Review of Translation Tools from a Post-editing Perspective. In 3rd Joint EM+/CNGL Workshop Bringing MT to the User: Research Meets Translators, JEC-2011, Luxembourg.
Sousa, S. C. M.; Aziz, W.; Specia, L. (2011). Assessing the Post-editing Effort for Automatic and Semi-automatic Translations of DVD Subtitles. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 79-103, Hissar, Bulgaria.
Specia, L. (2011). Exploiting Objective Annotations for Measuring Translation Post-editing Effort. In 15th Conference of the European Association for Machine Translation, EAMT 2011, pages 73-80, Leuven, Belgium.