TITLE:
Quality and Machine Translation: An Evaluation of Online Machine Translation of English into Arabic Texts
AUTHORS:
Mohammed Abdulmalik Ali
KEYWORDS:
Machine Translation, Quality of Machine Translation, Online Translator, Translation Errors
JOURNAL NAME:
Open Journal of Modern Linguistics, Vol. 10 No. 5, October 15, 2020
ABSTRACT: This study compares the translation outputs of an English-into-Arabic text produced by three machine translators: Google Translate, Microsoft Bing, and Ginger. To carry out this evaluation of the machine translation (MT) outputs, an English text and its Arabic counterpart were selected from the UN records. The English source text was segmented into 84 semantic chunks. Based on the Arabic counterpart model text, each chunk was rated as “correct” or “incorrect” on two translation attributes: fidelity and intelligibility. For the quantitative description of the evaluation process, the numbers of fidelity and intelligibility errors and their percentages were calculated. The results revealed that none of the three translated versions of the source text was perfectly translated. Microsoft Bing’s translation was rated the best, whereas Google’s was found the least accurate owing to the high percentage of fidelity and intelligibility errors detected in its output. Ginger’s translation was slightly less accurate than Microsoft Bing’s but remarkably better than Google’s. The findings imply that these MT applications can be used for English-into-Arabic translation to obtain the broad gist of a source text, but a deep and thorough post-editing process seems essential for a full and accurate understanding of an English-into-Arabic MT output. The study recommends that further research continue to assess the quality of MT, which will further highlight its weaknesses and the strategies that should be adopted to overcome them.