Human Evaluation of Yorùbá-English Google Translation

Date

2016

Publisher

Creative Commons Attribution-NonCommercial 4.0 International

Abstract

Machine translation is not only the task of translating text from one language into another; its output must also be evaluated so that improvements, particularly in fluency, accuracy and efficiency, can be monitored. However, the only freely available Yorùbá-English machine translation system is Google Translate, whose output has been observed to be grossly inadequate. This paper therefore examines translations produced by Google Translate against human translations of the same sentences, in order to investigate why machine translation applications make errors when translating natural language. Although many automatic evaluation metrics exist for this task, this paper adopts human evaluation, also known as manual evaluation, which is considered more reliable but also more costly. The Ibadan and Akungba Structured Sentence Paradigm is used to evaluate the two translators (Google Translate and a human translator). The translations were sent to twenty human evaluators, of whom eleven responded, and their responses were subjected to statistical analysis. Findings show that human translation fares better in both accuracy and fluency, a difference attributable to the quality and quantity of the data used to train the machine translator. The paper suggests that more data, especially literary texts, should be acquired to train the translator for greater efficiency and fluency.
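
As an illustration of the kind of statistical summary the abstract describes, here is a minimal Python sketch that averages the evaluators' accuracy and fluency ratings for each translator. The 1-5 rating scale and all scores below are hypothetical placeholders, not figures from the paper.

```python
# Minimal sketch (assumed workflow, not the paper's actual analysis):
# aggregate per-evaluator ratings of each translator's output.
from statistics import mean, stdev

# Hypothetical ratings on a 1-5 scale from the eleven responding
# evaluators; the values below are invented for illustration only.
ratings = {
    "google_translate": {"accuracy": [2, 3, 2, 3, 2, 3, 2, 2, 3, 2, 3],
                         "fluency":  [2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2]},
    "human":            {"accuracy": [5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5],
                         "fluency":  [5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5]},
}

# Report the mean and standard deviation per translator and criterion.
for translator, scores in ratings.items():
    for criterion, values in scores.items():
        print(f"{translator:16s} {criterion:8s} "
              f"mean={mean(values):.2f} sd={stdev(values):.2f}")
```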

Description

In: Bodomo, A., Abubakari, H., Issah, S. A., Angsongna, A. (eds.), Journal of West African Languages 43(1), pp. 79-92

Keywords

Machine Translation, Statistical Machine Translation, Google Translate, Human/Manual Evaluation
