ChatGPT’s Self-Correction between Error Marking and Fine-Grained Taxonomy: Is Taxonomy Worth the Effort?
Abstract
This study investigates external feedback strategies and their effect on ChatGPT’s self-correction, with the aim of improving its translation performance. It evaluates the effectiveness of the proposed strategies and identifies which is more effective. To achieve these objectives, the researchers built a test suite of 200 English sentences, which ChatGPT translated into Arabic using a default translation prompt. The translations were manually annotated in two ways: with simple error marking and with a fine-grained error taxonomy. The 60 sentences containing the most errors were then retranslated using self-correction strategies based on each type of feedback. A professional manual evaluation compared the initial translations with the retranslations to assess the effectiveness of each strategy. In particular, the study examines whether the effort spent on error classification yields better translations than error marking alone when prompting ChatGPT to self-correct. The findings are relevant to translators, post-editors, researchers, and developers of MT, and the study offers recommendations for future work.
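To make the two feedback strategies concrete, the sketch below shows how an initial translation and the two kinds of self-correction prompts could be issued programmatically. This is a minimal illustration under stated assumptions: the OpenAI Python client, the model name "gpt-4o", the helper names, and the prompt wording are all assumptions for illustration, not the exact setup or prompts used in the study, which reports working with ChatGPT itself.

```python
# Illustrative sketch of the two self-correction feedback strategies.
# Assumptions (not from the paper): programmatic access via the OpenAI Python
# client, the model name "gpt-4o", and the prompt wording shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def initial_translation(source: str) -> str:
    # Default translation prompt, as in the first pass over the test suite.
    return chat(f"Translate the following English sentence into Arabic:\n{source}")


def retranslate_with_error_marking(source: str, translation: str, marked_spans: list[str]) -> str:
    # Strategy 1: feedback only marks which spans are wrong, with no error categories.
    feedback = "; ".join(marked_spans)
    return chat(
        "The following Arabic translation contains errors in these marked spans: "
        f"{feedback}\nSource: {source}\nTranslation: {translation}\n"
        "Correct the errors and return only the improved Arabic translation."
    )


def retranslate_with_taxonomy(source: str, translation: str, labelled_errors: dict[str, str]) -> str:
    # Strategy 2: feedback labels each erroneous span with a fine-grained error type
    # (e.g., mistranslation, omission, grammar), following the manual annotation.
    feedback = "; ".join(f"'{span}' ({etype})" for span, etype in labelled_errors.items())
    return chat(
        "The following Arabic translation contains the errors listed with their types: "
        f"{feedback}\nSource: {source}\nTranslation: {translation}\n"
        "Correct the errors and return only the improved Arabic translation."
    )
```

Comparing the outputs of the two retranslation helpers against the initial translation mirrors the study's central question: whether the added annotation effort behind the taxonomy-based feedback produces measurably better corrections than plain error marking.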
Keywords: self-correction, feedback, error marking, error taxonomy, ChatGPT.

This work is licensed under a Creative Commons Attribution 4.0 International License.
JSS publishes Open Access articles under the Creative Commons Attribution (CC BY) license. If author(s) submit their article for consideration by JSS, they agree to have the CC BY license applied to their work, which means it may be reused in any form provided that the author(s) and the journal are properly cited. Under this license, author(s) also retain the right to reuse the content of their article provided that they cite JSS.