ChatGPT's Handling of L2 Learners’ Fossilized Errors: A Linguistic Evaluation

Authors

1. Zahratun Nufus, STAI Rasyidiyah Khalidiyah (Rakha) Amuntai
2. Saleman Mashood Warrah, Kwara State University, Malete, Nigeria

Abstract

This study investigates ChatGPT's capacity to address fossilized grammatical errors in English as a Foreign Language (EFL) learners' academic writing. Through a mixed-methods design, a controlled corpus of 500 hypothetical sentences containing persistent error types, such as errors in verb tense, articles, prepositions, and non-idiomatic expressions, was submitted to ChatGPT-4. Quantitative analysis evaluated correction accuracy using standard metrics (precision, recall, F-score), while qualitative content analysis assessed the pedagogical appropriateness and consistency of ChatGPT's feedback. Results showed high accuracy in correcting rule-based structures (e.g., subject-verb agreement), but significantly lower performance on context-sensitive and fossilized errors. While ChatGPT often provided clear corrections, its feedback frequently lacked the explanatory depth, contextual sensitivity, and scaffolding necessary for promoting learner noticing and long-term acquisition. These findings suggest that although ChatGPT can effectively support surface-level proofreading, it cannot fully substitute for human instructors in addressing deeply ingrained L2 errors. The study emphasizes the importance of explainable AI, AI literacy, and hybrid instructional models that combine technological efficiency with pedagogical intentionality. It offers implications for educators, curriculum developers, and AI tool designers seeking to integrate language models into second language acquisition contexts.
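For readers unfamiliar with the evaluation metrics the abstract names, a minimal sketch of how precision, recall, and F-score are typically computed for error correction follows. The counts in the example are hypothetical, not taken from the study's data.

```python
# Precision, recall, and F1 for evaluating an error-correction system.
# Definitions follow the standard information-retrieval formulation.
def precision_recall_f1(true_pos: int, false_pos: int, false_neg: int):
    """Score a system's corrections against a gold standard.

    true_pos:  genuine errors the system correctly corrected
    false_pos: well-formed text the system "corrected" unnecessarily
    false_neg: genuine errors the system left uncorrected
    """
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: 420 correct fixes, 30 spurious edits, 50 missed errors.
p, r, f = precision_recall_f1(420, 30, 50)
print(f"precision={p:.3f} recall={r:.3f} F1={f:.3f}")
```

High precision with lower recall is the typical signature reported for rule-based structures versus fossilized, context-sensitive errors: the system rarely introduces bad edits, but misses errors whose repair depends on context.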

Publication Info

Volume / Issue: Vol. 1, No. 2
Year: 2025
Pages: 21-41
Submitted: 27 June 2025
Published: 01 September 2025


Publication History

Transparent editorial process timeline

Submitted: 27 Jun 2025
Sent to Review: 01 Jul 2025
Review Completed: 02 Jul 2025
Review Completed: 08 Jul 2025
Resubmit: 09 Jul 2025
Review Completed: 16 Jul 2025
Editorial Decision: 16 Jul 2025
Review Completed: 21 Jul 2025
Revisions Required: 21 Jul 2025
Accepted: 04 Aug 2025
Sent to Production: 05 Aug 2025
Published: 01 Sep 2025