Pragmatic Failure in EFL Learners’ Emails and AI Grammar Tools Feedback

Authors

1. Nur Ifadloh — Universitas Lambung Mangkurat
2. Ameen Saliman Abdullahi — Al-Hikmah University, Ilorin, Kwara State
3. Rani Aryanti Rukmana — Universitas Lambung Mangkurat

Abstract

This study investigates the pragmatic failures found in EFL learners’ academic email communication and evaluates the extent to which AI grammar tools can detect and address such failures. Drawing on theories of interlanguage pragmatics and politeness, the research identifies recurring issues in the realization of requests, apologies, and formal politeness—where learners often produce grammatically correct yet pragmatically inappropriate messages. These failures commonly stem from first-language pragmatic transfer and a lack of explicit instruction in target language norms. Adopting a mixed-methods approach, the study analyzed a corpus of 640 elicited emails from 80 EFL university students and assessed feedback from Grammarly, Quillbot, and ChatGPT using comparative qualitative and quantitative analysis. While the tools effectively corrected surface-level errors, they fell short in addressing context-sensitive pragmatic nuances such as indirectness, tone, and formality. The findings underscore the distinction between linguistic and pragmatic competence, highlight the limitations of current AI tools in fostering pragmatic awareness, and emphasize the need for explicit, context-rich instruction. This study contributes to a more integrated understanding of how human expertise and AI technologies can collaboratively support pragmatic development in digital language learning environments.

Publication Info

Volume / Issue: Vol. 1, No. 2
Year: 2025
Pages: 01-20
Submitted: 28 June 2025
Published: 01 September 2025

Original Article

Publication History

Transparent editorial process timeline

Submitted: 28 Jun 2025
Sent to Review: 01 Jul 2025
Review Completed: 03 Jul 2025
Review Completed: 13 Jul 2025
Revisions Required: 13 Jul 2025
Editorial Decision: 15 Jul 2025
Review Completed: 16 Jul 2025
Review Completed: 16 Jul 2025
Revisions Required: 16 Jul 2025
Accepted: 03 Aug 2025
Sent to Production: 04 Aug 2025
Published: 01 Sep 2025