Voices in Transition: EFL Learners’ Interaction with AI Tools to Improve Speaking

Authors

1. Zahratun Nufus, STAI Rasyidiyah Khalidiyah (Rakha) Amuntai
2. Pooveneswaran Nadarajan, Universiti Pendidikan Sultan Idris, Perak, Malaysia

Abstract

This study explores how English as a Foreign Language (EFL) learners experience and make sense of their interactions with Artificial Intelligence (AI) tools to develop speaking proficiency. Using a narrative inquiry approach, data were collected through in-depth interviews and reflective journals from 12 learners who regularly used ChatGPT, ELSA Speak, Duolingo, and MySpeaker Rhetorich. Grounded in Sociocultural Theory and Swain's Output Hypothesis, the analysis examined how AI mediated learners' cognitive and affective engagement within their Zones of Proximal Development. Findings revealed that AI tools created psychologically safe spaces, reduced speaking anxiety, and provided immediate, precise feedback, fostering greater fluency, accuracy, and learner autonomy. Learners valued AI's personalization and accessibility but also noted its limitations in cultural nuance, humor, and emotional depth, positioning AI as a supplement rather than a substitute for human interaction. This study offers qualitative insights into the affective and social dimensions of AI-mediated speaking practice, highlighting strategies for integrating AI into EFL pedagogy to support both linguistic development and emotional readiness for communication.

Publication Info

Volume / Issue: Vol. 1, No. 2
Year: 2025
Pages: 18–39
Submitted: 25 June 2025
Published: 12 November 2025

Original Article


Publication History

Transparent editorial process timeline

Submitted: 25 Jun 2025
Sent to Review: 26 Jun 2025
Review Completed: 30 Jun 2025
Review Completed: 30 Jun 2025
Revisions Required: 02 Jul 2025
Resubmit: 02 Jul 2025
Editorial Decision: 02 Jul 2025
Review Completed: 13 Oct 2025
Review Completed: 14 Oct 2025
Accepted: 15 Oct 2025
Sent to Production: 15 Oct 2025
Published: 12 Nov 2025