The Evolution of Metacognition among EFL Learners in the Digital Intelligence Era

Abstract

The convergence of artificial intelligence (AI), big data analytics, and ubiquitous connectivity defines the Digital Intelligence Era, fundamentally restructuring English as a Foreign Language (EFL) pedagogy. This paradigm shift necessitates a corresponding evolution in learners’ metacognitive strategies. This paper investigates the dynamic interplay between digital affordances and metacognitive development in EFL contexts. Synthesizing metacognitive theory with contemporary computer-assisted language learning (CALL) research, we argue that while core metacognitive components—knowledge of cognition (declarative, procedural, conditional) and regulation of cognition (planning, monitoring, evaluating)—remain foundational, their operationalization undergoes significant transformation. Key catalysts include AI-driven personalization, immersive technologies, predictive analytics, and unprecedented access to authentic linguistic corpora. Empirical evidence reveals emerging trends toward data-informed self-regulation, enhanced strategic adaptability, and algorithm-mediated autonomy, alongside novel challenges in digital literacies and critical algorithm engagement. We propose an “Adaptive-Metacognitive Engagement” framework and discuss implications for curriculum design, teacher development, and future research trajectories.

Share and Cite:

Yang, X.X. (2025) The Evolution of Metacognition among EFL Learners in the Digital Intelligence Era. Open Access Library Journal, 12, 1-1. doi: 10.4236/oalib.1113825.

1. Introduction

The Digital Intelligence Era, characterized by pervasive embedded systems capable of autonomous learning and decision-making [1], transcends earlier technological phases in education. For the estimated 1.5 billion global EFL learners [2], this transformation reconfigures fundamental learning conditions: ubiquitous mobile access dissolves classroom boundaries, adaptive algorithms personalize instruction at scale, and immersive simulations enable experiential language practice previously inaccessible. Within this reconfigured landscape, metacognition—the “cognition about cognition” encompassing awareness and control of learning processes [3]—emerges as the critical determinant of learning efficacy.

Research consistently correlates metacognitive sophistication with EFL achievement across diverse contexts [4]. However, traditional metacognitive models predate contemporary technological affordances. This paper contends that digital intelligence does not merely supplement existing metacognitive practices but fundamentally reconfigures their architecture. We examine three core dimensions of evolution:

  • Cognitive Knowledge Expansion: New declarative understanding of human-algorithmic interaction

  • Regulatory Transformation: Data-mediated planning, AI-augmented monitoring, and tool-assisted control

  • Strategic Adaptation: Emergence of digital navigation literacies and critical algorithm engagement

Through systematic analysis of empirical studies and theoretical advances, this research addresses a critical gap in understanding how EFL learners reconfigure metacognitive competencies in algorithm-saturated environments. Findings inform pedagogical frameworks capable of harnessing digital potential while safeguarding learner agency.

This systematic analysis adhered to a structured review protocol. Database searches in ERIC, LLBA, Scopus, and Web of Science (2010-2024) employed Boolean combinations of key terms: (metacognit* OR self-regulat*) AND (EFL OR English as a Foreign Language) AND (digital OR AI OR algorithm* OR learning analytic* OR VR OR mobile learning). Inclusion criteria prioritized empirical studies (qualitative, quantitative, mixed-methods) and theoretical frameworks published in peer-reviewed journals or books, specifically addressing metacognition in technology-mediated EFL contexts. Initial screening of 1237 abstracts yielded 178 studies meeting criteria for full-text analysis, with backward/forward citation chaining identifying additional relevant sources.
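The Boolean screening described above can be sketched as a simple keyword filter. The regular expressions and sample abstracts below are illustrative stand-ins, not the exact query syntax used in ERIC, Scopus, or the other indexes:

```python
import re

# Illustrative keyword filter mirroring the Boolean search string:
# (metacognit* OR self-regulat*) AND (EFL OR English as a Foreign Language)
# AND (digital OR AI OR algorithm* OR learning analytic* OR VR OR mobile learning)
TERM_GROUPS = [
    r"metacognit|self.regulat",
    r"\bEFL\b|English as a Foreign Language",
    r"digital|\bAI\b|algorithm|learning analytic|\bVR\b|mobile learning",
]

def matches_protocol(abstract: str) -> bool:
    # An abstract passes only if every AND-group has at least one OR-term hit.
    return all(re.search(group, abstract, re.IGNORECASE) for group in TERM_GROUPS)

abstracts = [
    "Metacognitive strategies of EFL learners using mobile learning apps.",
    "Grammar instruction in secondary EFL classrooms.",  # no digital term
]
included = [a for a in abstracts if matches_protocol(a)]  # keeps only the first
```

The AND-of-ORs structure means adding a term group narrows results, while adding a term within a group broadens them, which is why the digital-technology group lists so many alternatives.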

2. Theoretical Foundations

2.1. Metacognitive Architecture in Language Acquisition

Flavell’s foundational model distinguishes knowledge of cognition (awareness) from regulation of cognition (control mechanisms). In EFL contexts, this manifests through the components summarized in Table 1:

Table 1. Traditional metacognitive framework in EFL learning.

Component | Function | EFL Manifestation
--- | --- | ---
Declarative Knowledge | Understanding cognitive capacities | “I need 3 exposures to retain new lexis”
Procedural Knowledge | Strategy execution | Using context clues during reading
Conditional Knowledge | Strategic context discernment | Selecting the formal register for academic writing
Planning | Goal/strategy formulation | Allocating 30 minutes daily for listening
Monitoring | Progress/understanding checks | Self-questioning during lectures
Control | Real-time adjustments | Switching to simpler vocabulary mid-speech
Evaluation | Outcome-strategy analysis | Reviewing error patterns in writing drafts

Declarative Metacognitive Knowledge

Learners develop schemas about their cognitive abilities (“I retain vocabulary better through visual association”), task requirements (“Academic writing demands hierarchical structuring”), and strategy efficacy (“Peer feedback improves my syntactic accuracy”) [5]. Such knowledge enables strategic resource allocation in input-limited environments.

Procedural Metacognitive Knowledge

This operational dimension involves executing context-appropriate strategies: deploying skimming techniques for gist comprehension, using circumlocution during speaking deficits, or applying self-questioning protocols for reading monitoring [6]. EFL learners with rich procedural knowledge demonstrate greater resilience during communication breakdowns.

Conditional Metacognitive Knowledge

Advanced learners exhibit strategic discernment—understanding when dictionary consultation optimizes comprehension versus when it disrupts fluency development, or recognizing how audience (professor vs. peer) dictates grammatical formality levels. This conditional awareness enables precision strategy deployment.

Regulatory Processes

  • Planning: Goal-setting, resource sequencing, and time allocation;

  • Monitoring: Real-time comprehension checks and strategy efficacy assessment;

  • Control: Dynamic adjustments (pacing modification, strategy switching);

  • Evaluation: Post-task analysis of outcomes against objectives [7].

2.2. Self-Regulated Learning (SRL) and EFL Success

Metacognition constitutes the cognitive core of Zimmerman’s SRL model, where learners proactively control motivational, behavioral, and environmental aspects of learning. EFL research confirms SRL’s predictive power: highly self-regulated learners demonstrate:

  • 32% greater vocabulary retention

  • 41% higher oral proficiency gains

  • Significantly reduced attrition rates [8]

The mediation effect occurs through strategic persistence during plateaus, systematic error analysis, and autonomous resource curation: capacities that become increasingly critical within complex digital ecologies.

3. Digital Intelligence: Contextualizing Transformation

3.1. Defining Technological Affordances

Artificial Intelligence & Machine Learning

Natural Language Processing (NLP) enables real-time grammatical error detection (e.g., Grammarly), discourse analysis (e.g., Writefull), and conversational agents (e.g., Duolingo bots) with increasing semantic sophistication. Adaptive algorithms in platforms like Busuu or Mondly dynamically restructure learning pathways based on performance analytics [9].

Learning Analytics

Dashboard visualization of behavioral data (time-on-task, error frequency, engagement metrics) provides unprecedented metacognitive mirrors. Platforms like Knewton or Century Tech generate predictive models identifying at-risk learners weeks before human detection [10].

Immersive Technologies

VR environments (e.g., ImmerseMe, Mondly VR) simulate high-stakes interactions (job interviews, medical consultations) with physiological response tracking. AR applications (e.g., Word Lens) overlay contextual translations onto physical environments, creating hybrid learning spaces.

Ubiquitous Connectivity

Cloud-based resources (corpora, MOOCs, social learning networks) enable 24/7 access to authentic input. Mobile-assisted language learning (MALL) apps facilitate micro-learning episodes during interstitial time [11].

3.2. Empirical Evidence of Metacognitive Shifts

Quantitative Studies

  • 72% of learners (n = 215) using AI writing tools report heightened grammatical awareness but reduced planning investment.

  • VR immersion correlates with a 28% increase in strategic repetition during speaking practice (n = 98 Korean EFL learners using ImmerseMe).

  • Analytics dashboard users (n = 342) demonstrate 40% more accurate self-assessment than control groups [12].

Qualitative Findings

  • Emergence of “algorithm literacy”: learners develop theories about platform personalization mechanics (“It gives me more articles about tech because I clicked on one”).

  • New debugging rituals: “When the chatbot misunderstands, I simplify sentences or use synonyms.”

  • Data-driven goal refinement: “My dashboard showed listening as weakest, so I doubled podcast time.”

4. Metacognitive Knowledge Reconfiguration

4.1. Declarative Knowledge: The Augmented Self

Digital environments necessitate an expanded self-concept incorporating algorithmic identity: an understanding of how systems construct learner profiles through data (see Table 2). This includes:

Table 2. Evolving declarative knowledge requirements.

Traditional Focus | Digital Expansion | Pedagogical Implication
--- | --- | ---
Cognitive strengths/weaknesses | Algorithm interaction patterns | Teaching platform mechanics interpretation
Task demands understanding | Data literacy (metrics, visualizations) | Dashboard analysis workshops
Strategy efficacy beliefs | Tool-specific strategy potentials/limits | Comparative tool evaluation exercises

  • Recognizing personalization mechanisms (“The system adjusts difficulty after three correct answers”)

  • Interpreting data representations (“‘Fluency Score 3.5’ means moderate hesitation”)

  • Awareness of AI limitations (“The tutor can’t evaluate cultural appropriateness”) [13]

4.2. Procedural Knowledge: Strategic Adaptation in Digital Contexts

The digital landscape necessitates substantial reconfiguration of procedural metacognitive knowledge, demanding new competencies beyond traditional strategy implementation. EFL learners now develop specialized interaction protocols when engaging with artificial intelligence systems. These include formulating precise prompts to elicit optimal explanations from tutorial agents, such as requesting contextualized examples of grammatical structures within specific communicative scenarios. Learners must also decipher increasingly sophisticated feedback mechanisms, interpreting visual indicators like pronunciation spectrograms or error heatmaps that provide technical diagnostics of linguistic performance. Furthermore, digital environments require mastery in sequencing multiple tools strategically; a learner might first deploy machine translation for global comprehension, then consult a collocation dictionary for precision, before finally utilizing a paraphrasing tool to experiment with syntactic variations. Such tool orchestration represents a significant expansion of procedural knowledge [14].

Concurrently, navigating the vast information ecosystems of digital learning demands refined information literacy procedures. Learners must execute advanced search operations using Boolean operators and domain-specific filters to locate appropriate linguistic resources. More critically, they develop lateral reading techniques to validate source credibility—a process involving rapid cross-referencing across multiple platforms to verify information authenticity before integration into language practice. Collaborative annotation platforms further introduce new procedural dimensions, requiring learners to master digital markup systems that facilitate collective knowledge construction through shared marginalia and threaded discussions. These competencies transform how learners access, evaluate, and build upon linguistic input in digitally mediated environments [15] [16].

Immersive learning contexts introduce additional strategic requirements. Effective engagement with virtual reality scenarios necessitates pre-experience planning rituals, such as defining specific interaction goals before entering simulated environments. During immersion, learners develop real-time conversational repair strategies adapted to artificial interlocutors, including explicit clarification requests and confirmation checks. Post-simulation reflection protocols then enable learners to extract metacognitive insights from recorded interactions, analyzing strategic successes and failures within safe yet authentic-feeling contexts. This cyclical process of preparation, execution, and analysis constitutes a new form of procedural knowledge unique to digitally enhanced language learning [17].

4.3. Conditional Knowledge: Discernment in Algorithmic Environments

Conditional metacognitive knowledge undergoes perhaps the most profound transformation, requiring sophisticated judgment frameworks for algorithm-mediated autonomy, defined here as learners’ self-directed management of learning pathways that are facilitated by, yet critically evaluated within, adaptive algorithmic systems (e.g., selectively adopting platform-suggested exercises while overriding others based on self-assessment). EFL learners must constantly evaluate tool selection criteria, determining when digital assistance enhances versus undermines language development. This involves recognizing that while machine translation provides efficient comprehension support, its habitual use can impede vocabulary acquisition mechanisms, making it suitable for transactional tasks but counterproductive for learning objectives. Similarly, learners develop nuanced policies regarding writing assistance tools, perhaps permitting grammar checkers for informal correspondence while prohibiting their use during formative assessment activities to ensure authentic skill demonstration [18].

A critical dimension of contemporary conditional knowledge involves calibrating trust in algorithmic outputs. Learners cultivate healthy skepticism toward automated systems, recognizing that while vocabulary suggestions may be mechanically accurate, they might carry cultural connotations or stylistic registers inappropriate for specific communicative contexts. This discernment requires understanding the statistical foundations of natural language processing systems, acknowledging their fundamental differences from human intentionality. Learners must further recognize how training data biases can manifest in system recommendations, such as gender stereotypes embedded in machine translation outputs or cultural insensitivities in automatically generated feedback [19].

Digital environments also demand new attention management policies. Learners develop personal protocols for minimizing cognitive fragmentation, such as disabling non-essential notifications during intensive reading tasks or scheduling technology-free composition periods to foster deeper cognitive engagement. These decisions represent conditional judgments about when digital augmentation serves learning objectives versus when it creates distracting cognitive overhead. The capacity to self-regulate technological immersion has emerged as a crucial component of metacognitive sophistication in technology-rich language learning contexts [20].

5. Regulatory Process Transformation

5.1. Planning: Data-Informed Strategy Formulation

Digital intelligence introduces quantitative precision to the planning process through learning analytics. EFL learners increasingly establish performance benchmarks derived from dashboard metrics, converting abstract goals like “improve speaking fluency” into specific targets such as “increase speech rhythm consistency score from 2.8 to 3.2 on the platform’s assessment scale.” Algorithmic time predictions further inform planning, with learners allocating sessions based on system-estimated completion durations for adaptive learning modules. However, this data-driven approach necessitates counterbalancing mechanisms to prevent over-reliance on algorithmic guidance. Strategic learners maintain intentionality by periodically reviewing whether platform recommendations align with personally meaningful language development objectives, thus preserving learner agency within personalized learning ecosystems.

5.2. Monitoring: Augmented Awareness Mechanisms

The monitoring function is transformed by multimodal monitoring synthesis: the integrative process of cross-referencing internal self-perception with diverse streams of external, technology-generated performance data. Continuous data streams provide external validation of subjective perceptions, and learners increasingly triangulate internal assessments of linguistic performance with multiple technological indicators. Pronunciation monitoring, for instance, now involves comparing pitch contours against native speaker models, while reading comprehension checks utilize eye-tracking informed fixation metrics that reveal unconscious processing difficulties. Writing development benefits from lexical sophistication indices that quantify vocabulary growth beyond superficial error counts. This multi-source monitoring approach enhances accuracy but requires learners to develop interpretive frameworks for reconciling potential discrepancies between self-perception and algorithmic assessment [21].
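The reconciliation of self-perception with algorithmic assessment described above can be sketched as a small routine. The metric names (pronunciation_pitch_match, fluency_score), the shared 1-5 scale, and the discrepancy threshold are illustrative assumptions, not features of any particular platform:

```python
# Sketch of multimodal monitoring synthesis: reconciling a learner's
# self-rating with tool-generated scores on an assumed shared 1-5 scale.

def triangulate(self_rating: float, tool_scores: dict[str, float],
                threshold: float = 1.0) -> dict:
    """Flag skills where self-perception diverges from algorithmic assessment."""
    tool_mean = sum(tool_scores.values()) / len(tool_scores)
    gap = self_rating - tool_mean
    return {
        "tool_mean": round(tool_mean, 2),
        "gap": round(gap, 2),
        # A large positive gap suggests overconfidence; a negative one,
        # underconfidence relative to the tools' evidence.
        "discrepancy": abs(gap) >= threshold,
    }

report = triangulate(
    self_rating=4.5,
    tool_scores={"pronunciation_pitch_match": 2.8, "fluency_score": 3.2},
)
```

Here the self-rating of 4.5 exceeds the tool mean of 3.0 by 1.5, so the routine flags a discrepancy worth the interpretive attention the section describes.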

5.3. Control: Strategic Tool Intervention

Real-time control mechanisms evolve into sophisticated tool engagement protocols. Learners face constant micro-decisions regarding assistance utilization: whether to activate in-text translation for an unfamiliar lexical item or persist with contextual inference strategies; whether to accept an automated grammar suggestion immediately or consult pedagogical explanations first; whether to repeat a virtual reality scenario after perceived failure or progress despite imperfections. Effective control balances efficiency with cognitive engagement, strategically deploying tools to overcome obstacles without undermining the productive struggle essential for language development. Research indicates that maintaining approximately 70% tool-independent effort optimizes learning outcomes while preserving autonomous problem-solving capabilities [22].
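The ~70% tool-independent benchmark cited above can be operationalized as a simple session log, sketched here under the assumption that each task attempt is recorded as either unaided ("solo") or tool-assisted ("tool"):

```python
# Sketch of a control-phase log tracking the share of tool-independent effort
# against the ~70% benchmark cited above. The event format is an assumption.

TARGET_INDEPENDENT_RATIO = 0.70

def independent_ratio(events: list[str]) -> float:
    """events: 'solo' for unaided attempts, 'tool' for assisted ones."""
    return events.count("solo") / len(events)

def should_persist_unaided(events: list[str]) -> bool:
    # Suggest skipping the tool when assisted use has eroded the target ratio.
    return independent_ratio(events) < TARGET_INDEPENDENT_RATIO

session = ["solo", "solo", "tool", "solo", "tool"]  # 3/5 = 60% unaided
```

At 60% unaided effort this session falls below the benchmark, so the next micro-decision would favor contextual inference over activating the tool.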

5.4. Evaluation: Multidimensional Outcome Assessment

Digital environments expand evaluation beyond linguistic outcomes to encompass technological efficacy and learning behavior patterns. Learners analyze longitudinal dashboard data to identify progress trajectories across skill domains, correlating specific activities with measurable improvements. Time allocation reviews reveal potential inefficiencies, such as disproportionate investment in already-mastered content. Crucially, evaluation now systematically addresses tool impact, with learners assessing whether specific technologies deliver promised benefits. A student might determine that while transcription tools improved listening accuracy, they inadvertently weakened crucial note-taking abilities, prompting strategic adjustments.

6. Emerging Challenges

The digital transformation of metacognition introduces significant new demands that challenge EFL learners. Critical algorithmic literacy represents perhaps the most pressing concern, as most learners lack frameworks for interrogating automated systems. This deficit manifests through automation bias, where learners over-trust erroneous algorithmic outputs, and bias internalization, where they unconsciously adopt problematic language patterns suggested by systems trained on skewed datasets. The opacity of algorithmic decision-making further complicates matters, leaving learners unable to diagnose unexpected feedback or understand why particular recommendations emerge [23].

Cognitive overload presents another substantial challenge in multimodal learning environments. Simultaneous processing of linguistic content, interface navigation, notification management, and diverse input modalities frequently exceeds learners’ cognitive capacities. Research documents 27% higher error rates when learners attempt complex language tasks while managing multiple digital streams. This cognitive taxation directly impacts metacognitive effectiveness by diminishing available resources for strategic monitoring and evaluation [24].

The limited transferability of digital skills raises additional concerns. Platform-specific competencies developed within structured applications—such as pattern recognition in gamified language applications—often show minimal translation to authentic communication contexts. This creates an “app illusion” where learners mistake interface mastery for genuine language proficiency. Without explicit pedagogical intervention, learners may develop confidence disconnected from actual communicative capacity.

Ethical vulnerabilities constitute a fourth challenge dimension. Significant gaps exist in learners’ comprehension of data privacy implications, with studies indicating that 73% of adolescent language learners unknowingly share sensitive learning data. Generative AI tools introduce plagiarism ambiguities that current academic frameworks struggle to address, while attention economy designs in learning platforms exploit psychological vulnerabilities through endless scrolling and reward mechanics. These ethical dimensions require new forms of metacognitive vigilance previously unnecessary in analog learning environments.

7. Adaptive-Metacognitive Engagement Model

To navigate these complex dynamics, the proposed Adaptive-Metacognitive Engagement Model positions learners as architects rather than subjects of digital learning ecosystems. This framework recognizes that effective metacognition in digital environments requires continuous negotiation between technological affordances and cognitive agency (see Table 3). The model emphasizes recursive interaction between its components: data derived from evaluation informs refined self-knowledge and subsequent planning; monitoring outcomes immediately influence control decisions while contributing to longitudinal assessment. Crucially, a critical stance permeates all elements, requiring learners to continuously interrogate the role, validity, and impact of digital tools on their language development trajectory.

Table 3. Integrated framework components.

Element | Operational Definition | Digital Enabler | Critical Safeguard
--- | --- | --- | ---
Algorithmic Self-Awareness | Understanding personal learning patterns within algorithmic systems | Learning analytics dashboards; personalization transparency | Bias literacy training; system limitation modules
Dynamic Goal Calibration | Adjusting objectives based on predictive analytics and self-reflection | Adaptive pathway suggestions; progress forecasting | Learner-controlled goal parameters; long-term objective anchoring
Multimodal Monitoring Synthesis | Integrating AI feedback with self-perception and peer input | Real-time NLP analysis; VR performance metrics | Feedback source triangulation; metric interpretation guides
Intentional Tool Deployment | Strategic selection of digital resources aligned with learning priorities | Tool repositories with efficacy data; integration APIs | Cognitive engagement audits; analog-digital balance protocols
Critical Efficacy Evaluation | Assessing learning outcomes and technological impact | Longitudinal data visualization; strategy correlation analytics | Algorithmic auditing tools; ethical impact frameworks

Implementation necessitates complementary institutional support systems. Metacognitive diagnostic instruments must evolve to assess learners’ digital strategy repertoires, while algorithmic transparency standards should require educational technology providers to disclose core operational logic. Perhaps most fundamentally, critical digital literacy modules require integration into EFL curricula at all levels, equipping learners with analytical frameworks to navigate increasingly algorithmic learning landscapes.

8. Pedagogical Implications

8.1. Curriculum Integration Approaches

Effective pedagogy must explicitly address digital metacognition through several integrated approaches. Strategy instruction should incorporate tool-specific protocols, teaching learners optimal approaches for different technological contexts. For example, instruction might demonstrate spaced repetition principles within digital flashcard systems or guide effective prompt engineering for language practice with conversational AI. Recent findings underscore the need for explicit instruction in generative AI interaction: learners require scaffolding to develop prompts that elicit pedagogically valuable output and critically evaluate responses for linguistic accuracy and appropriateness [25]. Analytics interpretation requires dedicated instructional attention, helping learners distinguish proficiency metrics from engagement indicators, identify potentially misleading visualizations, and correlate activity patterns with tangible outcomes.
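The spaced repetition principles mentioned above can be illustrated with a simplified SM-2-style interval update, the rule underlying many digital flashcard systems (the constants follow the classic SM-2 defaults; real platforms tune them differently):

```python
def next_interval(interval_days: float, ease: float, quality: int) -> tuple[float, float]:
    """Simplified SM-2-style update.

    quality: self-graded recall, 0 (blackout) to 5 (perfect).
    Returns (new review interval in days, new ease factor).
    """
    # Failed recall: restart at a one-day interval (ease kept, for simplicity).
    if quality < 3:
        return 1.0, ease
    # Ease drifts with recall quality, floored at 1.3 as in classic SM-2.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return round(interval_days * ease, 1), ease

interval, ease = 1.0, 2.5  # SM-2's conventional starting values
for grade in (5, 5, 4):    # three successful reviews
    interval, ease = next_interval(interval, ease, grade)
```

The pedagogically relevant point is visible in the loop: successful recalls stretch the interval multiplicatively (1 day, then ~2.6, ~7, ~19), which is exactly the expanding-gap schedule instruction should help learners recognize inside their flashcard apps.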

Critical engagement can be fostered through guided reflection practices such as algorithm journals. These structured reflection tools prompt learners to document and analyze their technological interactions, perhaps questioning whether writing assistant suggestions carry unintended sociolinguistic implications or investigating why particular content recommendations appear in personalized learning streams. Such practices develop the interrogative stance essential for effective metacognition in algorithmically mediated environments.

8.2. Educator Development Priorities

Teacher preparation programs require significant enhancement to address emerging digital metacognitive demands. Specialized certification in technology-enhanced metacognition (TEM) should encompass digital self-regulation scaffolding techniques, learning analytics interpretation methods, and principled approaches for integrating AI-assisted feedback. Educators also need practical frameworks for evaluating educational technologies, assessing tools not merely for functionality but for their metacognitive support features, data privacy safeguards, and bias mitigation mechanisms.

Hybrid professional development models offer promising approaches by connecting educators with both AI developers for technical understanding and cognitive scientists for learning principle alignment. Such interdisciplinary dialogue helps translate technological capabilities into pedagogically sound practices that genuinely enhance rather than inadvertently undermine language learners’ metacognitive development.

8.3. Systemic Recommendations

Beyond classroom practice, systemic changes can support adaptive metacognitive engagement. Educational technology procurement standards should prioritize learner data sovereignty, requiring platforms to grant individuals meaningful control over their learning data. Pedagogical transparency should become a benchmark, with vendors disclosing sufficient algorithmic logic for educators to understand system operations without compromising intellectual property. Interoperability standards would facilitate strategy transfer across platforms, reducing the current fragmentation that traps competencies within specific applications.

National curricula must evolve to incorporate comprehensive digital literacy frameworks addressing algorithmic accountability, attention protection protocols, and ethical technology engagement. Research funding priorities should redirect toward longitudinal studies of digital metacognition development, open-source adaptive learning tools that resist commercial data exploitation, and bias mitigation strategies specifically for language learning technologies. These systemic supports create environments where learners’ metacognitive evolution can keep pace with technological change.

9. Conclusion and Research Imperatives

The Digital Intelligence Era fundamentally reconfigures the metacognitive landscape for EFL learners, demanding substantial expansion beyond traditional frameworks. Our analysis reveals that metacognitive knowledge must now encompass algorithmic literacy—understanding system operations and limitations as they intersect with human cognition. Regulation processes increasingly involve strategic delegation to digital tools while simultaneously requiring heightened oversight of those very tools. Strategic repertoires expand to include sophisticated digital navigation capabilities and critical algorithm engagement protocols.

These transformations necessitate urgent scholarly attention across several domains. Longitudinal studies should track metacognitive evolution across extended periods of technology integration, examining how strategy use matures over 3 - 5 years of tool engagement. The impact of algorithmic biases on learner self-concept requires careful investigation across diverse demographic groups, particularly how automated feedback influences identity construction in vulnerable populations. Cognitive neuroscience methodologies could illuminate differential neural activation patterns between digital and analog strategy deployment. Perhaps most pressingly, researchers must develop metacognitive frameworks specifically for generative AI integration, addressing how large language models reshape fundamental learning processes.

Without deliberate intervention, the digital divide threatens to evolve into a metacognitive divide—separating those equipped to consciously leverage technology for cognitive enhancement from those passively shaped by algorithmic systems. By elevating metacognitive adaptation to a central educational objective, stakeholders can empower EFL learners not merely as language users but as intentional architects of their cognitive development within increasingly complex digital ecosystems.

10. Limitations

Several limitations warrant acknowledgment. First, the scope of the literature review, while systematic, prioritized peer-reviewed journal articles and English-language publications, potentially overlooking relevant grey literature or studies in other languages. Second, inherent publication bias towards positive or significant findings in educational technology research may influence the synthesized evidence base. Third, the primary focus on EFL contexts, particularly drawing examples from Asian and European educational settings, may limit the immediate generalizability of the proposed framework to ESL or linguistically diverse learning environments. Finally, the rapid evolution of generative AI tools outpaces current empirical research, necessitating ongoing investigation into their long-term metacognitive implications.

Conflicts of Interest

The author declares no conflicts of interest.


References

[1] Schwab, K. (2017) The Fourth Industrial Revolution. Crown Business.
[2] British Council (2023) The Future of English: Global Perspectives.
[3] Flavell, J.H. (1979) Metacognition and Cognitive Monitoring: A New Area of Cognitive-Developmental Inquiry. American Psychologist, 34, 906-911.
https://doi.org/10.1037//0003-066x.34.10.906
[4] Anderson, N.J. (2002) The Role of Metacognition in Second Language Teaching and Learning. ERIC Digest. ED463659. ERIC Custom Transformations Team.
https://files.eric.ed.gov/fulltext/ED463659.pdf
[5] Schraw, G. and Moshman, D. (1995) Metacognitive Theories. Educational Psychology Review, 7, 351-371.
https://doi.org/10.1007/bf02212307
[6] Cohen, A.D. (2014) Strategies in Learning and Using a Second Language. 2nd Edition, Routledge.
https://doi.org/10.4324/9781315833200
[7] Zimmerman, B.J. (2002) Becoming a Self-Regulated Learner: An Overview. Theory Into Practice, 41, 64-70.
https://doi.org/10.1207/s15430421tip4102_2
[8] Dörnyei, Z. (2005) The Psychology of the Language Learner: Individual Differences in Second Language Acquisition. Routledge.
https://doi.org/10.1075/aila.19.05dor
[9] Holmes, W., Fadel, C. and Bialik, M. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
[10] Siemens, G. and Long, P. (2011) Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46, 30-32.
[11] Kukulska-Hulme, A., et al. (2015) Mobile Assisted Language Learning: The Duolingo Context. British Council.
[12] Viberg, O. and Andersson, A. (2020) The Role of Self-Regulation and Structuration in Mobile Learning. International Journal of Mobile and Blended Learning, 12, 1-15.
[13] Bull, S. (2006) Supporting Learning with Open Learner Models. Proceedings of the 4th Hellenic Conference with International Participation: Information and Communication Technologies in Education, Athens, 29 September-3 October, 2004, 1-12.
[14] Kohnke, L., Zou, D. and Su, F. (2023) Exploring Learner Interaction with AI-Powered Language Learning Tools: Implications for Teaching and Learning. Computer Assisted Language Learning, 36, 1-23.
[15] Warschauer, M. (2003) Technology and Social Inclusion: Rethinking the Digital Divide. MIT Press.
[16] Leu, D.J., Forzani, E., Rhoads, C., Maykel, C., Kennedy, C. and Timbrell, N. (2014) The New Literacies of Online Research and Comprehension: Rethinking the Reading Achievement Gap. Reading Research Quarterly, 50, 37-59.
https://doi.org/10.1002/rrq.85
[17] Chen, J. and Cheng, C. (2022) The Efficacy of Incorporating Virtual Reality into EFL Learning: A Meta-Analysis. Educational Research Review, 36, Article 100452.
[18] Baron, N.S. (2021) How We Read Now: Strategic Choices for Print, Screen, and Audio. Oxford University Press.
[19] O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
[20] Carr, N. (2010) The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton.
[21] Heift, T. (2016) Intelligent Language Tutoring Systems. In: Farr, F. and Murray, L., Eds., The Routledge Handbook of Language Learning and Technology, Routledge, 249-265.
[22] Huang, L.-F. (2021) EFL Learners’ Self-Regulated Strategy Use in an AI-Supported Environment. System, 103, Article 102646.
[23] Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
[24] Sweller, J., Ayres, P. and Kalyuga, S. (2011) Cognitive Load Theory. Springer.
[25] Huang, L.-F. and Lin, C.-P. (2024) EFL Learners’ Prompt Engineering Literacy: Developing Critical Evaluation Strategies for Generative AI Outputs. ReCALL, 36, 210-228.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.