Artificial Intelligence in Learning: An Integrative Framework for Education 4.0
1. Introduction
The integration of educational information technologies (IT) and artificial intelligence (AI) marks a paradigmatic transformation in contemporary education, evolving from basic learning management systems (LMS) platforms to sophisticated systems capable of personalising learning on an unprecedented scale. This article offers a critical and practical perspective on the potential and limitations of these IT/AI technologies to foster academic success, balancing advances in personalisation, scalability and autonomy with persistent global ethical, technical and equity challenges (Kasneci et al., 2023; Zimmerman, 2002).
Education 4.0 refers to an integrated pedagogical and technological model, characterised by personalised, adaptive learning mediated by advanced digital technologies. This operational definition encompasses four core pillars: 1) Personalisation and adaptability: content, pace, and challenges tailored to each learner’s profile; 2) Integration of intelligent technologies: adaptive AI, learning analytics, generative feedback, and hybrid/immersive environments; 3) Development of 21st-century skills: metacognition, self-regulation, critical thinking, creativity, and collaboration; 4) Systemic ethics and equity: data protection, algorithmic transparency, bias mitigation, and promotion of social justice. Education 4.0 does not merely involve the digitalisation of traditional content or purely instructional systems without adaptive monitoring; it maintains the teacher’s central role as a critical mediator and ensures that technology complements, rather than replaces, human learning. This concept will serve as a reference to explicitly link each chapter, empirical analysis, and recommendation throughout the article.
1.1. Evolution and Relevance of Information Technologies/AI in Education
The trajectory of IT in education began with structured e-learning in the 2000s, characterised by static platforms such as Blackboard and Moodle that prioritised unidirectional content distribution. The COVID-19 pandemic (2020-2022) served as a catalyst, driving the adoption of hybrid modalities and synchronous tools such as Zoom, Microsoft Teams, and Google Meet. According to OECD (2023), approximately 90% of higher education institutions in OECD countries reported implementing at least one of these platforms by 2022, measured as institutional-level adoption for course delivery in undergraduate programs. This transitional period revealed not only technical viability but also structural limitations of digital approaches without embedded intelligence: cognitive overload, Zoom fatigue and superficial learning (Selwyn, 2021).
Between 2023 and 2026, the emergence of generative AI, exemplified by large language models (LLMs) such as GPT-4o, Claude 3 and specialised educational variants, completely redefined the paradigm. Recent Brazilian scholarship contextualises this shift: Bandeira and Aquino (2025) critically interrogate whether AI constitutes mere “euphoria” or a genuine educational revolution, offering a balanced Lusophone perspective on the tension between technological promise and pedagogical reality. Recent systematic reviews document that approximately 80% of European and North American universities integrate AI into core pedagogical processes. These studies report average gains of 15% - 25% in student engagement, knowledge retention, and satisfaction. Engagement was measured using Likert-scale surveys on participation and attention during AI-supported activities; retention was operationalised through course assessments and test scores; and satisfaction was assessed via standardised student feedback questionnaires. The reported effects were observed across diverse samples, ranging from small pilot classes (≈30 students) to large courses (>200 students) in higher education settings (Kasneci et al., 2023).
This third technological wave transcends mere content delivery: machine learning algorithms process real-time behavioural patterns (reading speed, recurrent error patterns, multimodal preferences across text, video, and audio), generating truly adaptive experiences that respond to each learner’s zone of proximal development (Crompton & Burke, 2023; Tlili et al., 2023).
The spectrum of modern educational technologies is broad and complementary. Collaborative digital tools such as Google Classroom, Padlet and Microsoft OneNote facilitate real-time co-construction of knowledge (Hadwin et al., 2018); gamification platforms (Kahoot!, Classcraft, Duolingo) incorporate playful elements that enhance intrinsic motivation through variable rewards and social leaderboards (Sailer et al., 2017); immersive environments of augmented reality (Merge Cube, Google Expeditions) and virtual reality (Oculus Quest for Education) promote deep experiential learning, anchored in constructivist theories (Radianti et al., 2020).
Adaptive AI elevates this continuum with previously unimaginable capabilities: intelligent tutors such as Duolingo Max and systems like Squirrel AI dynamically adjust task difficulty in real time (Kılıç et al., 2025). Predictive learning analytics, applied in a sample of Brazilian secondary and higher education institutions, anticipate student dropout risks, achieving a prediction accuracy of 85% - 90% when measured as correctly classified cases over total enrolled students (Marcolino et al., 2025). Recommendation systems personalise learning sequences based on multidimensional knowledge graphs (Zhou & Wang, 2025).
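To make concrete how a dropout-prediction accuracy figure of this kind is computed (correctly classified cases over total enrolled students, as operationalised above), the sketch below works through the metric on a small hypothetical cohort; the risk scores, labels, and 0.5 decision threshold are illustrative assumptions, not data from the cited study.

```python
# Illustrative computation of dropout-prediction accuracy:
# accuracy = correctly classified students / total enrolled students.
# All scores and outcomes below are hypothetical examples.

def classify_dropout(risk_scores, threshold=0.5):
    """Flag a student as at-risk when the model's risk score meets the threshold."""
    return [score >= threshold for score in risk_scores]

def accuracy(predictions, actual_dropouts):
    """Fraction of students whose predicted status matches the observed outcome."""
    correct = sum(p == a for p, a in zip(predictions, actual_dropouts))
    return correct / len(actual_dropouts)

# Hypothetical cohort: model risk scores and observed dropout outcomes.
risk_scores = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.95, 0.05, 0.6, 0.4]
observed = [True, True, False, False, True, False, True, False, False, False]

predictions = classify_dropout(risk_scores)
print(f"accuracy = {accuracy(predictions, observed):.0%}")  # → accuracy = 90%
```

One misclassification out of ten students yields the 90% figure; in practice, dropout studies also report precision and recall, since at-risk students are usually a small minority.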
At the theoretical-conceptual core resides self-regulated learning (SRL), formalised by Zimmerman (2002, 2023) in its cyclical model of three interdependent phases: 1) Forethought/Planning phase: integration of task analysis (definition of specific goals via implicit SMART goal setting) and strategic planning (selection of cognitive/metacognitive strategies), energised by self-efficacy. This initial phase structures and cyclically influences the subsequent performance and self-reflection phases, promoting autonomous learning; 2) Performance/Execution phase: active operationalisation of the strategies outlined in the planning phase, characterised by attentive monitoring (self-monitoring) of progress and behavioural control (self-control) to maintain focus and effort; 3) Self-reflection phase: diagnostic evaluation of the performance obtained, through causal attribution analysis (attributing successes/failures to controllable factors) and self-evaluative satisfaction, leading to strategic adaptation in subsequent cycles.
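The cyclical, feedback-driven structure of the three phases can be sketched as a simple state loop; this is an illustrative abstraction of Zimmerman's model, not a computational implementation from the literature, and the numeric self-efficacy update rule is a toy assumption.

```python
# Illustrative state loop over Zimmerman's three SRL phases.
# The self-efficacy arithmetic is a toy assumption, not part of the model.

PHASES = ("forethought", "performance", "self_reflection")

def srl_cycle(self_efficacy, performance_score, success_threshold=0.7):
    """One pass through the cycle; reflection feeds the next forethought phase."""
    visited = list(PHASES)
    # Self-reflection: attributing outcomes to controllable factors
    # adjusts self-efficacy, which energises the next planning phase.
    if performance_score >= success_threshold:
        self_efficacy = min(1.0, self_efficacy + 0.1)
    else:
        self_efficacy = max(0.0, self_efficacy - 0.05)
    return visited, self_efficacy

efficacy = 0.5
for score in (0.8, 0.6, 0.9):  # three successive learning cycles
    phases, efficacy = srl_cycle(efficacy, score)
print(round(efficacy, 2))  # → 0.65
```

The point of the sketch is structural: the output of each self-reflection phase becomes an input to the next forethought phase, which is what makes the model cyclical rather than linear.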
SRL gains computational support via multimodal analytics dashboards and real-time adaptive feedback, with recent empirical evidence demonstrating positive correlations between human-AI collaborative interventions and gains in autonomous regulation (Al Nabhani et al., 2025; Hadwin et al., 2023).
1.2. Critical Challenges and Structural Tensions
This technological evolution is neither linear nor free from deep systemic tensions that compromise its equitable implementation. Digital equity emerges as the most serious structural dilemma of the era: while urban centres with 5G/6G infrastructure, universal device access and advanced digital literacy fully benefit from scaled personalisation, rural populations, socioeconomically vulnerable and geographically isolated face chronic structural exclusion (Selwyn, 2021; Van Dijk, 2020; Vesna et al., 2025).
Ethical challenges multiply at an exponential rate, far exceeding current regulatory capacity. Algorithmic biases inherited from non-representative training data, predominantly Anglophone, upper-middle class and ethnically homogeneous, perpetuate systemic discriminations of gender, ethnicity and socioeconomic stratification in recommendation systems and automated assessment, with documented over-penalisation for minority students (Akgun & Greenhow, 2022; Heggler et al., 2025; O’Neil, 2016).
Privacy of sensitive data constitutes another critical front: unique cognitive patterns (e.g., processing speed, linguistic hesitations), emotional vulnerabilities (detected via sentiment analysis), and complete academic trajectories become valuable commodities for big tech, entering irreconcilable tension with GDPR principles and educational ethics conventions (Liu et al., 2025). Algorithmic transparency remains structurally opaque (the black-box problem), whereby teachers and students do not understand the underlying decision logic of systems, undermining essential trust and pedagogical agency (Holstein et al., 2019).
Pathological dependence emerges as an existential threat to the very autonomy that AI seeks to foster. Students exposed early to excessive computational scaffolding may develop chronic cognitive offloading, permanently delegating essential metacognitive functions (strategic planning, self-monitoring, critical evaluation) to the machine (Risko & Gilbert, 2016; Wahn et al., 2023). Pilot experiences with GPT-4 in academic writing reveal a decline in original composition skills after 6 months of intensive use, suggesting skill atrophy similar to that observed with calculators for basic arithmetic competencies (Georgiou, 2025).
Critical scarcity of longitudinal evidence exacerbates these tensions: systematic reviews from 2019-2025 reveal that fewer than 12% of publications report studies with follow-up exceeding 12 months, with a systematic overrepresentation of elite samples (Harvard, Stanford, MIT) and an underrepresentation of diverse linguistic contexts (Akhmetova et al., 2025). This epistemological asymmetry severely compromises the generalisation of findings to multilingual, culturally heterogeneous and socioeconomically inclusive educational systems.
Finally, inadequate teacher training constitutes a critical human bottleneck: only 23% of university professors report themselves “very confident” in integrating ethical AI in 2025, with significant cultural resistance to the perceived “dehumanisation” of teaching (UNESCO, 2023). Without systematic capacity-building in AI literacy, prompt engineering and computational ethics, AI’s transformative potential risks remaining confined to technophile early adopters.
1.3. Objectives, Questions, and Methodology
This article aims to offer a critical and practical perspective on the integration of IT and AI in education, balancing the transformative potential of these technologies with the structural challenges they face. It specifically seeks to: characterise available tools and systems; analyse the support for self-regulated learning; examine ethical and equity risks; and propose sustainable paths for responsible implementation (Kasneci et al., 2023; Zimmerman, 2002).
The key questions guiding the analysis include: How can intelligent technologies operationalise the cyclical phases of self-regulation? What evidence supports the practical effectiveness of intelligent tutors and educational analytics? How can technological innovation be reconciled with ethical principles and equity?
The reflection is based on a critical review of recent literature on educational AI, drawing on relevant empirical and theoretical studies. The qualitative approach selected sources indexed in Scopus, Web of Science, and ERIC, prioritising thematic relevance and contextual diversity (Hadwin et al., 2023). This strategy ensures methodological rigour and an in-depth analysis of human-AI interactions in self-regulated learning.
To enhance transparency and methodological consistency, this critical review followed a structured search and selection protocol. Searches were conducted between January and March 2025 in the databases Scopus, Web of Science, ERIC, and complementary searches in Google Scholar. The temporal scope covered publications from 2015 to 2025, reflecting the most recent developments in educational artificial intelligence and Education 4.0. The search strings combined Boolean operators, including (“artificial intelligence” OR “generative AI” OR “intelligent tutoring systems”) AND (“education” OR “higher education”) AND (“self-regulated learning” OR “personalised learning”) AND (“ethics” OR “equity” OR “privacy”). The inclusion criteria comprised peer-reviewed journal articles in English that explicitly addressed AI applications in education with empirical, theoretical, or review-based methodological clarity. Exclusion criteria included non-academic publications, conference abstracts without full papers, duplicated records, and studies not directly related to educational contexts. The screening process involved title review, abstract analysis, and full-text assessment. Studies were synthesised through thematic analysis, allowing the identification of recurring conceptual dimensions across adaptive AI, immersive environments, self-regulated learning, and ethical governance. The coding process involved iterative comparison across studies, enabling analytical triangulation and conceptual refinement.
The article is organised into five main sections (2-6) that flow from the concrete to the propositional: section 2 explores collaborative and immersive digital tools; section 3 analyses adaptive AI; section 4 examines self-regulation; section 5 discusses ethical challenges; section 6 proposes sustainable paths.
2. Exploration of Collaborative and Immersive Digital Tools
The integration of collaborative and immersive digital technologies redefines contemporary pedagogical practices, evolving from basic LMS platforms to sophisticated interactive ecosystems that enhance engagement, mobile accessibility, and deep experiential immersion. This section groups these solutions into two progressive functional blocks, from collaborative platforms with mobile learning to immersive environments, analysing robust international empirical evidence from meta-analyses, inherent cognitive and structural limitations, and illustrative practical cases that demonstrate a balance between transformative potential and practical implementation challenges.
2.1. Collaborative Platforms and Mobile Learning
LMS platforms such as Moodle 4.4, Google Classroom, Canvas LMS, and Microsoft Teams for Education form the backbone of modern digital educational collaboration, incorporating advanced layers of gamification (badges, leaderboards, progressive narratives), asynchronous/synchronous social learning, and adaptive microlearning that produce statistically significant gains across multiple dimensions of academic performance (Lathifah et al., 2025; Sailer & Homner, 2020). These collaborative and immersive tools operationalise the personalisation and adaptability pillar of Education 4.0, enabling tailored learning pathways and student-centred engagement across diverse learning contexts. The literature confirms the benefits of gamification in general educational contexts (Sailer & Homner, 2020), while Sales and Pane (2021) demonstrate that LMS platforms with sophisticated tracking increase time devoted to tasks. Direct comparisons between gamified and non-gamified LMSs, however, remain scarce and require further research.
Specialised gamified tools amplify these effects in diverse international contexts. Kahoot!, adopted by more than 200 million educational users globally, increases active participation rates in university and secondary classes, simultaneously raising student satisfaction and factual memory retention at 30 days, although it shows a “wear-out effect” after 12 consecutive sessions (Wang, 2015). Duolingo, through its proprietary gamified spaced repetition system (SRS) algorithm, achieves high daily consistency rates among mobile users across multiple countries and significantly accelerates the acquisition of conversational fluency compared with traditional classroom methods, demonstrating moderate to high effects on receptive vocabulary and productive grammar (Settles & Meeder, 2016).
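Duolingo's production SRS is proprietary (a half-life regression model in published accounts), but the core scheduling idea, reviewing an item when its predicted recall probability decays below a target, can be sketched under an exponential-forgetting assumption; the half-life growth factors below are illustrative, not Duolingo's actual parameters.

```python
import math

# Simplified spaced-repetition scheduler under an exponential-forgetting
# assumption: recall probability p = 2 ** (-elapsed_days / half_life).

def recall_probability(elapsed_days, half_life_days):
    """Predicted probability the learner still recalls the item."""
    return 2 ** (-elapsed_days / half_life_days)

def next_review_in_days(half_life_days, target_recall=0.9):
    """Days until predicted recall decays to the target threshold."""
    return half_life_days * -math.log2(target_recall)

def update_half_life(half_life_days, recalled):
    """Strengthen the memory trace on success, weaken it on failure
    (the x2 / x0.5 factors are illustrative assumptions)."""
    return half_life_days * (2.0 if recalled else 0.5)

h = 1.0                        # initial half-life: one day
h = update_half_life(h, True)  # a successful review doubles it
print(round(next_review_in_days(h), 2))  # → 0.3 (days until recall drops to 90%)
```

Each successful review lengthens the half-life, so intervals grow geometrically, which is exactly the spacing effect that SRS tools exploit.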
These platforms sustain rich and scalable collaborative ecosystems: Moodle H5P Workshops implement structured peer review with automated rubrics and AI-predictive feedback; Google Classroom Streams and Spaces facilitate asynchronous inter-class projects with real-time contextualised comments and native integration with Google Workspace; Canvas MasteryPaths and Outcomes offer adaptive learning trajectories based on demonstrated mastery learning, dynamically adjusting difficulty according to individual proficiency (Xu et al., 2025). Mobile accessibility emerges as an exponential catalyst, with global meta-analyses indicating that mobile-first learning designs reduce dropout rates, increase on-time graduation, and improve equity of access among underrepresented populations (Garzón et al., 2025). Specifically, these interventions reduced dropout rates by 12% - 18% and increased on-time graduation by 8% - 15% across secondary and tertiary education populations in Latin America and Europe. The analysis encompassed 27 empirical studies with sample sizes ranging from 150 to 12,000 students per study, providing robust evidence of the positive effects of mobile accessibility on engagement, persistence, and equity of access in diverse educational contexts.
Despite these advances, structural cognitive overload constitutes a critical theoretical and empirical limitation of multitasking digital collaborative architectures. Grounded in Sweller’s (1988) classical Cognitive Load Theory, empirically refined by Kirschner et al. (2018), this issue documents that simultaneous navigation between asynchronous forums, synchronous multimedia chats, adaptive formative quizzes, and multi-edit collaborative wikis significantly increases extraneous cognitive load (measured by NASA-TLX), reducing deep comprehension of complex content and long-term semantic retention through split-attention and redundancy effects. Particularly vulnerable are students with emerging digital literacy, for whom working memory overload compromises deep elaborative processing and learning transfer (Kirschner et al., 2018).
In the Portuguese context, national programmes such as Escola Digital (Governo de Portugal, 2025), implemented by the Ministry of Education in 1247 public schools, show significant gains in engagement (platform time, timely submissions) through the adoption of gamified Moodle, but face high rates of technical dropout due to complex interfaces on low-end devices and unstable connectivity in rural inland regions. This national initiative, which combines teacher training in digital tools with classroom equipment, illustrates both the transformative potential of large-scale gamification and the enduring structural challenges of digital infrastructure (DGEEC, 2025).
2.2. Immersive Environments and Practical Cases
Augmented reality (AR) and virtual reality (VR) represent a paradigmatic rupture in pedagogical interaction, overcoming the physical, temporal, material, and logistical limitations of conventional 2D education.
Radianti et al. (2020) systematise 59 IVR higher education studies (2009-2018), dominated by medicine (78%): enterprise-grade HMDs (Oculus Rift, HTC Vive), multimodal interactivity (gestural, haptic, voice), and full sensory presence yield robust gains in affective/cognitive engagement and complex psychomotor skills (simulated surgeries, virtual dissections), although factual knowledge shows mixed results due to heterogeneous metrics. They diagnose an excessive focus on technical usability versus pedagogical learning outcomes and propose a priority agenda: standardised evaluation metrics, systematic curricular integration, and longitudinal scalability studies.
Empirical practical cases validate the structural superiority of immersion: Ibáñez et al. (2014) demonstrate that AR visualising invisible electromagnetic fields significantly outperforms traditional web lessons (large effect size, secondary school Physics); Merge Cube AR revolutionises Organic Chemistry/Biochemistry by enabling tactile 3D molecular manipulation over a physical cube, improving spatial retention, conformational resolution, and reactivity prediction compared to flat 2D representations; Schmid et al. (2020) demonstrate that AR combined with online 3D models significantly improves structural understanding and molecular manipulation in chemistry, outperforming traditional 2D representations. Google Expeditions AR/VR, implemented in thousands of classrooms worldwide, simulates interactive biological ecosystems, immersive historical expeditions, and quantum/macroscopic physical demonstrations, documenting gains in intrinsic motivation (Self-Determination Theory), formative assessment performance, and spontaneous formulation of exploratory questions; Cardullo and Wang (2022) document pre-service teachers’ positive experiences with Google Expeditions in early childhood settings, reporting enhanced engagement and pedagogical confidence. Silva Pontes et al. (2025) confirm increases in motivation and understanding of abstract concepts in Brazilian basic education.
Structural systemic barriers compromise massive scalability, and Radianti et al. (2020) identify four critical bottlenecks: 1) prohibitive costs of enterprise-grade HMDs, excluding public institutions and restricted budgets; 2) vestibular/proprioceptive cybersickness affecting 20% - 30% of young users (nausea, spatial disorientation); 3) English-dominant content lacking Portuguese/Spanish curricular certification; 4) a deficit of certified teacher training preventing systematic pedagogical integration beyond experimental pilots. Vallim and de Lucca Filho (2024) highlight challenges in medical education: the provision of continuous technical support, the planned integration of curricula, and the validation of certified learning outcomes.
Portuguese institutional initiatives structure the AI + immersive convergence: NOVA IMS leads with the EDGE model (Educational Design Global Experience), integrating generative AI (personalised assistants for data/code analysis and real case simulation) with an immersive Bridge Room: multi-screen collaborative panels, flipped classroom tools, and contextualised gamification (NOVA IMS, 2025). This hybrid pedagogical ecosystem eliminates in-person/online asymmetries and enhances personalised learning flows, continuous formative self-assessment, and complex multidisciplinary simulations.
Projections for 2025 foresee a global pedagogical transformation: Carvalho Ramos and Lobato Borges Junior (2025) identify the convergence of VR, gamification, and adaptive AI as a catalyst for hybrid teaching through personalised learning paths, haptic STEM interfaces (molecular/electromagnetic manipulation), contextualised simulations based on predictive big data analytics, and adaptive real-time feedback. Silva Pontes et al. (2025) validate planned curricular integration as a precondition for sustainable large-scale gains.
Strategic teacher training appears to be a decisive bottleneck: certified training that integrates immersive technological literacy, active pedagogical design, and multimodal assessment of learning outcomes is a prerequisite for large-scale scalability. Programmes should include (Carvalho Ramos & Lobato Borges Junior, 2025; NOVA IMS, 2025; Radianti et al., 2020): 1) certified pedagogical HMDs/IVR training; 2) Cybersickness literacy/mitigation; 3) certified multilingual content design; 4) gamification learning analytics assessment. NOVA IMS (2025) demonstrates the institutional viability of this hybrid model, aligning Portuguese innovation with prospective global trends, positioning Portugal as a European leader in transformative technological-pedagogical convergence.
Immersive + AI environments are fundamentally reshaping pedagogy through a combination of robust empirical evidence and strategic institutional innovation. Certified teacher training emerges as a decisive catalyst for operationalising this convergence at a systemic scale, transforming educational equity and excellence.
3. In-Depth Analysis of Adaptive AI and Intelligent Tutoring
Adaptive AI and intelligent tutoring represent the natural evolution of the immersive environments analysed in section 2, transforming empirical evidence into personalised and scalable pedagogical flows. These systems monitor each student’s individual progress in real time, adjusting content, pace, and challenge levels to maximise deep retention and intrinsic motivation, combining advanced technology with human teaching practices.
3.1. Mechanisms of Adaptation and Intelligent Tutoring
Adaptive AI systems operate as invisible personal tutors, continuously monitoring students’ interaction patterns (e.g., correct answers, hesitations, response times) to calibrate the optimal learning experience. This adaptive approach directly implements the integration of intelligent technologies pillar, providing real-time feedback and predictive analytics aligned with Education 4.0 principles. Recent developments in adaptive AI highlight the potential of intelligent curricula to personalise learning pathways, provide tailored feedback, and foster the acquisition of transformative skills, complementing existing approaches in intelligent tutoring and educational analytics (Lourenço & Valente, 2026). Unlike traditional, uniform instruction, these systems create personalised trajectories that avoid the boredom of overly easy content and the frustration of material that is too complex (Anderson et al., 1995; Al Nabhani et al., 2025).
Anderson (1993) establishes the ACT-R theory as the cognitive foundation for intelligent tutoring. Corbett and Anderson (1994) formalise Bayesian Knowledge Tracing (BKT) as a probabilistic model of a student’s knowledge state, updating beliefs after each response. Later, Anderson et al. (1995) summarise ten years of development of the original Cognitive Tutors for LISP, geometry, and algebra, based on simple rules identifying common errors and offering specific personalised guidance, achieving proficiency in one-third the time of traditional instruction. Baker et al. (2008) exemplify this evolution by refining BKT through analysis of student-system interaction patterns, enabling cognitive tutors to detect not only correct/incorrect responses but also hesitations and ineffective strategies, dynamically adjusting support to optimise the shift from declarative to procedural knowledge, achieving efficiency gains that triple learning speed in subjects such as algebra and geometry (Anderson et al., 1995).
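The BKT update that Corbett and Anderson formalised can be stated compactly; the sketch below implements the standard two-step rule (a Bayesian posterior given the observed response, followed by the learning transition). The slip/guess/transit values in the example are chosen purely for illustration.

```python
# Standard Bayesian Knowledge Tracing update (after Corbett & Anderson, 1994).
# p_know:  prior probability the student has mastered the skill.
# slip:    probability of answering wrong despite mastery.
# guess:   probability of answering right without mastery.
# transit: probability of acquiring the skill after this opportunity.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, transit=0.3):
    # Step 1: Bayesian posterior on mastery, given the observed response.
    if correct:
        posterior = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Step 2: learning transition, the student may acquire the skill anyway.
    return posterior + (1 - posterior) * transit

p = 0.5                            # start uncertain about mastery
p = bkt_update(p, correct=True)    # belief rises after a correct answer
print(round(p, 3))                 # → 0.873
```

Running the update after each response is what lets a tutor distinguish a lucky guess from genuine mastery: repeated correct answers drive the belief towards 1, while errors pull it down in proportion to the slip and guess parameters.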
Oliveira Santana et al. (2023) document significant gains in algebra and geometry attributed to adaptive scaffolding that respects individual cognitive pacing. Similarly, Blackboard Learn and Moodle plugins implement Adaptive Release, unlocking content only after confirmed mastery of prerequisites, a logic that eliminates manual management and personalises pacing without additional teacher effort.
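The Adaptive Release logic described above reduces to a gating rule: a unit unlocks only once every prerequisite's mastery estimate clears a threshold. A minimal sketch follows; the module names, mastery scores, and 0.8 threshold are illustrative assumptions, not Moodle's or Blackboard's actual configuration.

```python
# Gate each unit behind confirmed mastery of its prerequisites.
# Curriculum graph and threshold are illustrative assumptions.

PREREQUISITES = {
    "fractions": [],
    "linear_equations": ["fractions"],
    "quadratics": ["linear_equations"],
}

def unlocked_units(mastery, threshold=0.8):
    """Return units whose prerequisites all meet the mastery threshold."""
    return [
        unit for unit, prereqs in PREREQUISITES.items()
        if all(mastery.get(p, 0.0) >= threshold for p in prereqs)
    ]

mastery = {"fractions": 0.9, "linear_equations": 0.6}
print(unlocked_units(mastery))  # → ['fractions', 'linear_equations']
```

Here "quadratics" stays locked until the student's linear-equations mastery estimate (e.g., a BKT belief) rises above the threshold, which is precisely the logic that removes manual gatekeeping from the teacher's workload.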
Advanced virtual tutors further personalise instruction. Graesser et al. (2004) created AutoTutor, a pioneer in multimodal tutoring that analyses not only correct or incorrect answers but also the quality of the reasoning expressed in natural language; the system employs 13 dialogic tactics (subtle hints, deep prompts, and corrective statements) that adapt to the student’s discourse. Similarly, Graesser et al. (2018) developed ElectronixTutor, an advanced ITS for electronics that integrates multiple learning resources (simulations, tutorials, interactive exercises) with adaptive tutoring, demonstrating scalable personalisation in STEM domains. Recent meta-analyses confirm its effectiveness in complex disciplines such as physics (Létourneau et al., 2025).
Recent meta-analyses validate robust gains. Létourneau et al. (2025) systematised 28 controlled studies on intelligent tutoring systems in primary and secondary education, finding consistent medium effects on factual retention and procedural skills. Kinskofer and Tulis (2025) analysed motivational factors for the adoption of GenAI in education using the UTAUT model extended with SDT. They found that performance expectancy, effort, and social influence predict behavioural intention, mediated by intrinsic motivation and subjective competence in GenAI, which reduces threat appraisals and increases challenge appraisals.
Contemporary generative AI extends this tradition: Appadoo-Ramsamy (2025) positions ChatGPT as a Socratic tutor, leveraging conversational inquiry to scaffold critical thinking, a direct descendant of AutoTutor’s natural-language dialogue adapted to LLMs.
This sophistication culminates in the most advanced teacher + AI integration. Algorithms based on cognitive models such as ACT-R and BKT simulate multiple problem-solving paths in real time while teachers calibrate the main pedagogical goals, thereby maximising students’ independent problem-solving and promoting durable internalisation of higher-order metacognitive strategies. This granular personalisation transforms learning. Cognitive Tutors (Anderson et al., 1995) identify specific error patterns in algebra, offer personalised visual modelling, and progress only after confirmed mastery, achieving proficiency in one-third the time required by traditional methods, as validated by recent meta-analyses (Létourneau et al., 2025).
3.2. Practical Integration and Technical Limits
The teacher transcends the role of content transmitter to become a facilitator of higher cognitive experiences: monitoring metacognition while the AI performs routine micro-tutoring (Camelo & Siebra, 2025), enabling teachers to reclaim approximately 70% of the time previously spent on routine tasks (grading, content delivery, and repetitive tutoring) for advanced activities, including conceptual debates, interdisciplinary synthesis, and personalised mentoring. Time savings were measured through direct observation and workflow analysis over a 12-week AI-assisted intervention (Century Tech, 2025).
In a typical hybrid classroom, during a history lesson, the AI delivers personalised summaries and adaptive questions via tablets while the teacher circulates, facilitating group discussions based on emerging patterns from aggregated class data. Zhao et al. (2025) and Wang and Fan (2025) confirm that this dynamic significantly increases higher-order thinking (the upper levels of Bloom’s taxonomy), freeing teachers’ cognitive capacity for irreducibly human functions such as interdisciplinary synthesis and strategic mentoring. Chen et al. (2024) synthesise applications of AI in entrepreneurship education, identifying big data analytics for opportunity spotting and machine learning for precise assessment, but diagnose the need for more sophisticated pedagogical designs.
Multilingual challenges persist as an equity barrier. Models trained predominantly in English generate semantic bias in Portuguese, losing cultural nuances (“to comprehend” versus “to understand”). OpenAI admits higher error rates for Iberian accents; technical solutions (mT5 + LoRA fine-tuning) are computationally unfeasible for public schools (Kshetri, 2025).
Variable accuracy manifests in practice, with notable successes such as Duolingo Max (Duolingo, 2024). In a multi-country study involving 3500 secondary and university learners, participants improved their conversational fluency scores by 20% - 25% over a 6-week period, as measured by standardised oral proficiency assessments, while dropout rates decreased from 12% to 5% during the same period (Kittredge et al., 2025). However, mathematical limits persist: GPT-4o adequately solves basic problems but struggles with complex multi-step reasoning without explicit chain-of-thought guidance. The unbeatable symbiosis arises in the combination Khanmigo + teacher, which surpasses AI alone by uniting computational speed with contextual human judgement.
Systemic scalability faces three critical bottlenecks: in infrastructure, enterprise GPUs are prohibitive, while edge computing (Qualcomm Snapdragon) reduces latency but sacrifices accuracy; in privacy, the GDPR blocks centralised datasets, favouring federated learning that trains models locally and preserves sensitive data; and in teacher appropriation, 42% of higher-education faculty surveyed in Portugal across 25 institutions in 2025 reported fearing technological obsolescence, particularly regarding AI-assisted pedagogical tools, making the intensive 120-hour certification (NOVA IMS, 2025) essential to transform fear into strategic competence (Silva Pontes et al., 2025).
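The federated-learning alternative mentioned above can be illustrated with a minimal federated-averaging (FedAvg) sketch: each institution trains on its own data and shares only model parameters, never raw student records. The model here is a bare weight vector, and the schools, gradients, and dataset sizes are hypothetical.

```python
# Minimal federated-averaging (FedAvg) sketch: schools share parameters,
# not raw student data. All numbers below are illustrative assumptions.

def local_update(weights, gradient, lr=0.1):
    """One gradient step on a school's private data (data never leaves the school)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    """Server aggregation: weighted mean of client models by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two schools start from the same global model but see different data.
school_a = local_update([1.0, 1.0], gradient=[0.5, -0.5])   # 100 students
school_b = local_update([1.0, 1.0], gradient=[-0.5, 0.5])   # 300 students
global_model = federated_average([school_a, school_b], [100, 300])
print(global_model)
```

Only the updated weight vectors travel to the server, so sensitive interaction logs remain on-premises, which is why this architecture is often cited as GDPR-compatible for educational analytics.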
Planned curricular integration is a precondition for sustainability. Silva Pontes et al. (2025) validate gradual implementation: departmental pilot → certified teacher training → curricular scaling → systemic evaluation. This approach avoids technological fatigue and ensures deep cultural appropriation.
Transformative projection for 2026: the convergence of agentic AI (collaborative multi-agent systems) with human-in-the-loop approaches promises structural pedagogical gains. Each student receives 24/7 personalised tutoring while teachers focus on irreducibly human excellence: creativity, empathy, ethical judgement. This human-AI symbiosis, articulated with the immersive environments of topic 2 and the strategic capacity-building of section 2.2, positions Portugal as a European leader in equitable and transformative adaptive pedagogy.
4. Specific Support of Technologies for Self-Regulation
Education 4.0 directly supports the development of SRL by combining personalised, adaptive, and intelligent technological mediation with teacher guidance. This framework not only structures the tools but also clarifies causal pathways: adaptive dashboards, predictive feedback, and generative prompts influence planning, monitoring, and self-reflection by reinforcing goal clarity, real-time adjustment, and reflective insight. The magnitude of these effects depends on learner characteristics (e.g., SRL proficiency) and instructional arrangements (e.g., teacher scaffolding, autonomy-preserving environments). Each of its core pillars (personalisation and adaptability, integration of intelligent technologies, development of 21st-century skills, and systemic ethics and equity) underpins the design and implementation of AI-based SRL tools analysed in this chapter. This framework ensures that technology complements, rather than replaces, human metacognition and strategic self-regulation.
SRL is a central construct in contemporary educational psychology and is consistently associated with academic success, persistence, and lifelong learning (Lourenço et al., 2025; Zimmerman, 2002). In a context of increasing cognitive complexity, student diversity, and accelerated digitalisation, the integration of educational technologies and AI emerges as a powerful external mediator of self-regulatory processes. This topic critically analyses how AI-based technologies can both support and challenge the development of self-regulation, using Zimmerman’s cyclical model as a conceptual guiding thread.
Zimmerman (2002, 2023) conceptualises SRL as a dynamic and recursive process organised into three interdependent phases: planning (forethought), execution/monitoring (performance), and self-reflection, which mutually influence each other across multiple learning cycles. AI does not replace these processes but can function as cognitive and metacognitive scaffolding, externalising self-regulatory functions until these are progressively internalised by the learner (Hadwin et al., 2023).
4.1. AI Support in Zimmerman’s Cyclical Phases
4.1.1. Planning: Goal Setting, Task Analysis, and Motivational Activation
The planning phase constitutes the entry point to the self-regulatory cycle, involving the definition of objectives, the analysis of task demands, and the selection of appropriate cognitive and metacognitive strategies, strongly mediated by self-efficacy beliefs (Zimmerman, 2002). Recent evidence indicates that students show recurring difficulties in this phase, tending to formulate vague goals that are hard to monitor and misaligned with success criteria (Panadero, 2017).
AI-based tools, particularly educational chatbots and generative assistants, have shown high potential to support this initial stage. These tools produce gains in planning by providing structured prompts that clarify task decomposition and strategy selection, which causally enhances goal specificity and self-efficacy. Evidence shows these mechanisms are particularly effective for novice learners or those with lower baseline SRL, and under instructional arrangements that allow learners to retain decision-making autonomy rather than following prescriptive instructions. Chatbots such as Khanmigo, educational ChatGPT, or conversational agents integrated into LMSs can guide students in formulating goals that follow SMART criteria, decomposing complex tasks into sequential subtasks, and suggesting strategies adjusted to each learner’s cognitive profile and performance history (Kasneci et al., 2023).
Experimental studies indicate that structured prompts generated by AI increased the quality of strategic planning and goal clarity by 15% - 22% on average, measured through rubric-based scoring of goal statements in secondary and higher education students in European and North American contexts (sample n = 432; Al Nabhani et al., 2025; Azevedo et al., 2022). This improvement was associated with measurable gains in persistence and engagement, with students completing 12% - 18% more planned tasks within deadlines compared with control groups. By externalising classical metacognitive questions (“What do I know?”, “What do I need to learn?”, and “What strategies can I use?”), AI acts as an initial self-regulatory mediator, particularly relevant for novice learners or those with low levels of SRL.
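To make the rubric-based scoring of goal statements concrete, the sketch below shows a deliberately simple heuristic a planning assistant might apply to a learner’s goal. The cue patterns and thresholds are illustrative assumptions, not the validated rubric used in the cited studies.

```python
import re

def smart_goal_check(goal: str) -> dict:
    """Heuristic screen of a goal statement against three SMART-style cues.
    Each check is a coarse proxy: word count for specificity, a digit for
    measurability, and a deadline word for time-boundedness."""
    return {
        "specific":   len(goal.split()) >= 6,                        # enough detail
        "measurable": bool(re.search(r"\d", goal)),                  # a number to track
        "time_bound": bool(re.search(r"\b(by|before|within)\b", goal, re.I)),
    }

checks = smart_goal_check("Summarise 3 chapters of the history unit by Friday")
vague = smart_goal_check("Study more")
```

A well-formed goal passes all three checks, while a vague one (“Study more”) fails them, which is exactly the distinction a chatbot prompt would then ask the learner to repair.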
However, Zimmerman (2023) warns that planning only contributes to autonomous self-regulation when control remains with the learner. Thus, the effectiveness of AI depends on its pedagogical design: overly prescriptive systems may reduce student agency, turning planning into a passive algorithmic process.
4.1.2. Execution and Monitoring: Regulation of Effort, Attention, and Progress
The execution phase entails the active implementation of the outlined strategies, including continuous self-monitoring, attentional control, and effort management (Pintrich, 2000). In this phase, digital technologies demonstrate the greatest functional maturity, particularly through learning analytics, predictive dashboards, and real-time adaptive systems.
Intelligent platforms collect fine-grained behavioural data (time on task, error patterns, revision frequency, pauses) and transform them into visual indicators accessible to both students and teachers. Self-regulatory dashboards allow students to compare their current progress with previously defined goals, promoting immediate strategic adjustments (Marcolino et al., 2025). Empirical evidence indicates that students with access to continuous visual feedback demonstrated increases of 18% - 22% in task persistence, reductions of 15% - 20% in procrastination, and improvements of 12% - 18% in effort regulation, measured using standardised self-regulation scales (SRL-Q; Lourenço et al., 2025). The study was conducted in Portuguese higher education institutions with a sample of 320 undergraduate students, providing contextualised and generalisable metrics of AI-supported SRL interventions.
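The transformation of raw behavioural traces into dashboard indicators can be sketched in a few lines. The event schema, action names, and indicator definitions below are hypothetical illustrations of the kind of aggregation such platforms perform, not the metrics of any specific LMS.

```python
from datetime import datetime, timedelta

def effort_indicators(events):
    """Aggregate raw LMS click events into simple self-monitoring indicators.
    events: list of (timestamp, action) tuples; the schema is hypothetical."""
    times = [t for t, _ in events]
    time_on_task = (max(times) - min(times)).total_seconds() / 60
    revisions = sum(1 for _, action in events if action == "revise")
    errors = sum(1 for _, action in events if action == "error")
    return {
        "time_on_task_min": round(time_on_task, 1),
        "revision_count": revisions,
        "error_rate": round(errors / len(events), 2),
    }

start = datetime(2025, 3, 1, 10, 0)
log = [(start, "open"), (start + timedelta(minutes=12), "revise"),
       (start + timedelta(minutes=20), "error"), (start + timedelta(minutes=45), "submit")]
ind = effort_indicators(log)
```

A dashboard would then plot such indicators against the learner’s stated goals, which is what enables the immediate strategic adjustments described above.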
Predictive systems based on machine learning provide an additional layer of support by anticipating risks of dropout or conceptual difficulties before they become irreversible, achieving 87% - 91% accuracy in predicting at-risk students. The study was conducted in Portuguese higher education institutions with a sample of 450 undergraduate students, using historical academic performance and engagement data as predictive features (Marcolino et al., 2025). When integrated ethically and transparently, these systems function as early warning systems that strengthen self-regulation, enabling timely pedagogical interventions.
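As a minimal sketch of the predictive layer, the code below hand-rolls a tiny logistic-regression risk model over two engagement features. The features, synthetic records, and training settings are illustrative assumptions; production systems cited above use far richer historical data and validated pipelines.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_risk_model(records, epochs=500, lr=0.5):
    """Fit a two-feature logistic regression by plain SGD.
    records: list of ((grade_avg, logins_per_week), dropped_out) pairs,
    with both features normalised to [0, 1]."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in records:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y                # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def risk_score(model, grade_avg, logins_per_week):
    """Probability-like dropout risk in [0, 1]."""
    w, b = model
    return sigmoid(w[0] * grade_avg + w[1] * logins_per_week + b)

# Synthetic, hypothetical records: dropout correlates with low grades/logins
data = [((0.9, 0.8), 0), ((0.8, 0.9), 0), ((0.7, 0.6), 0),
        ((0.3, 0.2), 1), ((0.2, 0.1), 1), ((0.4, 0.3), 1)]
model = train_risk_model(data)
```

An early-warning system would flag students whose score crosses a threshold for timely human follow-up, rather than triggering automated decisions.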
However, continuous algorithmic monitoring raises relevant ethical tensions. The boundary between self-regulatory support and excessive surveillance is thin. Holstein et al. (2019) argue that dashboards should prioritise pedagogical interpretation rather than raw metrics, as otherwise they risk promoting performance anxiety and extrinsic regulation driven by external control.
4.1.3. Self-Reflection: Generative Feedback and Reflective Learning
The self-reflection phase closes the self-regulatory cycle, involving self-evaluation of performance, causal attributions, and affective reactions that influence future cycles (Schunk & Zimmerman, 1998). Traditionally, this phase has been underexplored in formal educational contexts, often reduced to late summative assessment.
Generative AI introduces a qualitative shift in this domain, enabling immediate, personalised, and explanatory formative feedback, oriented toward reflection rather than mere error correction. Mechanistically, this feedback fosters causal reasoning and metacognitive insight by prompting learners to evaluate strategies and outcomes. Its impact is maximised when feedback is aligned with explicit rubrics, combined with teacher guidance, and gradually reduced as students internalise reflective processes, ensuring sustainable gains in autonomous SRL. Systems based on LLMs can generate explanations of why a response is incorrect, suggest alternative strategies, and promote adaptive causal attributions (Kasneci et al., 2023).
Recent studies show that generative, process-oriented feedback (rather than outcome-focused) increases metacognitive reflection by 18% - 22%, self-efficacy by 12% - 16%, and learning transfer by 15% - 20%, based on standardised scales applied to secondary and higher education students in Canada and Europe, with a combined sample of 380 participants (Hadwin et al., 2023; Tlili et al., 2023). When aligned with explicit rubrics and transparent criteria, AI feedback can strengthen realistic self-reflective judgments, one of the pillars of mature SRL (Zimmerman, 2023).
However, the risk of excessive externalisation of evaluation remains, as students come to rely systematically on algorithmic judgment, thereby weakening their capacity for independent self-assessment. This risk reinforces the need for intentional pedagogical integration, in which AI feedback is progressively reduced or transformed into reflective prompts, promoting gradual internalisation (Lourenço et al., 2025).
Table 1 synthesises how AI operationalises each of Zimmerman’s SRL phases, mapping classical processes to contemporary technological support and empirical evidence. This structured visualisation clarifies the precise mechanisms through which artificial intelligence externalises and progressively internalises self-regulatory competencies across the complete SRL cycle.
Table 1. AI integration in the phases of SRL according to Zimmerman’s Model.
| SRL phase (Zimmerman) | Classical process | AI-based support | Practical examples | Key evidence |
|---|---|---|---|---|
| Planning | SMART goal setting; task analysis | Generative chatbots | Khanmigo; ChatGPT Edu | Kasneci et al. (2023) |
| Monitoring | Self-observation; progress tracking | Learning analytics dashboards | Moodle Analytics; Canvas | Marcolino et al. (2025) |
| Self-reflection | Causal attribution; strategy evaluation | Explanatory generative feedback | AutoTutor; Duolingo Max | Hadwin et al. (2023) |
Source: Developed by the authors based on Zimmerman (2002, 2023) and empirical AI-SRL literature.
4.2. Empirical Evidence and Gradual Dependency
Despite the exponential growth of educational AI applications, the empirical basis regarding their long-term effects on self-regulation remains limited. Recent systematic reviews indicate that most studies report short-term gains in engagement and performance, but only a minority evaluate the progression toward autonomous self-regulation after the withdrawal of technological support (Akhmetova et al., 2025).
Emerging longitudinal studies suggest that AI-based interventions are more effective when designed according to a gradual dependency (fading) model, in which algorithmic scaffolding is progressively withdrawn as the student demonstrates greater self-regulatory competence (Hadwin et al., 2023). For example, prompts initially provide prescriptive guidance (“Start by identifying three key objectives for this task”) and gradually evolve into reflective prompts (“Which strategies worked best for you in this task, and why?”). Similarly, feedback shifts from direct answers toward self-verification questions that encourage metacognitive evaluation. SRL progress is assessed at each fading stage using analytics dashboards and standardized scales, measuring planning, monitoring, and reflective capacities after support reduction. This principle, aligned with Zimmerman’s sociocognitive theory, prevents the phenomenon of chronic cognitive offloading documented by Risko and Gilbert (2016).
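The fading model described above amounts to selecting scaffold intensity from the learner’s demonstrated SRL competence. The sketch below makes that selection explicit; the score scale and thresholds are illustrative assumptions, not validated cut-offs.

```python
def fading_prompt(srl_score: float) -> str:
    """Choose a scaffold for the current fading stage, given an SRL
    competence score in [0, 1] estimated from analytics and standardised
    scales. Thresholds are hypothetical."""
    if srl_score < 0.4:    # novice: prescriptive guidance
        return "Start by identifying three key objectives for this task."
    elif srl_score < 0.7:  # intermediate: guiding question
        return "What is your main objective, and which strategy will you try first?"
    else:                  # advanced: purely reflective prompt
        return "Which strategies worked best for you in this task, and why?"

novice_prompt = fading_prompt(0.2)
advanced_prompt = fading_prompt(0.9)
```

As the learner’s measured competence rises across cycles, the system moves from telling to asking, operationalising the withdrawal of algorithmic scaffolding.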
On the other hand, evidence warns of real risks of technological dependence. Georgiou (2025) observed significant declines in the quality of autonomous academic writing after intensive use of generative AI without explicit metacognitive guidance. These results reinforce the need for explainable AI (XAI), which makes the systems’ criteria, limits, and uncertainties visible, promoting algorithmic literacy and critical thinking (Kasneci et al., 2023).
In summary, the literature converges on a central conclusion: AI significantly enhances self-regulation cycles when it functions as a transitional mediator rather than a permanent substitute for human metacognitive processes. Progression toward autonomous self-regulation requires intentional pedagogical design, solid teacher training, and clear ethical frameworks.
The integration of AI into learning profoundly redefines support for self-regulation, operationalising, in an unprecedented way, the cyclical phases of Zimmerman. However, its transformative potential is only realised when guided by pedagogical, ethical, and developmental principles. Self-regulation is not automatable; it is cultivable. AI can accelerate that cultivation, or compromise it, depending on the educational choices that frame it.
5. Ethical Challenges, Privacy, and Systemic Equity
Education 4.0 provides a guiding framework for addressing ethical, privacy, and equity challenges in educational technology. Its core pillars—personalisation and adaptability, integration of intelligent technologies, development of 21st-century skills, and systemic ethics and equity—serve as the normative reference to evaluate how AI can enhance learning while safeguarding fairness, inclusiveness, and human-centred decision-making.
The increasing incorporation of advanced digital technologies and AI into educational ecosystems, as analysed in the previous chapters, introduces substantial pedagogical benefits but also exposes substantial ethical, legal, and social vulnerabilities. On one hand, AI enhances self-regulation cycles, personalisation, and pedagogical efficiency; on the other, its uncritical implementation can deepen structural inequalities, compromise student privacy, and institutionalise algorithmic biases that are difficult to detect and correct.
This chapter analyses the ethical challenges of educational AI from a systemic perspective, organising the risks into three interconnected dimensions: data protection, algorithmic biases, and educational equity, and relating them to European regulatory frameworks, recent empirical evidence, and international guidelines, with particular attention to the Portuguese context.
5.1. Data Protection and Algorithmic Biases
5.1.1. Educational Big Data and Privacy: Structural Tensions with the General Data Protection Regulation
AI-based educational systems operate on massive volumes of sensitive data, frequently referred to as educational big data, which include not only administrative information (age, gender, academic background) but also highly granular behavioural and cognitive data: error patterns, response times, learning styles, motivation levels inferred through interaction analysis, and, in some cases, emotional indicators derived from sentiment analysis (Liu et al., 2025).
This level of permanent monitoring is in direct tension with the fundamental principles of the General Data Protection Regulation (GDPR), namely data minimisation, purpose limitation, and the right to explanation of automated decisions (European Union, 2018). Selwyn (2021) argues that education has become one of the sectors most vulnerable to “silent datafication”, in which students, often minors, do not have the real capacity for informed consent.
Recent data breaches at global educational platforms expose the fragility of current data governance models. In 2024-2025, multiple European universities reported unauthorised access to academic databases integrated with learning analytics and generative AI tools, compromising sensitive information of thousands of students (Liu et al., 2025). These episodes reinforce O’Neil’s (2016) critique that high-impact algorithmic systems often operate without adequate public scrutiny.
In the educational context, the problem is exacerbated by power asymmetries among students, institutions, and global technology providers. Commercial educational AI platforms collect data for continuous model improvement, often with opaque contractual clauses that make it difficult to fully enforce the rights enshrined in the GDPR, such as the right to erasure or data portability (Kasneci et al., 2023).
5.1.2. Algorithmic Biases and Structural Discrimination
Beyond privacy, algorithmic biases constitute one of the most documented ethical threats of contemporary AI. Machine learning algorithms learn from historical data that reflect pre-existing social inequalities; when such data are used without critical correction, the systems tend to reproduce and amplify discrimination based on gender, ethnicity, social class, or language (Akgun & Greenhow, 2022; O’Neil, 2016).
In the educational domain, empirical studies show that recommendation and automated assessment systems can penalise students from linguistic or cultural minorities. Heggler et al. (2025) identified systematic discrepancies in the automated evaluation of academic texts produced by non-native English-speaking students, even when the conceptual content was equivalent. Similarly, Akgun and Greenhow (2022) warn of the under-representation of non-Anglophone educational contexts in the training datasets of widely used generative models.
These biases are particularly problematic when systems are used for high-impact decisions, such as curricular progression, dropout risk identification, or allocation of pedagogical support, without adequate human mediation. Holstein et al. (2019) emphasise that model opacity (the black-box problem) hinders the identification of algorithmic injustices, even by experienced teachers.
The emergence of generative AI intensifies these risks. Large Language Models trained predominantly on Anglo-Saxon corpora often incorporate implicit cultural norms that are not universal, thereby affecting pedagogical adequacy in Lusophone contexts (Kasneci et al., 2023). This phenomenon undermines semantic and cultural equity, an aspect often overlooked in assessments of technical effectiveness.
5.1.3. Audits, Transparency, and Algorithmic Ethics
In the face of these challenges, the literature converges on the need for systematic algorithmic audits and greater ethical transparency. Emerging Ethical AI frameworks advocate adopting principles such as fairness, accountability, transparency, and human oversight (FAT-HO), particularly in sensitive educational contexts (Floridi et al., 2018).
Kasneci et al. (2023) argue that explainable AI (XAI) should be integrated as a minimum requirement in educational systems, enabling teachers and students to understand the basis for algorithmic recommendations. This transparency is essential not only for legal compliance but also for maintaining pedagogical trust and student agency.
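A minimal form of the explainability Kasneci et al. call for is decomposing a model’s recommendation into per-feature contributions. The sketch below does this for a linear scoring model; the weights and feature names are hypothetical, and real XAI dashboards use richer attribution methods (e.g., SHAP-style values) for non-linear models.

```python
def explain_recommendation(weights, features, feature_names):
    """Decompose a linear model's score into per-feature contributions,
    ranked by absolute influence - the simplest explanation an XAI
    dashboard could surface to a teacher or student."""
    contributions = {name: w * x
                     for name, w, x in zip(feature_names, weights, features)}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical dropout-risk logit: lower grades and logins raise risk
score, ranked = explain_recommendation(
    weights=[-2.0, -1.5, 0.5],
    features=[0.3, 0.2, 0.9],
    feature_names=["grade_avg", "weekly_logins", "forum_posts"],
)
```

Showing that, say, `grade_avg` dominates the score lets a teacher judge whether the recommendation is pedagogically plausible before acting on it.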
In the European space, the AI Act (in its final stage of implementation in 2025) classifies educational AI systems as “high-risk”, requiring continuous impact assessment, rigorous technical documentation, and mandatory human oversight. However, the effectiveness of these measures will depend on educational institutions’ ability to operationalise them, preventing regulation from remaining merely formal.
5.2. Equity and Contextual Regulation
5.2.1. Digital Divide and Territorial Inequalities
Equity constitutes the most critical transversal ethical axis of educational AI. The promise of personalisation and inclusion can be reversed when structural inequalities in access, digital literacy, and technological infrastructure are ignored (Selwyn, 2021).
In Portugal, despite significant progress with programmes such as Escola Digital (Governo de Portugal, 2025), deep asymmetries persist between urban and rural contexts. Inland regions show 37% lower broadband coverage, 28% of students lack access to functional devices, and only 15% of teachers received specialised digital training, limiting the effective use of AI-based tools (Conselho de Ministros, 2025). These disparities reproduce the so-called second digital divide, in which the problem is no longer merely access but the capacity for meaningful pedagogical use.
Recent studies indicate that students from more privileged socioeconomic contexts benefit disproportionately from educational AI, as they possess greater digital capital, family support, and technologically robust school environments (Selwyn, 2021). Without explicit compensatory policies, AI risks amplifying historical educational inequalities.
5.2.2. International Guidelines and Ethical Regulation
International organisations have sought to respond to these challenges. UNESCO (2023) published global guidelines for the ethical use of AI in education, emphasising principles of equity, inclusion, linguistic diversity, and sustainability. These guidelines argue that AI should complement, not replace, the human role, respecting local cultural contexts and promoting social justice.
However, the implementation of these guidelines faces significant obstacles. The absence of binding mechanisms and the dependence on global technology providers hinder the translation of ethical principles into concrete institutional practices (Selwyn, 2021). Effective regulation requires coordination between public policies, educational institutions, and local communities.
5.2.3. Teacher Training and Recent Critical Cases
Teacher training emerges as a decisive variable in the ethical mediation of educational AI. UNESCO (2023) data indicate that only 23% of surveyed European higher-education and secondary school teachers feel fully prepared to integrate AI in a critical and ethical way, based on a representative sample of 1250 teachers across 15 EU countries. This training gap compromises the ability to identify biases, protect data, and promote autonomous self-regulation, as discussed in topic 4.
Critical cases documented in 2025 illustrate these weaknesses. In 7 universities across Germany, France, and Spain, the rushed adoption of AI-based automated assessment systems affected over 3400 students, leading to protests over opaque and perceived unfair decisions (Selwyn et al., 2025). These episodes reinforce the need for human-in-the-loop, continuous training, and participatory ethical evaluation.
In summary, the ethical challenges of AI in education are not peripheral but structural. Privacy, algorithmic biases, and equity constitute interdependent dimensions that demand systemic responses informed by empirical evidence, contextual regulation, and collective ethical commitment. AI can be a powerful instrument of educational democratisation or a sophisticated mechanism of exclusion—the difference lies in the political, pedagogical, and ethical choices that guide its implementation.
The following chapter proposes integrative and sustainable pathways that link technological innovation with social justice, digital literacy, and responsible governance toward a truly inclusive Education 4.0.
6. Sustainable Integrative Proposals for Practical Implementation
Education 4.0 serves as the integrative reference for translating theoretical, empirical, and ethical insights into actionable and sustainable educational strategies. Its four pillars—personalisation and adaptability, integration of intelligent technologies, development of 21st-century skills, and systemic ethics and equity—guide the design of IT, AI, and SRL interventions, ensuring that technological innovation complements human agency, fosters inclusion, and promotes socially just learning environments.
The previous chapters have shown that the integration of IT and AI in contemporary education is not merely a technical matter, but a profoundly pedagogical, ethical, and political process. The empirical evidence analysed confirms that AI can significantly enhance personalised learning, self-regulation cycles, and the efficiency of educational systems. However, it also reveals relevant systemic risks: privacy, algorithmic biases, cognitive dependency, and structural inequalities, that require integrated and sustainable responses.
This final chapter presents a propositional synthesis, articulating the theoretical and empirical contributions discussed throughout the article into an integrative framework that combines IT, AI, SRL, and ethics, and offers practical recommendations for teachers, institutions, and policymakers. It concludes with a forward-looking reflection on research gaps and emerging post-2026 trends.
6.1. Integrative Framework and Recommendations
6.1.1. A Unified Model: IT + AI + SRL + Ethics
Based on Zimmerman’s cyclical model (2002, 2023), the evidence on adaptive AI (Hadwin et al., 2023; Kasneci et al., 2023), and the ethical challenges analysed in topic 5, a sustainable integrative framework is proposed, grounded on four interdependent pillars: 1) Technological infrastructure (IT): robust, accessible, and interoperable digital platforms (LMS, mobile learning, hybrid environments); 2) Pedagogical AI: adaptive systems, analytics, and generative feedback oriented toward deep learning, not just performance; 3) SRL: progressive development of metacognitive, motivational, and behavioural competencies, with gradual scaffolding; 4) Ethics and systemic equity: data protection, bias mitigation, algorithmic transparency, and social justice as structuring principles.
The integrative framework proposed in this article articulates four interdependent pillars essential to the sustainable implementation of AI in education: IT infrastructure, AI systems, SRL processes, and ethical governance. These pillars interact dynamically to create a coherent ecosystem capable of scaling personalised learning while mitigating structural risks.
Table 2 synthesises this conceptual architecture, mapping each pillar’s core components to practical examples and institutional implementations that demonstrate operational viability.
Table 2. Pillars of education 4.0: Components, practical examples, and institutional implementation.
| Pillar | Core components | Practical examples | Institutional implementation |
|---|---|---|---|
| IT | LMS platforms; mobile learning; hybrid models | Moodle 4.4; MS Teams; Canvas | Escola Digital (Governo de Portugal, 2025) |
| AI | Adaptive systems; learning analytics; XAI | Cognitive Tutors; AutoTutor | NOVA IMS (2025) |
| SRL | Zimmerman’s three cycles; fading scaffolds | SMART goals; analytics dashboards | Certified teacher training programmes |
| Ethics | GDPR compliance; bias audits; equity | Explainable AI dashboards | Portugal Profundo Equity Initiative |
Source: Developed by the authors synthesizing sections 2 - 5 empirical and theoretical findings.
Expanding on Figure 1, the IT pillar provides the foundational infrastructure enabling seamless access across diverse contexts, from urban 6G connectivity to rural 4G-limited regions. AI systems operationalise Zimmerman’s SRL cycles through real-time adaptation and predictive analytics, while ethical governance ensures GDPR-XAI compliance and algorithmic transparency. The SRL pillar maintains human agency as the architectural core, preventing pathological cognitive offloading through intentional fading scaffolds.
This four-pillar architecture resolves the structural tensions identified throughout the article: scalability versus equity, personalisation versus bias, and efficiency versus autonomy. Portuguese institutional innovation, exemplified by NOVA IMS (2025) and “Escola Digital” (Governo de Portugal, 2025), validates systemic feasibility, positioning Portugal as a European vanguard for Education 4.0 implementation.
Figure 1. Four-pillar architecture for Education 4.0.
Source: Developed by the authors based on the theory presented in this article.
6.1.2. Actionable Recommendations for Practical Implementation
Based on the synthesis of the literature and the evidence discussed, seven practical recommendations are presented, oriented toward European contexts and particularly relevant for Portugal:
1) Design AI as gradual scaffolding:
Educational AI systems should incorporate explicit fading mechanisms that progressively reduce support as the student develops self-regulatory competencies, thereby avoiding cognitive dependency (Hadwin et al., 2023; Risko & Gilbert, 2016).
2) Integrate SRL as an explicit pedagogical goal:
Self-regulation should not be an implicit by-product of technology but an explicit curricular objective, with clear indicators of planning, monitoring, and self-reflection (Zimmerman, 2023).
3) Adopt interpretable and learner-centred dashboards:
Learning analytics should prioritise comprehensible pedagogical visualisations that promote reflection and autonomous decision-making rather than control or surveillance (Holstein et al., 2019).
4) Ensure ethical compliance and algorithmic transparency (XAI):
Institutions should require a minimum level of explainability for the systems used, conduct regular bias audits, and maintain clear documentation of data use, in line with the GDPR and the European AI Act (Kasneci et al., 2023).
5) Invest in continuous teacher training in educational AI:
Teacher development should extend beyond technical use to encompass algorithmic literacy, AI ethics, self-regulatory pedagogical design, and the critical evaluation of tools (UNESCO, 2023).
6) Implement compensatory digital equity policies:
Public programmes should prioritise rural and socioeconomically vulnerable contexts, ensuring infrastructure, technical support, and contextual adaptation of AI solutions (Selwyn, 2021; Conselho de Ministros, 2025).
7) Maintain the human-in-the-loop model as a structural principle:
High-impact pedagogical decisions should always involve qualified human supervision, preserving professional judgment, empathy, and cultural contextualisation.
These recommendations do not constitute an exhaustive list but rather an operational guide for the responsible and sustainable integration of AI in education, aligned with the values of Education 4.0 and the principles of equity and inclusion.
The seven actionable recommendations listed above are summarised in Table 3, which maps each recommendation to the relevant stakeholders, the implementation timeline, and the measurable success metrics. This synthesis facilitates operational planning for the integration of Education 4.0 pillars (IT, AI, SRL, and Ethics) in European educational contexts, particularly in Portugal.
The success metrics presented represent reference projections for the 2026-2027 policy planning scenarios and do not constitute previously validated empirical results.
Table 3. Evidence-based recommendations for integrating education 4.0 pillars: IT, AI, SRL, and Ethics.
| # | Recommendation | Target stakeholders | Implementation timeline | Success metrics |
|---|---|---|---|---|
| 1 | Hybrid LMS + XAI dashboards | School directors | 2026 Q1 | 80% teacher adoption rate |
| 2 | Zimmerman 3-cycle teacher training | Ministries of Education | 2026-2027 | Certified teachers +30% |
| 3 | GDPR-XAI compliance audits | Tech providers | Immediate (2026) | Zero major violations |
| 4 | Equity-first AI deployment | Regional schools | 2026 Q2 | Digital divide reduction 25% |
| 5 | SRL analytics for at-risk students | Psychologists | Pilot 2026 | Dropout rate −15% |
| 6 | National AI-Education Observatory | Governments | 2027 launch | Annual impact reports |
| 7 | Teacher-AI co-creation workshops | Universities | Semester programs | 500 teachers/year trained |
Source: Developed by the authors synthesising policy (Sections 2 - 3), technology (Section 5), and empirical findings (Section 4) for Education 4.0 implementation.
7. Conclusion
Education 4.0, articulated through its four pillars—technological infrastructure, intelligent systems, self-regulated learning, and ethics and systemic equity—provides a coherent framework for integrating AI into education. This framework ensures that technological innovation complements human agency, fosters inclusion, and promotes socially just learning environments. The evidence presented throughout this article demonstrates that these interdependent pillars are essential for operationalising personalised learning, adaptive support, and reflective feedback, while mitigating systemic risks.
The analysis confirms the significant potential of AI-based interventions to enhance SRL. Adaptive dashboards, generative feedback, and predictive analytics support planning, monitoring, and reflection, especially when aligned with gradual scaffolding and fading mechanisms (Hadwin et al., 2023; Zimmerman, 2023). These interventions improve engagement, persistence, and strategic competence, while immersive environments and hybrid models such as that of NOVA IMS (2025) provide scalable examples of operational effectiveness (Radianti et al., 2020; Létourneau et al., 2025).
At the same time, AI integration carries important risks. Algorithmic biases, privacy concerns, cognitive dependency, and structural inequities can undermine the benefits if ethical and pedagogical principles are not actively applied (Kasneci et al., 2023; Selwyn, 2021). The combination of GDPR-XAI compliance, teacher training, and equity-oriented policies is essential to prevent harm and preserve human agency. In particular, intentional fading of AI support, coupled with monitoring via dashboards and standardised SRL scales, ensures that students gradually internalise self-regulatory skills rather than becoming dependent on technology.
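The fading mechanism described above can be illustrated schematically: support intensity is stepped down as a learner's standardised SRL score stabilises, so that autonomy is handed back gradually rather than withdrawn abruptly. The 0-100 SRL scale, the cut-points, and the four support levels below are illustrative assumptions, not a validated instrument.

```python
# Illustrative fading schedule: AI scaffolding level (3 = full support,
# 0 = fully faded) steps down as the learner's SRL score improves.
# The 0-100 scale and cut-points are illustrative assumptions.

def scaffolding_level(srl_score: float) -> int:
    if srl_score < 40:
        return 3   # full prompts, worked examples, reminders
    elif srl_score < 60:
        return 2   # prompts on request, periodic reminders
    elif srl_score < 80:
        return 1   # dashboard monitoring only
    return 0       # support fully faded; learner self-regulates

# A rising score trajectory yields a monotonically fading support path.
trajectory = [35, 48, 63, 82]
path = [scaffolding_level(s) for s in trajectory]
```

Monitoring such a path through dashboards and standardised SRL scales gives teachers a concrete signal that self-regulatory skills are being internalised rather than substituted by the technology.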
This review also has limitations. Despite growing empirical evidence, long-term studies of autonomous SRL progression after AI-supported interventions end remain scarce (Akhmetova et al., 2025). Cultural, linguistic, and contextual diversity is underrepresented, limiting the generalisation of current findings. Additionally, affective AI and the deployment of explainable AI (XAI) in real classrooms remain insufficiently explored, leaving open questions regarding motivation, trust, and ethical use (Liu et al., 2025).
Looking forward, the convergence of generative AI, multi-agent systems, immersive learning environments, and predictive analytics is expected to create highly personalised educational ecosystems. The transformative potential of these technologies will only be realised if guided by the Education 4.0 pillars, ethical governance, and human-centred pedagogy. In this sense, AI is not a substitute for education but a catalyst: it can deepen learning, promote inclusion, and strengthen autonomy when applied reflectively and ethically.