The Challenges Higher Education Students Face in Using Artificial Intelligence (AI) and Their Learning Experiences
1. Introduction
The COVID-19 pandemic disrupted global education, affecting 1.6 billion students and leading to school closures in 190 nations, with low-income countries hit hardest (UN Sustainable Development Group, 2020). The shift to online learning exposed a lack of IT infrastructure in many universities, delaying education. School closures also impacted critical community services and parents’ ability to work. Governments prioritized education continuity, often turning to ICT and online instruction, accelerating a trend already underway. Online learning became the new normal, supported by institutions like the World Bank, which provided guidance on adapting to new platforms. This shift promotes 21st-century skills such as critical thinking, creativity, collaboration, and communication, moving away from traditional, one-size-fits-all teaching methods (Bondie et al., 2019). Student-centered learning approaches, like Problem-Based Learning (PBL) and flipped classrooms, are now emphasized, supported by tools like Padlet and Canva. Furthermore, in the twenty-first century, a person’s capacity for critical thought and level of literacy must be considered for effective functioning and employment in a society where algorithmic thinking and computing literacy have been introduced (Coleman, 2020). Students need to master these skills to prepare them for entry into society; with the recent boom of big data, machine learning, robotics, and artificial intelligence, new competencies such as computational thinking, data literacy, and AI literacy have emerged.
The rise of AI, especially generative AI tools, has instilled dread in the education industry in recent months. The fundamental fear among institutions is that AI implementation would jeopardize valuable academic paradigms such as evaluation, course design, and activities. According to the Straits Times, Fadhlina Sidek, Malaysia’s Minister of Education, has proposed that beginning in 2027 elementary school students will receive instruction in the fundamentals of artificial intelligence (AI), which could help them become adaptable and ready to seize opportunities and tackle new problems using intelligent technologies like AI (Harun & Sallehuddin, 2024). Moreover, in May 2024 she announced a number of programs, most of which were designed to improve teachers’ use of digital technology. With a budget of RM1 million and the goal of reaching 500 selected teachers nationwide, one of these initiatives offers instructors the opportunity to enhance their skills through courses on technological empowerment, with a focus on artificial intelligence. She noted that, to prepare 100,000 teachers to become recognized as “Apple Teachers”, MOE would work with Apple Professional Learning Specialist Malaysia. To recognize educators who exhibit extraordinary competence in integrating digital technology, 1000 educators will also be selected to enroll in courses that lead to accreditation as digital specialists and recognition as MOE’s Guru Jauhari Digital (Mail, 2024a). It is critical to pinpoint any gaps, obstacles, or constraints that prevent AI systems from reaching their full potential and that jeopardize students’ safety during the educational process. The aim of this research, therefore, is to identify the relationship between the challenges higher education students face in using AI and their learning experiences.
2. Artificial Intelligence (AI)
2.1. The Origin of AI
The concept of artificial intelligence (AI) originated in ancient history and mythology, with examples like the Greek myth of Talos and Jewish golems and progressed through the Renaissance with mechanical automatons (Iavazzo et al., 2024). In 1943, Walter Pitts and Warren McCulloch introduced neural networks, while Alan Turing’s work in 1950 on machine learning and the Turing test furthered AI development (Forghani, 2020). Marvin Minsky and Dean Edmonds built the first neural net machine in 1951 (Flasiński, 2016). The 1956 Dartmouth Conference, organized by John McCarthy and others, marked AI’s formal inception as a field (Haenlein & Kaplan, 2019). However, the 1970s and 1980s experienced an “AI winter” due to unmet expectations and slow progress. A resurgence in the late 20th and early 21st centuries, driven by advancements in machine learning, processing power, and big data, led to significant AI advancements. Today, AI is integral in fields such as computer vision, natural language processing, autonomous vehicles, healthcare, and finance, though it also raises important ethical issues.
2.2. The Developer of AI
The development of AI involved numerous influential figures rather than a single inventor. John McCarthy, who coined the term “artificial intelligence” and organized the 1956 Dartmouth Conference, is a central figure. Alan Turing introduced the Turing Test in 1950, foundational for theoretical computer science and AI. After that, Herbert A. Simon and Allen Newell created the first AI program, “Logic Theorist”, in 1955 (Muggleton, 2014). Arthur Samuel pioneered machine learning with a self-learning checkers program in 1959 (Lefkowitz, 2019). Joseph Weizenbaum created ELIZA, an early natural language processing tool, in 1966 (Muthukrishnan et al., 2020). Ernst Dickmanns developed the first autonomous vehicle in 1986 (Delcker, 2018). These pioneers collectively advanced AI, leading to modern technologies like OpenAI’s ChatGPT and Apple’s Siri.
2.3. AI in System Education
AI’s integration into education poses challenges, notably concerning privacy and data security. UNESCO has issued guidance emphasizing a human-centered approach to deploying generative AI in educational settings, promoting ethical validation and pedagogical design (UNESCO, 2023). China’s Ministry of Education has designated schools as AI education bases, enhancing classroom engagement through tools like Brainco’s headbands, which monitor student attentiveness. Similarly, in Abu Dhabi, AI engages students in critical thinking exercises and debates (Almansoori et al., 2024). In Canada, universities like Dalhousie, McGill, and the University of Alberta offer diverse AI courses, reflecting a global trend towards AI education in higher learning institutions (Bungay, 2021). These initiatives highlight both the potential and challenges of AI in education, underscoring the need for thoughtful implementation and ethical considerations.
In Malaysia, AI is increasingly integrated into education. ChatGPT has gained popularity in academia since its 2022 launch, with educators like Assoc Prof Ooi Boon Yaik using it for grammar checks (Okonkwo & Ade-Ibijola, 2023). Malaysia’s Ministry of Higher Education plans to establish the nation’s first AI-focused polytechnic in collaboration with the University of Tsukuba, Japan (Mail, 2024b). Additionally, the Ministry of Education intends to introduce basic AI education in primary schools by 2027 to prepare students for digital literacy (Timotheou et al., 2023). In higher education, Malaysia offers various AI-related master’s programs and has inaugurated its first AI faculty at Universiti Teknologi Malaysia (UTM) to cultivate AI and robotics expertise among students. These initiatives underscore Malaysia’s commitment to integrating AI into its educational framework to meet future workforce demands and foster technological innovation.
2.4. Challenges Faced by Students in Utilizing AI
A study titled “Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT” uses an ethnographic approach to understand ChatGPT’s perspective on the challenges and potential it offers for higher education. The study, based on interview data, identifies several challenges students face when using AI like ChatGPT: lack of awareness and understanding of its capabilities, accuracy and reliability issues, inadequacies in personalized learning, technological barriers, and limited communication and collaboration opportunities (Michel-Villarreal et al., 2023). There are further concerns associated with the broad use of AI in education. Given that most AI-powered applications seek and gather users’ personal data, sometimes even without their consent, security and invasion of privacy are also concerns. Excessive automation can cause social isolation and lower levels of interpersonal engagement, which significantly reduces the role that teachers play in the teaching and learning process. Ethical challenges are another concern with AI applications in education, including bias and discrimination (automatic scoring methods), surveillance (personalized learning systems), and autonomy (predictive systems) (Akgun & Greenhow, 2021). Lastly, some students are particularly concerned about the reliability of generative AI technologies: ChatGPT lacks the originality and creativity of humans (Chan & Hu, 2023) and cannot generate a thesis for postgraduate students.
2.5. Students’ Learning Experiences with AI
AI has a dual impact on student learning experiences. Positive outcomes include immediate feedback, personalized learning, continuous support, access to resources, and data analytics (Huang et al., 2021; Chan & Hu, 2023; Aithal & Aithal, 2023). These benefits enhance engagement, learning effectiveness, and students’ self-directed learning. However, negative impacts include limited interaction, academic integrity issues such as plagiarism, technology dependence, and limited practical experience (Ali et al., 2024b; Michel-Villarreal et al., 2023). The absence of face-to-face interaction can affect mental health and motivation. Moreover, AI’s structured approach may hinder the creative and critical thinking skills necessary for future careers (Nguyen et al., 2024). Maintaining academic integrity and ensuring AI’s ability to accommodate diverse learning needs remain crucial to students’ learning experiences.
3. Problem Statement
The function of technology is not just to provide students with information and communications technology (ICT) skills, but also to achieve excellent education, free of the limits of geography and time, and to foster curiosity, creativity, and cooperation. AI technologies are rapidly progressing and becoming an integral part of everyday life and education, providing substantial advantages for teachers, students, administrators, and organizations. These technologies support various learning processes with tools like Duolingo for intelligent language learning, EdApp for adaptive learning, and Grammarly for automatic writing evaluation (Habibie, 2020). Educators must address student concerns and experiences with AI to achieve the desired learning outcomes. Although extensive research exists on AI in education, much of it focuses on technical aspects or broader pedagogical and institutional implications, often neglecting the nuanced challenges students face in using AI. Wang et al. (2023) identify limits and risks in AI education, such as difficulties in speech recognition with accents and dialects and the lack of comprehensive cultural understanding in AI-based English learning resources, which affect student learning experiences. Research on ChatGPT underscores ethical considerations in its use for supplementary learning and its impact on academic performance (Vargas-Murillo et al., 2023). Pan (2016), cited in Sirghi et al. (2024), warns that excessive AI adoption could replace intellectual development tools like reading with sounds or visuals, potentially harming long-term human intelligence. Over-automation can lead to student isolation, decreased social interaction, and reduced educator roles in teaching, raising concerns about lifelong learning and ethical implications.
Recent research highlights numerous challenges associated with the widespread adoption of AI in education. Chung Kwan Lo (2023), cited in Michel-Villarreal et al. (2023), identifies challenges such as accuracy, reliability, overreliance, and plagiarism issues when using ChatGPT. Akgun & Greenhow (2021) categorize ethical challenges, including privacy concerns, bias in scoring, surveillance through personalized learning systems, and autonomy issues. Additionally, Shofiah et al. (2023) discuss challenges in AI for academic writing, such as understanding AI limitations, technical skills, ethics, bias, and changes in educational roles. These challenges underscore the need for careful consideration and ethical implementation of AI in education to maximize its benefits effectively. Ali et al. (2024b) discuss the use of Natural Language Processing (NLP) by ChatGPT and other AI models to produce human-like transcripts in various languages. However, these tools pose inherent challenges, classified into five dimensions: user, operational, environmental, technological, and ethical. Content knowledge limitations due to restricted training data access affect output quality, potentially providing students with inaccurate information in less-explored study areas and impacting their learning. Although ChatGPT generates educational materials and answers student questions, it currently lacks the capability to offer personalized support akin to traditional educational environments. Various studies identify major gaps in students’ positive and negative AI learning experiences due to challenges such as privacy and data security, ethical considerations, over-reliance on AI, access and equity, lack of awareness and understanding, and skill development and adaptability. Literature, including Michel-Villarreal et al. (2023) and Aithal & Aithal (2023), supports these challenges’ impact on student learning experiences.
Addressing these challenges is essential to maximize AI’s benefits and create effective, inclusive learning environments for students.
4. Research Objectives
The aims of this research are:
1) To identify the challenges faced by UNITAR International University students when using AI applications.
2) To identify UNITAR International University students’ learning experiences when using AI applications.
3) To identify the relationship between the challenges faced by UNITAR International University students when using AI applications and their learning experiences.
5. Methodology
5.1. Population and Sampling
This study, conducted at UNITAR International University in Kelana Jaya, Petaling Jaya, Selangor, in mid-May, aimed to understand university students’ learning experiences using AI. The target population included undergraduate and postgraduate students, with the specific sample comprising students pursuing foundation, diploma, bachelor’s, and master’s degrees. A total of 150 respondents were selected, with an equal split of 75 undergraduate and 75 postgraduate students. The sampling technique employed was stratified sampling, a reliable technique that guarantees the proportional inclusion of all pertinent subgroups, improving the representativeness and accuracy of a sample. It is especially helpful for research where population subgroups differ markedly on important factors.
This approach ensured a balanced representation of students’ learning experiences with AI based on their education level. The study’s methodology was grounded in the understanding that a population represents any group from which data must be collected, as noted by Banerjee & Chaudhury (2010), and a sample is the specific group from which data is gathered, as defined by Van Haute (2021). The stratified sampling method was chosen to streamline the data collection process while guaranteeing that every demographic subgroup is fairly represented in the sample, which improves the precision and accuracy of the findings, especially when there are notable disparities between the subgroups.
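The stratified design described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study’s actual procedure: the roster, the field names (`id`, `level`), and the stratum sizes are hypothetical, chosen only to mirror the study’s 75 undergraduate / 75 postgraduate split.

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=42):
    """Draw an equal-sized random sample (without replacement) from each stratum."""
    random.seed(seed)
    strata = {}
    for person in population:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, per_stratum))
    return sample

# Hypothetical roster: 300 undergraduates and 300 postgraduates.
roster = ([{"id": i, "level": "undergraduate"} for i in range(300)]
          + [{"id": 300 + i, "level": "postgraduate"} for i in range(300)])

# 75 from each stratum, matching the study's 150-respondent design.
sample = stratified_sample(roster, "level", 75)
print(len(sample))  # 150
```

Equal allocation per stratum is used here because the study drew 75 respondents from each level; proportional allocation (sampling each stratum in proportion to its population share) is the other common variant.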
5.2. Research Instrument
Data for this study were collected using a questionnaire administered via Google Forms. The survey questions focus on the challenges students face using AI and their learning experiences with AI, aiming to identify the relationship between these challenges and learning experiences among UNITAR International University students. The questionnaire comprises three parts: Part A collects demographic information from respondents; Part B asks respondents to rate their agreement with statements about the challenges they face when using AI; and Part C asks them to rate their learning experiences when using AI.
A combination of Likert and category scales was used, as they effectively gather data by allowing respondents to indicate their level of agreement with each statement. The Likert scale also offers a neutral option, avoiding a forced “either/or” position, and facilitates straightforward analysis once all data are collected. A category scale was used in Part A to record respondents’ backgrounds, including age, gender, race, programme, and AI usage in learning; this scale is simple and allows respondents to easily identify the most appropriate option. In Parts B and C, a 5-point Likert scale measures respondents’ levels of agreement or disagreement; it is easy to understand and well suited to quantitative data analysis. Part B investigates challenges faced by UNITAR International University students when using AI, assessing four challenges with five questions each: privacy & data security, ethical considerations, over-reliance on AI, and lack of awareness and understanding. Part C assesses students’ learning experiences when using AI and consists of five questions. Each of the 25 Likert-scaled items is rated from 1 (strongly disagree) to 5 (strongly agree), evaluating students’ perceptions of the challenges and their impact on learning experiences. The questionnaire, distributed via UNITAR’s official email using Google Forms, includes close-ended questions. Data collection took a few days across the 150 undergraduate and postgraduate respondents, with each respondent completing the questionnaire independently in approximately 15 minutes. Two statistical techniques were employed: descriptive and inferential statistics.
Mean, standard deviation, and frequency were used in the descriptive statistics to address objectives one and two, while correlation analysis was used in the inferential statistics to test the relationship between variables, identifying the Pearson correlation (r) and significance value (p). Statistical Product and Service Solutions (SPSS) was used to compute the correlations. This analysis identifies the relationship between the challenges students face when using AI and their learning experiences.
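The descriptive and correlational computations described above can also be replicated outside SPSS. The sketch below is a minimal, self-contained illustration using made-up Likert-scale scores (not the study’s data); it reports the mean, sample standard deviation, and Pearson r that SPSS would report (SPSS additionally derives the two-tailed p-value from a t-distribution with n − 2 degrees of freedom).

```python
import math
import statistics

def describe(scores):
    """Mean and sample standard deviation, as reported in the descriptive tables."""
    return statistics.mean(scores), statistics.stdev(scores)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-respondent composite scores on a 1-5 Likert scale.
challenge = [3.2, 4.0, 3.6, 4.4, 2.8, 3.8]
experience = [3.0, 4.2, 3.4, 4.6, 2.6, 4.0]

m, sd = describe(challenge)
r = pearson_r(challenge, experience)
```

A positive r here would mean that respondents who report higher challenge levels also report stronger (negative) learning-experience effects, which is the pattern the study’s Table 5 examines.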
5.3. Reliability and Validity Test
Table 1 shows that the Cronbach’s alpha coefficient for all five variables exceeds 0.7, with each variable represented by five items: Part B of the questionnaire contains 20 items covering the four independent variables, and Part C covers the dependent variable of interest. Among the independent variables, lack of awareness and understanding had the highest Cronbach’s alpha (0.89), indicating acceptable internal consistency, followed by privacy & data security (0.87), ethical considerations (0.78), and over-reliance on AI (0.77), all of which are acceptable. The dependent variable, students’ learning experiences with AI, has a Cronbach’s alpha of 0.86, which is also acceptable. In addition, a validity test of the questionnaire showed that each item correlated significantly with its construct; the questionnaire was therefore deemed valid.
Table 1. Reliability analysis statistics.
No. | Items | Cronbach’s Alpha | Number of items
1. | Privacy & data security | 0.87 | 5
2. | Ethical considerations | 0.78 | 5
3. | Over-reliance on AI | 0.77 | 5
4. | Lack of awareness and understanding | 0.89 | 5
5. | Students’ learning experiences when using AI | 0.86 | 5
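The alpha values in Table 1 follow the standard formula alpha = (k / (k − 1)) × (1 − Σ item variances / variance of respondent totals) for k items. The sketch below shows the calculation on made-up responses from six hypothetical respondents; the data are illustrative only and are not the study’s responses.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale; item_scores is a list of items,
    each a list of the respondents' scores on that item."""
    k = len(item_scores)                   # number of items (5 per variable here)
    respondents = list(zip(*item_scores))  # one tuple of k answers per respondent
    item_vars = sum(statistics.variance(item) for item in item_scores)
    total_var = statistics.variance([sum(r) for r in respondents])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item Likert responses from 6 respondents (rows = items).
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [3, 5, 4, 4, 2, 4],
    [4, 5, 3, 4, 3, 5],
    [5, 4, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
```

Because these made-up items move together across respondents, alpha comes out well above the 0.7 threshold the study applies; items that vary independently would drive alpha toward zero.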
6. Result
The respondents’ background in this study is based on six items: age, gender, race, programs, use of AI in the learning process, and AI applications or tools used. Table 2 below shows the findings of the background of respondents for this research.
Table 2. Background of respondents.
No. | Items | Categories | Frequency | Percentage (%)
1. | Age | Below 20 | 33 | 22.0
 | | 21 - 25 | 41 | 27.3
 | | 26 - 30 | 4 | 2.7
 | | 31 - 35 | 30 | 20.0
 | | 36 - 40 | 24 | 16.0
 | | 40 and above | 18 | 12.0
2. | Gender | Female | 105 | 70.0
 | | Male | 45 | 30.0
3. | Race | Chinese | 23 | 15.3
 | | Indian | 71 | 47.3
 | | Malay | 43 | 28.7
 | | Native | 10 | 6.7
 | | Other | 3 | 2.0
4. | Programmes | Foundation | 5 | 3.3
 | | Diploma | 29 | 19.3
 | | Bachelor’s Degree | 41 | 27.3
 | | Master’s Degree | 75 | 50.0
5. | Do you use AI in your learning process? | Yes | 129 | 86.0
 | | No | 21 | 14.0
6. | What AI applications or tools do you use? | None | 19 | 12.67
 | | Bard | 9 | 6.00
 | | Bing AI | 1 | 0.67
 | | Canva | 7 | 4.67
 | | Canva and ChatGPT | 3 | 2.00
 | | Canva and Cici | 1 | 0.67
 | | Canva, Cici and Quillbot | 1 | 0.67
 | | Canva and Kahoot | 1 | 0.67
 | | ChatGPT | 78 | 52.00
 | | ChatGPT and paragraph tool | 1 | 0.67
 | | ChatGPT and Quillbot | 2 | 1.33
 | | ChatGPT and Canva | 1 | 0.67
 | | ChatGPT, Cici and Gemini | 1 | 0.67
 | | Cici | 5 | 3.33
 | | Cici and Quillbot | 1 | 0.67
 | | Co-pilot | 1 | 0.67
 | | Course Network with Teams | 1 | 0.67
 | | Gemini | 6 | 4.00
 | | Jenni AI | 5 | 3.33
 | | Quillbot, Poe, Gemini, Co-pilot and ChatPDF | 1 | 0.67
 | | Quillbot | 4 | 2.67
 | | Spinbot | 1 | 0.67
The table above shows the respondents’ backgrounds. Of the 150 respondents, 27.3% are aged 21 to 25, 22% are below 20, 20% are aged 31 to 35, 16% are aged 36 to 40, 12% are above 40, and 2.7% are aged 26 to 30. The majority of respondents are female (70%), with males at 30%. By race, most respondents are Indian (47.3%), followed by Malay (28.7%), Chinese (15.3%), Native (6.7%), and Others (2%). Half of the respondents (50%) are in Master’s degree programmes, followed by Bachelor’s degree (27.3%), Diploma (19.3%), and Foundation (3.3%) programmes. Furthermore, 86% of respondents use AI in their learning process, while the remaining 14% do not. The most used AI tool is ChatGPT (52%), while 12.67% use no AI tool; Bard is used by 6% of respondents, Canva by 4.67%, Gemini by 4%, Jenni AI and Cici by 3.33% each, and Quillbot by 2.67%. Various combinations and other tools were also noted by individual respondents.
6.1. Challenges Faced by Students in University When Using AI Applications
Table 3 addresses the first research question, which focuses on the challenges faced by university students when using AI applications. The findings show that the overall level of challenges is high (mean = 3.81, SD = 0.443). The issues of privacy & data security (mean = 3.74, SD = 0.817) and ethical considerations (mean = 3.88, SD = 0.711) are at a high level, as are over-reliance on AI (mean = 3.87, SD = 0.702) and lack of understanding and awareness (mean = 3.76, SD = 0.805).
Table 3. Challenges faced by students in university when using AI applications.
No. | Items | Mean | Standard Deviation
1. | Issues of Privacy & Data Security | 3.74 | 0.817
2. | Issues of Ethical Considerations | 3.88 | 0.711
3. | Issues of Over-Reliance on AI | 3.87 | 0.702
4. | Issues of Lack of Understanding and Awareness | 3.76 | 0.805
 | Total | 3.81 | 0.443
6.2. Students’ Learning Experiences on Using AI Applications
Table 4 addresses the second research question, which focuses on students’ learning experiences when using AI applications. The findings show that the overall level is high (mean = 3.74, SD = 0.892). Decreased critical thinking skills (mean = 3.93, SD = 1.072) and detraction from the quality of education (mean = 3.93, SD = 1.008) are at a high level, as are lack of interaction reducing motivation (mean = 3.95, SD = 1.119) and misleading materials (mean = 3.62, SD = 1.241). Lastly, reduced engagement in the learning process is at a medium level (mean = 3.26, SD = 1.019).
Table 4. Students’ learning experiences on using AI applications.
No. | Items | Mean | Standard Deviation
1. | Decrease of critical thinking skills | 3.93 | 1.072
2. | Detracted from the quality of education | 3.93 | 1.008
3. | Lack of interaction reduce motivation | 3.95 | 1.119
4. | Misleading on materials | 3.62 | 1.241
5. | Reduced engagement in learning process | 3.26 | 1.019
 | Total | 3.74 | 0.892
6.3. Relationship of Challenges Faced by Students in University on Using AI Applications to Their Learning Experiences
Table 5 presents the third objective of this research: the relationship between the challenges faced by university students when using AI applications and their learning experiences. The findings show a weak positive linear correlation (r = 0.21, p < 0.05) between the challenges of privacy and data security and students’ learning experiences when using AI applications, indicating a significant relationship between these variables. There is also a weak positive linear correlation (r = 0.23, p < 0.01) between the challenges of ethical considerations and students’ learning experiences, a highly significant relationship as the p-value is below 0.01.
Table 5. Relationship of challenges faced by students in university on using AI applications to their learning experiences.
 | | Students’ Learning Experiences on Using AI Applications | Privacy & Data Security | Ethical Considerations | Over-Reliance on AI | Lack of Understanding and Awareness
Students’ Learning Experiences on Using AI Applications | Pearson Correlation | 1 | 0.207* | 0.234** | 0.493** | 0.394**
 | Sig. (2-tailed) | | 0.011 | 0.004 | <0.001 | <0.001
 | N | 150 | 150 | 150 | 150 | 150
Privacy & Data Security | Pearson Correlation | 0.207* | 1 | −0.027 | 0.076 | 0.177*
 | Sig. (2-tailed) | 0.011 | | 0.739 | 0.357 | 0.031
 | N | 150 | 150 | 150 | 150 | 150
Ethical Considerations | Pearson Correlation | 0.234** | −0.027 | 1 | 0.095 | −0.064
 | Sig. (2-tailed) | 0.004 | 0.739 | | 0.245 | 0.433
 | N | 150 | 150 | 150 | 150 | 150
Over-Reliance on AI | Pearson Correlation | 0.493** | 0.076 | 0.095 | 1 | 0.458**
 | Sig. (2-tailed) | <0.001 | 0.357 | 0.245 | | <0.001
 | N | 150 | 150 | 150 | 150 | 150
Lack of Understanding and Awareness | Pearson Correlation | 0.394** | 0.177* | −0.064 | 0.458** | 1
 | Sig. (2-tailed) | <0.001 | 0.031 | 0.433 | <0.001 |
 | N | 150 | 150 | 150 | 150 | 150
*. Correlation is significant at the 0.05 level (2-tailed); **. Correlation is significant at the 0.01 level (2-tailed).
Additionally, there is a moderate positive linear correlation between over-reliance on AI and students’ learning experiences when using AI applications (r = 0.49, p < 0.01), indicating a highly significant relationship. Lastly, lack of understanding and awareness also shows a moderate positive linear correlation with students’ learning experiences (r = 0.39, p < 0.01), again highly significant. These findings suggest varying degrees of correlation between the challenges and students’ experiences when using AI applications, with some relationships more pronounced than others, but all statistically significant.
7. Discussion
7.1. Challenges Faced by Students When Using AI Applications
This answers the first objective of this research. The researchers examined four challenges students face when using AI applications: privacy & data security, ethical considerations, over-reliance on AI, and lack of understanding and awareness.
First and foremost, ethical considerations in AI involve ensuring the technology is impartial and free from biases. This study identifies it as the greatest challenge students face. In Ali et al. (2024a), 53 of 100 respondents (53%) agreed with concerns about ethical implications. Because of the vast amount of data AI algorithms must consider when drawing conclusions, some precise but uncommon solutions may be overlooked; therefore, AI-based evaluation systems cannot be entirely correct in every case without a human mentor. There is a need to balance the use of artificial intelligence so that students retain uniqueness and creativity in their work. Moreover, there is the “black box” problem: AI systems are frequently opaque, making it difficult for students to grasp how decisions are made.
Next, over-reliance on AI is another challenge. When students depend excessively on AI such as ChatGPT, they risk unintentional plagiarism and errors from accepting inaccurate AI advice, as AI programs can produce essays, reports, and even code that students may submit as their own. Salido (2023) emphasized the danger of unintentional plagiarism when students use ChatGPT for academic work, and Ali et al. (2024a) found 18 respondents concerned about copyright issues related to AI. Moreover, communication, collaboration, and interpersonal skills may suffer if students interact solely with AI rather than with peers and teachers.
Furthermore, lack of understanding and awareness means students lack sufficient knowledge about how AI works. Students frequently encounter unfamiliar technical terms, making it difficult to understand AI principles, and may fear AI owing to a lack of knowledge, resulting in reluctance to use AI tools. Limited knowledge of AI’s features can hinder its adoption and effectiveness. Aithal and Aithal (2023) noted AI’s limited contextual understanding, which can lead to inaccurate information, as AI may deliver broad responses that do not address the complexities of a query or situation.
Lastly, privacy & data security scored the lowest mean. This challenge pertains to the ethical handling of personal data by AI systems. Privacy concerns in educational AI applications have been noted: Merikko et al. (2022), cited in Lim et al. (2023), found that students are reluctant to share detailed personal data, emphasizing the need for equitable data stewardship practices. AI systems frequently collect large quantities of personal data, such as names, addresses, and other identifying information, and may monitor and evaluate students’ behavior, preferences, and performance, which could lead to misuse if not adequately guarded.
7.2. Students’ Learning Experiences on Using AI Applications
This addresses the second objective of this research. After identifying the challenges students face when using AI applications, the researcher examined their learning experiences, focusing on negative impacts: decreased critical thinking skills, lack of interaction, reduced motivation, lower quality of education, misleading materials, and reduced engagement in the learning process. Although AI does not replace teachers, it negatively affects student interaction; it also lacks human encouragement, which can lower motivation, the item with the highest mean (3.93). Overuse of AI may erode students’ ability to cooperate and communicate with classmates, which is important for social learning. According to Salido (2023), AI reduces student-student and student-teacher interactions, leading to a sense of isolation, while Ali et al. (2024a) found that 36% of respondents agreed that lack of human interaction was a significant issue, impairing communication skills and leading to emotional detachment and lower empathy. Interaction with AI may lack the warmth and humanity of human teachers, rendering learning impersonal and isolating. Next, AI frequently delivers generic feedback that may not address individual students’ unique needs or misconceptions; this one-size-fits-all approach can lower education quality (mean = 3.93). Similarly, excessive use of Moodle has been linked to decreased attendance and lower education quality (Lichy et al., 2014).
AI-driven education can also promote passive learning (mean = 3.26), in which students absorb information rather than actively engaging with the topic. This limits opportunities for active learning, which is required for deeper comprehension and retention, and frequently channels students onto prescribed learning routes, leaving little room to investigate issues beyond the curriculum; such limits may dampen interest and involvement in the learning process. Kaledio et al. (2024) support the view that over-reliance on AI leads to passive learning experiences. In addition, students who rely excessively on AI for solutions impede their own development of critical thinking and problem-solving abilities (mean = 3.93). Sebastian and Sebastian (2021), cited in Ahmad et al. (2023), observed an increase in AI’s role in decision-making from 10% to 80% within five years, potentially diminishing students’ independence and critical thinking; AI tools like ChatGPT could hinder students from developing these skills, making them overly reliant on technology. Lastly, AI-generated material can at times be inaccurate or misleading (mean = 3.62), especially if the AI is not up to date. Seo et al. (2021) found that students worry about AI providing unreliable information, affecting their academic performance; teachers likewise expressed concern about AI’s potential to mislead students, emphasizing the importance of human oversight to clarify misunderstandings. Common misconceptions about AI, such as that it will replace all occupations or is unmanageable, might also cause undue stress.
7.3. The Relationship of Challenges Faced by Students on Using AI Applications to Their Learning Experiences
This answers the third objective of this research: to identify the relationship between the challenges faced by students at UNITAR International University when using AI applications and their learning experiences. The highest correlation (r = 0.49) was found with over-reliance on AI. While AI can provide personalized learning experiences, over-reliance on AI technology risks leading to passive learning, reducing the student interaction and motivation that shape their learning experiences. This passivity and dependence on AI for tasks such as communication and writing can result in social isolation and hinder effective self-expression. Over-reliance on AI can also diminish students’ critical thinking skills, as they become accustomed to relying on AI for answers instead of developing their own problem-solving abilities (Ahmad et al., 2023). Students without a thorough knowledge of AI may overestimate its capabilities or grow anxious about their own talents by comparison, fostering a dependent attitude in which they distrust their own abilities and rely excessively on AI for assistance, which feeds learning anxiety and an overestimation of AI. According to Choi et al. (2023), cited in Vargas-Murillo et al. (2023), AI, especially ChatGPT, inhibits students’ growth as critical thinkers, and misuse of AI for cheating or plagiarism undermines academic integrity. It is essential to strike a balance between AI-based education and human interaction: human educators play a crucial role in providing guidance, support, and personalized instruction that AI cannot replicate, and achieving this balance is vital for fostering effective learning experiences.
The correlation for lack of understanding and awareness (r = 0.39) highlights the importance of transparency in AI systems used in education. As AI technology advances, it is crucial for students to develop the skills to interact with it effectively; thriving in an AI-driven educational environment requires strong critical thinking, problem-solving, and digital literacy skills. A poor grasp of how AI analyzes data can result in inefficient problem-solving tactics, and students may not understand AI’s limits, expecting it to solve complicated issues that require human understanding and ingenuity. This underscores the critical need for education in this area, as students will increasingly need to incorporate AI applications into their learning processes. Uncertainty over how AI can be integrated into their learning may impede motivation and concentration, leading to dread of or skepticism toward such applications. Hornberger et al. (2023) found that students with prior expertise in computer science and AI (from university or informal study) have greater AI literacy and better learning experiences. A basic lack of knowledge of AI can thus make more complicated concepts difficult to comprehend later, and students may struggle to advance in fields that rely increasingly on AI and other modern technologies.
The correlation for ethical considerations (r = 0.23) shows that while AI can revolutionize learning, bias in AI models can lead to errors and lack of customization. AI technology in education raises ethical concerns, particularly regarding the use of student data and AI algorithms’ decision-making processes. AI models trained on biased data may fail to meet individual learning needs and reinforce existing prejudices (Seo et al., 2021). The standardization of learning processes by AI systems can also demotivate learners. Ensuring transparency and adherence to ethical guidelines in AI systems is essential. To offer equal educational opportunities and enhance learning experiences for all students, educators and governments must address issues of bias, fairness, and transparency in AI algorithms. Ethically deploying AI, which includes obtaining informed consent for data usage and maintaining transparent procedures, is crucial (Von Garrel & Mayer, 2023). In addition, AI-based grading systems can perpetuate biases, potentially disadvantaging underrepresented student groups. It is essential to address biases in AI algorithms used in educational decision-making to prevent discrimination and ensure fairness, as biases can undermine the quality of education and negatively impact students’ motivation and engagement. A lack of clarity about how AI systems make choices will diminish student trust. If students do not understand or trust AI tools’ decision-making processes, they may be less likely to use them, resulting in missed learning opportunities.
The correlation for privacy and data security (r = 0.21) underscores the importance of ethical data handling in AI-driven education. AI delivers personalized learning experiences by collecting and analyzing vast amounts of student data, which raises concerns about data privacy and security. These concerns may make students cautious about participating fully in digital learning settings: they may avoid platforms or applications that require personal information, forgoing valuable educational materials and experiences. Students are often unaware of what data is collected, how it is used, and who has access to it, leading to discomfort that negatively affects their learning experiences. It is crucial to prioritize the ethical use of AI in education to protect students’ rights and privacy (Chiu et al., 2023). A classroom environment in which students feel their privacy is violated can cause tension and discomfort, and students’ concentration, creativity, and overall academic performance may suffer when they are more concerned with potential threats than with learning. Preserving personal information and adhering to data protection regulations is essential; educational institutions and AI developers must implement robust security measures to safeguard student privacy and prevent unauthorized access to sensitive data.
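For readers unfamiliar with the statistic behind these results, the Pearson correlation coefficients (r) reported in this section can be computed from paired scores as shown below. This is an illustrative sketch only: the respondent scores are synthetic Likert-scale values invented for the example, not the study’s actual data.

```python
# Illustrative computation of a Pearson correlation coefficient (r),
# the statistic used for the challenge/experience relationships above.
# The data below are synthetic, NOT the study's dataset.
from math import sqrt
from statistics import mean


def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Hypothetical 5-point Likert responses for ten respondents:
# challenge score vs. negative learning-experience score.
challenge = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]
experience = [4, 3, 4, 2, 5, 2, 4, 4, 3, 3]

print(round(pearson_r(challenge, experience), 2))
```

A value near 0 indicates little linear association, while values approaching 1 indicate a strong positive relationship; the coefficients reported above (r = 0.21 to 0.49) thus reflect weak to moderate positive associations.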
The study concludes that the identified challenges significantly affect students’ learning experiences when using AI applications. These challenges particularly influence students at UNITAR International University, Kelana Jaya, Petaling Jaya, highlighting the need to balance AI integration with human interaction to enhance learning experiences.
8. Limitations
This study has several limitations that could be addressed in future research. Firstly, it focuses solely on the impact of challenges in using AI on students’ learning experiences, without considering other potential factors. Secondly, the study is confined to students at UNITAR International University, Kelana Jaya, Petaling Jaya, Selangor (main campus), limiting its generalizability even among higher education students and excluding secondary and primary school students across Malaysia. Additionally, there is a notable gender bias, with a significant disparity between female and male participants. The research examines only four specific challenges faced by students using AI, while there are undoubtedly other challenges that were not explored. Furthermore, the study has a higher frequency of Indian respondents, reflecting the demographic composition of UNITAR International University’s main campus, which may not be representative of the broader population’s perspectives. Lastly, the survey was distributed online via Google Forms through email, which may have led to respondents providing inaccurate information or completing the survey hastily without fully engaging with the questions. These limitations suggest areas for future research to provide a more comprehensive understanding of the challenges and impacts of AI on students’ learning experiences.
9. Conclusion
In conclusion, this research confirms that the challenges students face when using AI negatively impact their learning experiences. While AI can enhance education through personalized learning and data-driven insights, its rapid advancement often outpaces the ability of educational institutions to integrate it effectively. The study identified four key challenges: privacy and data security, lack of understanding and awareness, over-reliance on AI, and ethical considerations, all showing weak to moderate positive correlations with negative learning experiences. Addressing these issues requires an interdisciplinary approach, including policies to ensure ethical AI use, data protection, and increased AI awareness. Balancing AI with traditional teaching methods can help maximize AI’s benefits in education while maintaining essential human elements and preventing over-reliance on AI.
Acknowledgements
We express our sincere thanks to UNITAR International University for its support of the publication of this research.
Appendix