An Empirical Study on the “High Criticality, Low Strategizing” Profile: College Students’ Human-AI Interaction Competence and Implications for Foreign Language Education
1. Introduction
Generative artificial intelligence (AI), exemplified by dialogue systems such as ChatGPT and DeepSeek, is being integrated into the educational landscape at an unprecedented pace, particularly within foreign language learning contexts. The focus of the educational community has rapidly shifted from “whether to allow the use of AI” to “how to use AI efficiently to promote learning” (Wen & Liang, 2024; Halaweh, 2023). These technologies promise to revolutionize education by streamlining repetitive tasks, aiding in data interpretation, and pioneering new learning methods (George & Wooden, 2023; Zhai & Wibowo, 2023).
However, this rapid integration is a double-edged sword. A growing body of evidence indicates that the uncritical adoption of these tools leads to over-reliance, a phenomenon where users accept AI-generated outputs without question, which in turn impairs essential cognitive abilities such as critical thinking, analytical reasoning, and decision-making (Facione & Facione, 1996; Dergaa et al., 2023; Grassini, 2023). This over-reliance is exacerbated by inherent ethical concerns in AI systems, including AI hallucinations, algorithmic biases, plagiarism, privacy issues, and a lack of transparency (Zhai et al., 2024). When students are not equipped to navigate these challenges, they risk diminished creativity, increased academic complacency, and a weakened capacity for independent analysis (Duhaylungsod & Chavez, 2023; Koos & Wachsmann, 2023).
To address this critical need, Wen & Liang (2024) proposed the concept of “Human-AI Interactive Negotiation Competence” (HAINC), defining it as a core literacy for the AI era. HAINC moves beyond simple operational skill to frame the interaction as a continuous negotiation where the human must proactively maintain dominance. It consists of a dynamic, cyclical process of five elements: understanding AI, setting goals, issuing instructions, analyzing feedback, and adjusting strategies. This framework is designed as a direct antidote to over-reliance, systematically cultivating the very cognitive skills that passive AI use erodes. However, theoretical frameworks require empirical validation. A systematic understanding of students’ actual HAINC levels—their strengths and, most importantly, their critical shortcomings—is essential for designing effective pedagogical interventions. Therefore, this study aims to empirically examine the current state of university students’ HAINC through a questionnaire survey and, based on the results, derive concrete implications for foreign language education.
2. Literature Review and Theoretical Framework
2.1. The Challenge of AI Over-Reliance in Education
The systematic review by Zhai et al. (2024) underscores the significant cognitive risks associated with AI integration in education. Over-reliance on AI dialogue systems is linked to a marked decline in essential cognitive abilities. This passive consumption is often compounded by ethical concerns such as AI hallucinations (the generation of plausible but false information) and algorithmic biases, which can mislead users and skew analysis (Gao et al., 2022; Grassini, 2023). In foreign language education, this may manifest as students accepting AI-generated paraphrases or translations without verification, thereby impairing their own analytical and decision-making skills (Kim et al., 2023). Furthermore, the convenience of AI can foster academic laziness and reduce the motivation for independent inquiry (Ahmad et al., 2023).
2.2. The Human-AI Interactive Negotiation Competence (HAINC) Framework
HAINC is defined as a specialized skill set that enables users to interact effectively and critically with AI systems (Wen & Liang, 2024). It builds upon foundational models of digital literacy, which emphasize the critical evaluation and use of digital technologies, but extends them specifically to the unique challenges of generative AI, where the interaction is conversational and dynamic rather than with a static tool or information source. Furthermore, it aligns with evolving paradigms in human-computer interaction (HCI) that view the relationship not as a simple command-and-response but as a collaborative partnership. Its essence is negotiation within a “master-subordinate” relationship, where the human user must actively maintain the dominant position. It consists of five interconnected, cyclically iterative components (Wen & Liang, 2024: p. 288).
Understanding AI: Knowing the capabilities, limitations, and basic principles of interaction. This foundational step directly addresses ethical concerns such as AI hallucination and algorithmic bias.
Setting Goals: Proposing clear, feasible, and ethically sound task requirements for the AI, countering passive consumption.
Issuing Instructions: Employing “prompt engineering”—describing the task clearly and accurately, often by specifying the AI’s role, the input context, and the desired output.
Analyzing Feedback: Critically examining AI output for accuracy, logic, and reasonableness, which is the core component that mitigates over-reliance.
Adjusting Strategies: Continuously optimizing instructions based on feedback, fostering resilience and problem-solving skills.
Collectively, the components of “Setting Goals” and “Issuing Instructions” constitute the proactive, strategizing dimension of HAINC. This dimension encompasses the user’s capacity for advanced planning and tactical command before and during the interaction, specifically through skills such as task decomposition (breaking down complex problems) and role-playing (assigning a specific persona to the AI to guide its responses). This framework empowers learners to harness the benefits of AI as a partner while actively safeguarding against the ethical concerns and cognitive detriments associated with its overuse.
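The role/input/output structuring described under “Issuing Instructions” can be sketched as a simple prompt template. This is a minimal illustration only; the class and field names below are our own and are not part of the HAINC framework.

```python
from dataclasses import dataclass


@dataclass
class PromptSpec:
    """Hypothetical container for the role/input/output structure
    of a prompt, as discussed under 'Issuing Instructions'."""
    role: str    # persona assigned to the AI (role-playing)
    task: str    # the decomposed sub-task (setting goals)
    inputs: str  # context and key details supplied by the user
    output: str  # required output format

    def render(self) -> str:
        # Assemble the four slots into a single prompt string.
        return (f"You are {self.role}. {self.task}\n"
                f"Input: {self.inputs}\n"
                f"Output format: {self.output}")


spec = PromptSpec(
    role="an experienced English editor",
    task="Revise the paragraph below for academic register.",
    inputs="<student draft paragraph>",
    output="revised paragraph plus a bulleted list of changes",
)
prompt = spec.render()
```

Filling such a template forces the user to make the strategic decisions (role, goal, context, format) explicit before the first query is sent, which is precisely the proactive dimension the framework emphasizes.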
3. Research Design
3.1. Research Questions
This study addresses the following questions:
1) What is the overall level of university students’ Human-AI Interactive Negotiation Competence (HAINC)?
2) What characteristics (strengths and weaknesses) do students exhibit across the five components of HAINC?
3.2. Research Instrument and Participants
This study employed a questionnaire survey method. The 16-item instrument was developed based on the HAINC theoretical framework. The items were newly created by the authors to directly reflect the five components of the HAINC theoretical framework. The questionnaire used a 5-point Likert scale (1 = “Strongly Disagree” to 5 = “Strongly Agree”). Content validity was confirmed by a panel of three experts, and the final version demonstrated high internal consistency (Cronbach’s Alpha = 0.973). Participants were 290 undergraduate students recruited via convenience sampling from a public university, representing different academic years and majors.
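For readers unfamiliar with the reliability statistic reported above, Cronbach’s alpha can be computed from a respondents-by-items score matrix as follows. This is a minimal sketch using the standard formula with toy data; the study’s value of 0.973 was obtained from its own 290-response dataset.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for `scores`, a list of respondents,
    each a list of k item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    k = len(scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([resp[i] for resp in scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When items co-vary perfectly, alpha reaches its ceiling of 1.0; values above roughly 0.9, as here, indicate very high internal consistency among the 16 items.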
3.3. Data Analysis Method
Descriptive statistical analysis was conducted using SPSS 26.0 to calculate the mean (M) and standard deviation (SD) for each item and dimension.
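The descriptive statistics reported in Table 1 are of the standard kind and could equivalently be computed outside SPSS. The sketch below uses hypothetical toy responses for two illustrative items; the study’s actual analysis was run on the full dataset in SPSS 26.0.

```python
import statistics

# Toy Likert responses (1-5) for two hypothetical items.
responses = {
    "item_1": [4, 4, 3, 5, 4, 3],
    "item_2": [3, 4, 4, 4, 5, 3],
}

for item, scores in responses.items():
    m = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample SD, as SPSS reports by default
    print(f"{item}: M = {m:.2f}, SD = {sd:.2f}")
```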
4. Results and Discussion
This section presents the empirical findings on students’ HAINC levels and discusses their implications in the context of the theoretical framework and existing literature.
4.1. The Overall HAINC Profile: A Foundational But Incomplete Competence
Analysis of the data from 290 participants indicates that the overall level of HAINC among university students is moderately high, with a grand mean of M = 3.78 (SD = 0.77) (see Table 1). This suggests that students generally possess a foundational awareness and ability to interact with AI constructively and critically. However, this overall score masks a critical dissonance in their skill set, revealing a distinctive profile that we term “high criticality, low strategizing.”
Table 1. Questionnaire results on university students’ HAINC (N = 290).
| HAINC Dimension | Item | Description | Mean (M) | SD |
| --- | --- | --- | --- | --- |
| Understanding AI | 1 | Awareness of AI’s capabilities and limitations | 3.77 | 0.795 |
| | 2 | Understanding basic principles of effective interaction | 3.74 | 0.790 |
| Setting Goals | 3 | Considering AI’s suitability for the task type before use | 3.75 | 0.809 |
| | 4 | Defining the desired outcome before prompting | 3.85 | 0.746 |
| | 5 | Proficiency in decomposing complex tasks into sub-tasks | 3.70 | 0.781 |
| | 6 | Consciously avoiding tasks that violate academic integrity/ethics | 3.82 | 0.825 |
| Issuing Instructions | 7 | Consciously setting a specific role for the AI | 3.60 | 0.842 |
| | 8 | Providing clear context and key details in prompts | 3.77 | 0.756 |
| | 9 | Specifying desired output format | 3.77 | 0.768 |
| Analyzing Feedback | 11 | Not blindly trusting AI information; verifying facts/data | 3.79 | 0.776 |
| | 12 | Checking AI output for logical consistency | 3.81 | 0.758 |
| | 13 | Judging fluency and stylistic appropriateness of AI output | 3.81 | 0.753 |
| Adjusting Strategies | 10 | Prioritizing reflection and instruction optimization when dissatisfied | 3.84 | 0.762 |
| | 14 | Trying different phrasings or providing more information to follow up | 3.87 | 0.745 |
| | 15 | Ability to diagnose the cause of problems (instruction, goal, or AI limitation) | 3.74 | 0.780 |
| | 16 | Viewing interaction as an iterative cycle of query-evaluate-requery | 3.80 | 0.777 |
4.2. The “High Criticality” Dimension: Strengths in Reactive Engagement
The five dimensions of HAINC demonstrated varying performance levels, ranked as follows: Adjusting Strategies (M = 3.83), Analyzing Feedback (M = 3.79), Setting Goals (M = 3.77), Understanding AI (M = 3.75), and Issuing Instructions (M = 3.73).
Students exhibited the highest proficiency in the reactive components of HAINC. Their strength in Adjusting Strategies is evidenced by the highest-scoring item in the survey, Item 14 (“If the AI’s first reply is unsatisfactory, I try rephrasing or providing more information to follow up,” M = 3.87). Similarly, strong performance in Analyzing Feedback (e.g., Item 12: “I will carefully check the logical consistency of the AI-generated content,” M = 3.81) demonstrates a healthy skepticism and a propensity for critical engagement.
This pattern is encouraging, as it aligns positively with the educational goal of mitigating uncritical “over-reliance,” where users accept AI outputs without question (Zhai et al., 2024). The data suggest that students are not passive recipients; they exhibit a foundational disposition to critique and refine AI-generated content, which serves as a vital defense against the most blatant forms of cognitive offloading.
4.3. The “Low Strategizing” Dimension: Weaknesses in Proactive Command
Despite their critical faculties, the data reveal significant skill gaps in the proactive components of HAINC. The lowest-scoring item across all dimensions was “consciously set a specific role for the AI” (Item 7, M = 3.60). Furthermore, the skill of “decomposing a complex task into a series of smaller problems” (Item 5, M = 3.70) was also notably weak.
This “low strategizing” tendency is critical. It indicates that students may use AI more for solving immediate, discrete problems than as a strategic partner for managing complex projects. Without the ability to frame tasks effectively through role-playing and decomposition—key techniques in advanced prompt engineering—users may struggle to access the technology’s full potential. Consequently, they might settle for suboptimal outputs that, while critically evaluated, could still represent a superficial engagement with the tool. This finding is consistent with the concern raised by Zhai et al. (2024) that over-reliance can be associated with an inability to guide AI use effectively.
4.4. Implications of the Dissonant Profile
The identified “high criticality, low strategizing” profile has two significant implications. First, there is a risk that this incomplete skill set inadvertently limits the cognitive benefits of AI integration. If students primarily use AI for low-level tasks and lack the strategic skills to leverage it for complex problem-solving, the technology may do little to foster higher-order cognitive abilities like analytical reasoning and creative synthesis—the very abilities threatened by over-reliance (Zhai et al., 2024). The act of strategically decomposing a task and orchestrating an AI’s role is, in itself, a profound cognitive exercise that is currently being missed.
Second, this profile may exacerbate educational inequalities, effectively creating a new “AI strategic divide.” Learners who intuitively or through instruction develop these advanced strategic skills will be able to gain far more sophisticated support from AI, accelerating their learning. In contrast, others may remain trapped in a cycle of basic Q&A, using AI for efficiency gains that do not translate into deeper understanding. This potential bifurcation mirrors the ethical concerns about algorithmic bias and access raised in the literature (Alrazaq et al., 2023).
5. Pedagogical Implications and Suggestions
The empirical identification of the “high criticality, low strategizing” profile necessitates a strategic overhaul in educational approaches to AI. To bridge this competency gap, we propose the following implementation paths based on the principles of “practice-orientation, incremental learning, and timely feedback.”
5.1. Enhancing Training in AI Task Orchestration
Instruction must provide structured, repeated practice in task decomposition and role-playing. In a foreign language writing context, for example, instead of a directive to “write an essay,” students should be guided to decompose the process into a sequence: (a) instructing the AI to act as a subject matter expert to generate a research outline in the target language; (b) tasking it to draft specific sections with a focus on using key vocabulary and complex grammatical structures; and (c) finally, role-playing the AI as a critical reviewer to provide feedback on the draft’s coherence and argumentation. This pedagogical practice explicitly trains the Setting Goals and Issuing Instructions components.
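The three-step decomposition above can be sketched as an ordered prompt sequence in which each stage’s reply feeds the next. The stage wording is illustrative only, and the `send` callable is a hypothetical stand-in for an actual AI API call.

```python
# Hypothetical three-stage sequence mirroring the essay-writing example:
# expert outline -> targeted draft -> critical review.
stages = [
    ("outline",
     "You are a subject-matter expert. Produce a research outline "
     "in the target language for an essay on urban sustainability."),
    ("draft",
     "Draft the first section of the outline, using the key vocabulary "
     "list below and at least two complex grammatical structures."),
    ("review",
     "You are a critical reviewer. Evaluate the draft's coherence and "
     "argumentation, and list three concrete revisions."),
]


def run_sequence(stages, send):
    """Feed each stage prompt to `send` (a stand-in for an AI call),
    carrying the previous reply forward as context."""
    context, log = "", []
    for name, prompt in stages:
        full = prompt + ("\n\nPrevious step:\n" + context if context else "")
        reply = send(full)
        log.append((name, reply))
        context = reply
    return log
```

Working through such a sequence, rather than issuing a single “write an essay” directive, is itself practice in the Setting Goals and Issuing Instructions components that the survey found weakest.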
5.2. Shifting Assessment to Evaluate the Interaction Process
Adhering to the “practice-orientation” principle requires that evaluation criteria extend beyond the final AI-generated product to include the quality of the entire interaction process. Instructors should analyze students’ “dialogue records,” assessing, for instance, the clarity of initial prompts for a conversational practice task (Setting Goals), the strategic use of role-playing (e.g., instructing the AI to “act as a job interviewer” or “a debate partner” to practice specific registers) (Issuing Instructions), and the logical progression of follow-up queries to clarify meaning or correct errors (Adjusting Strategies).
5.3. Scaffolding Learning Experiences for Progressive Competency Development
Curriculum design must embody the “incremental learning” principle by constructing a clear competency ladder. Initial modules could focus on foundational interactions, such as using AI for vocabulary lookup or text summarization. The complexity should then be gradually increased to integrated tasks like preparing for an oral presentation, which requires students to use AI for researching content, drafting speaking notes, and finally role-playing as an audience to simulate Q&A—all in the target language.
5.4. Implementing Reflective Feedback and Curating a Repository of Best Practices
The “timely feedback” mechanism should be leveraged to solidify learning. Instructors can select anonymized examples of both effective and ineffective student-AI dialogues for in-class analysis. Contrasting a dialogue that successfully negotiates a nuanced grammatical explanation with one that fails due to vague instructions provides students with tangible, relatable models for their language learning journey.
6. Conclusion
This study provides a granular assessment of university students’ HAINC, revealing a critical dissonance between their critical and strategic skills. The “high criticality, low strategizing” profile underscores that without guided intervention, AI use may remain cognitively superficial. The findings affirm the urgent need to integrate HAINC cultivation into curricula to transition students from passive consumers to skilled “conductors” of AI, directly addressing concerns about cognitive atrophy (Zhai et al., 2024).
This study is not without limitations. The findings are based on data collected from a single university using a convenience sampling method, which may affect the generalizability of the results. Future research would benefit from employing larger, more diverse samples from multiple institutions to enhance the external validity of the “high criticality, low strategizing” profile. Future research should also develop standardized HAINC assessments and conduct longitudinal studies to measure the impact of HAINC-integrated curricula on language proficiency and critical thinking gains.
Acknowledgements
This paper and the study are funded by: Fund Project 1: An investigation and research on affective factors and English learning engagement of primary and secondary school students in western Guangdong (Project No.: Lingshi Teaching Affairs 2023 No. 93); Fund Project 2: A study on English classroom learning engagement and teaching intervention of junior middle school students from the perspective of activity theory (U-G-S Project: No.7).
Appendix
University Students’ AI Interaction and Negotiation Competence Questionnaire
Age: ______ Gender: ______ Major: ______
Dear student,
This survey aims to understand the actual practices and competence levels of university students when interacting with artificial intelligence (such as DeepSeek, ChatGPT, etc.). This questionnaire is anonymous. All data will be used solely for academic research, and your information will be kept strictly confidential. Please answer according to your actual experience. Thank you for your support and participation!
Based on your experience using AI, please indicate your level of agreement with the following statements.
1 = Strongly Disagree 2 = Disagree 3 = Neutral 4 = Agree 5 = Strongly Agree
1) I clearly understand the current strengths of AI (e.g., creative writing, information summarization) and its limitations (e.g., potential fabrication of information, inability to understand complex emotions).
2) I understand the basic principles for effective interaction with AI (e.g., instructions need to be clear, specific, and can provide examples).
3) Before using AI, I consider whether it is the most suitable tool for the type of task.
4) Before asking AI a question, I first clarify what kind of result I ultimately hope to achieve.
5) I am good at breaking down a complex task into a series of smaller problems that AI can solve step by step.
6) I consciously avoid having AI complete tasks that violate academic integrity or ethics (e.g., directly writing the entire paper for me).
7) I consciously set a specific role for AI (e.g., “You are now an experienced English editor”).
8) When asking questions, I try to provide clear background information and key details, rather than asking very vague questions.
9) I clearly state my requirements for the format of the output to AI (e.g., “Please list in table form,” “Please implement using Python code”).
10) When I am dissatisfied with the result, my first reaction is to reflect on and optimize my instructions, rather than giving up directly.
11) I do not completely believe all the information provided by AI; especially for facts, data, and cited literature, I verify them.
12) I carefully check whether the content generated by AI is logically self-consistent and whether there are any internal contradictions.
13) I can judge whether the text generated by AI is fluent and natural, and whether it meets my language style requirements.
14) If the AI’s first response is not ideal, I try to rephrase the question or provide more information to follow up.
15) Based on the AI’s feedback, I can determine whether the problem lies in unclear instructions, overly high goals, or the AI’s own limitations.
16) My interaction with AI is usually a cyclical process of “questioning-evaluating-re-questioning” until satisfactory results are obtained.