Artificial Intelligence Use in Primary Care: Attitudes, Concerns, and Readiness among Health Professionals in Metropolitan Australia
1. Introduction
The growing recognition of artificial intelligence (AI) as a transformative technology has highlighted its potential to augment clinical decision-making, reduce administrative workload, and improve patient care outcomes in the healthcare industry. At the ground level of the healthcare system in Australia, AI technologies can assist general practices with data management tasks and support practitioners in consultations with patients presenting with multiple clinical problems [1] [2]. Additionally, new advancements, such as the ability of computer systems to converse in natural language and generate human-like text, enable AI applications in primary care, including automated symptom checking, risk stratification, and documentation generation [2] [3].
The potential benefits of AI in primary care have not been fully realised in Australia. General practices have lagged behind hospital specialities such as radiology in adopting AI tools. The unique nature of primary care creates barriers to the use of AI: data is often disorganised and siloed across disparate IT systems; consultations cover a wide array of concerns; and continuity of care depends on a sustained, trusting patient-doctor relationship. The use of AI technologies also poses serious risks of providing unsafe, biased, or harmful recommendations, raising significant concerns about patient safety and the integrity of healthcare systems [3] [4]. In Australia, active steps are being taken to address these challenges. The Australian Health Practitioner Regulation Agency (AHPRA) issued guidance [5] in 2024 stating that health practitioners remain liable for decisions made with the assistance of AI; they cannot abdicate clinical judgement to a machine. The Royal Australian College of General Practitioners (RACGP) also acknowledges the potential of AI, but insists that general practitioners must take a lead role in its development and incorporation, ensuring that the tools developed serve their intended purpose [3]. The Australian Alliance for AI in Healthcare has developed a National Policy Roadmap (2023) that articulates several prerequisites for the safe implementation of AI, including workforce training, implementation support, and strict governance structures [6].
With this in mind, analysing the use and attitudes toward AI technology in Australian primary care is of great importance. As noted by other scholars, healthcare professionals are likely to hold constructive views towards AI, but generally lack sufficient exposure or experience with such technologies [7]. A 2024 study of allied health professionals in Australia found that 87% of respondents had little to no knowledge of AI, and over 80% had never utilised it in their work. Although Australian medical students are keen to learn about AI, they have a limited understanding of it [8].
The goal of this research was to determine Australian primary care practitioners’ familiarity with and self-assessed confidence in AI technologies, their prevalent apprehensions and perceived barriers to AI utilisation, their awareness of pertinent existing guidelines and of AI biases, and their readiness to employ AI technologies, along with the priority areas for its application in general practice. We also set out to analyse how attitudes vary with professional role and seniority. Understanding these aspects could help develop measures for the responsible integration of AI into primary healthcare.
2. Methodology
2.1. Study Design and Participants
We conducted an anonymous, voluntary cross-sectional survey of primary care personnel across seven medical centres in Sydney, Australia, in February 2025. A convenience sampling approach was used to recruit participants who were accessible through the authors’ existing clinical and professional networks. This study posed no more than a negligible risk to participants, and informed consent was obtained from all participants prior to data collection. The research was conducted in accordance with the principles outlined in the National Statement on Ethical Conduct in Human Research (2023). In line with Section 5.1.22 of the Statement, research that involves only negligible risk, does not address sensitive topics, and does not involve vulnerable populations may be exempt from review by a Human Research Ethics Committee.
The target population consisted of general practitioners (GPs), non-GP specialists, allied health practitioners, practice nurses, and administrative staff working in general practice clinics located in a metropolitan suburb. After excluding incomplete submissions, data from 39 respondents (22 clinicians, 6 nurses, 11 administrative staff) were included in the analysis. Among the clinicians, 16 were GPs, 3 were other medical specialists working in primary care, and 3 were allied health professionals. Administrative respondents included practice managers, senior management staff, and practice office staff.
2.2. Survey Instrument
For the purpose of this survey, “Artificial Intelligence (AI)” was defined for participants as “computer systems capable of performing tasks that normally require human intelligence, such as recognising patterns, processing language, or making predictions based on data.” This brief description was provided at the beginning of the questionnaire to ensure shared understanding. The survey questionnaire assessed multiple dimensions of AI use and attitudes in primary care, based on literature and expert input. Key sections included:
Professional background: Role and years of practice;
Familiarity with AI applications: Participants rated their familiarity (1-5 scale) with AI for specific tasks;
Confidence in using AI: Self-rated confidence (1-5 scale) in using AI tools;
Current AI usage: Yes/no item about current use of AI tools;
Concerns about AI: Multi-select list of potential concerns;
Awareness of AI guidelines/policies: Knowledge of Australian AI-related policies;
Awareness of AI biases: Understanding of bias in AI systems;
Willingness to adopt AI: Yes/No/Unsure response to future adoption;
Preferred areas for AI integration: Areas where AI would be most beneficial.
The survey instrument was reviewed for face validity before dissemination by a team comprising one general practitioner with an academic research background, one allied health practitioner, one management team member, and one health informatics researcher.
2.3. Data Analysis
Survey responses were analysed using descriptive statistics. For Likert-scale responses, we computed mean scores stratified by professional subgroup. Multiple-choice questions were summarised by frequency counts and percentages. Qualitative free-text responses were reviewed to extract common themes.
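As an illustration, the stratified Likert means and multi-select frequency summaries described above can be computed with standard Python. The roles, scores, and concern labels below are invented for illustration only and do not come from the study dataset.

```python
# Sketch of the descriptive analysis: mean Likert scores stratified by
# professional subgroup, and frequency counts/percentages for a
# multi-select item. All values here are hypothetical examples.
from collections import Counter, defaultdict
from statistics import mean

# Hypothetical responses: (role, 1-5 Likert familiarity score)
responses = [
    ("GP", 2), ("GP", 3), ("GP", 2),
    ("Nurse", 3), ("Nurse", 3),
    ("Admin", 4), ("Admin", 2),
]

# Mean score per subgroup
by_role = defaultdict(list)
for role, score in responses:
    by_role[role].append(score)
mean_by_role = {role: round(float(mean(scores)), 2)
                for role, scores in by_role.items()}

# Frequency counts and percentages for a multi-select concern item
concerns = ["privacy", "errors", "privacy", "integration", "privacy"]
n_respondents = 5
counts = Counter(concerns)
percentages = {c: round(100 * k / n_respondents) for c, k in counts.items()}

print(mean_by_role)   # → {'GP': 2.33, 'Nurse': 3.0, 'Admin': 3.0}
print(percentages)    # → {'privacy': 60, 'errors': 20, 'integration': 20}
```

The same grouping-and-aggregation pattern extends directly to the role-by-item breakdowns reported in the Results tables.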
3. Results
3.1. Participant Characteristics
Of the 39 respondents, 59% were clinicians (GPs, non-GP specialists, and allied health practitioners), 15% were practice nurses, and 26% were administrative staff. Within the medical practitioners’ group, participants’ experience varied: about one-quarter had 5 years or fewer of experience, roughly one-third had over 20 years of practice, and the remaining participants were mostly at mid- to senior-career levels (between 6 and 20 years of experience). Most (>90%) of the administrative staff who responded were in early career stages, i.e., with less than 5 years of experience.
3.2. Familiarity and Confidence with AI
Self-reported familiarity with AI applications was low across all groups. On a 1-5 scale, GPs and non-GP specialists reported a mean familiarity of approximately 2.4 for clinical record-keeping tasks, 2.3 for diagnostic decision support, and 2.25 for treatment recommendations. Allied health professionals reported even lower familiarity (1.0-1.5). Practice nurses showed slightly higher familiarity (2.5-3.0), while administrative staff varied by role, with managers reporting higher familiarity (3.5-4.0) than front-line staff (2.1-2.4). Only three respondents reported currently using any form of AI in their practice, all of whom were in administrative roles. Examples included AI-driven appointment reminders and software for suggesting billing codes for reconciliation purposes. Confidence levels paralleled familiarity ratings: GPs’ average confidence in using AI was about 2.5 out of 5 for clinical documentation and around 2.4 for diagnostic support. Interestingly, confidence and familiarity were somewhat higher among GPs with more years of experience, contrary to the expectation that early-career GPs would be more tech-savvy. These subgroup differences are descriptive and should not be interpreted as statistically significant given the small sample size (N = 39).
3.3. Concerns and Perceived Barriers
The top three concerns about AI were:
1. Data privacy and security (>80% of respondents): The prospect of AI systems handling sensitive patient data raised worries about confidentiality breaches.
2. Potential for AI errors or “hallucinations” (~75% of respondents): Fear that AI might produce inaccurate recommendations that clinicians might not detect.
3. Integration challenges (~70% of respondents): Concerns that AI tools might not interface well with existing healthcare systems.
Other notable concerns included over-reliance leading to de-skilling of clinicians (>50%), ethical and liability issues (~40%), and bias leading to unequal treatment (~33%). Cost and staff resistance were less frequently cited concerns. The distribution of major concerns raised by respondents is shown in Table 1.
Table 1. Main concerns and perceived barriers to artificial intelligence adoption in primary care.
| Concern | Number of Respondents (n = 39) | Percentage (%) | Key Comments/Notes |
|---|---|---|---|
| Data privacy and security | 32 | 82% | Concern about patient confidentiality breaches |
| AI errors or “hallucinations” | 29 | 75% | Fear of inaccurate or unsafe outputs |
| Integration challenges | 27 | 70% | Difficulty integrating AI into current systems |
| Over-reliance leading to de-skilling | 20 | 51% | Potential erosion of clinical skills |
| Ethical and liability issues | 16 | 41% | Who is responsible for AI-driven errors |
| Algorithmic bias | 13 | 33% | Unequal treatment risk across populations |
| Cost and staff resistance | 8 | 21% | Practical and attitudinal barriers |
3.4. Awareness of Policies and Guidelines
The survey revealed limited awareness of existing AI-related guidelines. Two-thirds of respondents (26/39) were unaware of any of the listed Australian policies, including AHPRA’s guidelines, Therapeutic Goods Administration (TGA) regulations, and the National AI Roadmap. Only four respondents had heard of the AHPRA guideline, and three knew of the National AI Healthcare Roadmap. This suggests a significant communication gap between high-level AI strategy and frontline healthcare workers.
3.5. AI Bias Awareness
About 67% of respondents were aware that AI systems can exhibit biases. Among those aware, the most commonly recognised bias types were:
Bias from unrepresentative training data (80%);
Algorithm design biases (80%);
Diagnostic accuracy disparities across populations (60%);
Treatment recommendation disparities (50%).
3.6. Willingness to Adopt AI
Approximately 64% of respondents indicated a willingness to incorporate AI into their practice, while 36% were unsure. Notably, no respondent explicitly refused to adopt AI. This pattern held across all subgroups, suggesting cautious openness rather than opposition. Practice nurses showed the highest hesitancy (75% unsure), while administrative staff and GPs were more willing to adopt AI. Key findings on participants’ familiarity, confidence, and attitudes toward AI are summarised in Table 2.
Table 2. Familiarity, confidence, and attitudes toward artificial intelligence among primary care professionals.
| Dimension | Measure/Item | Mean or % | Notable Differences by Role |
|---|---|---|---|
| Familiarity with AI | Clinical documentation | 2.4/5 | Higher among nurses (≈3.0) |
| | Diagnostic decision support | 2.3/5 | GPs > Allied Health |
| | Treatment recommendation | 2.25/5 | GPs > Allied Health |
| Confidence in using AI | Overall confidence | ~2.5/5 | Senior GPs slightly higher |
| Concerns | Data privacy | 82% | Common across all roles |
| | AI errors/hallucinations | 75% | Especially clinicians |
| | Integration challenges | 70% | Common across all roles |
| Policy awareness | Aware of AHPRA guideline | 10% | Mostly GPs |
| | Aware of National AI Roadmap | 8% | Very limited |
| Willingness to adopt AI | Willing | 64% | GPs & admin staff most open |
| | Unsure | 36% | Nurses most hesitant |
3.7. Preferred AI Applications
The top three desired AI applications were:
1. Drafting clinical records (35% of respondents): Help with consultation notes, referral letters, and documentation.
2. Monitoring patient progress and follow-up (35% of respondents): Tracking health indicators, flagging needed follow-ups.
3. Managing scheduling and administrative tasks (28% of respondents): Appointment optimisation, reminders, triaging. Administrative staff frequently chose billing assistance and inventory management.
Other notable areas included treatment recommendations (20%), diagnostic assistance (15%), and medical imaging analysis (12%). The emphasis on documentation and administrative tasks reflects the desire to reduce paperwork burden while maintaining clinical autonomy. Table 3 presents the areas where participants believed AI would be most beneficial in primary care.
Table 3. Areas where respondents consider AI most beneficial in primary care settings.
| AI Application Area | Number of Respondents (n = 39) | Percentage (%) | Typical Use Cases Mentioned |
|---|---|---|---|
| Drafting clinical records | 14 | 35% | Consultation notes, referral letters |
| Monitoring patient progress and follow-up | 14 | 35% | Automated follow-up alerts |
| Scheduling and administrative tasks | 11 | 28% | Appointment optimisation, reminders |
| Billing and reconciliation | 9 | 23% | AI-assisted billing software |
| Treatment recommendations | 8 | 20% | Decision support tools |
| Diagnostic assistance | 6 | 15% | Pattern recognition, risk stratification |
| Medical imaging analysis | 5 | 12% | Preliminary interpretation support |
4. Discussion
This survey reveals cautious optimism toward AI among primary care professionals working in a metropolitan region of Australia. The low familiarity and confidence levels indicate that AI in primary care is still in its early stages, highlighting significant opportunities for education and training. The universal lack of outright rejection of AI suggests that acceptance is a matter of when and how, not if.
Education and Training Needs: The limited AI exposure among clinicians highlights the need for structured education programs. Professional bodies should develop AI competency frameworks and continuing education modules specific to primary care contexts, aligning with AHPRA’s guidance that practitioners must understand the tools they use [5].
Implementation Strategy: Early AI implementations should target administrative and routine clinical tasks where there is a strong desire and lower resistance. Starting with documentation assistance and appointment management can build familiarity and trust while addressing immediate pain points. This aligns with the RACGP’s view that AI should reduce administrative burden [3].
Policy and Governance: The poor awareness of existing guidelines calls for improved dissemination strategies. Regulatory bodies must actively integrate AI guidance into continuing education and practice standards rather than relying on passive publication. Clear liability frameworks and safety standards are crucial for addressing clinicians’ concerns about accountability.
Trust and Safety: The emphasis on data security and error prevention indicates that any AI deployment must demonstrate robust safeguards. Transparent, explainable AI systems with error-mitigation strategies will be crucial to adoption. The concern about “hallucinations” suggests that even a single high-profile mistake could significantly damage trust.
Collaborative Approach: The desire for AI to augment rather than replace human roles emphasises the need for clinician involvement in AI design and implementation. Co-design approaches that respect clinical workflows and maintain clinician agency will be essential for successful adoption.
Practical Implications: Based on the top three concerns identified—data privacy, AI errors, and system integration—future initiatives could pilot a short educational module or workshop tailored for primary care teams. Such a program could provide hands-on exposure to safe AI tools, explain current regulatory expectations, and demonstrate secure integration with existing clinical systems. This would directly target the readiness gaps revealed in this survey.
5. Comparison with Other Studies
Our findings align with other research in this area, which shows that healthcare professionals have limited knowledge of AI but are willing to learn [8] [9]. The consistency of concerns about bias, safety, and workload impact mirrors international studies, suggesting universal challenges in healthcare AI adoption [9]. However, our focus on primary care reveals specific needs regarding longitudinal care management and administrative efficiency that differ from those of hospital-based applications.
6. Limitations
This study has several limitations. The small sample size limits generalisability, and the convenience sampling method may introduce selection bias. The survey was conducted at a single time point, capturing attitudes at an early stage of AI development in primary care. Additionally, responses were self-reported and may not reflect actual behaviour or sustained attitudes over time.
7. Future Directions
Future research should include larger, more representative samples and longitudinal designs to track attitude changes as AI tools become more prevalent. Implementation studies testing specific AI applications in real primary care settings are needed to provide the evidence base that clinicians seek. Research should also examine patient perspectives on AI in primary care, as patient acceptance will be crucial for the widespread adoption of AI in this setting.
8. Conclusion
Primary care providers in a metropolitan region of Australia display a cautious optimism toward AI and a strong readiness to accept AI technologies that streamline their administrative and clinical tasks. The barriers that need to be addressed include low familiarity, safety and integration concerns, and limited awareness of existing policies and frameworks. What is needed is a combination of well-designed educational initiatives, robust evidence-based implementation strategies that begin with non-invasive applications, inclusive governance policies, and effective interprofessional collaboration between health technology specialists and primary care practitioners. If coordinated action is taken by educators, policymakers, and technology developers, the primary care system in Australia can harness AI technologies to improve operational efficiency and quality of care while upholding the foundational human-centred ideals that define exemplary primary care. The cautious optimism revealed in this study contributes to the trust necessary for successfully integrating AI into Australian general practice.