Artificial Intelligence: A Pragmatic Approach to Implementation in Medicine, a Review of the Literature and a Survey of Local Practice in the Midlands in the UK

Abstract

The use of Artificial Intelligence (AI) for clinical pathway management and decision making is believed to improve clinical care and has been used to improve treatment pathways in most medical disciplines. Methods: A literature review was undertaken to identify the hurdles and steps required to introduce AI-supported clinical decision-making within hospitals. This was supported by a survey of local hospital practice within the Midlands of the United Kingdom to establish which systems had been introduced and were functioning effectively. Results: It remains unclear how to implement AI systems within medicine in a practical way. Algorithmic medicine based on a set of rules calculated from data only takes a clinician so far in delivering patient-centred, optimal treatment. AI facilitates a clinician’s ability to assimilate data from disparate sources and can help with some of the analysis and decision making. However, learning remains organic, and the subtle differences between patients, and the non-verbal communication exhibited by care providers, for instance, make it difficult for an AI to capture all the pertinent information required to make the correct clinical decision for any given individual. Hence it assists rather than controls any process in clinical practice. It must also continually renew and adapt in light of changes in practice and trends, as the goalposts move to meet fluctuations in resources and workload. Precision surgery is benefiting from robotic-assisted surgery, driven in part by AI and being used in 80% of trusts locally. Conclusion: The use of AI in clinical practice remains patchy, with adoption occurring where research groups have studied a more effective method of monitoring or treatment. The uptake of robotic-assisted surgery, on the other hand, has been more rapid, as the precision of treatment it provides appears attractive in improving clinical care.

Share and Cite:

Capes, N. , Patel, H. , Sarhan, I. , Ashwood, N. , Dekker, A. and Shehata, R. (2023) Artificial Intelligence; a Pragmatic Approach to Implementation in Medicine, a Review of the literature and a Survey of Local Practice in Midlands in UK. International Journal of Intelligence Science, 13, 63-79. doi: 10.4236/ijis.2023.133005.

1. Introduction

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems, and its definition has varied over time with different trends [1]. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision [2] [3]. If there is no true grasp of what AI is, then attempts to utilize it will produce substandard results and less effective implementation. Artificial intelligence is coming of age in facilitating clinical decision-making and processes, being implemented in an ever-increasing number of areas within medicine [4]. ‘Clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modelling of expertise and the creation of acceptable systems’ [4] are but some of the ways AI has been applied in the belief that it will help deliver better healthcare to patients and support clinicians and organisations. There remain several questions about how AI can effectively support clinical decision-making [5]. AI and deep learning are based on algorithms developed initially by clinicians who have a perception or model of practice that will include flaws and biases [6]. Careful calibration, validation and monitoring are required to ensure that the modelling is accurate and meaningful [7]. Clinical decision-making is multifaceted and involves the collection and analysis of data from varied sources, which is evaluated against the known outcomes from research and clinical practice to enable ‘actionable decisions’ [8] [9] [10].

AI has the potential to harness the vast amounts of data being generated across the health system, including from health records and delivery systems, to improve the safety and quality of care decisions. Today AI has been incorporated successfully into decision support systems (DSSs) for diagnosis in data-intensive specialties such as radiology, pathology and ophthalmology [11] [12].

The purpose of this article is to examine the challenges facing clinicians in implementing supported decision-making within a clinical environment, drawing on the literature and personal reflections. The aim is to streamline processes and make recommendations to fellow clinicians looking to implement this technology in clinical practice. To do this, a literature review was undertaken together with a survey of local hospitals within the Midlands in the United Kingdom.

2. Search Strategies

Eligible studies were searched for on the Medline, EMBASE and PsycINFO databases using the algorithm ((exp “Artificial intelligence”/) OR (medicine* OR surgery*)) AND clinical decision making AND/OR processes AND/OR outcomes. The search was undertaken in 2021 by an experienced librarian working with the lead author to maximize sensitivity in identifying relevant articles against the criteria described below.

The first search generated 14,639 papers when looking at artificial intelligence or deep learning in clinical decision making; when restricted to the impact on outcomes, a total of 989 papers was identified. The number of studies excluded at each stage of title, abstract and full-text screening was recorded systematically with reasons (e.g., “non-surgery”). Two researchers (NC, NA) participated in the title, abstract and full-text screening stages, and where there was uncertainty over study eligibility, a consensus decision was made by at least two screeners. EndNote was used to capture and manage the references at each stage of screening. The researchers (NC, NA) extracted the study design, intervention characteristics and details of the outcomes assessed, with an assessment of effectiveness.

3. Results

In total, 989 articles were identified by the search and their full texts were screened. Of these, 866 were excluded at the abstract stage as not including any relevant information about clinical decision-making or its impact on outcomes, and 53 were letters or commentaries; 123 eligible studies were identified. The following themes were identified from these papers and helped inform the survey examining the effective implementation of AI.

3.1. Data

There are many ways to process data to inform clinical decision-making across medicine, from medicines management to surgical precision, to aid treatment decisions [10]. The use of AI can help by 1) improving data collection and extraction within existing electronic health record systems to improve decision making; 2) enhancing measurement fidelity; 3) harnessing smartphone and biosensor data to inform clinical decision-making; and 4) helping to co-ordinate care through analysis of the available data [10]. However, concerns around data sharing, privacy and standardization prevent AI integration in many systems within healthcare, with each hospital within the UK operating a different system, rich in complexity and limited in functionality, often due to cost pressures [13] [14]. The data collected must be accurate and meaningful in order to produce meaningful outputs; otherwise the computing adage of “rubbish in, rubbish out” still stands [15]. It becomes impossible to develop useful models that allow changes to be tested before they are implemented, for instance models of patient flow in Accident and Emergency (A&E) departments, if the data source is not clear or standardised and if stakeholders such as patients or clinicians are not involved in the modelling or in clarifying the information required to achieve reliable, effective decision making [16].
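
As a minimal illustration of the “rubbish in, rubbish out” point above, the Python sketch below runs basic quality checks on a hypothetical A&E data extract before any modelling is attempted. The field names and the use of the pandas library are assumptions made purely for illustration and do not describe any of the systems surveyed.

```python
# Minimal sketch: basic quality checks on a hypothetical A&E data extract
# before it is used for modelling. Field names are illustrative only.
import pandas as pd

REQUIRED = ["patient_id", "arrival_time", "triage_category", "departure_time"]

def audit_extract(df: pd.DataFrame) -> dict:
    """Return a simple data-quality report for the extract."""
    report = {}
    # Structural check: are the expected fields present at all?
    report["missing_columns"] = [c for c in REQUIRED if c not in df.columns]
    # Completeness: proportion of missing values per required field.
    present = [c for c in REQUIRED if c in df.columns]
    report["missing_values"] = df[present].isna().mean().round(3).to_dict()
    # Plausibility: departures should not precede arrivals.
    if {"arrival_time", "departure_time"} <= set(df.columns):
        arr = pd.to_datetime(df["arrival_time"], errors="coerce")
        dep = pd.to_datetime(df["departure_time"], errors="coerce")
        report["negative_stays"] = int((dep < arr).sum())
    # Duplicate records distort any downstream flow model.
    report["duplicate_rows"] = int(df.duplicated().sum())
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "patient_id": [1, 2, 2],
        "arrival_time": ["2021-01-01 10:00", "2021-01-01 11:00", "2021-01-01 11:00"],
        "triage_category": [3, None, None],
        "departure_time": ["2021-01-01 12:00", "2021-01-01 10:30", "2021-01-01 10:30"],
    })
    print(audit_extract(sample))
```

Checks of this kind are deliberately simple; the point is that standardised, auditable inputs are a precondition for any useful model.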

In order to ensure reliability, each area of consideration almost requires starting from scratch rather than trying to adapt to the information already available. Inferences about the data required for decisions, and the outputs generated, can in themselves limit the usefulness of the system being adopted. There is, however, pressure to develop as we try to embrace the technology available and look to improve evidence-based medical practice with it.

There is big data available within the National Health Service (NHS) to support clinical decision making [17], and clinicians need systems of support that make this information available and present it to them in a digestible, easily translatable form [18].

3.2. How Does AI Work?

AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. Those algorithms themselves need human input to achieve any result and can be flawed: “to err is human”, and similar errors occur with computer systems [19] [20].

In general, AI systems work by ingesting large amounts of labelled training data, analysing the data for correlations and patterns, and using these patterns to make predictions about future states [21]. This input is one of the fundamental requirements for developing AI models that support clinical decision-making; the use of the word “support”, rather than “make”, is deliberate [22] [23].

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI [24]. Often what they refer to as AI is simply one component of AI, such as machine learning [25]. Such systems rest on specialized hardware and software for writing and training machine learning algorithms [26]. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular [27]. So there is heterogeneity in the software systems; this sometimes gives developers more flexibility in their approach, as “one size” seldom fits all.

Trained in this way on large amounts of labelled data [28], a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people [29] [30], and an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples, a capability now available even on a smartphone [31].
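
As a self-contained sketch of this “ingest labelled data, learn patterns, predict” loop, the following Python example fits a simple classifier on entirely synthetic data using scikit-learn; none of the variables or values correspond to a real clinical dataset.

```python
# Minimal sketch of the "ingest labelled data, learn patterns, predict" loop
# described above, using scikit-learn and entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical labelled training data: two "lab values" per patient and a
# binary outcome label (0 = no event, 1 = event). Purely synthetic.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                    # learn correlations/patterns
print("held-out accuracy:", model.score(X_test, y_test))

# Predictions on new, unseen cases: a probability that supports, rather than
# makes, the clinical decision.
new_cases = np.array([[1.2, -0.3], [-0.8, 0.1]])
print("predicted risk:", model.predict_proba(new_cases)[:, 1])
```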

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are essentially algorithms, provide computing devices with step-by-step instructions for how to complete a specific task. These algorithms assist clinicians in decision making, for instance in deciding when to operate and how best to deliver a successful operation using a specific approach [32].
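
A minimal sketch of “creating rules from data” is shown below: a small decision tree is fitted to synthetic examples and the learned step-by-step rules are printed in a readable form. The feature names and thresholds are hypothetical, not a real operative indication, and scikit-learn is assumed to be available.

```python
# Minimal sketch of "creating rules from data": a decision tree learns
# explicit, step-by-step rules from synthetic labelled examples, and the
# learned rules are printed in a human-readable form. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical features: [displacement_mm, patient age]; label 1 = "operate".
X = np.column_stack([rng.uniform(0, 10, 300), rng.integers(18, 90, 300)])
y = ((X[:, 0] > 5) & (X[:, 1] < 75)).astype(int)   # synthetic ground truth

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The fitted tree is literally a set of if/then rules a clinician can read.
print(export_text(tree, feature_names=["displacement_mm", "age"]))
```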

Algorithms have existed to provide this guidance since before the availability of AI, to manage an ever-increasing set of rules based on evidence gained from research and practice [30] [33]. However, there are difficulties in using this approach to achieve optimal patient care, including the sheer magnitude of evidence available for often marginal statistical gains, with poor generalisability to those with complex multimorbidity. Technology-prompted care was also felt by some to be less patient-focussed [34].

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome. Clinical reasoning is difficult to model, and “a doctor’s reasoning is built around a temporal unfolding of information” [35]. Reaching a medical diagnosis on which to base treatment decisions often involves ill-structured processes, as the number of explanations for presenting conditions can be immense [36]. Computers traditionally use “structured decision-making approaches”, whereas more “open-ended reasoning methods” are used by medical artificial intelligence (AI) programs [36]. Further complexity is introduced when the AI suggests a route of treatment in addition to aiding diagnosis [36]. Sometimes these decisions are difficult for the clinician to explain [37]; the clinician cannot easily check the validity of the decision [36], and the outcome may still be adverse, making it difficult to unpick how the decision was made [38]. There remain ethical and legal dilemmas about where the responsibility for the decision lies [39].
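
Clinical reasoning itself is not reducible to code, but as a loose illustration of “choosing the right algorithm to reach a desired outcome”, the sketch below compares several candidate models by cross-validation on synthetic data and retains the one that generalises best. The candidates and data are assumptions made purely for illustration.

```python
# Loose illustration of "choosing the right algorithm to reach a desired
# outcome": compare candidate models by cross-validation on the same
# (synthetic) data and keep the one that generalises best. Illustrative only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # synthetic, non-linear relationship

candidates = {
    "logistic regression": LogisticRegression(),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
}

scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected:", best)
```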

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible. Part of how an AI improves relates to the processes it employs to deal with uncertainty, and this affects how much “trust” a clinician can attribute to a given decision [40].
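
As a simple, assumed illustration of self-correction, the sketch below updates a model incrementally as new (synthetic) labelled cases arrive and tracks predicted probabilities as a crude proxy for the uncertainty attached to each decision; it does not describe any deployed clinical system.

```python
# Minimal sketch of "self-correction": a model is updated incrementally as new
# labelled cases arrive, and its predicted probabilities are tracked as a crude
# proxy for how much confidence accompanies each decision. Synthetic data only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)

def new_batch(n=100):
    """Simulate a fresh batch of labelled cases arriving over time."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] - X[:, 2] > 0).astype(int)
    return X, y

# loss="log_loss" (named "log" in older scikit-learn releases) gives probabilities.
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

for week in range(1, 5):
    X, y = new_batch()
    model.partial_fit(X, y, classes=classes)      # fine-tune on the new data
    proba = model.predict_proba(X)[:, 1]
    # Probabilities near 0.5 flag uncertain predictions that merit human review.
    uncertain = np.mean(np.abs(proba - 0.5) < 0.1)
    print(f"week {week}: accuracy={model.score(X, y):.2f}, "
          f"uncertain fraction={uncertain:.2f}")
```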

In order to understand how AI had been implemented locally, we surveyed local trust IT departments about the use of AI within their trusts and the practical applications being implemented to support clinical practice.

Is artificial intelligence important in aiding medical decision-making in current practice?

AI is important because it can give enterprises and healthcare providers insights into operational performance and potential improvements that were not previously evident [41]. In some cases AI, whether virtual or robotic-assisted, can perform tasks better than humans [42] [43]. Particularly when it comes to repetitive, detail-oriented tasks, such as analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors [44]. Despite the attractiveness of improved accuracy and performance, there is a general distrust of AI performance amongst patients, in contrast to the perception of the media or government [45].

In a way, it is every clinician’s dream to automate data collection and analysis where it is standard, duplicated and monotonous, even though detecting the abnormality can be key and potentially lifesaving, as in observation monitoring [46].

3.3. The Types of AI and Their Applicability within Healthcare

Initially, machines were used with simple algorithms to process healthcare information and were reactive with limited memory [24]; however, the AI machines used have improved and are now applied to precision medicine, error reduction and drug development, to name a few areas [47].

Type 1: Reactive machines. These AI systems have no memory and are task-specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones [48].

Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way, as are the analytics performed by healthcare apps installed on mobile phones [49].

Type 3: Theory of mind. Theory of mind is a psychology term [50], but when applied to AI it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behaviour, a necessary skill for AI systems to become integral members of human teams [51].

Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness [52]. Machines with self-awareness understand their own current state. At present, this type of AI does not yet exist [53].

What examples of AI technology are used today in the UK in healthcare?

To understand what practical applications of AI were being used within the United Kingdom, a survey of 20 local hospitals within the Midlands was undertaken, with 12 responding to a short questionnaire.

All the trusts that replied had used AI, mainly in administrative areas rather than care areas. There were projects underway that used machine learning algorithms to look at anaemia levels in pregnancy [54], to help analyse results in patients with acute kidney injury (AKI) [55] and to review imaging for scaphoid fractures [56].

Nine of the trusts felt that clinical decision-making was being positively impacted by the introduction of AI. Half felt it had been cost effective, although the consensus was that it was a little too early to be evaluating its effectiveness, as the AI being used was still being calibrated and evaluated.

Automation (not currently utilized in UK practice locally). When paired with AI technologies, automation tools can expand the volume and types of tasks performed [3]. An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing tasks traditionally done by humans [57]. When combined with machine learning and emerging AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA’s tactical bots to pass along intelligence from AI and respond to process changes, improving, for instance, healthcare access and record keeping [58].
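
A minimal sketch of the kind of repetitive, rules-based data processing that RPA automates is given below; the record fields and rules are hypothetical and purely illustrative.

```python
# Minimal sketch of a rules-based automation step of the kind RPA performs:
# repetitive checking of records against fixed rules, with exceptions routed
# for human attention. Record fields and rules are hypothetical.
from typing import Dict, List

REQUIRED_FIELDS = ["nhs_number", "referral_reason", "gp_contact"]

def process_referrals(records: List[Dict]) -> Dict[str, List[Dict]]:
    """Split records into those that pass the rules and those needing review."""
    complete, needs_review = [], []
    for record in records:
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            # The bot cannot fix this itself; flag it for a human.
            needs_review.append({**record, "missing": missing})
        else:
            complete.append(record)
    return {"complete": complete, "needs_review": needs_review}

if __name__ == "__main__":
    demo = [
        {"nhs_number": "0000000000", "referral_reason": "knee pain", "gp_contact": "x"},
        {"nhs_number": "", "referral_reason": "chest pain", "gp_contact": "y"},
    ]
    result = process_referrals(demo)
    print(len(result["complete"]), "complete;", len(result["needs_review"]), "for review")
```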

Machine learning. This is the science of getting a computer to act without being explicitly programmed [59] [60]. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics, for instance predicting psychosis from recorded dialogue [60]. This type of AI application is proving useful in diagnostics and treatment and is finding a place within practice locally. There are three types of machine learning algorithms (a compact sketch of all three follows the descriptions below):

Supervised learning. Data sets are labelled so that patterns can be detected and used to label new data sets [61]. For instance, the detection of acute kidney injury from laboratory results is being developed or used in 4 of the 15 trusts surveyed [62].

Unsupervised learning. Data sets are not labelled and are sorted according to similarities or differences [63], such as in the detection of sepsis in Intensive Care Units (ICUs) [64]. At present no local units are utilising programmes with this design.

Reinforcement learning. Data sets are not labelled but, after performing an action or several actions, the AI system is given feedback [65]. This has no applications in clinical practice locally, as safe practice is the prime goal and AI is therefore being used to build on previous learning and ensure rules are adhered to every time [66] [67].
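
The compact Python sketch below, on synthetic data, is intended only to illustrate how the three paradigms differ in what they are given (labels, no labels, or feedback) and what they return; it does not reflect any of the clinical systems discussed above and assumes scikit-learn and NumPy are available.

```python
# Compact sketch of the three learning types described above, on synthetic
# data. None of this reflects a deployed clinical system; it only shows how
# the paradigms differ in their inputs and outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# 1) Supervised: labelled examples -> a model that labels new examples.
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # known labels
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[1.0, 0.5]]))

# 2) Unsupervised: unlabelled examples -> groupings by similarity.
unlabelled = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(5, 1, (150, 2))])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unlabelled)
print("unsupervised cluster sizes:", np.bincount(clusters))

# 3) Reinforcement: no labels, only feedback (reward) after actions.
# A toy epsilon-greedy bandit choosing between two "actions".
true_reward = [0.3, 0.7]                          # hidden from the agent
estimates, counts = [0.0, 0.0], [0, 0]
for step in range(1000):
    action = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_reward[action])   # feedback signal
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]
print("reinforcement value estimates:", [round(e, 2) for e in estimates])
```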

Machine vision. This technology gives a machine the ability to see [68]. Machine vision captures and analyses visual information using a camera, analog-to-digital conversion and digital signal processing [69]. It is often compared to human eyesight, but machine vision is not bound by biology and can be programmed to see through walls [70]. It is used in a range of applications from signature identification to medical image analysis, and locally it is being used to look for scaphoid fractures on a trial basis [56] and to help scan medical notes in three trusts.

Computer vision, which is focused on machine-based image processing [71], is often conflated with machine vision, but the two are separate things.
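
As a minimal, assumed illustration of the capture-digitise-analyse pipeline described above, the sketch below treats an image as a digitised grid of intensities and applies simple thresholding and gradient-based edge detection using NumPy; a real system would ingest camera or radiograph data and use far more sophisticated models.

```python
# Minimal sketch of a machine-vision pipeline: a digitised grid of pixel
# intensities is analysed with simple signal processing (thresholding and a
# gradient-based edge map). The "image" here is synthetic.
import numpy as np

# Digitised 64x64 grayscale "image": a bright square on a dark background.
image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0
image += np.random.default_rng(5).normal(scale=0.05, size=image.shape)  # sensor noise

# Analysis step 1: threshold to separate object from background.
mask = image > 0.5
print("object pixels:", int(mask.sum()))

# Analysis step 2: crude edge detection from intensity gradients.
gy, gx = np.gradient(image)
edges = np.hypot(gx, gy) > 0.3
print("edge pixels:", int(edges.sum()))
```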

Natural language processing (NLP). This is the processing of human language by a computer program [72]. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it is junk [73]. Current approaches to NLP are based on machine learning, facilitating text translation, sentiment analysis and speech recognition [74].
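
A minimal sketch of the spam-detection example, assuming scikit-learn and using a handful of toy messages, is shown below; real filters are trained on very large corpora.

```python
# Minimal sketch of the spam-detection example above: a bag-of-words model is
# trained on a handful of labelled messages and then classifies new ones.
# The messages are toy examples only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Clinic appointment confirmed for Tuesday at 10am",
    "Please find the discharge summary attached",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                 # learn word patterns per class

print(model.predict(["Claim your free reward now"]))       # expected: ['spam']
print(model.predict(["Your appointment is on Tuesday"]))   # expected: ['ham']
```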

Robotics. This field of engineering focuses on the design and manufacture of robots to perform tasks that are difficult for humans to perform consistently [75]. For example, robots are used in surgery to improve the precision of alignment in joint replacement [76] or of tumour excision [77]. Researchers are also using machine learning to build robots that can interact in social settings and help with patient rehabilitation, such as Cyberdyne’s Hybrid Assistive Limb (HAL) exoskeleton designed for the rehabilitation of patients with spinal cord injuries and stroke [3] [78]. Autonomous surgery uses a combination of computer vision, image recognition, deep learning and robotics to build the automated skills needed to pilot a robot delivering precision surgery [75]. At present most trusts, eight of those that responded, employed a form of robotic-assisted surgery to improve precision and outcomes.

4. Discussion

AI implementation in healthcare, whilst seen by some as revolutionary [79], remains problematic due to resource, ethical and legal issues [23] [80]. This study shows that there are pockets of good practice within the UK driven by the need to understand and process vast amounts of data to improve patient outcomes, such as with AKI [8] [55], which seems the most valid goal. Improving safety through disease or injury detection, such as of a scaphoid fracture [56], appears to be gathering pace, but there remain serious ethical issues when problems are missed, as the clinician may be unable to fully explain the processes behind the decision [81]. Reducing costs drives some of the implementation and makes a technological solution attractive to healthcare managers even when clinicians are less enthusiastic [82]. There is, however, duplication of effort, for instance in radiology within the UK, and some are calling for a registry of AI implementation to reduce this phenomenon and implement shared learning [83]. In this study, seven out of eleven trusts were using AI to help with fracture detection, anaemia or AKI changes, but also to facilitate health informatics. At present the approach is sporadic. Co-ordinating this nationally would facilitate better integration of systems and avoid duplicate working.

Applying machine learning can produce more accurate and faster diagnoses than humans [5]. However, humans can empathise and, for now, interpret non-visual clues and reactions to information faster, such as when interpreting suicide risk [84]. There remains a lack of material to train AI in verbal clue recognition for now, although that is likely to change [84]. Other systems, such as IBM Watson, which understands natural language and can respond to questions asked of it, can mine patient data and other available data sources to form a hypothesis, which is then presented with a confidence scoring schema [85]. This still requires a human to explain it and help formulate a plan for treatment; should it therefore be used more as a second opinion [86]?

Automating decisions with AI goes slightly against the grain of shared decision-making with patients [80]. It reduces the interactivity that helps patients understand the boundaries of what is possible or even ethical, and the risks involved with any intervention or treatment [81]. It may not facilitate discussion or effective shared clinical decision-making when the recommendations are difficult to understand or explain to patients, or when the presentation is unclear, leading to uncertainty and raising the question of who, or what, is in charge or responsible [87].

Other AI applications are obviously beneficial as they improve administrative tasks, including the use of online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes [88]. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19 [89] [90]. AI and robotic surgery are available in a sizeable number of trusts as the technology and the reproducibility of results become more apparent [91].

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and that this has caused the general public to have improbable expectations about how AI will change the workplace and life in general, including medical practice [92]. The label augmented intelligence, which has a more neutral connotation, may help people understand that most implementations of AI will be weak and simply improve products and services [93] [94]. Examples include automatically surfacing important information in reports on which clinicians must then act [90].

True AI, or artificial general intelligence (AGI), is closely associated with the concept of the technological singularity: a future ruled by an artificial superintelligence that far surpasses the human brain’s ability to understand it or how it is shaping our reality [95]. This remains within the realm of science fiction, though some developers are working on the problem. Many believe that technologies such as quantum computing could play an important role in making AGI a reality, and that we should reserve the use of the term AI for this kind of general intelligence [96].

Barriers to implementation remain because of concerns around data sharing and privacy, transparency of algorithms, data standardization, interoperability across multiple platforms, and patient safety [13]. Robust clinical evaluation, with metrics that are intuitive to patients and clinicians such as the quality of care and patient outcomes, will help the benefits of implementation to be better understood [97]. Challenges in introducing AI in medicine include the selection among multiple algorithms, the design and development of the AI, and the ongoing surveillance it requires [98].

5. Conclusion

To conclude, there are obvious benefits in pockets of practice worldwide and within the UK, but implementation is not structured and requires thought, with a plan that does not consider only the cost implications. National guidance and a strategic plan for AI, with registries and good outcome measures, would help its implementation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Kok, J.N., Boers, E.J., Kosters, W.A., Van der Putten, P. and Poel, M. (2009) Artificial Intelligence: Definition, Trends, Techniques, and Cases. Artificial Intelligence, 1, 270-299.
https://unesdoc.unesco.org/ark:/48223/pf0000128588
[2] Haleem, A., Javaid, M. and Khan, I.H. (2019) Current Status and Applications of Artificial Intelligence (AI) in Medical Field: An Overview. Current Medicine Research and Practice, 9, 231-237.
https://doi.org/10.1016/j.cmrp.2019.11.005
[3] Shaheen, M.Y. (2021) Applications of Artificial Intelligence (AI) in Healthcare: A Review.
https://doi.org/10.14293/S2199-1006.1.SOR-.PPVRY8K.v1
[4] Patel, V.L., Shortliffe, E.H., Stefanelli, M., et al. (2009) The Coming of Age of Artificial Intelligence in Medicine. Artificial Intelligence in Medicine, 46, 5-17.
https://doi.org/10.1016/j.artmed.2008.07.017
[5] Magrabi, F., Ammenwerth, E., McNair, J.B., De Keizer, N.F., et al. (2019) Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearbook of Medical Informatics, 28, 128-134.
https://doi.org/10.1055/s-0039-1677903
[6] Ross, J. (2018) The Fundamental Flaw in AI Implementation. MIT Sloan Management Review, 59, 10-11.
https://sloanreview.mit.edu/article/the-fundamental-flaw-in-ai-implementation/
[7] Yang, Q., Hao, Y., Quan, K., Yang, S., Zhao, Y., Kuleshov, V. and Wang, F. (2023) Harnessing Biomedical Literature to Calibrate Clinicians’ Trust in AI Decision Support Systems. CHI‘23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, 23-28 April 2023, 1-14.
https://doi.org/10.1145/3544548.3581393
[8] Croskerry, P. and Nimmo, G.R. (2011) Better Clinical Decision Making and Reducing Diagnostic Error. Journal of the Royal College of Physicians of Edinburgh, 41, 155-162.
https://doi.org/10.4997/JRCPE.2011.208
[9] Stacey, D., Légaré, F. and Kryworuchko, J. (2009) Evidence-Based Health Care Decision-Making: Roles for Health Professionals. In: Edwards, A. and Elwyn, G., Eds., Shared Decision-Making in Health Care: Achieving Evidence-Based Patient Choice, 2nd Edition, Oxford University Press, Oxford, 164-172.
[10] Hallgren, K.A., Bauer, A.M. and Atkins, D.C. (2017) Digital Technology and Clinical Decision Making in Depression Treatment: Current Findings and Future Opportunities. Depress Anxiety, 34, 494-501.
https://doi.org/10.1002/da.22640
[11] Coiera, E. (2018) The Fate of Medicine in the Time of AI. The Lancet, 392, 2331-2332.
https://doi.org/10.1016/S0140-6736(18)31925-1
[12] Yu, K.H. and Kohane, I.S. (2019) Framing the Challenges of Artificial Intelligence in Medicine. BMJ Quality & Safety, 28, 238-241.
https://doi.org/10.1136/bmjqs-2018-008551
[13] He, J., Baxter, S.L., Xu, J., et al. (2019) The Practical Implementation of Artificial Intelligence Technologies in Medicine. Nature Medicine, 25, 30-36.
https://doi.org/10.1038/s41591-018-0307-0
[14] Brennan, S. (2005) The NHS IT Project: The Biggest Computer Programme in the World-Ever! Radcliffe Publishing, Abingdon.
[15] Singler, B. (2023) Left Behind? Religion as a Vestige in “The Rapture of the Nerds” and Other AI Singularity Literature. In: Science and Religion in Western Literature, Routledge, London, 136-150.
https://doi.org/10.4324/9781003213987-10
[16] Mohiuddin, S., Busby, J., Savović, J., Richards, A., Northstone, K., Hollingworth, W., Donovan, J.L. and Vasilakis, C. (2017) Patient Flow within UK Emergency Departments: A Systematic Review of the Use of Computer Simulation Modelling Methods. BMJ Open, 7, e015007.
https://doi.org/10.1136/bmjopen-2016-015007
[17] Subbiah, V. (2023) The Next Generation of Evidence-Based Medicine. Nature Medicine, 29, 49-58.
https://doi.org/10.1038/s41591-022-02160-z
[18] Malhotra, A., Molloy, E.J., Bearer, C.F. and Mulkey, S.B. (2023) Emerging Role of Artificial Intelligence, Big Data Analysis and Precision Medicine in Pediatrics. Pediatric Research, 93, 281-283.
https://doi.org/10.1038/s41390-022-02422-z
[19] Senders, J.W. and Moray, N.P. (2020) Human Error: Cause, Prediction, and Reduction. CRC Press, Boca Raton.
https://doi.org/10.1201/9781003070375
[20] Nolan, P. (2023) Artificial Intelligence in Medicine—Is Too Much Transparency a Good Thing? Medico-Legal Journal.
https://doi.org/10.1177/00258172221141243
[21] Ahmed, Z., Mohamed, K., Zeeshan, S. and Dong, X. (2020) Artificial Intelligence with Multi-Functional Machine Learning Platform Development for Better Healthcare and Precision Medicine. Database, 2020, baaa010.
https://doi.org/10.1093/database/baaa010
[22] Lysaght, T., Lim, H.Y., Xafis, V. and Ngiam, K.Y. (2019) AI-Assisted Decision-Making in Healthcare: The Application of an Ethics Framework for Big Data in Health and Research. Asian Bioethics Review, 11, 299-314.
https://doi.org/10.1007/s41649-019-00096-0
[23] Aoki, N. (2021) The Importance of the Assurance That “Humans Are Still in the Decision Loop” for Public Trust in Artificial Intelligence: Evidence from an Online Experiment. Computers in Human Behavior, 114, Article ID: 106572.
https://doi.org/10.1016/j.chb.2020.106572
[24] Feng, J., Phillips, R.V., Malenica, I., Bishara, A., Hubbard, A.E., Celi, L.A. and Pirracchio, R. (2022) Clinical Artificial Intelligence Quality Improvement: Towards Continual Monitoring and Updating of AI Algorithms in Healthcare. NPJ Digital Medicine, 5, 66.
https://doi.org/10.1038/s41746-022-00611-y
[25] Toh, T.S., Dondelinger, F. and Wang, D. (2019) Looking beyond the Hype: Applied AI and Machine Learning in Translational Medicine. EBioMedicine, 47, 607-615.
https://doi.org/10.1016/j.ebiom.2019.08.027
[26] Belani, H., Vukovic, M. and Car, Ž. (2019) Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019 IEEE 27th International Requirements Engineering Conference Workshops (REW), Jeju Island, 23-27 September 2019, 252-255.
https://doi.org/10.1109/REW.2019.00051
[27] Ali, M.A., Yap, N.K., Ghani, A.A.A., Zulzalil, H., Admodisastro, N.I. and Najafabadi, A.A. (2022) A Systematic Mapping of Quality Models for AI Systems, Software and Components. Applied Sciences, 12, 8700.
https://doi.org/10.3390/app12178700
[28] Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H. and Wang, Y. (2017) Artificial Intelligence in Healthcare: Past, Present and Future. Stroke and Vascular Neurology, 2, 230-243.
https://doi.org/10.1136/svn-2017-000101
[29] Guzman, A.L. (2016) Making AI Safe for Humans: A Conversation with Siri. In: Social Bots and Their Friends, Routledge, London, 85-101.
https://doi.org/10.4324/9781315637228-11
[30] Borsci, S., Malizia, A., Schmettow, M., Van Der Velde, F., Tariverdiyeva, G., Balaji, D. and Chamberlain, A. (2022) The Chatbot Usability Scale: The Design and Pilot of a Usability Scale for Interaction with AI-Based Conversational Agents. Personal and Ubiquitous Computing, 26, 95-119.
https://doi.org/10.1007/s00779-021-01582-9
[31] Susanto, A.P., Winarto, H., Fahira, A., Abdurrohman, H., Muharram, A.P., Widitha, U.R., Efirianti, G.E.W., George, Y.A.E. and Tjoa, K. (2022) Building an Artificial Intelligence-Powered Medical Image Recognition Smartphone Application: What Medical Practitioners Need to Know. Informatics in Medicine Unlocked, 32, Article ID: 101017.
https://doi.org/10.1016/j.imu.2022.101017
[32] Dvorak, M.F., Fisher, C.G., Fehlings, M.G., Rampersaud, Y.R., Öner, F.C., Aarabi, B. and Vaccaro, A.R. (2007) The Surgical Approach to Subaxial Cervical Spine Injuries: An Evidence-Based Algorithm Based on the SLIC Classification System. Spine, 32, 2620-2629.
https://doi.org/10.1097/BRS.0b013e318158ce16
[33] Evidence Based Medicine Working Group (1992) Evidence Based Medicine. A New Approach to Teaching the Practice of Medicine. JAMA, 268, 2420-2425.
https://doi.org/10.1001/jama.1992.03490170092032
[34] Greenhalgh, T., Howick, J. and Maskrey, N. (2014) Evidence Based Medicine: A Movement in Crisis? BMJ, 348, g3725.
https://www.bmj.com/content/348/bmj.g3725
https://doi.org/10.1136/bmj.g3725
[35] Barrows, H.S. and Feltovich, P.J. (1987) The Clinical Reasoning Process. Medical Education, 21, 86-91.
https://doi.org/10.1111/j.1365-2923.1987.tb00671.x
[36] Shortliffe, E.H. (1984) Reasoning Methods in Medical Consultation Systems: Artificial Intelligence Approaches. Computer Programs in Biomedicine, 18, 5-13.
https://doi.org/10.1016/0010-468X(84)90018-7
[37] Kempt, H., Heilinger, J.C. and Nagel, S.K. (2022) Relative Explainability and Double Standards in Medical Decision-Making: Should Medical AI Be Subjected to Higher Standards in Medical Decision-Making than Doctors? Ethics and Information Technology, 24, Article No. 20.
https://doi.org/10.1007/s10676-022-09646-x
[38] Panch, T., Mattie, H. and Atun, R. (2019) Artificial Intelligence and Algorithmic Bias: Implications for Health Systems. Journal of Global Health, 9, Article ID: 010318.
https://doi.org/10.7189/jogh.09.020318
[39] Habli, I., Lawton, T. and Porter, Z. (2020) Artificial Intelligence in Health Care: Accountability and Safety. Bulletin of the World Health Organization, 98, 251-256.
https://doi.org/10.2471/BLT.19.237487
[40] Abdar, M., Khosravi, A., Islam, S.M.S., Acharya, U.R. and Vasilakos, A.V. (2022) The Need for Quantification of Uncertainty in Artificial Intelligence for Clinical Data Analysis: Increasing the Level of Trust in the Decision-Making Process. IEEE Systems, Man, and Cybernetics Magazine, 8, 28-40.
https://doi.org/10.1109/MSMC.2022.3150144
[41] Mehta, N., Pandit, A. and Shukla, S. (2019) Transforming Healthcare with Big Data Analytics and Artificial Intelligence: A Systematic Mapping Study. Journal of Biomedical Informatics, 100, Article ID: 103311.
https://doi.org/10.1016/j.jbi.2019.103311
[42] Bohr, A. and Memarzadeh, K. (2020) The Rise of Artificial Intelligence in Healthcare Applications. In: Artificial Intelligence in Healthcare, Academic Press, Cambridge, 25-60.
https://doi.org/10.1016/B978-0-12-818438-7.00002-2
[43] Moglia, A., Georgiou, K., Georgiou, E., Satava, R.M. and Cuschieri, A. (2021) A Systematic Review on Artificial Intelligence in Robot-Assisted Surgery. International Journal of Surgery, 95, Article ID: 106151.
https://doi.org/10.1016/j.ijsu.2021.106151
[44] Davenport, T. and Kalakota, R. (2019) The Potential for Artificial Intelligence in Healthcare. Future Healthcare Journal, 6, 94.
https://doi.org/10.7861/futurehosp.6-2-94
[45] Yakar, D., Ongena, Y.P., Kwee, T.C. and Haan, M. (2022) Do People Favor Artificial Intelligence over Physicians? A Survey among the General Population and Their View on Artificial Intelligence in Medicine. Value in Health, 25, 374-381.
https://doi.org/10.1016/j.jval.2021.09.004
[46] Davoudi, A., Malhotra, K.R., Shickel, B., Siegel, S., Williams, S., Ruppert, M., Bihorac, E., Ozrazgat-Baslanti, T., Tighe, P.J., Bihorac, A. and Rashidi, P. (2019) Intelligent ICU for Autonomous Patient Monitoring Using Pervasive Sensing and Deep Learning. Scientific Reports, 9, Article No. 8020.
https://doi.org/10.1038/s41598-019-44004-w
[47] Mudgal, S.K., Agarwal, R., Chaturvedi, J., Gaur, R. and Ranjan, N. (2022) Real-World Application, Challenges and Implication of Artificial Intelligence in Healthcare: An Essay. The Pan African Medical Journal, 43, 3.
[48] Bory, P. (2019) Deep New: The Shifting Narratives of Artificial Intelligence from Deep Blue to AlphaGo. Convergence, 25, 627-642.
https://doi.org/10.1177/1354856519829679
[49] Tawalbeh, L.A., Mehmood, R., Benkhlifa, E. and Song, H.B. (2016) Mobile Cloud Computing Model and Big Data Analysis for Healthcare Applications. IEEE Access, 4, 6171-6180.
https://doi.org/10.1109/ACCESS.2016.2613278
[50] Cuzzolin, F., Morelli, A., Cirstea, B. and Sahakian, B.J. (2020) Knowing Me, Knowing You: Theory of Mind in AI. Psychological Medicine, 50, 1057-1061.
https://doi.org/10.1017/S0033291720000835
[51] Williams, J., Fiore, S.M. and Jentsch, F. (2022) Supporting Artificial Social Intelligence with Theory of Mind. Frontiers in Artificial Intelligence, 5, Article ID: 750763.
https://doi.org/10.3389/frai.2022.750763
[52] Thakur, A., Armstrong, J., Youssef, A., Eyre, D. and Clifton, D.A. (2023) Self-Aware SGD: Reliable Incremental Adaptation Framework for Clinical AI Models. IEEE Journal of Biomedical and Health Informatics.
https://doi.org/10.36227/techrxiv.21865212
[53] Bishop, J.M. (2018) Is Anyone Home? A Way to Find out If AI Has Become Self-Aware. Frontiers in Robotics and AI, 5, 17.
https://doi.org/10.3389/frobt.2018.00017
[54] Sarsam, S.M., Al-Samarraie, H., Alzahrani, A.I. and Shibghatullah, A.S. (2022) A Non-Invasive Machine Learning Mechanism for Early Disease Recognition on Twitter: The Case of Anemia. Artificial Intelligence in Medicine, 134, Article ID: 102428.
https://doi.org/10.1016/j.artmed.2022.102428
[55] Mistry, N.S. and Koyner, J.L. (2021) Artificial Intelligence in Acute Kidney Injury: From Static to Dynamic Models. Advances in Chronic Kidney Disease, 28, 74-82.
https://doi.org/10.1053/j.ackd.2021.03.002
[56] Hendrix, N., Scholten, E., Vernhout, B., Bruijnen, S., Maresch, B., de Jong, M., Diepstraten, S., Bollen, S., Schalekamp, S., de Rooij, M. and Scholtens, A. (2021) Development and Validation of a Convolutional Neural Network for Automated Detection of Scaphoid Fractures on Conventional Radiographs. Radiology: Artificial Intelligence, 3, e200260.
https://doi.org/10.1148/ryai.2021200260
[57] Stanfill, M.H. and Marc, D.T. (2019) Health Information Management: Implications of Artificial Intelligence on Healthcare Data and Information Management. Yearbook of Medical Informatics, 28, 56-64.
https://doi.org/10.1055/s-0039-1677913
[58] Alugubelli, R. (2016) Exploratory Study of Artificial Intelligence in Healthcare. International Journal of Innovations in Engineering Research and Technology, 3, 1-10.
https://www.neliti.com/publications/429277/exploratory-study-of-artificial-intelligence-in-healthcare
[59] Wiens, J. and Shenoy, E.S. (2018) Machine Learning for Healthcare: On the Verge of a Major Shift in Healthcare Epidemiology. Clinical Infectious Diseases, 66, 149-153.
https://doi.org/10.1093/cid/cix731
[60] Toh, C. and Brody, J.P. (2021) Applications of Machine Learning in Healthcare. In: Kheng, T.Y., Ed., Smart Manufacturing: When Artificial Intelligence Meets the Internet of Things, IntechOpen, London, 65.
https://doi.org/10.5772/intechopen.92297
[61] Roy, S., Meena, T. and Lim, S.J. (2022) Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine. Diagnostics, 12, Article No. 2549.
https://doi.org/10.3390/diagnostics12102549
[62] Chan, L., Vaid, A. and Nadkarni, G.N. (2020) Applications of Machine Learning Methods in Kidney Disease: Hope or Hype? Current Opinion in Nephrology and Hypertension, 29, 319-326.
https://doi.org/10.1097/MNH.0000000000000604
[63] Eckhardt, C.M., Madjarova, S.J., Williams, R.J., et al. (2023) Unsupervised Machine Learning Methods and Emerging Applications in Healthcare. Knee Surgery, Sports Traumatology, Arthroscopy, 31, 376-381.
https://doi.org/10.1007/s00167-022-07233-7
[64] Komorowski, M., Celi, L.A., Badawi, O., Gordon, A.C. and Faisal, A.A. (2018) The Artificial Intelligence Clinician Learns Optimal Treatment Strategies for Sepsis in Intensive Care. Nature Medicine, 24, 1716-1720.
https://doi.org/10.1038/s41591-018-0213-5
[65] Gottesman, O., Johansson, F., Komorowski, M., Faisal, A., Sontag, D., Doshi-Velez, F. and Celi, L.A. (2019) Guidelines for Reinforcement Learning in Healthcare. Nature Medicine, 25, 16-18.
https://doi.org/10.1038/s41591-018-0310-5
[66] Coronato, A., Naeem, M., De Pietro, G. and Paragliola, G. (2020) Reinforcement Learning for Intelligent Healthcare Applications: A Survey. Artificial Intelligence in Medicine, 109, Article ID: 101964.
https://doi.org/10.1016/j.artmed.2020.101964
[67] Riachi, E., Mamdani, M., Fralick, M. and Rudzicz, F. (2021) Challenges for Reinforcement Learning in Healthcare.
[68] Saraswat, M., Sharma, H. and Arya, K.V. (2022) Intelligent Vision in Healthcare. In: Intelligent Vision in Healthcare, Springer Nature, Singapore, 1-8.
https://doi.org/10.1007/978-981-16-7771-7_1
[69] Sevilla, R.V., Alon, A.S., Melegrito, M.P., Reyes, R.C., Bastes, B.M. and Cimagala, R.P. (2021) Mask-Vision: A Machine Vision-Based Inference System of Face Mask Detection for Monitoring Health Protocol Safety. 2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, 12-14 September 2021, 1-5.
https://doi.org/10.1109/IICAIET51634.2021.9573664
[70] Raut, R., Krit, S. and Chatterjee, P. (2022) Machine Vision for Industry 4.0: Applications and Case Studies. CRC Press, Boca Raton.
https://doi.org/10.1201/9781003122401
[71] Gao, J., Yang, Y., Lin, P. and Park, D.S. (2018) Computer Vision in Healthcare Applications. Journal of Healthcare Engineering, 2018, Article ID: 5157020.
https://doi.org/10.1155/2018/5157020
[72] Voytovich, L. and Greenberg, C. (2022) Natural Language Processing: Practical Applications in Medicine and Investigation of Contextual Autocomplete. In: Machine Learning in Clinical Neuroscience: Foundations and Applications, Springer International Publishing, Berlin, 207-214.
https://doi.org/10.1007/978-3-030-85292-4_24
[73] Bacanin, N., Zivkovic, M., Stoean, C., Antonijevic, M., Janicijevic, S., Sarac, M. and Strumberger, I. (2022) Application of Natural Language Processing and Machine Learning Boosted with Swarm Intelligence for Spam Email Filtering. Mathematics, 10, Article No. 4173.
https://doi.org/10.3390/math10224173
[74] Young, I.J.B., Luz, S. and Lone, N. (2019) A Systematic Review of Natural Language Processing for Classification Tasks in the Field of Incident Reporting and Adverse Event Analysis. International Journal of Medical Informatics, 132, Article ID: 103971.
https://doi.org/10.1016/j.ijmedinf.2019.103971
[75] O’Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., Holzinger, K., Holzinger, A., Sajid, M.I. and Ashrafian, H. (2019) Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence (AI) and Autonomous Robotic Surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15, e1968.
https://doi.org/10.1002/rcs.1968
[76] Beyaz, S. (2020) A Brief History of Artificial Intelligence and Robotic Surgery in Orthopedics & Traumatology and Future Expectations. Joint Diseases and Related Surgery, 31, 653-655.
https://doi.org/10.5606/ehc.2020.75300
[77] Chang, T.C., Seufert, C., Eminaga, O., Shkolyar, E., Hu, J.C. and Liao, J.C. (2021) Current Trends in Artificial Intelligence Application for Endourology and Robotic Surgery. Urologic Clinics, 48, 151-160.
https://doi.org/10.1016/j.ucl.2020.09.004
[78] Cruciger, O., Schildhauer, T. A., Meindl, R. C., Tegenthoff, M., Schwenkreis, P., Citak, M. and Aach, M. (2016) Impact of Locomotion Training with a Neurologic Controlled Hybrid Assistive Limb (HAL) Exoskeleton on Neuropathic Pain and Health Related Quality of Life (HRQoL) in Chronic SCI: A Case Study. Disability and Rehabilitation: Assistive Technology, 11, 529-534.
[79] Yu, K.H., Beam, A.L. and Kohane, I.S. (2018) Artificial Intelligence in Healthcare. Nature Biomedical Engineering, 2, 719-731.
https://doi.org/10.1038/s41551-018-0305-z
[80] Shank, D.B., DeSanti, A. and Maninger, T. (2019) When Are Artificial Intelligence versus Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions. Information, Communication & Society, 22, 648-663.
https://doi.org/10.1080/1369118X.2019.1568515
[81] Kraus, C.K. and Marco, C.A. (2016) Shared Decision Making in the ED: Ethical Considerations. The American Journal of Emergency Medicine, 34, 1668-1672.
https://doi.org/10.1016/j.ajem.2016.05.058
[82] Hamet, P. and Tremblay, J. (2017) Artificial Intelligence in Medicine. Metabolism, 69, S36-S40.
https://doi.org/10.1016/j.metabol.2017.01.011
[83] Silkens, M.E.W.M., Ross, J., Hall, M., Scarbrough, H. and Rockall, A. (2023) The Time Is Now: Making the Case for a UK Registry of Deployment of Radiology Artificial Intelligence Applications. Clinical Radiology, 78, 107-114.
https://doi.org/10.1016/j.crad.2022.09.132
[84] Dhelim, S., Chen, L., Ning, H. and Nugent, C. (2022) Artificial Intelligence for Suicide Assessment Using Audiovisual Cues: A Review. Artificial Intelligence Review, 56, 5591-5618.
https://doi.org/10.1007/s10462-022-10290-6
[85] Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., Goy, A. and Suh, K.S. (2015) Clinical Decision Support Systems for Improving Diagnostic Accuracy and Achieving Precision Medicine. Journal of Clinical Bioinformatics, 5, Article No. 4.
https://doi.org/10.1186/s13336-015-0019-3
[86] Luxton, D.D. (2019) Should Watson Be Consulted for a Second Opinion? AMA Journal of Ethics, 21, 131-137.
https://doi.org/10.1001/amajethics.2019.131
[87] Triberti, S., Durosini, I. and Pravettoni, G. (2020) A “Third Wheel” Effect in Health Decision Making Involving Artificial Entities: A Psychological Perspective. Frontiers in Public Health, 8, 117.
https://doi.org/10.3389/fpubh.2020.00117
[88] Bates, M. (2019) Health Care Chatbots Are Here to Help. IEEE Pulse, 10, 12-14.
https://doi.org/10.1109/MPULS.2019.2911816
[89] Malik, Y.S., Sircar, S., Bhat, S., Ansari, M.I., Pande, T., Kumar, P., Mathapati, B., Balasubramanian, G., Kaushik, R., Natesan, S. and Ezzikouri, S. (2021) How Artificial Intelligence May Help the Covid-19 Pandemic: Pitfalls and Lessons for the Future. Reviews in Medical Virology, 31, 1-11.
https://doi.org/10.1002/rmv.2205
[90] Long, J.B. and Ehrenfeld, J.M. (2020) The Role of Augmented Intelligence (AI) in Detecting and Preventing the Spread of Novel Coronavirus. Journal of Medical Systems, 44, Article No. 59.
https://doi.org/10.1007/s10916-020-1536-6
[91] Hung, A.J., Chen, J. and Gill, I.S. (2018) Automated Performance Metrics and Machine Learning Algorithms to Measure Surgeon Performance and Anticipate Clinical Outcomes in Robotic Surgery. JAMA Surgery, 153, 770-771.
https://doi.org/10.1001/jamasurg.2018.1512
[92] Wadden, J.J. (2022) Defining the Undefinable: The Black Box Problem in Healthcare Artificial Intelligence. Journal of Medical Ethics, 48, 764-768.
https://doi.org/10.1136/medethics-2021-107529
[93] Bazoukis, G., Hall, J., Loscalzo, J., Antman, E.M., Fuster, V. and Armoundas, A.A. (2022) The Inclusion of Augmented Intelligence in Medicine: A Framework for Successful Implementation. Cell Reports Medicine, 3, Article ID: 100485.
https://doi.org/10.1016/j.xcrm.2021.100485
[94] Crigger, E., Reinbold, K., Hanson, C., Kao, A., Blake, K. and Irons, M. (2022) Trustworthy Augmented Intelligence in Health Care. Journal of Medical Systems, 46, Article No. 12.
https://doi.org/10.1007/s10916-021-01790-z
[95] Liu, W., Zhuang, G., Liu, X., Hu, S., He, R. and Wang, Y. (2021) How Do We Move towards True Artificial Intelligence. 2021 IEEE 23rd International Conference on High Performance Computing & Communications; 7th International Conference on Data Science & Systems; 19th International Conference on Smart City; 7th International Conference on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Haikou, 20-22 December 2021, 2156-2158.
https://doi.org/10.1109/HPCC-DSS-SmartCity-DependSys53884.2021.00321
[96] Heuveline, V. and Stiefel, V. (2022) Artificial Intelligence and Algorithms: True Progress or Just Digital Alchemy? In: Holm-Hadulla, R.M., Funke, J. and Wink, M., Eds., Intelligence—Theories and Applications, Springer International Publishing, Cham, 219-227.
https://doi.org/10.1007/978-3-031-04198-3_12
[97] Kelly, C.J., Karthikesalingam, A., Suleyman, M., Corrado, G. and King, D. (2019) Key Challenges for Delivering Clinical Impact with Artificial Intelligence. BMC Medicine, 17, Article No. 195.
https://doi.org/10.1186/s12916-019-1426-2
[98] Magrabi, F., Ammenwerth, E., McNair, J.B., De Keizer, N.F., Hyppönen, H., Nykänen, P., Rigby, M., Scott, P.J., Vehko, T., Wong, Z.S. and Georgiou, A. (2019) Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearbook of Medical Informatics, 28, 128-134.
https://doi.org/10.1055/s-0039-1677903

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.