AI and Predictability in Law: Advancements, Limitations, and Ethical Challenges
1. Introduction
The importance of prediction in the legal system has long been debated and plays a crucial role in legal theory [1]-[4]. Justice Holmes famously defined law as a prophecy of what the courts will do in fact [5]. Hart took a different stance, arguing that law exists independently of any authority figure, since the final authority does not create the law [3]; he therefore limited the application of prediction theory to the decision-making process [5]. Other scholars likewise argue that a court of last resort cannot sensibly predict how it will itself rule [2]. The concept of predictive justice has often been synonymous, especially in the common law tradition, with a transparent and consistent legal framework [4] [6] [7]. It helps anticipate legal outcomes on the basis of legislation, judicial precedents, and legal principles [6]. In recent years, however, the evolution of AI in the justice sector has revived this debate. AI is increasingly applied in legal decision-making, for example to predict recidivism, assess the probability of winning a case, or determine bail eligibility for criminal offenders [8]. Despite its advantages, the widespread use of AI in legal prediction has raised several concerns, including bias, accountability, and fairness [8]. Nonetheless, AI predictions are being prioritized over human judgment in various legal contexts. Against this backdrop, this paper explores the feasibility of AI-driven predictive justice and its potential societal implications. It also scrutinizes whether AI judges could offer greater predictability than human judges. Furthermore, it examines whether AI’s involvement in judicial processes could enhance legal predictability and, ultimately, how AI can be effectively integrated into the legal system as an assistive tool for human judges.
2. Methodology
In this article, we follow a doctrinal research methodology, a widely used approach in legal studies that analyzes legal principles, statutes, case law, and scholarly writings to develop a comprehensive understanding of a legal issue. This method is primarily library-based, relying on primary sources such as legislation, judicial precedents, and legal doctrines, as well as secondary sources such as journal articles and commentaries. Here, the doctrinal approach is employed to examine the concept of predictable justice, its significance, and the challenges associated with its implementation. By systematically analyzing legal texts and case law, the study aims to interpret and critically assess the extent to which legal predictability can be achieved. The doctrinal method ensures a structured and objective examination of legal frameworks, making it well suited to understanding the theoretical and practical aspects of legal predictability.
3. Concept of Predictable Justice
Anthony argues that legal uncertainty evolves over time [9]. As new legal issues arise and societal norms shift, the law must adapt, leading to fluctuations in its predictability. In contrast, other scholars contend that legal systems tend to become more certain and predictable over time, as precedents are established, legal interpretations are refined, and judicial decision-making follows a more structured approach [10] [11]. This ongoing debate underscores the complex nature of legal predictability. Predictable justice refers to the consistent and uniform application of laws, ensuring that legal outcomes are foreseeable and certain [6]. In this context, predictive justice, particularly through the use of Artificial Intelligence (AI), has emerged as a transformative tool in modern legal systems. By applying advanced algorithms, mathematical models, and data analytics, AI can assess vast amounts of case law and legal precedent to compute the probability of various litigation outcomes [12]. Bailey emphasized that the predictability of legal decisions is crucial for maintaining public respect as well as the rule of law [13]. AI-driven predictive models offer significant advantages, particularly in addressing judicial inefficiencies such as case backlogs, inconsistencies in legal rulings, and ineffective dispute resolution mechanisms [14]. These models can assist judges, lawyers, and policymakers in making informed decisions, ultimately contributing to a more efficient and transparent legal system. However, AI-driven prediction is not always accurate, nor is it beyond criticism; it raises a number of concerns that must be addressed before it is relied upon, especially in the legal domain.
4. The Limits of Achieving Complete Predictability in Law
Over the years, the debate over whether law can be predicted with absolute precision has persisted among legal scholars and practitioners. While predictability in law is essential for ensuring stability and consistency, rigid legal predictions risk undermining the law’s ability to adapt to unique circumstances. Legal systems are often required to address complex and evolving societal issues, which demand a degree of flexibility to achieve equitable justice on a case-by-case basis. Norbert H. Rascher argues that predictability provides a necessary framework of certainty, allowing individuals and businesses to plan their actions with confidence [15]. However, he also warns that a highly rigid legal system may fail to deliver fair and just outcomes in individual cases, as it leaves little room for judicial discretion and contextual interpretation.
A purely deterministic approach to legal decision-making could lead to injustice in cases that require nuanced judgment beyond strict legal formalism. Another critical argument against complete legal predictability is that societal needs demand that laws evolve over time, introducing inherent uncertainty [9]. As social norms, values, and technologies change, the legal system must adapt to new realities. This evolving nature of law introduces a degree of unpredictability, making it difficult to apply past legal precedents rigidly to present and future cases.
Before exploring the applications of AI in legal prediction, it is essential to understand how AI functions in the legal domain. Galli and Sartor identify two primary approaches that have developed over the years [4]. The first is the rule-based approach, in which AI systems meticulously apply predefined legal rules, enabling them to provide solutions for the majority of routine cases. The second is the knowledge-based approach, which draws decisions from legal precedents, making it adaptable to evolving legal interpretations [4]. Another perspective connects machine learning models to AI-driven legal prediction [16]. Like the knowledge-based approach, machine learning models rely heavily on past data; however, they can be trained through supervised learning, allowing them to predict case outcomes from patterns identified in previous rulings. The most advanced form of AI prediction involves neural networks and deep learning models [17]. These systems function as black boxes, making it difficult to interpret their reasoning. While they excel at recognizing complex legal patterns, their lack of transparency raises concerns about explainability and accountability in judicial decision-making.
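To make the contrast between the rule-based and learning-based approaches concrete, the following minimal sketch (in Python, using scikit-learn) places a hand-coded legal rule next to a supervised classifier trained on past outcomes. All feature names, thresholds, and data here are hypothetical illustrations of the approaches described above, not a description of any deployed system.

```python
# Two of the approaches described above, in miniature.
# All feature names, thresholds, and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# --- Rule-based approach: explicit, predefined legal rules ---
def bail_eligible(offence_severity: int, prior_convictions: int) -> bool:
    """Hypothetical rule: eligible unless the offence is severe
    or the defendant has multiple prior convictions."""
    return offence_severity < 3 and prior_convictions < 2

# --- Supervised machine learning: patterns learned from past rulings ---
rng = np.random.default_rng(0)
# Hypothetical encoded features for 500 past cases, e.g.
# [prior_convictions (scaled), log claim amount, precedent similarity]
X = rng.normal(size=(500, 3))
# Hypothetical outcomes: 1 = ruling for plaintiff, 0 = for defendant
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(bail_eligible(offence_severity=1, prior_convictions=0))  # True
print("held-out accuracy:", model.score(X_test, y_test))
# Unlike a deep neural network, the learned weights can be inspected:
print("feature weights:", model.coef_)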
While this paper has referred to AI prediction models as “black boxes,” it is important to recognize that the complexity and architecture of different algorithms significantly influence their behavior and outputs. Rule-based systems operate on explicit, interpretable logic, while machine learning models—particularly neural networks—learn patterns through statistical correlations that are often difficult to trace or explain [18] [19]. These models can yield varying outcomes depending on their training data, feature selection, and the specific legal context in which they are deployed. For example, an algorithm trained on US criminal law data may produce distorted predictions if applied in a different jurisdiction without retraining. This context-specific variability introduces challenges in terms of fairness, transparency, and accountability. As Ward (2021) and Galli & Sartor (2023) note, the same algorithm may behave inconsistently across different datasets or legal domains, resulting in unpredictable judicial outcomes [4] [17]. Therefore, a deeper understanding of algorithmic design and contextual sensitivity is critical to evaluating the reliability and limitations of AI-driven legal predictions.
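The cross-jurisdiction problem just described can be simulated directly. In the hedged sketch below, all case data are synthetic and the two "jurisdictions" differ only in how one feature relates to outcomes; a classifier fitted to jurisdiction A degrades sharply when applied, without retraining, to jurisdiction B.

```python
# Synthetic demonstration of cross-jurisdiction degradation. The two
# "jurisdictions" differ in the legal effect of one feature; no real
# legal data are used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate(n, weight, shift):
    """Simulated cases; `weight` flips the effect of feature 0."""
    X = rng.normal(loc=shift, size=(n, 2))
    y = (weight * X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X_a, y_a = simulate(1000, weight=1.0, shift=0.0)   # jurisdiction A
X_b, y_b = simulate(1000, weight=-1.0, shift=0.5)  # jurisdiction B

model = LogisticRegression().fit(X_a, y_a)
print("accuracy in A:", model.score(X_a, y_a))  # high (in-distribution)
print("accuracy in B:", model.score(X_b, y_b))  # drops without retraining
```

The point of the toy example is not the particular numbers but the mechanism: the model has learned a statistical regularity of one context, not a portable legal rule.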
These AI-driven predictive justice measures raise a series of concerns. A major one is that AI predictions rely heavily on historical data and past judicial decisions [18]. If past legal decisions reflect racial or socioeconomic biases, AI models trained on those datasets may perpetuate such disparities. AI prediction also tends to overlook the need for contextual interpretation and judicial reasoning [20]. Since AI operates through statistical models and pattern recognition, it may struggle with cases that require reinterpretation of legal precedents or adaptation to novel legal issues. Blind reliance on past examples could reinforce biases, fail to account for evolving legal standards, and limit the flexibility necessary for just outcomes [21]. Therefore, while predictability is an important goal in law, it cannot be applied rigidly without undermining the core principles of justice. The challenge lies in striking the right balance between predictability and flexibility, ensuring that legal decision-making remains both consistent and adaptable. This is precisely where AI faces significant limitations: it excels at recognizing patterns and making statistical predictions, but it struggles with the interpretative, ethical, and discretionary aspects of human judgment.
The following section will critically analyze AI-driven legal predictions in comparison with human judicial reasoning, assessing their strengths, weaknesses, and overall implications for the future of legal decision-making.
5. AI Prediction vs. Human Prediction
AI predictions rely on large datasets and machine learning algorithms, which can analyze patterns from past legal cases and judicial decisions to forecast potential legal outcomes. However, AI-driven predictions often reflect the mindset and biases of the code writers, as well as the way the algorithms are trained and maintained [8]. This raises concerns about algorithmic bias. A notable example was observed in State v. Loomis [22], where the AI-powered COMPAS system was used to predict an individual’s likelihood of recidivism. The system faced widespread criticism for exhibiting racial bias, as it disproportionately classified individuals from marginalized communities as “high risk” compared to their white counterparts, even when their backgrounds were similar [22]. This raised serious due process concerns, as defendants were denied the ability to fully challenge how the algorithm reached its conclusions. The Loomis case thus drew attention to the ethical and legal concerns surrounding predictive AI in judicial decision-making. Legal scholars argue that relying on biased AI models violates the principle of procedural fairness, which demands that defendants be able to understand and contest the evidence used against them [23]. However, many jurisdictions currently lack a clear legal framework to regulate AI bias in legal predictions, creating a significant norm conflict between technological efficiency and fundamental rights.
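The disparity alleged in Loomis can be expressed as a simple statistical test: among people who did not in fact reoffend, how often is each group labelled “high risk”? The sketch below computes this false positive rate on fabricated data; the numbers do not reproduce the actual COMPAS dataset and are for illustration only.

```python
# A minimal fairness audit of the kind applied to COMPAS-style risk
# scores: comparing false positive rates across two groups.
# Labels and scores below are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
group = rng.integers(0, 2, size=n)      # 0 / 1: two demographic groups
reoffended = rng.random(n) < 0.3        # hypothetical ground truth
# Hypothetical biased classifier: flags group 1 "high risk" more often
high_risk = rng.random(n) < np.where(group == 1, 0.5, 0.3)

for g in (0, 1):
    mask = (group == g) & ~reoffended   # people who did NOT reoffend
    fpr = high_risk[mask].mean()        # wrongly labelled "high risk"
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A large gap between the two rates is the disparity criticized in Loomis.
```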
Several studies have compared the accuracy of AI predictions versus human predictions in legal decision-making [23]-[25]. Findings suggest that AI-based predictions often outperform human predictions, but only in cases involving large datasets [26]. When analyzing vast amounts of legal precedents, AI demonstrates higher accuracy due to its ability to detect complex statistical relationships. However, in cases involving small datasets, human judgment tends to be more accurate, as it incorporates experience, intuition, and contextual interpretation, which AI lacks.
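This dataset-size effect is easy to demonstrate in miniature. In the following sketch the data are synthetic and the numbers purely illustrative, but the pattern, poor accuracy with a handful of training cases that improves as the corpus grows, is the one the studies cited above report.

```python
# Illustration of the dataset-size effect on a synthetic prediction task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 10))
y = (X @ rng.normal(size=10) + rng.normal(size=5000) > 0).astype(int)
X_test, y_test = X[4000:], y[4000:]     # held-out "future" cases

for n in (20, 100, 500, 4000):
    model = LogisticRegression().fit(X[:n], y[:n])
    print(f"{n:5d} training cases -> accuracy "
          f"{model.score(X_test, y_test):.2f}")
```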
One of the fundamental differences between AI and human judges lies in how decisions are formulated and justified. Human judges rely on contextual interpretation, moral reasoning, and judicial discretion, considering factors beyond textual legal precedents [19]. Beyond these elements, a judge’s intuition—developed through courtroom experience—plays a crucial role in predicting case outcomes. This intuition often emerges from interactions within the courtroom, sometimes aligning with the prosecution’s arguments or the defense presented by the accused. In contrast, AI predictions primarily rely on textual data, making it difficult for AI to interpret non-textual factors such as body language, courtroom demeanor, or social and psychological cues—elements that frequently influence judicial decision-making. Sourdin argues that judicial functions inherently require human intelligence, which AI has yet to replicate [27]. She further contends that computer programs lack the ability to engage with people in ways that reflect human compassion, emotional understanding, or adaptive responsiveness, all of which are essential for sound legal reasoning and fair decision-making.
Moreover, decision-making—whether by a judge, jury, or lawyer—requires complex mental processes, such as moral reasoning, weighing competing interests, and applying legal principles to specific contexts. Scholars have emphasized that mental processes are a prerequisite for effective judicial decisions [28], making human judgment indispensable in law. AI, on the other hand, operates on pattern recognition and statistical modeling rather than genuine reasoning.
However, human predictions also exhibit inconsistencies. Studies have shown that external factors, such as the time of day or even a judge’s break schedule, can influence legal decisions. For example, a 2015 study by Bank of America Merrill Lynch found that judges were more lenient in sentencing in the morning and after lunch, while imposing harsher sentences before lunch or at the end of the day. This suggests that human judgment, while essential, is susceptible to variability from external factors, which AI models could, in theory, mitigate by applying a standardized approach [29]. Avery’s empirical research likewise finds that judges care most about what the law requires in an absolute sense, and that they are not confident in predicting risk assessments [7].
Human judges’ predictions are supported by reasoned explanation, which is why they are more convincing [8]. In contrast, the lack of explainability in AI predictions creates a significant challenge, especially in cases where legal rights, personal liberty, or significant financial consequences are at stake. If AI models cannot provide a clear and comprehensible reasoning process, their use in judicial decision-making may undermine procedural fairness and due process. Therefore, while AI can assist in legal predictions, its opacity remains a major obstacle to its full adoption in judicial systems.
Another critical area of concern in AI-driven legal predictions is transparency. Unlike human judges, who provide detailed reasoning and justifications for their decisions, AI predictions often operate as black boxes, making it difficult to understand how and why a particular outcome was reached. Since AI models rely on complex algorithms and vast datasets, the decision-making process can be highly opaque, raising concerns about accountability and fairness [30] [31]. Human judicial decisions, by contrast, are grounded in legal reasoning and supported by written judgments, allowing parties to review, appeal, and challenge them on the basis of clear arguments and interpretations of the law. With human prediction, responsibility for an error can usually be identified; with AI prediction this is far harder, since programmers, judges, and legislators are all involved. This creates a legal vacuum, as existing laws do not adequately address who bears liability for AI-driven errors in judicial processes [32]. Some scholars propose creating AI accountability laws, but as of now no uniform global standard exists for regulating AI in the judiciary [31].
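One partial technical response to this opacity is post-hoc explanation: approximating a black-box model with a simple, human-readable one and inspecting the rules the approximation recovers. The sketch below uses synthetic data and a shallow decision tree as the surrogate; it illustrates the general technique rather than a full explainability framework such as LIME or SHAP, and such surrogates capture the black box only imperfectly.

```python
# Post-hoc explanation via an interpretable surrogate model.
# Synthetic data; the "black box" is a random forest stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 4))
y = ((X[:, 0] > 0) & (X[:, 1] + X[:, 2] > 0)).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to imitate the black box's predictions,
# then print the human-readable rules it recovered.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
# "Fidelity": how closely the surrogate mimics the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```

Even where such techniques succeed, the explanation is of the approximation, not of the black box itself, which is one reason legal scholars remain cautious about treating them as a substitute for judicial reasoning.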
In Pintarich v. Deputy Commissioner of Taxation (2018), an Australian court considered an AI-generated decision regarding tax obligations [28]. The ruling highlighted that an algorithmic decision without human oversight lacked legal validity, emphasizing that AI-generated rulings must be subject to legal reasoning and review. However, many legal systems lack explicit laws or judicial guidelines addressing how AI-generated decisions should be reviewed, leading to regulatory ambiguity. Judicial transparency is essential for ensuring public trust and upholding the rule of law, as it allows individuals to understand the rationale behind a ruling and provides a mechanism for legal scrutiny.
Thus, though AI predictions offer efficiency, consistency, and accuracy in handling large amounts of legal data, they lack contextual understanding, human discretion, and the ability to account for emotions, ethics, and non-textual information. On the other hand, human prediction, though sometimes inconsistent, remains crucial in legal decision-making due to its reliance on mental processes, moral reasoning, and case-by-case adaptability.
6. Societal Implications and Human Emotions
As AI technology advances, it is becoming increasingly capable of learning autonomously, developing reasoning abilities, and even simulating aspects of consciousness [6]. However, despite these advancements, AI still lacks a genuine connection with human emotions—a crucial element in the judicial decision-making process. Sourdin argues that AI can process vast amounts of legal data and deliver consistent rulings, but it lacks the ability to exhibit compassion, understand nuanced human experiences, or adapt to the emotional weight of legal proceedings [27].
Fast and Horvitz conducted a study on public perception of AI based on 30 years of data [33]. They found that since 2009, public attitudes toward AI have significantly evolved, with people becoming more optimistic than pessimistic. Another study revealed that the level of trust in AI is generally higher in East Asia compared to Western countries [34]. However, this positive perception is not fully reflected in the judicial sector, and there is still insufficient evidence to conclude that the general public is prepared to accept AI-generated decisions over those made by human judges.
One of the fundamental principles of justice is that it must not only be done but also be perceived and felt as being done [35]. Judicial decisions often involve sensitive and deeply personal matters, where empathy and emotional intelligence play a vital role in ensuring fairness. AI, while immune to fatigue and emotional distraction, is also devoid of the human touch necessary to address complex ethical dilemmas, social contexts, and individual circumstances. This limitation raises significant concerns about AI’s role in the legal system, particularly in areas where subjective judgment and moral reasoning are essential.
7. Strong Ethical Concerns in AI-Driven Judgments
AI operates on big data, raising significant concerns about how data is collected, processed, and utilized in predictive models. One of the primary issues is data ethics and privacy, as AI systems rely on vast datasets that may include sensitive or personal information. Rainer Mühlhoff argues that AI-driven legal predictions often compromise data ethics and privacy regulations, leading to potential violations of individual rights [32]. He highlights a twofold challenge: first, privacy risks, where AI can infer sensitive personal details through proxy data; and second, data protection concerns, as AI models compare information from numerous “data donors”, potentially exposing individuals to unintended data exploitation [32]. Similarly, Malek raises concerns about how AI training data is gathered and processed, emphasizing the risks of biased and discriminatory effects [36]. If the dataset used for training AI models is skewed, incomplete, or historically biased, the predictions generated may reinforce existing inequalities rather than provide fair and impartial legal outcomes. This underscores the need for transparency, accountability, and ethical oversight in AI-driven legal prediction systems [37].
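Mühlhoff’s first point, inference through proxy data, can be demonstrated with a toy model: even when a protected attribute is never collected, a correlated and seemingly innocuous feature often suffices to recover it. In the sketch below all data are simulated, and the proxy (a residential-area index) is hypothetical.

```python
# Toy demonstration of the proxy-data privacy risk: a protected
# attribute that was never "collected" is inferred from a correlated
# feature. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
protected = rng.integers(0, 2, size=n)          # never directly recorded
# Hypothetical proxy (e.g., residential-area index) correlated with it
proxy = protected + rng.normal(scale=0.7, size=n)

model = LogisticRegression().fit(proxy.reshape(-1, 1), protected)
acc = model.score(proxy.reshape(-1, 1), protected)
print(f"protected attribute recovered from proxy alone: {acc:.0%}")
```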
8. Conclusions
AI-driven legal prediction presents significant advancements in enhancing efficiency, consistency, and predictability in judicial decision-making. By processing vast amounts of legal data, AI has the potential to reduce case backlogs, minimize inconsistencies, and improve access to justice. However, despite these advantages, its limitations and criticisms cannot be overlooked. The accuracy and fairness of AI predictions heavily depend on the quality and neutrality of the data used, and biases within legal datasets can lead to flawed, discriminatory, or unjust outcomes. Additionally, concerns regarding accountability, transparency, and ethical implications pose challenges to the widespread adoption of AI in judicial processes.
Moreover, AI lacks the ability to replicate human emotions, moral reasoning, and contextual adaptability, which are fundamental aspects of legal decision-making. Justice is not just about applying laws mechanically—it also involves compassion, ethical deliberation, and an understanding of societal values. As Sourdin argues, human intelligence and discretion remain irreplaceable in ensuring that legal decisions are not only legally sound but also perceived as fair and just.
Therefore, while AI can serve as a valuable tool in legal systems, its role should be complementary rather than a complete replacement for human judgment. A balanced approach is necessary—one that leverages technological advancements while upholding the core principles of fairness, accountability, and due process. Only by integrating AI with human oversight and ethical safeguards can the legal system harness its benefits without compromising justice.