DOI: 10.5553/ELR.000275

Erasmus Law Review

Article

The Judicial Duty to State Reasons in the Age of Automation? The Impact of Generative AI Systems on the Legitimacy of Judicial Decision-Making

Keywords: judicial duty to state reasons, algorithmic and generative AI systems in courts, fair trial, legitimacy of judicial decision-making, judicial decision support systems
Suggested citation
Victoria Hendrickx, "The Judicial Duty to State Reasons in the Age of Automation? The Impact of Generative AI Systems on the Legitimacy of Judicial Decision-Making", Erasmus Law Review, 1 (incomplete), (2024):

    The steady reliance on algorithmic and artificial intelligence (AI) systems within judicial proceedings poses several risks to fundamental rights, including the judicial duty to state reasons. The judicial duty to state reasons refers to the obligation of judges to provide reasons whenever they rule in a case. Although this duty constitutes an essential component for the rule of law and the right to a fair trial, and pursues important normative goals, it has not been studied much – let alone in the age of automation. Through the analysis of the case study of generative AI systems assisting judges in their legal drafting, this article aims to explore how and to what extent such systems can affect the judicial duty to state reasons. To this end, the article focuses on the impact of generative AI systems on one of the underlying normative values of this duty, notably the legitimacy of judicial decision-making. The assessment shows that while generative AI systems can strengthen the legitimacy of judicial decision-making and thereby the judicial duty to state reasons, they simultaneously impede it in various ways. The article therefore briefly reflects on possible avenues to uphold the judicial duty to state reasons in the age of automation. It highlights the importance of AI literacy and explores the potential of enhanced transparency, accountability and legitimacy through a more robust duty to state reasons, while also acknowledging the limits of such approaches.


    • 1. Introduction

      This article examines the impact of generative AI systems on the judicial duty to state reasons and, in particular, on the underlying normative goals it pursues. The judicial duty to state reasons refers to the obligation of judges to provide reasons whenever they rule in a case. The duty constitutes a crucial component of the rule of law and the right to a fair trial enshrined in Article 6 of the European Convention on Human Rights (ECHR). Stating reasons plays an important role in promoting trust in courts and pursues several underlying normative goals, such as legitimacy, accountability and transparency of judicial decision-making. However, despite its importance, this duty has not been studied much, especially in the age of automation. While the normative goals pursued by the judicial duty to state reasons must be guaranteed in both human-led and algorithmic-led decision-making processes, there are certain features specific to the functioning of algorithms that warrant further scrutiny. Algorithmic and AI systems differ from human decision-making in the information and the multitude of features they can take into account, in the scale at which they operationalise and generalise, and in the way in which their outputs can be explained.1x L. Naudts, ‘Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making’ (PhD thesis KU Leuven, 2023). This may be all the more worrisome in high-stakes contexts such as the judiciary, where decisions may affect not only individuals but society at large. Additionally, judges serve important social functions in liberal democracies, such as resolving conflicts, fostering trust in the judiciary, and safeguarding fundamental rights.2x A.R. Reeves, ‘Do Judges Have An Obligation to Enforce the Law?: Moral Responsibility and Judicial-Reasoning’, 29(2) Law And Philosophy 159-87 (2009). Given the growing reliance of judges on algorithmic and AI systems,3x A recent UNESCO survey indicates that over 40% of judges interviewed in over 90 countries reported using algorithmic systems in their work-related activities. See, J.D. Gutiérrez, ‘UNESCO Global Judges’ Initiative: Survey on the Use of AI Systems by Judicial Operators’, UNESCO (2024), CI/DIT/2024/JI/01. it is thus necessary and especially timely to reflect on how the judicial duty to state reasons can remain safeguarded within the algorithmic context.
      Through the analysis of the case study of generative AI systems assisting judges in their legal drafting, this article aims to explore how and to what extent such systems can affect the judicial duty to state reasons. To this end, the article focuses on the impact of generative AI systems on one of the underlying normative values of this duty, namely the legitimacy of judicial decision-making.
      The article is structured as follows. Section 2 discusses the implementation of algorithmic and AI systems in the judiciary. Given their increasing use and the controversy surrounding them, the article specifically delves into generative AI systems used in courts to assist judges in their legal drafting, and highlights both the opportunities and the risks associated with their use. Section 3 delves into the concept of the judicial duty to state reasons. Although a comprehensive theory on the judicial duty to state reasons is currently lacking, the article outlines its essential elements, its development through case law and its importance in liberal democracies. Moreover, I propose to strengthen the analysis by developing the underlying normative goals pursued by the duty in more depth. Bringing in these normative goals not only strengthens the theory on the judicial duty to state reasons, but also makes it possible to examine the impact of algorithmic and AI systems on the duty from this perspective. I detect three underlying normative goals, namely transparency, accountability and legitimacy of judicial decision-making. In Section 4, I examine how and to what extent generative AI systems assisting judges in their legal drafting can affect the judicial duty to state reasons by focusing on one of its underlying normative goals, namely the legitimacy of judicial decision-making. Narrowing the scope down to legitimacy allows for a more in-depth, focused and feasible assessment. More importantly, unlike transparency and accountability,4x I. Carnat, ‘Addressing the Risks of Generative AI for the Judiciary: The Accountability Framework(s) Under the EU AI Act’, SSRN (2024), https://doi.org/10.2139/ssrn.4887438; I. Cheong, A. Caliskan & T. Kohno, ‘Safeguarding Human Values: Rethinking US Law for Generative AI’s Societal Impacts’, AI Ethics (2024), https://doi.org/10.1007/s43681-024-00451-4. the normative goal of legitimacy has received comparatively little attention. The section looks at both the positive and the negative impact of such systems on that legitimacy. In Section 5, I briefly reflect on possible avenues for continuing to safeguard the judicial duty to state reasons in the age of automation. Section 6 concludes the article.
      The research in this article is delineated in three ways. First, the research focuses exclusively on algorithmic and AI systems used by courts, without extending to other judicial actors. Second, my assessment concerns the specific case study of generative AI systems used in courts. While several conclusions can likely be extended to other algorithmic and AI systems as well, others will be relevant only to this case study. Finally, I focus on the European level, including the case law of the European Court of Human Rights (ECtHR) and the related literature – although, for inspiration, I also refer to illustrations from other countries where the use of AI in courts is more prevalent.

    • 2. Emergence of Algorithmic and AI Systems in Courts

      2.1 Algorithmic and AI Systems in Courts

      The use of algorithmic and AI systems in courts has grown significantly over the past decades, both in civil and criminal cases.5x Already in the 1980s, research was conducted on the automation of court applications, although mostly focusing on rule-based expert systems, referring to systems that could reason on the basis of a predetermined set of rules, often expressed in if-then statements. See, R.E. Susskind, ‘Expert Systems in Law: A Jurisprudential Inquiry’, 29(2) The Modern Law Review 168-94 (1986); D. Kolkman, F. Bex, N. Narayan & M. van der Put, ‘Justitia ex machina: The Impact of An AI System on Legal Decision-making and Discretionary Authority’, 11(2) Big Data & Society (2024), https://doi.org/10.1177/20539517241255101. Whereas the use of such systems initially revolved around digitalising the judiciary and assisting with administrative tasks, such as online case management, communication between court personnel, and the automatic allocation of cases to competent courts, there has been a noticeable trend towards algorithmic and AI systems helping with substantive tasks in the judicial decision-making process.6x N. Smuha and V. Hendrickx, ‘AI and the Administration of Justice: Taking “Precedent Analysis” as a Use Case to Assess the Adequacy of the AI Act’, The Law, Ethics & Policy of AI Blog (2 October 2023), https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-administration-justice; T. Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-making’, 41(4) UNSW Law Journal 1114-33 (2018); D. Reiling, ‘Court and AI’, 11(2) International Journal for Court Administration 1-10 (2020); CEPEJ, ‘Possible Use of AI to Support the Work of Courts and Legal Professionals’, https://www.coe.int/en/web/cepej/tools-for-courts-and-judicial-professionals-for-the-practical-implementation-of-ai (last visited 7 May 2024); B. Custers, ‘AI in Criminal Law: An Overview of AI Applications in Substantive and Procedural Criminal Law’, in B. Custers and E. Fosch-Villaronga (eds.), Law and Artificial Intelligence. Information Technology and Law Series (2022) 35, at 205. For instance, in several countries, judges are relying on systems that perform risk assessments to predict the likelihood of someone reoffending.7x M. Medvedeva, M. Wieling & M. Vols, ‘Rethinking the Field of Automatic Prediction of Court Decisions’, 31 Artificial Intelligence and Law 195-212 (2023); M. A. Malek, ‘Criminal Courts’ Artificial Intelligence: The Way It Reinforces Bias and Discrimination’, 2 AI and Ethics 1-13 (2022). These tools are not always AI-based; many rely on purely statistical models that analyse input variables, such as criminal history, socioeconomic status or behavioural patterns, to generate risk scores.8x W.T. Miller, C.A. Campbell, J. Papp, & E. Ruhland, ‘The Contribution of Static and Dynamic Factors to Recidivism Prediction for Black and White Youth Offenders’, 66(16) International Journal of Offender Therapy and Comparative Criminology 1779-1795 (2021). In contrast, AI-based risk assessment models use general-purpose learning algorithms to detect patterns in unstructured and complex data (a simplified illustration of this distinction is sketched below).9x D. Bzdok, N. Altman, & M. Krzywinski, ‘Statistics Versus Machine Learning’, 15(4) Nature Methods 233 (2018); G. van Dijck, ‘Predicting Recidivism Risk Meets AI Act’, European Journal on Criminal Policy and Research 407–23 (2022). Similarly, judges have been using systems to calculate average sentences for crimes or to recommend appropriate sentences based on (comparable) circumstances.10x I.
Taylor, ‘Justice by Algorithm: The Limits of AI in Criminal Sentencing’, 42 Criminal Justice Ethics 193-213 (2023); Judicial Commission of New South Wales, ‘Judicial Information Research System (JIRS)’, https://www.judcom.nsw.gov.au/judicial-information-research-system-jirs/ (last visited 7 May 2024); G. Sartor, et al., ‘Thirty Years of Artificial Intelligence and Law: The Second Decade’, 30(4) Artificial Intelligence and Law 521-57 (2022). These systems too can be either statistical or AI-driven. More recently, emerging generative AI systems, like ChatGPT, are gradually being used to assist judges in legal drafting. Generative AI systems are based on large language models (LLMs),11x T. Taulli, ‘Large Language Models’, in T. Taulli (ed.), Generative AI 93-125 (2023). which refer to AI systems trained on large amounts of data – specifically legal data in this context – to produce human-like text, such as legal recommendations.12x For a detailed explanation of LLMs, see P. Kumar, ‘Large Language Models (LLMs): Survey, Technical Frameworks, and Future Challenges’, 57(260) Artificial Intelligence Review 1-51 (2024). In 2023, a Colombian judge was the first to rely on ChatGPT in drafting a judgment.13x Labour Circuit of Cartagena, case 032, 30 January 2023, https://forogpp.com/wp-content/uploads/2023/01/sentencia-tutela-segunda-instancia-rad.-13001410500420220045901.pdf; L. Taylor, ‘Colombian Judge Says He Used ChatGPT in Ruling’, The Guardian 3 February 2023, https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling. Faced with a case on whether the parents of an autistic child were entitled to healthcare benefits, the judge asked ChatGPT several legal questions regarding the costs and reimbursement of medical treatment. Similarly, an Indian High Court judge used ChatGPT to help justify his decision to deny bail to a man accused of assault and murder. To that end, he asked for a summary of case law on the issue.14x A. Smith, A. Moloney & A. Asher-Schapiro, ‘AI in the Courtroom: Judges Enlist ChatGPT Help, Critics Cite Risks’, CS Monitor 30 May 2023, https://www.csmonitor.com/USA/Justice/2023/0530/AI-in-the-courtroom-Judges-enlist-ChatGPT-help-critics-cite-risks (last visited 7 May 2024). In May 2024, an American judge experimented with ChatGPT to interpret a key legal term15x The case concerned a convenience store robbery. The judge asked ChatGPT what the ordinary meaning of ‘physically restrained’ generally referred to. This term was crucial to the case’s outcome, as one of the key questions was whether the robber had ‘physically restrained’ one of the victims during the robbery. crucial to the case’s outcome.16x United States Court of Appeals, Eleventh Circuit, No. 23-10478, 9 May 2024, https://fingfx.thomsonreuters.com/gfx/legaldocs/jnpwanznepw/09062024newsom.pdf. Closer to home, a UK Court of Appeal judge admitted to using ChatGPT to summarise an area of law and to write part of his judgment.17x H. Farah, ‘Court of Appeal Judge Praises ‘Jolly Useful’ ChatGPT After Asking It for Legal Summary’, The Guardian 15 September 2023, https://www.theguardian.com/technology/2023/sep/15/court-of-appeal-judge-praises-jolly-useful-chatgpt-after-asking-it-for-legal-summary (last visited 7 May 2024). More recently, a Dutch court used ChatGPT to calculate damages by asking it factual questions.18x Rechtbank Gelderland, ECLI:NL:RBGEL:2024:3636, 7 June 2024, https://linkeddata.overheid.nl/front/portal/document-viewer?ext-id=ECLI:NL:RBGEL:2024:3636.
In September 2024, Ukraine revised its Code of Judicial Ethics to explicitly permit judges to use AI, including generative AI, in their professional duties.19x K. Topolsky, ‘Ukrainian Judges Set to Integrate AI Tools into Their Workflow’, elblog.pl 11 September 2024, https://elblog.pl/2024/09/11/ukrainian-judges-set-to-integrate-ai-tools-into-their-workflow/. The emerging guidelines on the use of generative AI systems in courts, such as those issued in the UK20x UK Courts and Tribunals Judiciary, ‘Artificial Intelligence (AI) Guidance for Judicial Office Holders’, 12 December 2023, https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf. or by the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ),21x CEPEJ, ‘Information Note: Use of Generative Artificial Intelligence (AI) By Judicial Professionals in a Work-related Context’, 12 February 2024, https://rm.coe.int/cepej-gt-cyberjust-2023-5final-en-note-on-generative-ai/1680ae8e01. suggest that similar use in other countries may soon follow. This is further supported by a recent UNESCO survey, which revealed that 6 to 9% of judges interviewed across more than 90 countries reported using ChatGPT or another chatbot in work-related activities on a daily or weekly basis.22x Gutiérrez, above n. 3. Although not all judges use ChatGPT daily, these developments indicate that reliance on generative AI is becoming increasingly common and is no longer purely anecdotal.
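      To make the distinction drawn above between hand-crafted statistical risk scores and machine-learned models more concrete, the following minimal sketch contrasts the two approaches. It is purely illustrative: the variables, weights and data are invented for the example and do not correspond to any tool actually deployed in courts.

```python
# Illustrative sketch only: contrasting an actuarial-style risk score with a learned model.
# All variables, weights and data below are invented and do not correspond to any
# deployed risk assessment tool (such as COMPAS).
import numpy as np
from sklearn.linear_model import LogisticRegression

# (1) Statistical/actuarial model: fixed, human-chosen weights over a few input variables.
def actuarial_risk_score(prior_convictions: int, age: int, employed: bool) -> float:
    raw = 0.4 * prior_convictions - 0.02 * age - 0.5 * int(employed)
    return float(1 / (1 + np.exp(-raw)))  # map the weighted sum to a 0-1 'risk' value

# (2) Machine-learning variant: the weights are *learned* from historical outcome data,
# so the representativeness (and biases) of that data directly shape the predictions.
X = np.array([[3, 25, 0], [0, 45, 1], [5, 19, 0], [1, 38, 1]])  # toy feature rows
y = np.array([1, 0, 1, 0])                                      # toy reoffending labels
learned_model = LogisticRegression().fit(X, y)

print(actuarial_risk_score(prior_convictions=3, age=25, employed=False))
print(learned_model.predict_proba([[3, 25, 0]])[0, 1])  # learned probability of reoffending
```

      The difference matters for the duty to state reasons: in the first model the weighting can be stated explicitly, whereas in the second it is an artefact of the training data – a point returned to in Section 4.3.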
      In addition to these algorithmic and AI systems supporting judges, often referred to as judicial decision support systems (JDSS), algorithmic systems might also (entirely) replace judges and issue judgments autonomously; such systems are referred to as automated decision-making systems (ADMS). Consider, for instance, China’s robot judges that render autonomous decisions.23x N. Wang and M.Y. Tian, ‘“Intelligent Justice”: AI Implementations in China’s Legal Systems’, in A. Hanemaayer (ed.), Artificial Intelligence and Its Discontents. Social and Cultural Studies of Robots and AI (2022); N. Wang, ‘“Black Box Justice”: Robot Judges and AI-based Judgement Processes in China’s Court System’, IEEE International Symposium on Technology and Society 58-65 (2020). However, considering the current inability of algorithmic systems to justify the outcome of decisions in clear terms,24x J. Morison and A. Harkens, ‘Re-engineering Justice? Robot Judges, Computerized Courts and (Semi) Automated Legal Decision-making’, 39(4) Legal Studies 618-35 (2019). their lack of transparency, concerns around trust, and the general reluctance and conservative attitude of judges towards technology,25x T. Sourdin, Judges, Technology and Artificial Intelligence (2021); M. Van der Put, ‘Kunstmatige intelligentie bij rechterlijke oordeelsvorming’ (PhD thesis Tilburg University, 2022). ADMS are not deemed effectively deployable in the European judiciary in the foreseeable future. This article therefore focuses exclusively on algorithmic and AI systems that assist and support judges in their decision-making process, while leaving the final decision to them.

      2.2 Opportunities and Risks of Algorithmic and AI Systems in Courts

      If used responsibly and in accordance with the law, the transformational capacity of JDSS could bring about significant opportunities for the judiciary. Their deployment could mainly generate efficiency gains and ease the workload of judges by optimising their work processes.26x M. Zalnieriute and F. Bell, ‘Technology and the Judicial Role’, in G. Appleby and A. Lynch (eds.), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia 1-21 (2021); van der Put above n. 25. Better time management could reduce court fees and overall costs and prevent excessive delays. It is also argued that, unlike their human counterparts, JDSS never fatigue and can process more data more rapidly.27x D. Barysé and R. Sarel, ‘Algorithms in the Court: Does It Matter Which Part of the Judicial Decision-making is Automated?’ Artificial Intelligence and Law (2024); M. Hildebrandt, ‘The Issue of Bias’, in M. Pelillo and T. Scantamburlo (eds.), Machines We Trust. Perspectives on Dependable AI (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3497597. Moreover, these efficiency gains could also strengthen substantive rights by, for example, lowering the threshold for individuals to access justice, thanks to reduced financial barriers and shortened waiting times.28x Sourdin, above n. 25.
      At the same time, however, technologies cannot be considered a panacea for all the ills of justice systems. The deployment of JDSS raises diverse ethical and legal challenges, which are often intertwined.29x B. Mittelstadt et al., ‘The Ethics of Algorithms: Mapping the Debate’, 3(2) Big Data & Society 1-21 (2016). The growing body of literature, research and real-world examples shows the many and diverse risks arising from the use of algorithmic and AI systems in the judiciary. Scholars are, for example, exploring risks related to biased outcomes,30x Hildebrandt, above n. 27. the opacity of systems and the consequent lack of transparency,31x U. Franke, ‘First-and Second-Level Bias in Automated Decision-making’, 35(2) Philosophy & Technology 21 (2022). and the unclear attribution of accountability.32x L. Diver, ‘Digisprudence: The Design of Legitimate Code’, 13(2) Law, Innovation and Technology 325-54 (2021). The adverse impact of algorithmic systems on the rule of law is also the subject of current scholarly research,33x N. Smuha, Algorithmic Rule by Law: How Algorithmic Regulation in the Public Sector Erodes the Rule of Law (2024). focusing on questions about judicial independence and impartiality, minimising the risk of automation bias, and the threshold for allowing AI-based evidence into courts.34x V. Dessers and P. Valcke, ‘Judicial Analytics on Trial’, 27(6) Maastricht Journal of European and Comparative Law 759-73 (2020); K. Quezada-Tavárez, P. Vogiatzoglou & S. Royer, ‘Legal Challenges in Bringing AI Evidence to the Criminal Courtroom’, 12(4) New Journal of European Criminal Law 531-51 (2021).
      A key issue within the broader debate on the rule of law, which – unlike the issues discussed so far – has hitherto been under-examined, is the question of how and to what extent the judicial duty to state reasons, including the normative goals it fulfils in a liberal democratic society, is affected when judges start relying on algorithmic systems in their decision-making process. The judicial duty to state reasons refers to the obligation of judges to provide reasons whenever they rule in a case. Although studies have started to emerge on the ability of JDSS to affect the judicial duty to state reasons,35x A. Albright, ‘The Hidden Effects of Algorithmic Recommendations’, github (2024), https://apalbright.github.io/pdfs/albright-algo-recs-PAPER.pdf; T. Araujo et al., ‘In AI we trust? Perceptions about Automated Decision-making by Artificial Intelligence’, 35 AI & Society 611-23 (2020); Dessers and Valcke, above n. 34; N. Chronowski, K. Kálmán and B. Szentgáli-Tóth, ‘Artificial Intelligence, Justice, and Certain Aspects of Right to a Fair Trial’, 10(2) Acta Universitatis Sapientiae, Legal Studies 169-89 (2021); L. Beckman, J. Hultin Rosenberg, & K. Jebari, ‘AI and Democratic Legitimacy’, 39 AI & Society 975-84 (2022). there is a notable gap in understanding precisely what this impact consists of and how it affects the normative goals this duty is meant to fulfil. While the importance of this duty is generally accepted – as will be discussed in the next section –36x J. Rawls, A Theory of Justice (1971); Morison and Harkens, above n. 24; R. Simmons, ‘Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System’, 52 University of California Davis Law Review 1067-118 (2021). a comprehensive theory on the duty is currently lacking. However, without a more thorough understanding of the importance of this duty and the goals it pursues, it is not possible to assess the ways in which judges’ reliance on JDSS can affect it – an assessment that is nevertheless crucial given the fast pace at which the technology is being rolled out in judiciaries across the world. Therefore, the subsequent part of this article provides a concise conceptualisation of the judicial duty to state reasons.

    • 3. Conceptualising the Judicial Duty to State Reasons

      3.1 The Judicial Duty to State Reasons

      The judicial duty to state reasons refers to the obligation of judges to provide reasons whenever they rule in a case. The duty constitutes an essential component of the rule of law and the right to a fair trial enshrined in Article 6 ECHR. It has a procedural nature and is also referred to as the right to a reasoned decision. There is no definitive definition of what the duty entails; Frederick Schauer’s definition of reason-giving can therefore serve as a starting point, namely ‘the explicit act of offering a justification or explanation for the result reached’.37x F. Schauer, ‘Giving Reasons’, 47(4) Stanford Law Review 633-59 (1995). Since Schauer considers reasons a condition of rationality, decisions without reasons are consequently deficient.38x For a more philosophical analysis of the right to justification, see M.F. Ibsen, ‘Rainer Forst’s Justification Paradigm of Critical Theory’, in M.F. Ibsen (ed.), A Critical Theory of Global Justice: The Frankfurt School and World Society 313-341 (2023). In most civil law countries, the duty is a statutory obligation.39x For instance, Art. 149 of the Belgian Constitution provides that ‘Every judgement should be reasoned’. In common law countries, there exists no general duty to state reasons for court decisions – although there has been some discussion on this topic over the years.40x H.L. Ho, ‘The Judicial Duty to Give Reasons’, 20(1) Legal Studies 42-65 (2006); M. Cohen, ‘When Judges Have Reasons Not to Give Reasons: A Comparative Law Approach’, 72(2) Washington and Lee Law Review 483 (2015); J. Bosland and J. Gill, ‘The Principle of Open Justice and the Judicial Duty to Give Public Reasons’, 38(2) Melbourne University Law Review 482-524 (2014); M. Weinberg, ‘Adequate, sufficient and excessive reasons’, in Handbook for Judicial Officers, https://www.judcom.nsw.gov.au/publications/benchbks/judicial_officers/adequate_sufficient_and_excessive_reasons.html (last visited 7 May 2024).
      While a comprehensive theory on the judicial duty to state reasons is currently lacking, some insights can be found in the literature and the case law of the ECtHR. The judicial duty to state reasons is a formal duty, meaning that it only requires that a judgment be reasoned, regardless of its correctness or accuracy.41x Vetrenko/Moldova, ECHR (18 May 2010), No. 36552/02. This stands in contrast to a substantive duty to state reasons, under which judges are obliged to provide a more thorough and robust justification of their decisions.42x For instance, this is the case in Brazil and Mexico. See Art. 489 §1 of the Brazilian code of civil procedure, https://www.lawyerinbrazil.com/wp-content/uploads/2019/06/BRAZILIAN_CODE_OF_CIVIL_PROCEDURE-1.pdf, and Art. 402 of the Mexican National Code of Criminal Procedures, https://www.wipo.int/wipolex/en/legislation/details/17432. As a main thrust of the formal judicial duty to state reasons, judges are obliged to give reasons for their judgments but are not required to answer every argument in detail. As long as the essential arguments of the parties – arguments that are significant and capable of influencing the outcome of the proceedings – have been answered, judges have fulfilled their duty.43x Ruiz Torija/Spain, ECHR (9 December 1994), No. 18390/91. A violation occurs when judgments lack adequate reasons. This may arise, for example, when judges remain silent with regard to evidence and statements that are crucial elements to acquit or convict a person,44x Grădinar/Moldova, ECHR (8 April 2008), No. 7170/02. or when the reasons appear prima facie contradictory.45x Hirvisaari/Finland, ECHR (27 September 2001), No. 49684/99. Additionally, the extent of this duty varies according to the nature of the decision and the circumstances of the case.46x Higgins and Others/France, ECHR (19 February 1998), No. 134/1996/753/952. For instance, reasons may be deemed sufficient when judges simply endorse the reasons of the lower court’s decision, whereas in other circumstances this will not suffice. Sufficient concrete procedural safeguards and parties’ ability to understand the verdict may counterbalance the lack of reasons.47x Helle/Finland, ECHR (19 December 1997), No. 157/1996/776/977; Rusishvili/Georgia, ECtHR (30 September 2022), No. 15269/13; Okropiridize/Georgia, ECHR (7 September 2023), No. 43627/16. There are some particularities with regard to jury trials in criminal cases, where verdicts often lack adequate reasons. Nevertheless, whether the lack of reasons in a jury verdict renders such a trial unfair depends on the procedure as a whole. Again, sufficient concrete procedural safeguards and parties’ ability to understand the verdict may counterbalance the lack of reasons.48x Rusishvili/Georgia, ECHR (30 September 2022), No. 15269/13; Okropiridize/Georgia, ECHR (7 September 2023), No. 43627/16. The ECtHR thus provides some parameters and interpretative guidance, but also leaves member states room for interpretation and a margin of appreciation to further concretise the duty in their legal orders.

      3.2 The Importance of the Judicial Duty to State Reasons

      The imperative to thoroughly investigate the impact of JDSS on the judicial duty to state reasons emanates from scholarship and case law indicating that this duty is essential to the rule of law in liberal democracies.49x Popov/Moldova, ECHR (6 March 2006), No. 19960/04; Morison and Harkens, above n. 24; Simmons, above n. 36. Several goals play a role in this regard. For instance, stating reasons is recognised as a safeguard against arbitrary powers, fostering confidence in the judiciary and enhancing the transparency of justice systems.50x Rusishvili/Georgia, ECHR (30 September 2022), No. 15269/13. The duty reflects the proper administration of justice and underpins other fundamental rights, such as the right to a fair trial and the right of defence.51x Hirvisaari/Finland, ECHR (27 September 2001), No. 49684/99. By stating reasons, judges also demonstrate that parties’ arguments have been heard, enabling parties to make an informed decision about whether to appeal.52x Perez/France, ECHR (12 February 2004), No. 47287/99; B. Maes, De motiveringsplicht van de rechter (1990); W. Van Gerven, De taak van de rechter in een West-Europese democratie (2013). More insight into judges’ reasoning could also lead to greater acceptance of judicial decisions and contribute to legal certainty.53x Melnic/Moldova, ECHR (14 February 2007), No. 6923/03; H. Mercier and D. Sperber, ‘Why do humans reason?’, 34(2) Behavioral and Brain Sciences 57-111 (2011); P. Leanza and O. Pridal, Right to a Fair Trial: Article 6 of the European Convention on Human Rights (2014). Giving reasons is also a way to improve the quality of decisions.54x Schauer, above n. 37; M. Shapiro, ‘The Giving Reasons Requirement’, 1992(1) University of Chicago Legal Forum 179-220 (1992).
      In addition, the judicial duty to state reasons constitutes a core feature of procedural justice. As opposed to substantive justice, which focuses on the fairness of the final decision or the allocation of benefits and burdens, procedural justice is concerned with the fairness of the processes used to reach that decision or allocation.55x D. Miller, ‘Justice’, Stanford Encyclopedia of Philosophy 6 August 2021, https://plato.stanford.edu/entries/justice/. It concerns the idea of fair processes and treatment, and fair decision-making by authorities. In the first place, procedural justice is considered an end in itself, ensuring that parties involved in the case – and the public at large – feel they have been treated fairly. This perspective is crucial because how individuals view their treatment by courts can significantly affect their satisfaction with the outcome. It emphasises that both the final decision and the process leading to it matter.56x E. Sargeant, J. Barkworth & N.S. Madon, ‘Procedural Justice in the Criminal Justice System’, Oxford Research Encyclopedia of Criminology (28 September 2020), https://doi.org/10.1093/acrefore/9780190264079.013.635. Procedural justice is also a means to yield other outcomes. Focusing on the procedural aspects of dispute resolution may prove useful in the sense that, where it is often difficult for people to agree on substantive outcomes or on the attribution of costs and benefits, it might be easier to ‘simply’ focus on whether procedures are fair or not.57x Simmons, above n. 36. Fair procedures also contribute to substantive justice, i.e. fair outcomes. Fair procedures serve as constraints against erroneous decisions in that they operate as counterbalances to the personal interests of decision-makers.58x L. Alexander, ‘Are Procedural Rights Derivative Substantive Rights?’, 17(1) Law and Philosophy 19-42 (1998); T.C. Grey, ‘Procedural Fairness and Substantive Rights’, 18 Nomos 182-205 (1977). Procedural justice emancipates parties by allowing them to take control and have a say in the proceedings, despite not making the final decision themselves.59x R. Vermunt and H. Steensma, ‘Procedural Justice’, in C. Sabbagh and M. Schmitt (eds.), Handbook of Social Justice Theory and Research 219-36 (2016). Moreover, procedural justice is instrumental for trust in and the legitimacy of courts, as well as for compliance with law and judgments.60x T.R. Tyler, ‘Psychological Perspectives on Legitimacy and Legitimation’, 57(1) Annual Review of Psychology 375-400 (2006); T.R. Tyler, ‘Procedural Justice, Legitimacy, and the Effective Rule of Law’, 30 Crime and Justice 283-357 (2003); T.R. Tyler, ‘Procedural Justice and the Courts’, 44(1/2) Court Review: The Journal of the American Judges Association 26-31 (2007). Given its inherently procedural nature, the judicial duty to state reasons thus attains great importance for the reasons outlined above.

      3.3 The Normative Goals of the Judicial Duty to State Reasons

      To strengthen the theory on the judicial duty to state reasons, I propose to additionally develop the underlying normative goals pursued by this duty in more detail. A more sophisticated and granular account of these normative goals helps in assessing the impact of generative AI systems on the duty. I detect three normative goals: transparency, accountability and legitimacy of judicial decision-making – each of which consistently emerges as a central theme in both literature and case law.61x Please note, other goals might be detected as well, or they might be structured differently.
      First of all, reason-giving is considered a device to enhance transparency. This goal can be linked to Bentham’s theory of justice, in which he states that publicity is the soul of justice.62x J. Bentham, Draught of a New Plan for the Organisation of the Judicial Establishment in France (1790); G. Postema, ‘The Soul of Justice: Bentham on Publicity, Law, and the Rule of Law’, in X. Zhai and M. Quinn (eds.), Bentham’s Theory of Law and Public Opinion 267-82 (2013). Transparency and publicity serve as tools against opacity, facilitate explainability, procedural justice and fairness, and allow for judicial review, evaluation, audit and vetting.63x Shapiro, above n. 54; Simmons, above n. 36; Beckman, above n. 35. From an epistemological viewpoint, transparency is crucial in the sense that giving reasons enables parties to understand the decision and to know when they can best exercise their right of appeal.64x V.M. Dryer, ‘The Epistemology and Science of Justified Reason’, 50(2) Philosophia 503-32 (2021); R. Binns et al., ‘“It’s Reducing a Human Being to a Percentage”; Perceptions of Justice in Algorithmic Decisions’, (2018), https://arxiv.org/abs/1801.10408v1. Transparency also contributes to legal certainty.65x M. Hazelhorst, ‘The Right to a Fair Trial in Civil Cases’, in M. Hazelhorst (ed.), Free Movement of Civil Judgements in the European Union and the Right to a Fair Trial 123-75 (2017). Second, reason-giving is an accountability-enhancing mechanism. As judges are not held accountable through elections, their accountability stems from reasoned explanations.66x Cohen, above n. 40. In this regard, reference can be made to Bentham, who argues that reason-giving protects against the arbitrary exercise of power and against the abuse of judicial discretion. Reason-giving also enables monitoring and public oversight, allows scrutiny and hence maximises responsibility.67x Bentham, above n. 62; Postema, above n. 62; Shapiro, above n. 54. Requiring officials to provide reasons constrains their (‘sinister’) motivations.68x M. Cohen, ‘Sincerity and Reason-Giving: When May Legal Decision Makers Lie’, 59 DePaul L. Rev 1091-150 (2010). Third, several liberal democratic political theories emphasise the idea of public justification as a key requirement for the legitimacy of courts and democracy.69x Rawls, above n. 36; L. Fuller, ‘Forms and Limits of Adjudication’, 92(2) Harvard Law Review 353-409 (1978); S. F. D’Agostino and G.F. Gaus, ‘Public Reason’, 4 Ethical Theory and Moral Practice 91-92 (2001); R. Forst, The Right to Justification (2013); Beckman, above n. 35. The definition of legitimacy has long been the subject of debate, and a uniform consensus is still lacking.70x M.J. Warning, ‘Concepts of Legitimacy’, in M.J. Warning (ed.), Transnational Public Governance. Transformations of the State 179-89 (2009). Nevertheless, Max Weber’s perspective offers valuable insights into how to understand legitimacy. He defines it as ‘an internal sense of moral obligation to obey authority’, wherein legitimacy is an attribute bestowed by the subjects of power (in this case, citizens) upon the holders of power (in this case, courts).71x M. Weber, The Theory of Economic and Social Organization (1947); O.M. Akinlabi, ‘Understanding Legitimacy in Weber’s Perspectives and in Contemporary Society’, in O.M. Akinlabi (ed.), Police-Citizen Relations in Nigeria. Palgrave’s Critical Policing Studies 11-24 (2022); R. Cotterrell, ‘Legality and Legitimacy: The Sociology of Max Weber’, in R.
Cotterrell (ed.), Law’s Community: Legal Theory in Sociological Perspectives 134-59 (2012). Another key definition stems from Tom R. Tyler, who describes legitimacy as ‘a property of an authority or institution that leads people to feel that that authority or institution is entitled to be deferred to and obeyed’. He also underscores that legitimacy depends on how society perceives the authority.72x J. Sunshine and T.R. Tyler, ‘The Role of Procedural Justice and Legitimacy in Shaping Public Support for Policing’, 37(3) Law & Society Review 513-48 (2003). For the purpose of this article, I draw on both definitions and understand legitimacy as the ‘property of a legal authority, such as a court, of being worthy of its institutional role, which leads people to believe the authority is appropriate, proper and just’. As a consequence, the legitimacy of courts is fundamentally tied to how they are perceived by society.
      The judicial duty to state reasons pursues this normative goal of legitimacy in multiple ways. By providing reasons, judges justify their authority and the decisions they make. Clear reason-giving allows parties and the general public to understand the logic, evidence and legal principles underlying judicial decisions, making the process appear fairer and more just, thereby reinforcing courts’ legitimacy. Stating reasons helps demonstrate that judges are guided by legal principles rather than personal biases or external pressure. Proper reasoning thereby fosters perceptions of legitimacy and of entitlement to obedience.73x T.R. Tyler, ‘Procedural Justice and the Courts’, 44(1/2) Court Review: The Journal of the American Judges Association 26-31 (2007). Legitimacy, in turn, ensures acceptance and tolerance of outcomes, and is crucial for making parties and society respect and execute judicial decisions. It can also foster social trust in courts and public confidence in the judicial system. The widespread public belief that a court is a legitimate political institution facilitates acceptance of (controversial) court decisions.74x T.R. Tyler, Why People Obey the Law (1990); Tyler (2006), above n. 60; J. Ulenaers, ‘The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?’ 11(2) Asian Journal of Law and Economics 1 (2022); Chronowski et al., above n. 35; A. Mentovich, J. Prescott & O. Rabinovich-Einy, ‘Legitimacy and Online Proceedings: Procedural Justice, Access to Justice, and the Role of Income’, 57(2) Law & Society Review 189-213 (2023).
      Please note that K. Martin and A. Waldman argue that not all procedural rights have the same effect on legitimacy. In their study, conducted in the context of firms’ legitimacy, they found that only an appeal to a human authority tends to legitimise algorithm-based decisions. See K. Martin and A. Waldman, ‘Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions’, 183 Journal of Business Ethics 653-670 (2023).

      Last, as mentioned, the legitimacy of judicial authorities is directly linked to procedural justice. According to the process-based model of regulation proposed by Tom R. Tyler, procedural justice is linked to legitimacy, and legitimacy to compliance. Legitimacy is obtained and maintained by adhering to procedural rights. Procedural justice reinforces the perceived legitimacy of legal authorities, which in turn promotes citizens’ compliance. People believe authorities are more legitimate when they see their actions as consistent with fair procedures. While procedural justice is not the only basis for legitimacy, it is an influential one; legitimacy thus has a normative status: it can shape and influence society.75x D. Johnson et al., ‘Public Perceptions of the Legitimacy of the Law and Legal Authorities: Evidence from the Caribbean’, 48 Law & Society Review 947-78 (2014); I. Feygina and T.R. Tyler, ‘Procedural Justice and System-Justifying Motivations’, in J. Jost et al. (eds.), Social and Psychological Bases of Ideology and System Justification 351-70 (2009).

    • 4. Impact of Generative AI Systems on the Duty to State Reasons

      4.1 Setting the Scene: Research Objective

      In light of the importance of the judicial duty to state reasons and its underlying normative goals, and if we are to continue to safeguard these in the age of automation, it is imperative to assess how and to what extent the duty and its goals are affected by JDSS used in courts. Despite existing research indicating that there can be an impact, there remains a gap in understanding what exactly that impact consists of and how far it reaches. Therefore, this article examines how and to what extent generative AI systems assisting judges in their legal drafting can affect the judicial duty to state reasons by focusing on one of the underlying normative goals of the duty, namely the legitimacy of judicial decision-making.
      I argue that while generative AI systems may in some instances enhance the legitimacy of judicial decision-making and, by extension, the judicial duty to state reasons, they may at the same time negatively affect both. The extent of the negative impact will depend on how judges concretely use the generative AI system.

      4.2 How Generative AI Systems Can Enhance the Legitimacy of Judicial Decision-Making

      The legitimacy of courts depends on how society perceives courts and judges. When judges rely on generative AI systems to assist them in summarising case law or facts, they can allocate more time to critical tasks in the judicial decision-making process, such as justifying their decisions. Similarly, when judges rely on generative AI systems to assist them in resolving legal questions, formulating justifications and drafting judgments, the judicial decision-making process can be expedited. As both the Colombian and the Indian judges emphasised, generative AI systems allow judges to generate well-organised and structured information that rapidly streamlines court processes. In turn, more consistent and streamlined processes can be beneficial for legitimacy, as unpredictable judges can erode legal stability.76x G. Yalcin et al., ‘Perceptions of Justice by Algorithms’, 31 Artificial Intelligence and Law 269-92 (2023). Because generative AI systems can speed up judicial processes and alleviate persistent backlogs,77x European Commission, ‘The 2023 EU Justice Scoreboard’, COM(2023) 309, https://commission.europa.eu/system/files/2023-06/Justice%20Scoreboard%202023_0.pdf. cases can be heard faster, which in turn strengthens substantive rights, such as the right of access to justice. In this way, the efficiency gains and the bolstering of substantive rights through the deployment of generative AI systems can enhance individuals’ perception of the legitimacy of judicial decision-making and, by extension, reinforce the judicial duty to state reasons.
      Besides accelerating the administration of justice, generative AI systems can also be deployed as so-called ‘virtual sparring partners’ to improve the quality of judges’ reasons. In that capacity, judges can interact with generative legal chatbots to check their line of reasoning and argumentation – similarly to how they would do this with a fellow judge. Judges can submit their arguments and reasoning to the system, which critically examines them and provides feedback (a minimal sketch of what such an interaction could look like is given below). In this way, judges are encouraged to reflect on their reasons and their judicial decision-making. One can thus argue that judges will consider their decisions more thoughtfully, including the reasons for those decisions. The increased robustness of reasons can foster greater trust in, and legitimacy of, the functioning of the judiciary. This would be similar to other new technologies, such as online proceedings,78x Mentovich et al., above n. 74. which have been shown to improve the legitimacy of courts. The US judge who experimented with ChatGPT to explore the meaning of ‘physically restrained’ is a noteworthy example of this. By posing factual questions, he was able to reflect on and challenge his own reasoning, thereby using the system as a tool for critical examination. Similarly, in Belgium, a project is currently in progress within the research master’s programme at KU Leuven’s law faculty on virtual applications of self-questioning through the use of generative AI.79x Judicial Lawmaking, research master law faculty KU Leuven, https://onderwijsaanbod.kuleuven.be/syllabi/e/C01F9AE.htm#activetab=doelstellingen_idp215312. In the project, students and professors are experimenting with ChatGPT for self-reflection, exploring the importance of crafting precise prompts and assessing the usefulness of the generated output. While this initiative is currently academic, its findings could eventually be applied in real-world judicial practice.
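      Purely by way of illustration, the following sketch shows what such a ‘sparring partner’ interaction could look like in practice, here using the publicly available OpenAI chat interface for Python. The model name, the system instruction and the draft reasoning are hypothetical examples, not a recommendation of any particular provider, prompt or workflow.

```python
# Minimal, hypothetical sketch of a 'virtual sparring partner' interaction.
# The draft reasoning, system instruction and model name are invented for illustration.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

draft_reasoning = (
    "The late submission of evidence is inadmissible because the procedural "
    "deadline had expired and no force majeure was invoked by the claimant."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a critical colleague. Identify gaps, counter-arguments and "
                "unsupported steps in the draft reasoning. Do not decide the case."
            ),
        },
        {"role": "user", "content": draft_reasoning},
    ],
)

# The generated critique is feedback the judge remains free to accept or reject.
print(response.choices[0].message.content)
```

      The design choice in this sketch is deliberate: the system is asked only to critique the reasoning, not to propose an outcome, so that the final decision – and the reasons stated for it – remain the judge’s own.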

      4.3 How Generative AI Systems Can Jeopardise the Legitimacy of Judicial Decision-Making

      Despite the potential of generative AI systems to enhance the legitimacy of judicial decision-making and to render the judicial duty to state reasons itself more robust, these systems may equally undermine that legitimacy and that duty. Several factors may play a role in compromising the legitimacy of judicial decision-making. Most of these factors have already been examined in a more general context. This article goes a step further by examining them specifically in the context of the judiciary, the judicial duty to state reasons and the legitimacy of judicial decision-making. I focus on the following factors: the importance of the training data and the training process of the AI systems; the involvement of private companies; judicial independence; the inability of generative AI systems to understand and reason like humans; concerns regarding privacy and data protection; and some ethical considerations.
      The extent to which the legitimacy and the duty are affected will depend on how judges concretely apply the system in their judicial decision-making process. At one extreme, judges can simply copy-paste and replicate the system’s output, which constitutes the most pronounced example of how the systems can erode legitimacy. At the other extreme, judges may choose to completely disregard the system’s output, thereby limiting its adverse impact. Alternatively, judges may opt for a nuanced approach by selectively incorporating elements into their decisions while still drafting their judgments largely or entirely themselves. Furthermore, the impact of generative AI systems on the judicial duty to state reasons will vary depending on the nature of the duty itself. As explained, a formal duty merely requires that judges state reasons, regardless of their correctness or accuracy, whereas a substantive duty demands more detailed and robust reasoning. Consequently, the type of duty in question will influence how generative AI systems affect it.
      A first concern pertains to the training data and training processes of generative AI systems and their direct impact on the judicial duty to state reasons, including its underlying normative goal of legitimacy. The quality and reliability of AI-generated output depend on the input or data used to train these systems. If the underlying large language models (LLMs) are not trained on a massive corpus of text data that is sufficiently representative, diverse and up-to-date, there is a risk that the output will be incomplete or unreliable, thereby undermining the duty to provide well-reasoned decisions. This is precisely one of the tricky issues with generative AI systems: it is usually not known what data has been used to train the systems, let alone whether this data is representative and up-to-date. ChatGPT, for example, is trained only on sources up to early 2022.80x OpenAI, ‘Introducing ChatGPT’, https://openai.com/index/chatgpt (last visited 8 May 2024). In the context of the judiciary, it is even more crucial that the systems are trained on sufficiently large data sets of diverse legal sources specific to the judicial process and the particular jurisdiction. Most often, however, not all judgments are published, much less available as input to train the system. In the Netherlands, only 4% of all judgments are published,81x De Rechtspraak, ‘Rechtspraak Jaarverslag 2023’, https://www.rechtspraak.nl/SiteCollectionDocuments/Jaarverslag%20Rechtspraak%202023.pdf. while in Belgium, this number drops to just 1%.82x V. Hendrickx, ‘Het Centraal register voor Belgische rechtspraak: een (digitale) stap vooruit?’, 4 Computerrecht 281-289 (2023). When judges use generative AI systems to summarise case law – a seemingly innocuous task – this can lead to unreliable output if the system is trained only on a limited number of cases. On top of that, since ChatGPT is predominantly trained on English sources,83x M.L. Seghier, ‘ChatGPT: Not All Languages Are Equal’, Nature (2023), https://www.nature.com/articles/d41586-023-00680-3; V.D. Lai et al., ‘ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning’, arXiv (2023), arXiv:2304.05613. its responses may not accurately reflect the legal principles of non-English-speaking jurisdictions. This limitation jeopardises the ability of judges to rely on such systems for accurate legal summaries, case law retrieval or legal arguments, thereby affecting their ability to fulfil the duty to state reasons effectively.
      Copyright issues may further complicate the training of generative AI systems. While judgments themselves are generally excluded from copyright protection84x For instance, in Belgium, official acts of the government, such as legislation and case law, are explicitly excluded from copyright protection in Art. XI.172 §2 Code of Economic Law. and can be freely used to train the systems – at least to the extent they are published and available – judges also rely on corresponding legal literature, which is most often protected by copyright.85x For more detailed research on generative AI systems and copyright challenges, see: N. Lucchi, ‘ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems’, 1 European Journal of Risk Regulation 1-23 (2023); A. Guadamuz, ‘A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs’, 73(2) GRUR International 111-127 (2024); J. Vanherpe, ‘AI and IP: Great Expectations’, in J. De Bruyne and C. Vanleenhove (eds.), Artificial Intelligence and the Law 233-67 (2022). As a result, AI systems may lack access to such copyrighted legal literature, limiting the scope and accuracy of their output. This can impair judges’ ability to provide reasoned justifications for their decisions.
      Moreover, under-representative data sets may also lead to the perpetuation and exacerbation of biases within LLMs, affecting the reliability of their outputs and thereby undermining the duty to state reasons.86x UNESCO, ‘Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models’, https://unesdoc.unesco.org/ark:/48223/pf0000388971; V. Hendrickx, ‘Women’s Rights in the Age of Automation’, CiTiP Blog (17 April 2024), https://www.law.kuleuven.be/citip/blog/womens-rights-in-the-age-of-automation/. Simply put, LLMs are never neutral but rather inherently politically biased.87x F. Motoki, V.P. Neto & V. Rodrigues, ‘More Human than Human: Measuring ChatGPT Political Bias’, Public Choice 3-23 (2024); S. Feng et al., ‘From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models’, arXiv (2023), arXiv:2305.08283; L. Winner, ‘Do Artifacts Have Politics?’, Daedalus 121-36 (1980). This issue becomes particularly concerning in the judiciary, where judges using such biased LLMs could be influenced in politically sensitive cases, such as those involving abortion, assisted suicide or racially motivated violence.88x J. Baum and J. Villasenor, ‘The Politics of AI: ChatGPT and Political Bias’, Brookings (8 March 2023), https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/. For instance, when a certain stream of case law is over-represented in the training data, the system may perpetuate that biased tendency in its output.89x M. Winkler, S. Köhne & M. Klöpper, ‘“Not All Algorithms!” Lessons from the Private Sector on Mitigating Gender Discrimination’, INFORMATIK 1289-303 (2022). Such biases can distort judicial decisions, thereby diminishing public trust in the judiciary’s legitimacy. Research has demonstrated that AI systems often exhibit biases against people of colour, as exemplified by the controversial COMPAS software (Correctional Offender Management Profiling for Alternative Sanctions). Used in some US courts to predict the likelihood of someone reoffending, COMPAS was originally said to mitigate bias and improve recidivism risk assessments.90x T. Brennan and W. Dieterich, ‘Correctional Offender Management Profiles for Alternative Sanctions (COMPAS)’, in J.P. Singh et al. (eds.), Handbook on Recidivism Risk/Needs Assessment Tools (2017), https://doi.org/10.1002/9781119184256.ch3. However, subsequent research revealed that it produced discriminatory outcomes instead, exacerbating existing biases rather than eliminating them.91x J. Angwin et al., ‘Machine Bias’, ProPublica (2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. If similar biases are found in LLMs used by judges, this can undermine the legitimacy of the judicial decision-making process.
      In addition, the inherent opacity of the systems, often referred to as the ‘black box’, makes it difficult for judges or the public to understand how decisions are generated. Even if the sources used for training are known and accessible, the internal mechanisms often remain obscure. This opacity and lack of transparency pose challenges for the judicial duty to state reasons, as judges must be able to clearly explain the basis for their decisions. Without understanding how the AI reached its conclusions, judges cannot adequately justify their reliance on such tools, which undermines both the reasoning process and the broader perception of judicial legitimacy.92x W.J. von Eschenbach, ‘Transparency and the Black Box Problem: Why We Do Not Trust AI’, 34 Philosophy and Technology (2021); M. Almada, ‘Governing the Black Box of Artificial Intelligence’, SSRN (2023), https://doi.org/10.2139/ssrn.4587609.
      While it has been argued that humans are equally black boxes, since nobody knows the inner workings of judges’ reasoning, this argument overlooks the fact that human decision-making is more transparent than algorithmic decision-making in the sense that human explanation has a self-regulative feature. The latter refers to the ability of people to govern or regulate themselves in conformity with the reasons they give whenever they explain their decisions – an ability AI lacks. On that note, Zerilli et al. contend that human brains are ‘epistemically privileged’. Although the inner workings and reasoning of algorithms and humans are both difficult to ascertain, a double standard holds in favour of humans based on two arguments. On the one hand, algorithmic-led decisions have the potential to produce effects on a large scale and at a rapid pace, while, on the other, algorithmic-led decisions come with unresolved accountability issues.93x W. De Mulder et al., ‘Are Judges More Transparent Than Black Boxes? A Scheme to Improve Judicial Decision-Making by Establishing a Relationship with Mathematical Function Maximization’, 84 Law and Contemporary Problems 47-67 (2021); U. Peters, ‘Explainable AI Lacks Regulative Reasons: Why AI and Human Decision-making Are Not Equally Opaque’, 3 AI and Ethics 963-74 (2022); J. Zerilli et al., ‘Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?’, 32 Philosophy and Technology 661-83 (2019).
      Related is the risk of generative AI systems hallucinating. This refers to systems generating highly convincing yet inaccurate or incorrect information. For instance, a New York lawyer recently cited fabricated cases after relying on ChatGPT for his argumentation.94x M. Bohannon, ‘Lawyer Used ChatGPT In Court – And Cited Fake Cases. A Judge Is Considering Sanctions’, Forbes (8 June 2023), https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/. Hallucinations can happen due to either inconsistencies in the training data or flaws in the design of the training process.95x M. Ajevski et al., ‘ChatGPT and the Future of Legal Education and Practice’, 57(3) The Law Teacher 352-64 (2023), https://doi.org/10.1080/03069400.2023.2207426; G. Bennet, ‘Is ChatGPT Any Good at Legal Research – and Should We be Wary or Supportive of It?’, 23(4) Legal Information Management 219-24 (2024). When judges similarly resort to generative AI systems to retrieve precedents and the system fabricates cases, this directly undermines the reliability of the system as well as the legitimacy of judicial decision-making and the duty to state reasons. This concern is particularly problematic when judges are subject to a substantial duty to state reasons, requiring thorough and detailed justifications for their verdicts. In such cases, reliance on inaccurate AI-generated information can erode the integrity of the judicial reasoning process. In the context of a formal duty, where judges are required to state reasons without necessarily ensuring their depth or correctness, the risks of hallucinations may be less pronounced, although they still pose a potential threat to the overall credibility of courts.
      The second concern relates to the involvement of private companies in designing, developing and deploying generative AI systems, which has significant implications for the judicial duty to state reasons and, by extension, the legitimacy of the judicial decision-making process. Generative AI systems are currently developed predominantly by big tech companies: ChatGPT by OpenAI, Gemini by Google and Copilot by Microsoft. While at first glance it may seem that the design and development of such systems involve mere technical choices, they inevitably embed certain values that can affect judges.96x Diver, above n. 32; B. Friedman and D.G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (2019). When such systems are used in the judiciary, there is a risk that these embedded values subtly influence judicial reasoning, potentially undermining the duty to provide justifications for verdicts. As argued in an earlier study, even seemingly innocent applications like precedent analysis could be problematic, as private companies might intentionally or inadvertently exert influence over the design and training processes. Developers might, for instance, exclude certain streams of case law from their data sets, resulting in unrepresentative outputs.97x Smuha and Hendrickx, above n. 6. Since intellectual property rights often obscure the internal workings of these systems,98x Gutiérrez, above n. 3. judges may be unaware of the extent to which the AI’s outputs are shaped by particular values. When these systems influence judicial reasoning without full transparency, the judicial duty to state reasons is compromised, as judges may fail to explain or justify why certain case law or arguments were chosen over others. This also erodes the legitimacy of judicial decisions, as legitimacy is fundamentally tied to public confidence in and the fairness of the judicial process.
      In turn, the involvement of private companies inherently touches upon judicial independence, another key factor closely related to the duty to state reasons and the legitimacy of courts.99x G. Gentile, ‘AI in the Courtroom and Judicial Independence: An EU Perspective’, EUIdeas (22 August 2022), https://euideas.eui.eu/2022/08/22/ai-in-the-courtroom-and-judicial-independence-an-eu-perspective/; A.K. Dhungel and E. Beute, ‘AI Systems in the Judiciary: Amicus Curiae? Interviews with Judges on Acceptance and Potential Use of Intelligent Algorithms’, 7 ECIS Proceedings 1-16 (2024). Judicial independence requires judges to be free from any interference or pressure and to maintain an impartial attitude towards the parties. However, when judges resort to generative AI systems developed by private companies, these systems can (in)directly intervene in the judicial decision-making process. This issue can be linked to the risk of automation bias,100x Malek, above n. 7. where judges over-rely on algorithms and AI systems despite their unreliability, inaccuracy and lack of robustness. If judges over-rely on AI systems, they risk abdicating their responsibility to fully explain and justify their decisions based on independent reasoning. This affects not only the duty to state reasons but also the legitimacy of courts, as the public may perceive judicial decisions as lacking accountability and as rooted in opaque, algorithm-driven processes rather than human considerations. Hence, even though the Indian judge claimed that he did not use ChatGPT for help in making his decision, it cannot be ruled out that he was (un)consciously influenced by the system’s suggestions. Equally concerning is the risk that judges over-rely on group characteristics. The output of generative AI systems is derived from statistics on particular groups and will consistently present the statistically most likely result, largely overlooking individual nuances. This can lead to decisions that are overly standardised or formalistic, reducing the ability of judges to address the specific circumstances of each case. Such an approach risks stagnating jurisprudence, as judges lean too heavily on AI-generated output rather than contributing themselves to the evolution of legal reasoning. This might undermine the judicial duty to state reasons, as decisions become less individualised and more dependent on generalised data patterns, which may not always align with principles of justice.
      In this regard, it should be pointed out that the LLMs underlying generative AI systems do not understand human language and do not reason themselves. Although these systems generate logical and sophisticated answers and appear to understand the questions people ask, they merely predict the next token in a given sequence of words. They lack human emotions and do not relate to the real world. As Bender et al. argue, LLMs are ‘Stochastic Parrots’: ‘they merely stitch together sequences of linguistic forms observed in large training data, according to probabilistic information about how to combine, but without any reference to meaning.’101x J. Morison and T. McInerney, ‘When Should a Computer Decide? Judicial Decision-making in the Age of Automation, Algorithms and Generative Artificial Intelligence’, in S. Turenne and M. Moussa (eds.), Research Handbook on Judging and the Judiciary (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4723280; E. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT 610-23 (2021). So while generative AI systems very convincingly mimic human behaviour and output, they are not reliable sources of information.102x J.D. Gutiérrez, ‘AI Technologies in the Judiciary: Critical Appraisal of Large Language Models in Judicial Decision-Making’, in R. Paul, E. Carmel and J. Cobbe (eds.), Handbook on Public Policy and AI (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4667572. Their inability to reason and to understand human language indicates that they cannot satisfactorily replace – or even assist – judges in their judicial decision-making process and within their constitutional and social role.
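      To make the notion of next-token prediction concrete, the following minimal sketch (assuming the publicly available GPT-2 model and the Hugging Face transformers and torch libraries, which are illustrative choices rather than systems discussed in this article) shows that an LLM does nothing more than rank candidate continuations of a prompt by probability; no step of the computation involves meaning or legal reasoning:

          # Minimal sketch of next-token prediction with a small public model (GPT-2).
          # Assumes the 'transformers' and 'torch' packages are installed.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          prompt = "The court therefore holds that the defendant is"
          inputs = tokenizer(prompt, return_tensors="pt")

          with torch.no_grad():
              next_token_logits = model(**inputs).logits[0, -1]  # scores over the vocabulary
          probs = torch.softmax(next_token_logits, dim=-1)

          # The model merely ranks possible continuations by probability.
          top = torch.topk(probs, k=5)
          for p, idx in zip(top.values, top.indices):
              print(f"{tokenizer.decode(idx)!r}  p={p.item():.3f}")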
      Concerns also emerge in relation to privacy, data protection and the processing of sensitive data. For example, the conduct of the Colombian judges who sought guidance from ChatGPT on whether to grant medical benefits to a minor might be problematic in this regard. Inquiries on children’s healthcare involve sensitive data that necessitates careful treatment. The ambiguity surrounding these systems, such as where the data is stored, its retention period, and whether it is further processed for other purposes, can undermine the legitimacy of judicial decision-making when judges rely on them. If the public perceives that judges are relying on systems that do not adequately protect sensitive data, the legitimacy of courts may be called into question. This concern is underscored by a recent study suggesting that algorithmic-based decisions in complex cases, such as the healthcare of minors, are perceived as less trustworthy than human decisions.103x Yalcin, above n. 76. When the reasoning behind a decision involves sensitive data, any compromise in data protection could erode the trust necessary for the judiciary to maintain its legitimacy. Moreover, the ease of reverse-engineering and well-crafted prompts makes it relatively simple to uncover questions and answers from other users, thereby violating their privacy and highlighting the systems’ vulnerability to external attacks.104x B.C. Das et al., ‘Security and Privacy Challenges of Large Language Models: A Survey’, (2024), https://arxiv.org/pdf/2402.00888. Such vulnerabilities not only threaten the confidentiality of sensitive information but also expose the judiciary to external attacks or misuse. This impacts the duty to state reasons, as judges should ensure that the legal and factual reasoning behind their decisions is not only transparent but also securely handled. Recent technological developments in running LLMs locally or on edge devices present a promising solution to mitigate these privacy and data protection risks. By operating LLMs locally, data is not sent to external providers, ensuring that all information remains securely on the user’s device. While this advancement is promising, it still requires further research and development, especially within the legal field, to ensure that such systems are sufficiently robust to handle the complexities of judicial work.105x N.Z. Lee, ‘Exploring the Limits of Small Language Models’, UCB/EECS-2023-141 (2023), http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-141.html. Moreover, it is crucial to ensure that courts possess the necessary technical expertise and financial resources to effectively implement and manage locally run LLMs. Without such resources, even locally run AI systems may fall short of protecting sensitive data, further undermining the judicial duty to state reasons and the legitimacy of the judicial decision-making process.
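      As a purely illustrative sketch of the locally run alternative mentioned above (the library and model name are assumptions chosen for demonstration, not recommendations), the point is that once the model weights are stored on the court’s own machine, generation involves no network call, so prompts containing sensitive case data never leave the device:

          # Hypothetical sketch of local inference: after a one-off download of the
          # weights, generation runs offline on the user's own hardware.
          from transformers import AutoModelForCausalLM, AutoTokenizer

          model_name = "gpt2"  # stand-in for any model stored locally
          tokenizer = AutoTokenizer.from_pretrained(model_name)
          model = AutoModelForCausalLM.from_pretrained(model_name)

          prompt = "Summarise the procedural history of this case in two sentences:"
          inputs = tokenizer(prompt, return_tensors="pt")

          # No prompt or output is sent to an external provider in this step.
          output_ids = model.generate(**inputs, max_new_tokens=60, do_sample=False,
                                      pad_token_id=tokenizer.eos_token_id)
          print(tokenizer.decode(output_ids[0], skip_special_tokens=True))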
      Some ethical concerns might also erode the legitimacy of judicial decision-making and the judicial duty to state reasons. First, concerns can stem from the somewhat ideological perception of judges’ political, social and constitutional role in society. In fact, courts and judges are crucial components of the broader political and constitutional landscape.106x Morison and Harkens, above n. 24. Although it is debated whether judges are epistemically best placed to settle disputes, judges are often considered essential in upholding the rule of law in liberal democracies.107x L. Denning, ‘The Function of the Judiciary in a Modern Democracy’, 16(4) Pakistan Horizon 299-305 (1962); B. Leiter, ‘The Roles of Judges in Democracies: A Realistic View’, 6(2) Social Science Research Network 346-75 (2017). Judges serve social functions, such as resolving conflicts, fostering trust in the judiciary and safeguarding fundamental rights.108x M. Gómez, ‘The Contribution of Judges to Society’, 101(3) ARSP 332-53 (2015); A.R. Reeves, ‘Do Judges Have An Obligation to Enforce the Law?: Moral Responsibility and Judicial-Reasoning’, 29(2) Law And Philosophy 159-87 (2009). Along these lines, judging is considered a typically human activity, in which human dignity and subjectivity are essential.109x Morison and McInerney, above n. 101. These qualities are essential to maintaining the public’s trust in the judiciary, as they ensure that decisions are not purely mechanical but are made with due regard to the complexity of human experience and justice. When parts of the judicial decision-making process are delegated to generative AI systems, the legitimacy and authority of courts as well as the constitutional role of judges could be undermined, because AI lacks the ethical, social and normative qualities inherent in human judges.110x G. Gentile, ‘Artificial Intelligence and the Crises of Judicial Power: (Not) Cutting the Gordian Knot?’, in G. De Gregorio et al. (eds.), Oxford Handbook on Digital Constitutionalism (2024 forthcoming), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4731231. For example, a Texas judge banned the use of ChatGPT in his court, arguing that such systems lack any sense of duty, honour or justice.111x M. Cerullo, ‘Texas Judge Bans Filings Solely Created by AI After ChatGPT Made Up Cases’, CBS News (2 June 2023), https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/. In addition, AI systems – unlike human judges – have not obtained their mandate and authority through democratic processes such as elections or appointments; instead, they emerge without any substantial democratic scrutiny. Moreover, the deployment of generative AI systems in courts may dehumanise court experiences and reduce trust in the judiciary through the loss of control and anxiety associated with the use of AI. There is also a danger that judges start objectifying humans as numbers or probabilities.112x A. Martinho, ‘Surveying Judges About Artificial Intelligence: Profession, Judicial Adjudication, and Legal Principles’, AI & Society (2024), https://doi.org/10.1007/s00146-024-01869-4. This could undermine the duty to state reasons, as judges may increasingly rely on opaque systems, leaving the human element – critical for public trust – absent from the reasoning process.
      As demonstrated, AI systems are socio-technical constructs, meaning that they are not merely technical but also have an impact on society.113x K. Yeung and M. Lodge, Algorithmic Regulation: An Introduction (2019). AI systems affect society in the sense that they are ‘social’ systems: they influence and shape social structures and institutions. Given the scale at which generative AI systems operationalise and generalise, and the way their explicability differs,114x L. Naudts, ‘Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making’ (PhD thesis KU Leuven, 2023). decisions formed while relying on generative AI systems may affect not only individuals but society at large.115x Albright, above n. 35. This prompts the fundamental question of what normative authority these systems gain when deployed within the judiciary, and what normative authority we envision attributing to them in the future. Judicial decisions carry normative power, guiding individuals on their rights, obligations and actions. As judging is a normative activity, judges’ reliance on generative AI systems to draft decisions endows those decisions with normative implications for society and its structure.116x R. Simmons, ‘Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System’, 52 University of California Davis Law Review 1067 (2018). This raises the question of whether, by delegating significant aspects of the decision-making process to AI, we are legitimising its use in that process. I argue that this should be avoided. Reliance on these systems should not strengthen their legitimacy in the judicial decision-making process, especially in light of the aforementioned concerns. Instead, human judges should maintain full responsibility for the reasoning behind their decisions to preserve both the duty to state reasons and the legitimacy of the decision-making process.

    • 5. Safeguarding the Judicial Duty to State Reasons in the Age of Automation

      This research indicates that generative AI systems can have a negative impact on the legitimacy of judicial decision-making and the judicial duty to state reasons. While there are potential beneficial uses, there are also numerous problematic implications. If we believe the judicial duty to state reasons is essential in liberal democracies, particularly in the age of automation, this prompts us to reflect on how to safeguard it. I therefore briefly explore some avenues that might help preserve the judicial duty to state reasons in the age of automation.
      Foremost, fostering AI literacy within the judiciary is crucial,117x Gentile, above n. 110; D.T.K. Ng et al., ‘Conceptualizing AI Literacy: An Exploratory Review’, 2 Computers and Education: Artificial Intelligence 100041 (2021). as it enables judges to grasp the risks and limitations associated with generative AI systems. Establishing guidelines on the use of such systems in courts, such as those issued by CEPEJ, could serve as a good starting point. However, enhancing judges’ AI literacy through education alone is insufficient. Close collaboration with technical experts is necessary to gain a deeper understanding of AI’s complexity, enhance the reliability of these systems and deploy them safely within the judiciary.
      One suggestion for enhancing accountability is to avoid outsourcing the development of AI systems by having the judiciary develop its own systems.118x Gutiérrez, above n. 102. While this idea has its merits and would provide greater control over the design and development, I doubt its feasibility in practice. Courts often face financial constraints, and the considerable costs of developing sophisticated AI systems render this suggestion unrealistic. A more practical approach would instead be to incorporate value-sensitive design into the development of AI systems used in courts. Value-sensitive design is a method for systematically integrating values into technical designs, including AI systems.119x B. Friedman and D.G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (2019). Since values are inherently part of any system, this approach acknowledges their existence and prioritises their integration from the outset, focusing on principles such as transparency, accountability or legitimacy.120x S. Umbrello and I. van de Poel, ‘Mapping Value Sensitive Design onto AI for Social Good Principles’, 1(3) AI and Ethics 283-96 (2021). By embedding these values early in the design process, the technical architecture of the system can be thoughtfully reconsidered. Various methods, such as stakeholder analysis or value scenarios, can facilitate this integration, ensuring that values are explicitly addressed rather than left to the discretion of developers or inadvertently overlooked.121x M. Sadek et al., ‘Designing Value-sensitive AI: A Critical Review and Recommendations for Socio-technical Design Processes’, AI and Ethics 1-19 (2023), https://doi.org/10.1007/s43681-023-00373-7.
      A somewhat ‘radical’ avenue to consider involves reassessing the formal nature of the judicial duty to state reasons. As mentioned, in Europe a formal duty to state reasons applies, under which the correctness or accuracy of the reasons is deemed irrelevant. However, it may be worth strengthening this duty so that it encompasses more substantive justifications. By requiring a more robust explanation, judges would be compelled to conscientiously assess their reliance on generative AI systems. Moreover, such a measure would promote transparency and accountability, thereby restoring legitimacy to courts and their judicial decision-making. By analogy, a study on the adverse impact of AI in administrative courts argues for more scrutiny of the reasoning to mitigate the risks of AI opacity or illegitimacy.122x M. Fink and M. Finck, ‘Reasoned A(I)dministration: Explanation Requirements in EU Law and the Automation of Public Administration’, 47(3) European Law Review 376-92 (2022). A more substantive duty would also meet the higher burden of accountability that applies when decisions significantly affect individuals, as court decisions do.123x Binns, above n. 64. Strengthening the substantive duty can therefore concretely reflect the underlying normative goals of transparency, accountability and legitimacy.
      Yet a substantive duty to state reasons is not a panacea for all ills but comes with its own challenges. First, it must be clarified what kind of additional reasons are expected under such a duty. The explanation style can significantly influence people’s perceptions of judgments as more or less fair.124x J. Dodge, Q. Vera Liao & R.K.E. Bellamy, ‘Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment’, (2019) https://arxiv.org/pdf/1901.07694.pdf. Should a global explanation be preferred, or would a more detailed breakdown of the technical aspects and parameters of the AI system be more suitable? Is it preferable to provide pedagogical explanations of how the model works or rather local explanations regarding the specific output?125x Binns, above n. 64. How extensive should the technical explanation be: should it merely cover the general features of the model, or should it also detail the selection of the training data, the training and testing procedures and the possible effects of decisions?126x Johnson, above n. 75. The answers to these questions may vary from case to case. While we can agree that explanations must in any case be ‘meaningful’ to allow for discussion of the consequences, it is difficult to concretely define their extent.127x D. Brughmans, L. Melis, & D. Martens, ‘Disagreement Amongst Counterfactual Explanations: How Transparency Can Be Misleading’, 32 TOP 429-462 (2024). Second, providing substantive reasons might inadvertently result in an ‘information overload’ or a ‘transparency paradox’, whereby too much information reduces transparency. This underscores the importance of the first point: understanding what exactly should constitute ‘substantive reasons’. It is also debatable whether laypeople would actually benefit from additional (technical) information: to what extent do they understand it? Some technical aspects are in fact so complex that even experts do not understand them.128x S. Greenstein, ‘Preserving the Rule of Law in the Era of Artificial Intelligence (AI)’, 30 Artificial Intelligence and Law 291-323 (2022). This also ties into the broader debate on whether big tech companies should grant access to their software so that judges or the general public can review it. Another downside of requiring substantive reasons is the added workload for judges.129x Gutiérrez, above n. 102. Insisting on a rigorous and thorough examination of any content generated by these systems defeats the very purpose of the time-efficiency arguments in favour of AI systems. Requiring more reasons from judges may also imply an undesirable shift of accountability from private companies or other entities responsible for these systems to judges. Rather than these entities being held accountable for their systems and decisions, judges would be expected to justify the use of a particular system, their interaction with it, and potentially even its functioning. Finally, practical questions arise as to how a transition towards a substantive duty could be implemented, including constitutional considerations. Alternatively, should perhaps only a moral or professional duty to state more reasons in algorithmic-led decisions be considered?
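      To illustrate the difference between the explanation styles referred to above, the following minimal sketch (using synthetic data and the scikit-learn library; none of it is drawn from this article) contrasts a ‘global’ explanation, which describes the model as a whole, with a ‘local’ explanation, which accounts for one specific output:

          # Minimal sketch: global versus local explanations for a simple linear classifier.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 3))  # three hypothetical case features
          y = (X @ np.array([2.0, -1.0, 0.0]) + rng.normal(size=200) > 0).astype(int)

          clf = LogisticRegression().fit(X, y)

          # Global explanation: how each feature weighs on the model's decisions in general.
          print("global coefficients:", clf.coef_[0])

          # Local explanation: the contribution of each feature to one particular decision.
          x = X[0]
          print("local contributions:", clf.coef_[0] * x)
          print("predicted class:", clf.predict(x.reshape(1, -1))[0])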

    • 6. Conclusion

      This research concludes that while generative AI systems can bolster the judicial duty to state reasons and its underlying normative goal of the legitimacy of judicial decision-making, by improving efficiency and possibly enhancing the quality of reason-giving, there are numerous concerns surrounding their current deployment in the judiciary. Although these systems may not be the sole basis for decisions, they nevertheless play a key role.130x J.D. Gutiérrez, ‘ChatGPT in Colombian Courts’, Verfassungsblog (23 February 2023), https://verfassungsblog.de/colombian-chatgpt/. When judges put fundamental legal questions or even seemingly ‘merely technical’ questions to generative AI systems, the generated output is unlikely to be sufficiently trustworthy. Hence, in high-stakes contexts such as the judiciary, it is currently inappropriate to use such systems.
      Further research is therefore needed. The theory of the judicial duty to state reasons and its underlying normative goals should be examined further, as should the question of how generative AI systems and other algorithmic and AI systems might impact the other normative goals. Some systems may have less problematic impacts, while others could exacerbate them. As shown, there are currently no foolproof avenues to guarantee the judicial duty to state reasons in the age of automation, and future research should therefore also delve more deeply into potential remedies.

    Notes

    • * I would like to express my heartfelt gratitude to my supervisors, Nathalie A. Smuha and Peggy Valcke, for their unwavering support and invaluable insights. My thanks also go to Adam Kirk-Smith for his thoughtful comments on this topic during a previous conference. Additionally, I am grateful to the two anonymous peer reviewers for their thorough, detailed and constructive feedback.
    • 1 L. Naudts, ‘Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making’ (PhD thesis KU Leuven, 2023).

    • 2 A.R. Reeves, ‘Do Judges Have An Obligation to Enforce the Law?: Moral Responsibility and Judicial-Reasoning’, 29(2) Law And Philosophy 159-87 (2009).

    • 3 A recent UNESCO survey indicates that over 40% of judges interviewed in over 90 countries reported using algorithmic systems in their work-related activities. See, J.D. Gutiérrez, ‘UNESCO Global Judges’ Initiative: Survey on the Use of AI Systems by Judicial Operators’, UNESCO (2024), CI/DIT/2024/JI/01.

    • 4 I. Carnat, ‘Addressing the Risks of Generative AI for the Judiciary: The Accountability Framework(s) Under the EU AI Act’, SSRN (2024), https://doi.org/10.2139/ssrn.4887438; I. Cheong, A. Caliskan & T. Kohno, ‘Safeguarding Human Values: Rethinking US Law for Generative AI’s Societal Impacts’, AI Ethics (2024), https://doi.org/10.1007/s43681-024-00451-4.

    • 5 Already in the 1980s, research was conducted on automation of court applications, although mostly focusing on rule-based expert systems, referring to systems that could reason on the basis of a predetermined set of rules, often expressed in if-then statements. See, R.E. Susskind, ‘Expert Systems in Law: A Jurisprudential Inquiry’, 29(2) The Modern Law Review 168-94 (1986); D. Kolkman, F. Bex, N. Narayan & M. van der Put, ‘Justitia ex machina: The Impact of An AI System on Legal Decision-making and Discretionary Authority’, 11(2) Big Data & Society (2024), https://doi.org/10.1177/20539517241255101.

    • 6 N. Smuha and V. Hendrickx, ‘AI and the Administration of Justice: Taking “Precedent Analysis” as a Use Case to Assess the Adequacy of the AI Act’, The Law, Ethics & Policy of AI Blog (2 October 2023), https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-administration-justice; T. Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-making’, 41(4) UNSW Law Journal 1114-33 (2018); D. Reiling, ‘Court and AI’, 11(2) International Journal for Court Administration 1-10 (2020); CEPEJ, ‘Possible Use of AI to Support the Work of Courts and Legal Professionals’, https://www.coe.int/en/web/cepej/tools-for-courts-and-judicial-professionals-for-the-practical-implementation-of-ai (last visited 7 May 2024); B. Custers, ‘AI in Criminal Law: An Overview of AI Applications in Substantive and Procedural Criminal Law’, in B. Custers and E. Fosch-Villaronga (eds.), Law and Artificial Intelligence. Information Technology and Law Series (2022) 35, at 205.

    • 7 M. Medvedeva, M. Wieling & M. Vols, ‘Rethinking the Field of Automatic Prediction of Court Decisions’, 31 Artificial Intelligence and Law 195-212 (2023); M. A. Malek, ‘Criminal Courts’ Artificial Intelligence: The Way It Reinforces Bias and Discrimination’, 2 AI and Ethics 1-13 (2022).

    • 8 W.T. Miller, C.A. Campbell, J. Papp, & E. Ruhland, ‘The Contribution of Static and Dynamic Factors to Recidivism Prediction for Black and White Youth Offenders’, 66(16) International Journal of Offender Therapy and Comparative Criminology 1779-1795 (2021).

    • 9 D. Bzdok, N. Altman, & M. Krzywinski, ‘Statistics Versus Machine Learning’, 15(4) Nature Methods 233 (2018); G. van Dijck, ‘Predicting Recidivism Risk Meets AI Act’, European Journal on Criminal Policy and Research 407–23 (2022).

    • 10 I. Taylor, ‘Justice by Algorithm: The Limits of AI in Criminal Sentencing’, 42 Criminal Justice Ethics 193-213 (2023); Judicial Commission of New South Wales, ‘Judicial Information Research System (JIRS)’, https://www.judcom.nsw.gov.au/judicial-information-research-system-jirs/ (last visited 7 May 2024); G. Sartor, et al., ‘Thirty Years of Artificial Intelligence and Law: The Second Decade’, 30(4) Artificial Intelligence and Law 521-57 (2022).

    • 11 T. Taulli, ‘Large Language Models’, in T. Taulli (ed.), Generative AI 93-125 (2023).

    • 12 For a detailed explanation of LLMs, see P. Kumar, ‘Large Language Models (LLMs): Survey, Technical Frameworks, and Future Challenges’, 57(260) Artificial Intelligence Review 1-51 (2024).

    • 13 Labour Circuit of Cartagena, case 032, 30 January 2023, https://forogpp.com/wp-content/uploads/2023/01/sentencia-tutela-segunda-instancia-rad.-13001410500420220045901.pdf; L. Taylor, ‘Colombian Judge Says He Used ChatGPT in Ruling’, The Guardian 3 February 2023, https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling.

    • 14 A. Smith, A. Moloney & A. Asher-Schapiro, ‘AI in the Courtroom: Judges enlist ChatGPT Help, Critics Cite Risks’, CS Monitor 30 May 2023, https://www.csmonitor.com/USA/Justice/2023/0530/AI-in-the-courtroom-Judges-enlist-ChatGPT-help-critics-cite-risks (last visited 7 May 2024).

    • 15 The case concerned a convenience store robbery. The judge asked ChatGPT what the ordinary meaning of ‘physically restrained’ generally referred to. This term was crucial to the case’s outcome, as one of the key questions was whether the robber had ‘physically restrained’ one of the victims during the robbery.

    • 16 United States Court of Appeal, Eleventh Circuit, No. 23-10478, 9 May 2024, https://fingfx.thomsonreuters.com/gfx/legaldocs/jnpwanznepw/09062024newsom.pdf.

    • 17 H. Farah, ‘Court of Appeal Judge Praises ‘Jolly Useful’ ChatGPT After Asking It for Legal Summary’, The Guardian 15 September 2023, https://www.theguardian.com/technology/2023/sep/15/court-of-appeal-judge-praises-jolly-useful-chatgpt-after-asking-it-for-legal-summary (last visited 7 May 2024).

    • 18 Rechtbank Gelderland, ECLI:NL:RBGEL:2024:3636, 7 June 2024, https://linkeddata.overheid.nl/front/portal/document-viewer?ext-id=ECLI:NL:RBGEL:2024:3636.

    • 19 K. Topolsky, ‘Ukrainian Judges Set to Integrate AI Tools into Their Workflow’, elblog.pl 11 September 2024, https://elblog.pl/2024/09/11/ukrainian-judges-set-to-integrate-ai-tools-into-their-workflow/.

    • 20 UK Courts and Tribunals Judiciary, ‘Artificial Intelligence (AI) Guidance for Judicial Office Holders’, 12 December 2023, https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf.

    • 21 CEPEJ, ‘Information Note: Use of Generative Artificial Intelligence (AI) By Judicial Professionals in a Work-related Context’, 12 February 2024, https://rm.coe.int/cepej-gt-cyberjust-2023-5final-en-note-on-generative-ai/1680ae8e01.

    • 22 Gutiérrez, above n. 3.

    • 23 N. Wang and M.Y. Tian, ‘“Intelligent Justice”: AI Implementations in China’s Legal Systems’, in A. Hanemaayer (ed.), Artificial Intelligence and Its Discontents. Social and Cultural Studies of Robots and AI (2022); N. Wang, ‘“Black Box Justice”: Robot Judges and AI-based Judgement Processes in China’s Court System’, IEEE International Symposium on Technology and Society 58-65 (2020).

    • 24 J. Morison and A. Harkens, ‘Re-engineering Justice? Robot Judges, Computerized Courts and (Semi) Automated Legal Decision-making’, 39(4) Legal Studies 618-35 (2019).

    • 25 T. Sourdin, Judges, Technology and Artificial Intelligence (2021); M. Van der Put, ‘Kunstmatige intelligentie bij rechterlijke oordeelsvorming’ (PhD thesis Tilburg University, 2022).

    • 26 M. Zalnieriute and F. Bell, ‘Technology and the Judicial Role’, in G. Appleby and A. Lynch (eds.), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia 1-21 (2021); van der Put, above n. 25.

    • 27 D. Barysé and R. Sarel, ‘Algorithms in the Court: Does It Matter Which Part of the Judicial Decision-making is Automated?’ Artificial Intelligence and Law (2024); M. Hildebrandt, ‘The Issue of Bias’, in M. Pelillo and T. Scantamburlo (eds.), Machines We Trust. Perspectives on Dependable AI (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3497597.

    • 28 Sourdin, above n. 25.

    • 29 B. Mittelstadt et al., ‘The Ethics of Algorithms: Mapping the Debate’, 3(2) Big Data & Society 1-21 (2016).

    • 30 Hildebrandt, above n. 27.

    • 31 U. Franke, ‘First-and Second-Level Bias in Automated Decision-making’, 35(2) Philosophy & Technology 21 (2022).

    • 32 L. Diver, ‘Digisprudence: The Design of Legitimate Code’, 13(2) Law, Innovation and Technology 325-54 (2021).

    • 33 N. Smuha, Algorithmic Rule by Law: How Algorithmic Regulation in the Public Sector Erodes the Rule of Law (2024).

    • 34 V. Dessers and P. Valcke, ‘Judicial Analytics on Trial’, 27(6) Maastricht Journal of European and Comparative Law 759-73 (2020); K. Quezada-Tavárez, P. Vogiatzoglou & S. Royer, ‘Legal Challenges in Bringing AI Evidence to the Criminal Courtroom’, 12(4) New Journal of European Criminal Law 531-51 (2021).

    • 35 A. Albright, ‘The Hidden Effects of Algorithmic Recommendations’, github (2024), https://apalbright.github.io/pdfs/albright-algo-recs-PAPER.pdf; T. Araujo et al., ‘In AI we trust? Perceptions about Automated Decision-making by Artificial Intelligence’, 35 AI & Society 611-23 (2020); Dessers and Valcke, above n. 34; N. Chronowski, K. Kálmán and B. Szentgáli-Tóth, ‘Artificial Intelligence, Justice, and Certain Aspects of Right to a Fair Trial’, 10(2) Acta Universitatis Sapientiae, Legal Studies 169-89 (2021); L. Beckman, J. Hultin Rosenberg, & K. Jebari, ‘AI and Democratic Legitimacy’, 39 AI & Society 975-84 (2022).

    • 36 J. Rawls, A Theory of Justice (1971); Morison and Harkens, above n. 24; R. Simmons, ‘Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System’, 52 University of California Davis Law Review 1067-118 (2021).

    • 37 F. Schauer, ‘Giving Reasons’, 47(4) Stanford Law Review 633-59 (1995).

    • 38 For a more philosophical analysis of the right to justification, see M.F. Ibsen, ‘Rainer Forst’s Justification Paradigm of Critical Theory’, in M.F. Ibsen (ed.), A Critical Theory of Global Justice: The Frankfurt School and World Society 313-341 (2023).

    • 39 For instance, Art. 149 of the Belgian Constitution provides that ‘Every judgement should be reasoned’.

    • 40 H.L. Ho, ‘The Judicial Duty to Give Reasons’, 20(1) Legal Studies 42-65 (2006); M. Cohen, ‘When Judges Have Reasons Not to Give Reasons: A Comparative Law Approach’, 72(2) Washington and Lee Law Review 483 (2015); J. Bosland and J. Gill, ‘The Principle of Open Justice and the Judicial Duty to Give Public Reasons’, 38(2) Melbourne University Law Review 482-524 (2014); M. Weinberg, ‘Adequate, sufficient and excessive reasons’, in Handbook for Judicial Officers, https://www.judcom.nsw.gov.au/publications/benchbks/judicial_officers/adequate_sufficient_and_excessive_reasons.html (last visited 7 May 2024).

    • 41 Vetrenko/Moldova, ECHR (18 May 2010), No. 36552/02.

    • 42 For instance, this is the case in Brazil and Mexico. See Art. 489 §1 of the Brazilian code of civil procedure, https://www.lawyerinbrazil.com/wp-content/uploads/2019/06/BRAZILIAN_CODE_OF_CIVIL_PROCEDURE-1.pdf, and Art. 402 of the Mexican National Code of Criminal Procedures, https://www.wipo.int/wipolex/en/legislation/details/17432.

    • 43 Ruiz Torija/Spain, ECHR (9 December 1994), No. 18390/91.

    • 44 Grădinar/Moldova, ECHR (8 April 2008), No. 7170/02.

    • 45 Hirvisaari/Finland, ECHR (27 September 2001), No. 49684/99.

    • 46 Higgins and Others/France, ECHR (19 February 1998), No. 134/1996/753/952.

    • 47 Helle/Finland, ECHR (19 December 1997), No. 157/1996/776/977; Rusishvili/Georgia, ECHR (30 September 2022), No. 15269/13; Okropiridize/Georgia, ECHR (7 September 2023), No. 43627/16.

    • 48 Rusishvili/Georgia, ECHR (30 September 2022), No. 15269/13; Okropiridize/Georgia, ECHR (7 September 2023), No. 43627/16.

    • 49 Popov/Moldova, ECHR (6 March 2006), No. 19960/04; Morison and Harkens, above n. 24; Simmons, above n. 36.

    • 50 Rusishvili/Georgia, ECHR (30 September 2022), No. 15269/13.

    • 51 Hirvisaari/Finland, ECHR (27 September 2001), No. 49684/99.

    • 52 Perez/France, ECHR (12 February 2004), No. 47287/99; B. Maes, De motiveringsplicht van de rechter (1990); W. Van Gerven, De taak van de rechter in een West-Europese democratie (2013).

    • 53 Melnic/Moldova, ECHR (14 February 2007), No. 6923/03; H. Mercier and D. Sperber, ‘Why do humans reason?’, 34(2) Behavioral and Brain Sciences 57-111 (2011); P. Leanza and O. Pridal, Right to a Fair Trial: Article 6 of the European Convention on Human Rights (2014).

    • 54 Schauer, above n. 37; M. Shapiro, ‘The Giving Reasons Requirement’, 1992(1) University of Chicago Legal Forum 179-220 (1992).

    • 55 D. Miller, ‘Justice’, Stanford Encyclopedia of Philosophy 6 August 2021, https://plato.stanford.edu/entries/justice/.

    • 56 E. Sargeant, J. Barkworth & N.S. Madon, ‘Procedural Justice in the Criminal Justice System’, Oxford Research Encyclopedia of Criminology (28 September 2020), https://doi.org/10.1093/acrefore/9780190264079.013.635.

    • 57 Simmons, above n. 36.

    • 58 L. Alexander, ‘Are Procedural Rights Derivative Substantive Rights?’, 17(1) Law and Philosophy 19-42 (1998); T.C. Grey, ‘Procedural Fairness and Substantive Rights’, 18 Nomos 182-205 (1977).

    • 59 R. Vermunt and H. Steensma, ‘Procedural Justice’, in C. Sabbagh and M. Schmitt (eds.), Handbook of Social Justice Theory and Research 219-36 (2016).

    • 60 T.R. Tyler, ‘Psychological Perspectives on Legitimacy and Legitimation’, 57(1) Annual Review of Psychology 375-400 (2006); T.R. Tyler, ‘Procedural Justice, Legitimacy, and the Effective Rule of Law’, 30 Crime and Justice 283-357 (2003); T.R. Tyler, ‘Procedural Justice and the Courts’, 44(1/2) Court Review: The Journal of the American Judges Association 26-31 (2007).

    • 61 Please note, other goals might be detected as well, or they might be structured differently.

    • 62 J. Bentham, Draught of a New Plan for the Organisation of the Judicial Establishment in France (1790); G. Postema, ‘The Soul of Justice: Bentham on Publicity, Law, and the Rule of Law’, in X. Zhai and M. Quinn (eds.), Bentham’s Theory of Law and Public Opinion 267-82 (2013).

    • 63 Shapiro, above n. 54; Simmons, above n. 36; Beckman, above n. 35.

    • 64 V.M. Dryer, ‘The Epistemology and Science of Justified Reason’ 50(2) Philosophia 503-32 (2021); R. Binns et al., ‘“It’s Reducing a Human Being to a Percentage”; Perceptions of Justice in Algorithmic Decisions’, (2018), https://arxiv.org/abs/1801.10408v1.

    • 65 M. Hazelhorst, ‘The Right to a Fair Trial in Civil Cases’, in M. Hazelhorst (ed.), Free Movement of Civil Judgements in the European Union and the Right to a Fair Trial 123-75 (2017).

    • 66 Cohen, above n. 40.

    • 67 Bentham, above n. 62; Postema, above n. 62; Shapiro, above n. 54.

    • 68 M. Cohen, ‘Sincerity and Reason-Giving: When May Legal Decision Makers Lie’, 59 DePaul L. Rev 1091-150 (2010).

    • 69 Rawls, above n. 36; L. Fuller, ‘Forms and Limits of Adjudication’, 92(2) Harvard Law Review 353-409 (1978); S. F. D’Agostino and G.F. Gaus, ‘Public Reason’, 4 Ethical Theory and Moral Practice 91-92 (2001); R. Forst, The Right to Justification (2013); Beckman, above n. 35.

    • 70 M.J. Warning, ‘Concepts of Legitimacy’, in M.J. Warning (ed.), Transnational Public Governance. Transformations of the State 179-89 (2009).

    • 71 M. Weber, The Theory of Economic and Social Organization (1947); O.M. Akinlabi, ‘Understanding Legitimacy in Weber’s Perspectives and in Contemporary Society’, in O.M. Akinlabi (ed.), Police-Citizen Relations in Nigeria. Palgrave’s Critical Policing Studies 11-24 (2022); R. Cotterrell, ‘Legality and Legitimacy: The Sociology of Max Weber’, in R. Cotterrel (ed.), Law’s Community: Legal Theory in Sociological Perspectives 134-59 (2012).

    • 72 J. Sunshine and T.R. Tyler, ‘The Role of Procedural Justice and Legitimacy in Shaping Public Support for Policing’, 37(3) Law & Society Review 513-48 (2003).

    • 73 T.R. Tyler, ‘Procedural Justice and the Courts’, 44(1/2) Court Review: The Journal of the American Judges Association 26-31 (2007).

    • 74 T.R. Tyler, Why People Obey the Law (1990); Tyler (2006), above n. 60; J. Ulenaers, ‘The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?’ 11(2) Asian Journal of Law and Economics 1 (2022); Chronowski et al., above n. 35; A. Mentovich, J. Prescott & O. Rabinovich-Einy, ‘Legitimacy and Online Proceedings: Procedural Justice, Access to Justice, and the Role of Income’, 57(2) Law & Society Review 189-213 (2023).
      Please note: K. Martin and A. Waldman argue that not all procedural rights have the same effect on legitimacy. In their study, in the context of firms’ legitimacy, they discovered that only an appeal to a human authority tends to legitimise algorithmic-based decisions. See: K. Martin and A. Waldman, ‘Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions’, 183 Journal of Business Ethics 653-670 (2023).

    • 75 D. Johnson et al., ‘Public Perceptions of the Legitimacy of the Law and Legal Authorities: Evidence from the Caribbean’, 48, Law & Society Review 947-78 (2014); I. Feygina and T.R. Tyler, ‘Procedural Justice and System-Justifying Motivations’, in J. Tost et al. (eds.), Social and Psychological Bases of Ideology and System Justification 351-70 (2009).

    • 76 G. Yalcin et al., ‘Perceptions of Justice by Algorithms’, 31 Artificial Intelligence and Law 269-92 (2023).

    • 77 European Commission, ‘The 2023 EU Justice Scoreboard’, COM(2023) 309, https://commission.europa.eu/system/files/2023-06/Justice%20Scoreboard%202023_0.pdf.

    • 78 Mentovich et al., above n. 74.

    • 79 Judicial Lawmaking, research master law faculty KU Leuven, https://onderwijsaanbod.kuleuven.be/syllabi/e/C01F9AE.htm#activetab=doelstellingen_idp215312.

    • 80 ChatGPT, ‘Introducing ChatGPT’, https://openai.com/index/chatgpt (last visited 8 May 2024).

    • 81 De Rechtspraak, ‘Rechtspraak Jaarverslag 2023’, https://www.rechtspraak.nl/SiteCollectionDocuments/Jaarverslag%20Rechtspraak%202023.pdf.

    • 82 V. Hendrickx, ‘Het Centraal register voor Belgische rechtspraak: een (digitale) stap vooruit?’, 4 Computerrecht 281-289 (2023).

    • 83 M.L. Seghier, ‘ChatGPT: Not All Languages Are Equal’, Nature (2023), https://www.nature.com/articles/d41586-023-00680-3; V.D. Lai et al., ‘ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning’, arXiv (2023), arXiv:2304.05613.

    • 84 For instance, in Belgium, official acts of the government, such as legislation and case law, are explicitly excluded from copyright protection in Art. XI.172 §2 Code of Economic Law.

    • 85 For a more detailed research on generative AI systems and copyright challenges, see: N. Lucchi, ‘ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems’, 1 European Journal of Risk Regulation 1-23 (2023); A. Guadamuz, ‘A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs’, 73(2) GRUR International 111-127 (2024); J. Vanherpe, ‘AI and IP: Great Expectations’, in J. De Bruyne and C. Vanleenhove (eds.), Artificial Intelligence and the Law 233-67 (2022).

    • 86 UNESCO, ‘Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models’, https://unesdoc.unesco.org/ark:/48223/pf0000388971; V. Hendrickx, ‘Women’s Rights in the Age of Automation’, CiTiP Blog (17 April 2024), https://www.law.kuleuven.be/citip/blog/womens-rights-in-the-age-of-automation/.

    • 87 F. Motoki, V.P. Neto & V. Rodrigues, ‘More Human than Human: Measuring ChatGPT Political Bias’, Public Choice 3-23 (2024); S. Feng et al., ‘From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models’, arXiv (2023), arXiv:2305.08283; L. Winner, ‘Do Artifacts Have Politics?’, Daedalus 121-36 (1980).

    • 88 J. Baum and J. Villasenor, ‘The Politics of AI: ChatGPT and Political Bias’, Brookings (8 March 2023), https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/.

    • 89 M. Winkler, S. Köhne & M. Klöpper, ‘“Not All Algorithms!” Lessons from the Private Sector on Mitigating Gender Discrimination’, INFORMATIK 1289-303 (2022).

    • 90 T. Brennan and W. Dieterich, ‘Correctional Offender Management Profiles for Alternative Sanctions (COMPAS)’, in J.P. Singh et al. (eds.), Handbook on Recidivism Risk/Needs Assessment Tools (2017), https://doi.org/10.1002/9781119184256.ch3.

    • 91 J. Angwin et al., ‘Machine Bias’, ProPublica (2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

    • 92 W.J. von Eschenbach, ‘Transparency and the Black Box Problem: Why We Do Not Trust AI’, 34 Philosophy and Technology (2021); M. Almada, ‘Governing the Black Box of Artificial Intelligence’, SSRN (2023), https://doi.org/10.2139/ssrn.4587609.

    • 93 W. De Mulder et al., ‘Are Judges More Transparent Than Black Boxes? A Scheme to Improve Judicial Decision-Making by Establishing a Relationship with Mathematical Function Maximization’, 84 Law and Contemporary Problems 47-67 (2021); U. Peters, ‘Explainable AI Lacks Regulative Reasons: Why AI and Human Decision-making Are Not Equally Opaque’, 3 AI and Ethics 963-74 (2022); J. Zerilli et al., ‘Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?’, 32 Philosophy and Technology 661-83 (2019).

    • 94 M. Bohannon, ‘Lawyer Used ChatGPT In Court – And Cited Fake Cases. A Judge Is Considering Sanctions’, Forbes (8 June 2023), https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/.

    • 95 M. Ajevski et al., ‘ChatGPT and the Future of Legal Education and Practice’, 57(3) The Law Teacher 352-64 (2023), https://doi.org/10.1080/03069400.2023.2207426; G. Bennet, ‘Is ChatGPT Any Good at Legal Research – and Should We be Wary or Supportive of It?’, 23(4) Legal Information Management 219-24 (2024).

    • 96 Diver, above n. 32; B. Friedman and D.G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (2019).

    • 97 Smuha and Hendrickx, above n. 6.

    • 98 Gutiérrez, above n. 3.

    • 99 G. Gentile, ‘AI in the Courtroom and Judicial Independence: An EU Perspective’, EUIdeas (22 August 2022), https://euideas.eui.eu/2022/08/22/ai-in-the-courtroom-and-judicial-independence-an-eu-perspective/; A.K. Dhungel and E. Beute, ‘AI Systems in the Judiciary: Amicus Curiae? Interviews with Judges on Acceptance and Potential Use of Intelligent Algorithms’, 7 ECIS Proceedings 1-16 (2024).

    • 100 Malek, above n. 7.

    • 101 J. Morison and T. McInerney, ‘When Should a Computer Decide? Judicial Decision-making in the Age of Automation, Algorithms and Generative Artificial Intelligence’, in S. Turenne and M. Moussa (eds.), Research Handbook on Judging and the Judiciary (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4723280; E. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT 610-23 (2021).

    • 102 J.D. Gutiérrez, ‘AI Technologies in the Judiciary: Critical Appraisal of Large Language Models in Judicial Decision-Making’, in R. Paul, E. Carmel and J. Cobbe (eds.), Handbook on Public Policy and AI (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4667572.

    • 103 Yalcin, above n. 76.

    • 104 B.C. Das et al., ‘Security and Privacy Challenges of Large Language Models: A Survey’, (2024), https://arxiv.org/pdf/2402.00888.

    • 105 N.Z. Lee, ‘Exploring the Limits of Small Language Models’, UCB/EECS-2023-141 (2023), http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-141.html.

    • 106 Morison and Harkens, above n. 24.

    • 107 L. Denning, ‘The Function of the Judiciary in a Modern Democracy’, 16(4) Pakistan Horizon 299-305 (1962); B. Leiter, ‘The Roles of Judges in Democracies: A Realistic View’, 6(2) Social Science Research Network 346-75 (2017).

    • 108 M. Gómez, ‘The Contribution of Judges to Society’, 101(3) ARSP 332-53 (2015); A.R. Reeves, ‘Do Judges Have An Obligation to Enforce the Law?: Moral Responsibility and Judicial-Reasoning’, 29(2) Law And Philosophy 159-87 (2009).

    • 109 Morison and McInerney, above n. 101.

    • 110 G. Gentile, ‘Artificial Intelligence and the Crises of Judicial Power: (Not) Cutting the Gordian Knot?’, in G. De Gregorio et al. (eds.), Oxford Handbook on Digital Constitutionalism (2024 forthcoming), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4731231.

    • 111 M. Cerullo, ‘Texas Judge Bans Filings Solely Created by AI After ChatGPT Made Up Cases’, CBS News (2 June 2023), https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/.

    • 112 A. Martinho, ‘Surveying Judges About Artificial Intelligence: Profession, Judicial Adjudication, and Legal Principles’, AI & Society (2024), https://doi.org/10.1007/s00146-024-01869-4.

    • 113 K. Yeung and M. Lodge, Algorithmic Regulation: An Introduction (2019).

    • 114 L. Naudts, ‘Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making’ (PhD thesis KU Leuven, 2023).

    • 115 Albright, above n. 35.

    • 116 R. Simmons, ‘Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System’, 52 University of California Davis Law Review 1067 (2018).

    • 117 Gentile, above n. 110; D.T.K. Ng et al., ‘Conceptualizing AI Literacy: An Exploratory Review’, 2 Computers and Education: Artificial Intelligence 100041 (2021).

    • 118 Gutiérrez, above n. 102.

    • 119 B. Friedman and D.G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (2019).

    • 120 S. Umbrello and I. van de Poel, ‘Mapping Value Sensitive Design onto AI for Social Good Principles’, 1(3) AI and Ethics 283-96 (2021).

    • 121 M. Sadek et al., ‘Designing Value-sensitive AI: A Critical Review and Recommendations for Socio-technical Design Processes’, AI and Ethics 1-19 (2023), https://doi.org/10.1007/s43681-023-00373-7.

    • 122 M. Fink and M. Finck, ‘Reasoned A(I)dministration: Explanation Requirements in EU Law and the Automation of Public Administration’, 47(3) European Law Review 376-92 (2022).

    • 123 Binns, above n. 64.

    • 124 J. Dodge, Q. Vera Liao & R.K.E. Bellamy, ‘Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment’, (2019) https://arxiv.org/pdf/1901.07694.pdf.

    • 125 Binns, above n. 64.

    • 126 Johnson, above n. 75.

    • 127 D. Brughmans, L. Melis, & D. Martens, ‘Disagreement Amongst Counterfactual Explanations: How Transparency Can Be Misleading’, 32 TOP 429-462 (2024).

    • 128 S. Greenstein, ‘Preserving the Rule of Law in the Era of Artificial Intelligence (AI)’, 30 Artificial Intelligence and Law 291-323 (2022).

    • 129 Gutiérrez, above n. 102.

    • 130 J.D. Gutiérrez, ‘ChatGPT in Colombian Courts’, Verfassungsblog (23 February 2023), https://verfassungsblog.de/colombian-chatgpt/.
