Accessing the legal system is among the most expensive and time-consuming tasks of modern American life. Justice ought to be swift and blind. Yet, the reality is that the legal system is not a realistic solution to all legal disputes because not all people can afford the time and money it demands for success.1xM.J. Cartwright & D. Greiling, ‘Court-Connected Online Dispute Resolution: Outcomes from Family, Civil, and Traffic Cases in the United States’, International Journal of Online Dispute Resolution, Vol. 5, No. 1-2, 2018, pp. 4, 5.
Innovative American minds have considered a solution that parts of the world are already utilizing: a ‘robojudge’. For example, China has made itself a leader in the untried world of what might be called ‘automated adjudication’ – in several ways,2xX. Fang, ‘Recent Development of Internet Courts in China’ International Journal of Online Dispute Resolution, Vol. 5, No. 1-2, 2018, p. 49. one of which is the ‘robojudge’.3xProfessor Daniel Rainey, Lecture in Online Dispute Resolution at the University of the Pacific, McGeorge School of Law, 2 January 2020. A robojudge is essentially a computer that allows a litigant to input the facts of his case and his desired remedy, and then produces a binding legal judgment based on algorithms that apply existing law to the purported facts.4xIbid. For this discussion, automated adjudication will be limited to this model of the robojudge – a machine that takes in facts and produces a binding legal result based on artificial intelligence (AI) applying algorithms installed by the government.5xIbid. Here, I will use the term ‘robojudge’ synonymously with the term ‘automated adjudication’.
Robojudges demonstrate the use of AI in the courtroom. While there may be many potential benefits to reap from the efficiency and economy of a machine that is able to reduce caseloads and clear dockets, the arguments for these benefits rest on many tenuous assumptions. This article examines some of those assumptions and why they may be untrue, as well as some legitimate concerns about the legality of the technology.
1 Background: How Might Automated Adjudication Work?
Automated adjudication is an undeveloped field; there are therefore many possibilities as to which technologies would be used in it and how they would be used. Whatever technologies end up being used in the mainstream, and assuming that automated adjudication is widely implemented, there is no doubt that the field will change over time. Finally, because the author’s background is in law, not computer science, this discussion will be broad and will focus on the practicalities and legal implications of automated adjudication rather than on the specifics of the technologies at stake.
1.1 What Is Artificial Intelligence?
Artificial intelligence is a term that generally refers to the ability of computers to perform “mental tasks traditionally performed by humans”.6xD.E. Chamberlain, ‘Artificial Intelligence and The Practice of Law, or, Can a Computer Think Like a Lawyer?’, WL 10611682, 2016 Texas CLE Business Disputes, p. 25. It is a collection of various integrated technologies that produce “human-like responses and reasoning”.7xB. Lambrechts, ‘May It Please the Algorithm’, The Journal of the Kansas Bar Association, January 2020, pp. 36, 38. These technologies include deep learning, natural language processing and speech recognition.8xIbid. Among these, deep learning is primary.9xIbid. Deep learning is the ability of machines to gain information and ‘learn’ from a collection of data – similar to how humans learn from experience.10xIbid. An algorithm capable of deep learning performs a task “repeatedly, each time tweaking it a little” to improve the outcome toward the end of solving a problem.11xIbid.
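To make the ‘repeat and tweak’ description concrete, the following minimal Python sketch shows a program improving a guess by repeatedly measuring its error and nudging the guess slightly. The target value, step size and loop length are invented for illustration and are not drawn from any cited system; the point is only that improvement comes from many small, automatic adjustments rather than from understanding.

```python
# Minimal sketch of the 'repeat and tweak' loop behind machine learning.
# All numbers here are illustrative, not taken from any real system.
target = 42.0        # the answer the program is trying to reach
guess = 0.0          # the program's initial, uninformed output
step_size = 0.1      # how strongly each pass corrects the guess

for _ in range(100):
    error = guess - target        # measure how wrong the current guess is
    guess -= step_size * error    # tweak the guess a little toward the target

print(round(guess, 2))  # after many small adjustments, the guess sits near 42.0
```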
Between 2017 and 2018, Congress statutorily defined AI as an
artificial system … that performs tasks under varying … circumstances without significant human oversight, or that can learn from experience; … [or] that solves tasks requiring human-like perception, cognition, planning, learning, communication or physical action….12xThe John S. McCain National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232), Section 238, as quoted in Lambrechts, 2020, pp. 36, 39.
1.2 The Highest Form of AI – Deep Learning
According to the above descriptions of AI, it can solve various problems and improve its own abilities with new information. At its best, AI is capable of decision-making via “deep learning”.13xLambrechts, 2020, pp. 36, 38. While it is obvious that AI has not yet risen to equality with all human capacities,14xChamberlain, 2016, p. 1 (referencing the robot from the film “2001: A Space Odyssey,” which develops mental illness and becomes homicidal). some are optimistic that its potential is limitless. Ray Kurzweil, director of engineering at Google, believes that by 2045, AI will surpass human intelligence – with the capability of capturing a human person’s “entire personality, memory, skills and history”.15xLambrechts, 2020, pp. 36, 38. The question is – is Kurzweil correct? Could AI surpass human abilities in the future? Will it? Probably not.
Contemporary philosopher John Searle has a famous argument against the possibility of computers surpassing humans in intelligence. Searle presented his argument in what has come to be known as “The Chinese Room” analogy:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding… Searle argues that … computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion … is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes.16xAvailable at: https://plato.stanford.edu/entries/chinese-room/.
1.3 Algorithms: The Means to AI’s End
Assuming Searle is right about AI possessing a different kind of capability than human intelligence, let us turn now to the basic building block of modern AI – the algorithm. Regardless of AI’s potential, the reality is that AI would utilize what we have come to know as ‘algorithms’ in order to make decisions in any context – including the courtroom. An algorithm is “… a set of instructions for solving a problem…”.17x‘Comment: When Is An Algorithm Invented? The Need for a New Paradigm for Evaluating An Algorithm for Intellectual Property Protection’, Albany Law Journal of Science and Technology, Vol. 15, pp. 579, 581. An algorithm is the basic unit that allows for AI to receive a set of facts and produce an answer. In AI, algorithms are rules input by human designers that allow computers to produce automatic, consistent results.
In understanding an algorithm as a set of instructions, it is apparent that AI requires some entity other than the computer to give the computer that set of instructions. The reason for the ‘A’ in ‘AI’ is that some person outside the machine is, in essence, needed to tell the machine how to behave because the machine has no will of its own. In the context of the judicial system, the question is – who should that person be? Who has the legal authority to control the programming of robojudges in an American court?
1.4 The American System: Rule of Law
“This Constitution … shall be the supreme Law of the Land…”.18xUSCS Const. Art. VI, Cl2. An analysis of appropriate solutions for overcrowded, inefficient courts is unique in the American context. Solutions that may be fitting for other countries simply may not be compatible with America’s highest law.
The fundamental reason that America’s system is unique is that it is based on the concept of rule of law. Rule of law is often described as what John Adams, quoting James Harrington, called “a government of laws, not of men”.19xAvailable at: https://claremontreviewofbooks.com/a-government-of-laws-not-of-men/. The first of those laws is the Constitution.
Rule of law is essential to a free society. Retired Supreme Court Justice Anthony Kennedy gave a series of academic lectures in China and challenged himself to define the idea of rule of law for the Chinese people.20xHon. A.M. Kennedy, Lecture on Freedom of Expression in Eur. and the U.S. at the University of the Pacific, McGeorge School of Law Salzburg Study Abroad Program, 3 July 2018 [hereinafter “Justice Kennedy’s Lecture”]. Justice Kennedy derived a unique definition of rule of law that avoids clichés.21xIbid. Parts of his definition provide a framework for evaluating the idea of automated adjudication.22xIbid.
Justice Kennedy’s definition of rule of law requires, first, that “[t]he Law must devise and maintain systems to advise all persons of their rights…” (emphasis added).23xIbid. Therefore, a key element of rule of law is justice for “all persons”.24xIbid. As I mentioned at the beginning of this discussion, the cost of justice is currently a barrier to many. In fact, many people may not seek resolution of their legal disputes because they cannot afford the cost. Elements of the high price tag include costly legal representation and the sacrifice of time away from work, to name a few. Many people do not obtain justice because it is simply too expensive to litigate. Kennedy’s definition of rule of law requires that justice be available to all. Thus, making the legal system more affordable should arguably be a goal of any society that aspires to uphold rule of law.
1.5 AI in the American Courtroom?
The idea of AI in the courtroom is repugnant to many Americans’ idea of justice. Our justice system is based on due process, inalienable rights, ceremony and discernment. Substituting a computer for traditional legal officers may seem revolutionary and cheap. However, in dismissing the idea of a robojudge, is the judicial system overlooking an advantageous opportunity? Proponents of AI in the courtroom believe it may have the potential to increase access to justice by making the justice system less costly.25xB. Toy-Cronin, et al., ‘Testing the Promise of Access to Justice through Online Courts’, International Journal of Online Dispute Resolution. Vol. 5, No. 1-2, 2018, pp. 39, 40. Is this assumption correct?
Hypothetically, robojudges would be cheaper to utilize because, unlike human judges, they do not require a salary. Additionally, their hypothetical ability to immediately calculate and apply the law would enable them to provide swift justice to litigants. Theoretically, the swifter the justice, the more accessible it is, because parties would likely be able to obtain compensation for their grievances more quickly. Instead of suffering a wrong for a prolonged period, parties could have a court order in minutes. Therefore, robojudges might be able to increase access to justice and might seriously be considered as a new route for resolving legal disputes. Of course, all this assumes that enforceability of judgments is never an issue.
However, enforceability of judgments is not the only assumption that optimism for automated adjudication rests on. Proponents of the technology are quick to brush past the questions that make opponents of it uneasy. Is the justice system so bad that it should be overhauled by something as untried as a machine? Is the country justified in giving up on human judges? Is a robojudge even free from human flaws and human influence – or is it merely a tool in the hands of the human programmer? Are we essentially swapping our judges for computer programmers? These are the questions that I hope the courts will try to answer honestly before proceeding to implement such a radical method as automated adjudication.
2 Weak Assumptions in the Case for Automated Adjudication
In order to honestly evaluate the idea of a robojudge in our courts, it is important to recognize the many assumptions that proponents of automated adjudication are relying on. Robojudges may be an aid to justice, after all – or they may be a hindrance. They may even defeat the purposes for which they are installed.
2.1 Assumption: Automated Adjudication Would Make Machines, Rather Than Humans, Responsible for Judicial Decisions
The first assumption underlying the argument for automated adjudication is that AI possesses something like a mind of its own and could therefore somehow accept responsibility for judicial decisions, in lieu of its creators, controllers and programmers being blamed for those decisions. This crucial idea rests on yet another assumption: that the human mind can be replicated in a computer, without human flaws. But it cannot be. Creations of the human mind remain the property of their creators. Like the legal fiction of the ‘personhood’ of corporations, the ‘personhood’ of AI is also a fiction.
[E]ven corporations are reducible to relations between the persons who own stock in them, manage them, and so forth. Thus, calling a legal person a “person” involved a fiction unless the entity possessed “intelligence” and “will.” Those attributes are part of what is at issue in the debate over the possibility of AI.26xL.B. Solum, ‘Legal Personhood for Artificial Intelligences’, North Carolina Law Review, Vol. 70, 1992, pp. 1231, 1239-1240.
Human intelligence and will are necessary for an entity to truly function like a human, and therefore, robojudges are not comparable to human judges. The reasons for AI’s inferiority to human intelligence will be discussed throughout the rest of this article.
2.2 Assumption: Automated Adjudication Would Decrease or Eliminate Unfair Biases
Proponents of automated adjudication assume that AI does not possess inherent biases, as humans do. A timely example is the observation that, in recent years, certain news outlets have been disproportionately hidden from mainstream platforms’ results based on their political affiliations. For example, Facebook’s Mark Zuckerberg is reported to have written in 2018 that Facebook ought to
favor content that is “broadly trusted.” How does Facebook determine whether a source is “broadly trusted”? They ask users if they are familiar with a news source and then whether they trust that news source.27xAvailable at: www.nationalreview.com/2018/03/social-media-companies-discriminate-against-conservatives/.
The problem is that the politics of Facebook users tend to veer more to the left, so there will be a disproportionate number of “trusted” left-leaning news sources, as opposed to right-leaning sources, on Facebook. The result is that the algorithms that put Facebook’s policy into action produce effects that reflect a bias towards one side of the political aisle. The Facebook example reflects the fact that human biases are inevitably reflected in the results of manmade algorithms.28xIbid.
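To see how an ostensibly neutral rule can carry a skew, consider a hypothetical sketch of the ‘broadly trusted’ test described above. The outlet names and survey votes below are invented, and the snippet is not Facebook’s actual code; it only illustrates that averaging trust votes reproduces whatever lean exists in the population surveyed.

```python
# Hypothetical 'broadly trusted' rule: an outlet is promoted if a majority of
# surveyed users say they trust it. Outlets and votes are invented examples.
survey_votes = {
    "outlet_left":  [1, 1, 1, 1, 0, 1, 1, 0],  # 1 = trusts the outlet, 0 = does not
    "outlet_right": [0, 0, 1, 0, 1, 0, 0, 1],
}

def broadly_trusted(outlet, threshold=0.5):
    votes = survey_votes[outlet]
    return sum(votes) / len(votes) >= threshold

for name in survey_votes:
    status = "promoted" if broadly_trusted(name) else "demoted"
    print(name, status)
# If the surveyed users lean one way politically, the 'neutral' rule does too.
```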
In fact, the lack of impartiality in algorithms might be hurting both sides of the political spectrum. Organizations such as the American Civil Liberties Union,29xAvailable at: https://www.technologyreview.com/2017/07/12/150510/biased-algorithms-are-everywhere-and-no-one-seems-to-care/. the Washington Post,30xAvailable at: www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/. and Wired,31xAvailable at: www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/. have raised concerns about algorithmic biases involving minorities, lower economic classes and gender stereotypes. Regardless of who the victims of biased algorithms are, the point must not be ignored: algorithms are biased because they are created by people, and people are biased.
2.3 Assumption: Automated Adjudication Would Decrease or Eliminate Judicial Corruption
Proponents of automated adjudication assume that judicial corruption can be avoided or minimized merely by the replacement of human judges with AI. This assumption can be disproved with the same logic that applies to any other sort of bias. If AI is biased because it is programmed by biased humans, then those who program AI are just as susceptible to bribes as human judges are. If the programmers of AI accept bribes, then the automated adjudicators they program would, as a result, make decisions rooted in corruption.
2.4 Assumption: Automated Adjudication Does Not, and Should Not, Exercise Human Values in Decision-making
While condemning human susceptibilities such as bias and corruption, the campaign for AI in the courtroom ignores positive human traits that AI lacks. Even if automated adjudicators were shown to be unbiased and not corrupt, they still lack the human faculty of conscience, and thus would not necessarily be better at delivering justice than humans.
Morality is part of justice, and is accomplished, in part, by use of a conscience. If algorithms possess human bias, they also possess human morals because morals are a type of bias – perhaps what we may call an acceptable form of bias. Automated adjudicators possess the morals of those who have programmed them, and this is not necessarily a bad thing.
Proponents of automated adjudication overlook this. They assume that automated adjudication possesses neither bias nor human values. The assumed absence of bias is viewed as a positive aspect, and although the assumption is not true, freedom from bias would indeed be desirable. However, assuming that robojudges would have no human values, and viewing that as a positive aspect of automated adjudication, is incorrect for two reasons.
2.4.1 Assumption: Automated Adjudication Would Not Exercise Human Values
First, as mentioned above, the logic showing that automated adjudication would possess biases also shows that automated adjudication would reflect the human values of those who design its algorithms. Therefore, it is likely that algorithms reflect certain human values. The dilemma lies in the question of whose values should be reflected in a robojudge. Injustice will occur if the right party is not in control of the programming. More on this in Section 3 below.
Even if the person creating the algorithm is constitutionally authorized to create it and to program the robojudge (more on the constitutionally correct assignment of responsibility for courtroom decisions below), there are still issues. For starters, if it is the judge programming the machine, then each judge must program the machine in his own courtroom, because there will likely be issues of legal interpretation in each case that vary between judges. So, in many cases, a one-algorithm-fits-all process would be inappropriate. If it is instead the legislature programming the machines, all of Congress would have to agree, through the bicameral process, on the language of every aspect of every algorithm in a program (this will be explained further below).
2.4.2 Assumption: It Is a Positive Thing to Have an Automated Adjudicator That Does Not Exercise Human Values
It is incorrect to assume that it is a positive aspect of automated adjudication to not possess human values because human values are necessary to make courtroom decisions. The problem lies in assigning the responsibility for those decisions to a robot which has been programmed by someone who may not have the legal authority to do what is essentially the job of a judge or jury member – deciding factuality or applying law.
It is incorrect to assume that human values are not necessary or valuable to the process of courtroom decision-making. To illustrate, substitute a battlefield for the courtroom scene. The United States Department of Defense (DOD) has recognized that human morality and compassion are necessary in decision-making in life-or-death military situations – even where AI has a tremendous opportunity for increasing efficiency and economy. In a directive from 2017, the DOD provided that autonomous and semi-autonomous weapon systems “are to be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”, precluding the development of fully autonomous weapons systems. In other words, AI should be a supplement to human labour – not a replacement for it. In opposition to fully autonomous weapons systems, Vice Chairman of the Joint Chiefs of Staff General Paul Selva said, “…we take our values to war …”.32xLambrechts, 2020, at 36, 39. The bottom line: efficiency and economy should not be valued over morals, especially justice.
Whether a gavel or a gun is signalling a crucial decision, why is it that human values matter?
The standard for autonomous weapon systems’ compliance with the laws of war should arguably not be whether they are able to make unflawed decisions, but whether they are able to follow the principles of proportionality, military necessity and distinction, at least as well as human operators …. Opponents [of autonomous weapons’ systems] also argue that human compassion and other emotions are necessary to ethical war-fighting. Human empathy, some argue, helps soldiers to assess the objectives of potential human targets to discern whether they really pose a threat. Machines may possibly never be programmable to effectively emulate empathy. “On the other hand, proponents of such systems argue that human emotions – fear, anger and the instinct for self-preservation – may lead to adverse consequences on the battlefield. Robots, they posit, may not be subject to human errors or unlawful behavior induced by human emotions”.33xIbid., at 36, 39-40.
The same author further points out that over-reliance on automated conclusions could lead to the phenomenon of “automation bias”, which “can lead to a psychological detachment from the consequences of the delivery of weapons systems and make killing too remote for soldiers”.34xIbid.
Utilization of AI in the courtroom is open to risks similar to those inherent in AI’s use on the battlefield. What is mainly at risk are the most sacred human possessions – life, liberty and property. Human values and moral sensibilities are at risk of being undervalued through the cold calculations of a machine programmed by someone who, presumably, will not be held accountable for his ‘decisions’ because the machine will be blamed instead. Further, AI in the courtroom allows the human programmers to feel very removed from the decision-making because they do not personally interact with the people who are affected by it, and thus the programmers are less likely to use appropriate levels of empathy and compassion in their work. Robots, no matter what algorithms they operate under, do not possess a human conscience that affects every decision individually, and that is a weakness of robots.35xAvailable at: www.ethicsforge.cc/robojudge-is-the-devil-in-the-data/.
2.5 Assumption: Automated Adjudication Would Be More Efficient Than Human Judges
Another assumption about the use of AI in the courtroom is that it will necessarily be more efficient than humans. Supporters assume that it will be more efficient to program algorithms to make judicial decisions than it is to have judges make case-by-case decisions.
The problem with the assumption of efficiency is that it focuses on the actual decision-making process for each litigant while discounting the time and cost of programming the robojudges. Unlike humans, who rely on consciences and values, in addition to legal precedent, to make judicial decisions, robojudges would theoretically rely on formulas that anticipate factual scenarios and prescribe a certain outcome accordingly. It is value-based reasoning only to the extent that the programmer could have anticipated a case with similar facts and programmed the machine with that in mind. It is formulaic. If the factual circumstances of each unique case are not anticipated and pre-programmed ahead of time, the decision could be wildly offensive to justice.
The challenge with the need to pre-program and anticipate all sorts of factual scenarios is that it would be extremely time-consuming, not to mention that it is likely impossible.
You can’t possibly have exponential consequences with exponential responsibility unless you have an exponential amount of human thought to be dedicated to those challenges … And you can’t put that just into an algorithm.36xwww.nbcnews.com/tech/social-media/algorithms-take-over-youtube-s-recommendations-dhighlight-human-problem-n867596.
And there is not an exponential amount of human thought that could be dedicated to predicting all factual scenarios. This is why, in the law, there is the concept of the ‘next case’.
The ‘next case’ is a term used to describe a factual issue in court which has no square legal precedent. In fact, the job of lawyers is to take the unique, unprecedented facts of their clients’ cases and argue that certain laws are applicable because the clients’ cases are legally analogous to the facts of certain prior cases. It is then the job of the judge to essentially decide which lawyer’s arguments are correct. It is rare to have a case that matches the legal precedent exactly. Therefore, judges decide which analogies are legally sufficient and which are not. How do they do this? Values, morals, knowledge of human life, their own human experience and most importantly, abstract reasoning applied to the new situations before them. What human judges cannot do, however, is predict each case’s facts beforehand and make a decision ahead of time. That is the impossible task we are asking of robojudges when we try to employ them as judges.
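A minimal sketch makes the ‘next case’ problem visible. Assume, purely for illustration, a rule table keyed on fact patterns the programmer happened to anticipate; the fact labels and outcomes below are invented and track no real statute. A human judge confronted with unlisted facts reasons by analogy, whereas the table can only report that no rule exists.

```python
# Hypothetical pre-programmed rule table: outcomes exist only for fact
# patterns anticipated in advance. Fact labels and outcomes are invented.
rule_table = {
    ("written_contract", "late_delivery", "no_excuse"):     "judgment for buyer",
    ("written_contract", "late_delivery", "force_majeure"): "judgment for seller",
}

def robojudge(facts):
    if facts in rule_table:
        return rule_table[facts]
    # The 'next case': facts nobody anticipated when the program was written.
    return "NO PROGRAMMED RULE - outcome undefined"

print(robojudge(("written_contract", "late_delivery", "force_majeure")))
print(robojudge(("oral_contract", "late_delivery", "pandemic_closure")))  # the 'next case'
```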
There is no replacement for the flexibility of the human conscience. Even assuming that it is feasible to create infinite programming to predict all cases, the resulting case decisions still might not make the cut for human ideas of justice and equity. ‘Wrong’ judicial decisions at the trial court level will thus be at risk of being fought and overturned in appellate courts, defeating the entire purpose of the ‘efficiency’ of the robojudge idea. And even robojudges that always make ‘right’ decisions will always be susceptible to hacking and computer viruses, like all computer programs, further lowering the robojudge’s rate of efficiency and the level of economy.
3 The Constitutional Case against Automated Adjudication
Even if all of the optimistic assumptions about the use of automated adjudication are true, there are still major legal reasons for Americans to hesitate before utilizing it in the courts. Those reasons are not small – they concern two of the most crucial principles required by the Constitution: separation of powers and due process. Whether automated adjudication violates these principles depends on how we classify the programming of the automated adjudication systems. The programming, which would in great part consist of the creation of algorithms to decide each aspect of a case, could theoretically be viewed in one of two ways – as lawmaking or as law interpretation.
If creating algorithms to program robojudges is lawmaking, the separation of powers doctrine is violated because that doctrine, prescribed by the Constitution, requires that only the legislative branch create new laws. On the other hand, if we view the creation of algorithms to program robojudges as not the creation of law but as the interpretation of law, there is no separation of powers issue because the courts’ role is inarguably to interpret and apply the laws that are already in existence.
However, even if the job of programming robojudges poses no separation of powers problem because we conclude that it is purely legal interpretation rather than legislation, the very fact that we are replacing a human judge with a machine raises the important question of whether the Constitution’s requirements for due process are violated. Proponents of robojudge-type systems should tread very carefully: this is a potentially radical legal proposal, and the law must be studied to discover whether robojudges are constitutionally permitted.
3.1 Legal Background: Separation of Powers
In the United States, the separation of powers doctrine requires that Congress do the lawmaking.37xU.S. Const. Art. I, § 1. The reason that separation of powers requires Congress to control lawmaking is that Congress is elected by the people, and therefore is accountable to the people.38xSee Roper v. Simmons, 543 U.S. 551, 616 (2005) (O’Connor, J., dissenting) (explaining “[t]he reason for insistence on legislative primacy is obvious and fundamental: ‘[I]n a democratic society legislatures, not courts, are constituted to respond to the will and consequently the moral values of the people (citation omitted)’”). If, therefore, Congress creates laws that the citizenry does not support, Congress will be held accountable by the people and removed from office through the ballot box. Robojudge programming raises the questions of who can, and who should, be held accountable for incorrect judicial decisions.39xIbid. at 57.
3.1.1 Programming Automated Adjudication May Be Viewed as Legislation
If programming a robojudge is legislation, and any party besides the legislative branch programmes the robojudges, the separation of powers doctrine will be violated. Violating separation of powers transgresses the Supreme Law of the land, thus violating rule of law.40xIbid. As mentioned above, rule of law is essential to access to justice because justice requires that all decisions “uphold the rule of law”.41xJustice Kennedy’s Lecture, supra note 20. Proponents of automated adjudication often claim that its main benefit is the increase of access to justice. However, if automated adjudication decreases rule of law – one goal of which is to have justice for all people42xIbid. – then automated adjudication defeats its own purpose and is not a viable alternative to traditional court.
Viewing the job of the programmer as lawmaking assumes that existing statutes and common law rules are not available in a format that is readily conducive to translation to algorithms. For example, take the common law rule against theft. Theft is the trespassory capture and asportation of the personal property of another with the intent to steal.43xLee v. State, 59 Md. App. 28, 32, 474 A.2d 537, 539 (1984). This rule contains many elements, each of which must be proven to convict a person. Even though the definition is a simple English sentence, it likely has too many nuances to be contained in a single algorithm. Therefore, it would have to be broken down and made into many algorithms in order to be applicable to a set of facts in a theft case. Furthermore, each element would have to be broken down into different formulas that presented all the currently imaginable factual scenarios in which that element would apply. For example, the ‘capture’ element would require various formulas. Some different scenarios that include the ‘capture’ of goods would include, at minimum, where one physically picks up an item with one’s hands, where one hires another to collect an item for him, where one uses some sort of machinery to retrieve an item, and even where one uses digital technology to collect funds or something else of value. In other words, the one rule against theft would have to be made into many, many formulas in order to be useful to resolve a criminal theft case. The programmer’s job would be to create rules that are specific and simple enough to be translated into algorithms.
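To illustrate the scale of that decomposition, the sketch below breaks the theft rule into element-by-element checks over invented fact labels. It is a hypothetical simplification, not a proposal for how a real robojudge should be built; note that any ‘capture’ scenario the programmer fails to list is simply invisible to the machine.

```python
# Hypothetical decomposition of the common-law theft rule into element checks.
# Fact labels and the scenario list are invented for illustration only.
CAPTURE_SCENARIOS = {
    "picked_up_by_hand",
    "taken_by_hired_agent",
    "retrieved_with_machinery",
    "collected_by_digital_transfer",
}

def element_capture(facts):
    return facts.get("capture_method") in CAPTURE_SCENARIOS  # unlisted methods fail silently

def element_asportation(facts):
    return facts.get("item_carried_away") is True

def element_property_of_another(facts):
    return facts.get("owner") != facts.get("defendant")

def element_intent_to_steal(facts):
    return facts.get("intent") == "permanently_deprive"

def theft_established(facts):
    # Every element must be proven for the rule to apply.
    checks = (element_capture, element_asportation,
              element_property_of_another, element_intent_to_steal)
    return all(check(facts) for check in checks)

example = {
    "capture_method": "picked_up_by_hand",
    "item_carried_away": True,
    "owner": "victim",
    "defendant": "accused",
    "intent": "permanently_deprive",
}
print(theft_established(example))  # True only because every element was anticipated
```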
Creating algorithms that dictate litigation outcomes may be viewed as lawmaking because it is prescribing specific results for specific fact patterns, just like law does, via algorithms. The results produced by the algorithms would be binding and would affect people’s lives. Therefore, algorithms may be seen as a set of laws.
Given that computer programming ability is required to create these algorithms, the programmers will likely be people other than judges or clerks. Judges and clerks are learned in the law and appointed or elected for their legal credentials, not for their expertise in computer science. Based on the amount of labour, time and experience required to ascend to the position of judge,44xSee Alexander Hamilton, Federalist No. 78 (stating, “…[I]t will readily be conceived from the variety of controversies which grow out of the folly and wickedness of mankind, that the records of [legal] precedents must unavoidably swell to a very considerable bulk, and must demand long and laborious study to acquire a competent knowledge of them. Hence it is, that there can be but few men in the society who will have sufficient skill in the laws to qualify them for the stations of judges. And making the proper deductions for the ordinary depravity of human nature, the number must be still smaller of those who unite the requisite integrity with the requisite knowledge…”.). it is unrealistic to expect to be able to find judges that are also skilled in computer science.
3.1.1.1 Programmers Could Likely Control Automated Adjudication Because of Their Expertise
The most likely scenario concerning the implementation of automated adjudication is that judges or other local legal experts would work with computer science professionals to translate rules of law into rules of computer application. In this scenario, viewing programming as legislation, we would have a system of courts where the judiciary performed both its role (interpretation of the law) and the role of the legislature (creation of law) in its implementation of automated adjudication. The question is, who is to take the blame when a machine produces a legally incorrect ruling?
Algorithms could produce a wrong result in various circumstances. One instance is where an algorithm is too narrow to include all of the possible factual circumstances in which a particular result would apply. Take the example of an online seller-purchaser dispute in which the parties disagree over the interpretation of the sales contract. The applicable statute provides that in disputes over contract interpretation between online sellers of a certain size and online retail purchasers, the purchaser’s interpretation controls. The parties do not dispute that the statute is applicable to the facts. The seller meets the size requirement of the statute, and the parties agree that the purchaser was a retail purchaser. The parties do not dispute any material facts.
In this example, the algorithm is set up for the purchaser to prevail, but only where the purchase is from a specific list of vendors that the programmer considers to be ‘retailers’. Congress has not defined what a ‘retail’ purchase is; therefore, the programmer has set up the algorithm according to his own personal definition of ‘retail purchase’. If Congress would have defined ‘retail purchase’ differently, then the programmer of the robojudge has effectively created a law that decides the outcome of the case, potentially to the disadvantage of the purchaser.
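A short sketch shows how the programmer’s unlegislated definition decides the case. The vendor list and function below are hypothetical illustrations of the example above, not a real statute or system; the decisive step is the hard-coded notion of a ‘retail’ purchase that Congress never supplied.

```python
# The statute never defines 'retail purchase', so the programmer's hard-coded
# vendor list supplies the definition. Vendor names are invented examples.
RETAIL_VENDORS = {"vendor_a", "vendor_b"}  # the programmer's personal notion of 'retail'

def which_interpretation_controls(seller, seller_meets_size):
    is_retail_purchase = seller in RETAIL_VENDORS  # programmer-made rule, not congressional text
    if seller_meets_size and is_retail_purchase:
        return "purchaser's interpretation controls"
    return "seller's interpretation controls"

# A purchaser who bought from an unlisted vendor loses the statute's benefit,
# even though Congress never said any list was exhaustive.
print(which_interpretation_controls("vendor_c", seller_meets_size=True))
```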
Congress and the state legislatures do not necessarily provide the precision in statutes that algorithms require to make an adjudicative decision. Furthermore, case law does not provide the kind of exactness an algorithm would require to judge a case, nor is it exhaustive in its examples of possible factual scenarios in which certain rulings apply. As mentioned above, arguing whether a law applies to a new set of facts is the job of lawyers.
In the interest of efficiency, courts may be tempted to circumvent the constitutional lawmaking process, in which the legislative branch creates the laws, and to assign the writing of algorithms to computer engineers. However, efficiency and lowering the cost of adjudication are not sufficient justifications for overturning the constitutional process. If Congress has not been specific enough to provide rules for robojudge programming, then Congress must be more precise. The solution is not to pass the buck to unelected, unaccountable court employees, including the highest of those employees – the judges. There must not be “a trade-off between the goals of efficiency (i.e. access) and fairness (i.e. justice)”, making “the kind of justice [delivered] to a weaker party … of lower quality”.45xP. Cortes, ‘Using Technology and ADR Methods to Enhance Access to Justice’, International Journal of Online Dispute Resolution. Vol. 5, No. 1-2, 2018, pp. 102, 111.
3.1.1.2 Congress Would Likely Not Control Automated Adjudication Based on Its History: The Administrative State
Here, it may be noted that assigning to Congress the job of creating rules for robojudge algorithms would not likely be well-received by the legislative branch because it is a huge responsibility and an undeveloped area of law. Further, it would require technical assistance of programmers and would be initially costly. A good faith attempt to adhere to the separation of powers doctrine of the Constitution by keeping the rulemaking tasks with Congress might be a bit much to expect in the real world, where Congress can’t even create an environmental regulation without the giant helping hands of the EPA.46xU.S. Const. Art. I, § 1.
The sheer difficulty of having Congress complete the task of making laws specific enough to be immediately implemented in a robojudge is enough to suggest that such a task is not feasible. In fact, the discussion of who should be in control of robojudges brings up the question of why we need people to control them at all. If AI really is a form of intelligence, why are humans necessary to implement and sustain it? The answer is that AI is not really intelligence; it is just an advanced collection of technology that can, in some contexts, do some tasks that humans can do, much more quickly than most humans can do them.
Take, for example, self-driving cars. [They] have cameras on them, and one of the things that they’re trying to do is collect a bunch of data by driving around. It turns out, there is an army of people who are taking the video inputs from this data and then just tracing out where the other cars are—where the lane markers are as well … [T]he funny thing is, we talk about these AI systems automating what people do. In fact, [AI systems are] generating a whole bunch of manual labor for people to do47xAvailable at: www.mckinsey.com/featured-insights/artificial-intelligence/the-real-world-potential-and-limitations-of-artificial-intelligence. (emphasis added).
Even if Congress is realistically capable of completing the task of programming robojudge systems, Congress is unlikely to accept such a responsibility. We can assume this based on Congress’s record of shifting responsibility when the task becomes tedious. The most pernicious example of this buck-passing is the current administrative state.
3.1.2 History of Administrative Law
Since 1825, the Supreme Court has given varying levels of approval to the practice of Congress ‘delegating’ its lawmaking power to what we have come to know as ‘administrative agencies’.48xAvailable at: www.law.cornell.edu/constitution-conan/article-1/section-1/delegation-of-legislative-power#fn57art1. The delegation of power comes in the form of a statute, which prescribes guidelines for the extent of the delegation of power to the agency.49xAvailable at: www.heritage.org/political-process/report/administrative-state-constitutional-government. Within those bounds, the agency has the power to create rules that will have the force of law on the public.50xSupra note 48. These laws are classified as ‘regulations’.51xIbid.
Legislative delegation is a fiercely controversial subject. Legal scholars are divided over whether the Constitution permits Congress’s delegation of legislative tasks. This article takes the position that delegation is not permissible under the Constitution. Delegation is an illicit product of the High Bench’s creativity.52xSupra note 49. In fact, the Supreme Court has consistently patted itself on the back for its permission of delegation: since 1935, it has not declared unconstitutional any congressional act of delegation.53xSupra note 48. The Executive Branch’s power has since grown as could be expected, with the repeated stamp of approval of the Supreme Court.54xIbid.
As of 2009, the federal government employed 2.7 million civil servants.55xSupra note 49. Only about 2,500 of those employees were political appointees.56xIbid. In other words, the executive branch employs nearly 2.7 million unelected bureaucrats.
3.1.3 The Administrative Rulemaking Process
Administrative agencies create laws according to the process prescribed by the Administrative Procedure Act (APA).57xAvailable at: www.federalregister.gov/uploads/2011/01/the_rulemaking_process.pdf. The APA has certain requirements geared toward keeping the rulemaking process public and open to comment from concerned citizens, but this level of accountability is nowhere near the accountability that the ballot box commands over Congress because the public does not control who works for these agencies and who does not.
3.1.4 Administrative Lawmaking Is the Majority of Federal Lawmaking
Regardless of one’s view on the constitutionality of legislative delegation, it accounts for most of the lawmaking that the federal government accomplishes. As mentioned above, the common justifications for delegation are efficiency and expertise. Efficiency, because rulemaking is accomplished much more quickly outside the constitutional structure of bicameralism and presentment than within it. The structure’s sluggishness is the reason for the common complaint from members of both political sides – that ‘Congress gets nothing done’.
When citizens complain about the inefficacy of Congress, they are overlooking the fact that the slowness of Congress is part of its design. The House and the Senate were designed to keep each other balanced so that the whole of Congress would not be overly powerful or tyrannical.58xJames Madison, Federalist No. 51., “Checks and Balances”, in Benjamin F. Wright (Ed.), The Federalist, Barnes and Noble Books, USA, 2004, p. 355. The design of Congress evidences that efficiency and economy should not always be government’s highest values. The judiciary, like the legislature, should not be made capable of doing things as quickly and cheaply as possible at the cost of justice.
3.2 Legal Background: Due Process
The Constitution prescribes ‘due process of law’ in both the Fifth and Fourteenth amendments. There are differing views of the extent of this constitutional promise, but one Supreme Court justice has described the entirety of it as a “substantive guarantee against ‘unfairness’.”59xAvailable at: www.cato-unbound.org/2012/02/06/timothy-sandefur/why-substantive-due-process-makes-sense. Due process is “best seen as a pledge against arbitrary or unauthorized government action”.60xIbid.
Due process is typically divided into two kinds – procedural and substantive. In the courtroom, procedural due process is very important. It is the reason that trial by a jury of one’s peers, notice, opportunity to be heard, discovery, cross-examination, access to counsel and other American legal traditions are elements of our litigatory procedures. Substantive due process, instead of dealing with a just process of applying the laws, deals with the fairness of the laws themselves. “Without a substantive guarantee, a coin toss would suffice as a trial”.61xIbid.
If we view the job of robojudge programming as the interpretation of law, due process will be violated in multiple ways. Certain elements of due process are specified in the Constitution, such as the right to a jury trial for criminal proceedings, the right not to incriminate oneself and the protection against unreasonable search and seizure. However, many due process rights are not specified in the Constitution.62xU.S. Constitution. In Mathews v. Eldridge, 424 U.S. 319, the Supreme Court provided a test with which to determine what measures of due process were required for various proceedings. The Mathews case revealed that there is no single set of requirements for due process across different venues.63xMathews v. Eldridge, 424 U.S. 319. For example, different elements are due for an administrative proceeding than for a court case. Additionally, there are fewer procedural requirements for civil litigation than for a criminal prosecution. However, “The core of these requirements is notice and a hearing before an impartial tribunal”.64xAvailable at: www.law.cornell.edu/constitution-conan/amendment-14/section-1/procedural-due-process-civil. In the discussion about automated adjudication, all of these elements – notice, hearing and impartiality – are at issue.
We have already discussed the issue of impartiality at length, so let us begin here with the hearing element. Is the hearing required to be with a person? The question seems basic, and yet automated adjudication answers it in the negative. The idea of having a hearing before a non-person seems unreal. We could write a long essay on the differences between humans and robots (which are already touched on above), or we could simply focus on the issue of credibility.
The ability to judge credibility is a huge vulnerability with automated adjudication. In jury trials, the job of the jury is that of fact-finder, while the judge decides the applicable law. In a non-jury trial, the judge does both the job of fact-finding and of deciding which law applies. The job of fact-finding is essentially the job of determining the credibility of evidence and of intuiting the parts of the story for which there is no evidence.
Determining credibility is a job that does not lend itself to a formula.65xAvailable at: www.cod.uscourts.gov/Portals/0/Documents/Judges/JLK/Judging_Credibility_LITMAG_Spring07_kane.pdf. at 31. Rather, it is a unique, intuitive process that requires knowledge of human nature and experience based on one’s own nature and experience, and knowledge of the nature and experience of others. In other words, it requires both empathy and experience. While perhaps experience can be quantified and input as an algorithm, it is limited to the experiences known to the person who writes the program for the machine. This means that the perspective of the person who controls the algorithm in a non-jury automated trial will be disproportionately represented in every robojudge he programmes. The implication is that if he programmes machines in various courtrooms with the same algorithms, those courtrooms will essentially be ruled by the same judge, whereas with human judges they would be ruled by different judges. Empathy, however, is more difficult to quantify, as sources in the U.S. military acknowledged, above.66xLambrechts, 2020, at 36, 39.
Furthermore, robots are notoriously bad at processing and utilizing information about the real world. This is because, first, robots have shown themselves to have serious difficulties ‘understanding’ physics.67xAvailable at: www.wired.com/story/ai-smart-cant-grasp-cause-effect/?itm_campaign=BottomRelatedStories_Sections_2&itm_content=footer-recirc.
Industrial robots can increasingly sense nearby objects, in order to grasp or move them. But they don’t know that hitting something will cause it to fall over or break unless they’ve been specifically programmed—and it’s impossible to predict every possible scenario….68xIbid.
So all real-world consequences must be pre-programmed into a robot for it to be useful in responding to information about cause and effect. If it is impossible for a robot to be completely programmed to respond to the material world because it requires a prediction of all factual possibilities, how would it be possible for a robot to be programmed to comprehend the immaterial world of motives, human nature, truth, lies and believability?
Second, robots must be programmed with all the possibilities of wordplay in the English language in order to make proper evaluations of factual circumstances. Robots must be programmed to be able to properly process euphemisms, sarcasm, exaggeration and other verbal gymnastics that humans naturally perceive based on context and our general knowledge of people. If a robot is given a fact, for example, that “women are less likely to die from increased alcohol use than men … ‘[a]n AI system with no notion of causality might infer that the way to reduce mortality is to administer sex-change operations to men.’”69xIbid. We use the term ‘sex change’ as a euphemism to denote an operation that, in reality, does nothing to change the chromosomal makeup of the patient, but a robot would not know that without somehow being specifically programmed to know it. This weakness of robots implies that not only must all physical possibilities be programmed into an automated adjudicator but also all the history of everything, and its impact on human language, to prevent the robot from taking every use of language literally. Even if this is possible, here we are again defeating the purpose of efficiency for which the robojudge was first proposed. There should be serious doubt when it comes to the idea of the ability of the robojudge to reliably judge credibility. -
4 Conclusion
AI has improved American life in countless ways: from ordering food quickly at McDonald’s, to instantaneously giving us millions of search results to most questions online, to helping us type messages with correct spelling and grammar, and a million other examples. My purpose is not to dissuade Americans from making their lives easier. My purpose is to point out to readers that humans, with all our weaknesses and vulnerabilities, have the unique gift of judgment. Humans should be careful not to surrender control of their dignity to machines made by anonymous, unaccountable government employees. Justice should not be easy so much as it should be just. Machines do not know humans like humans know humans. Technology is at its best when man uses it in proper proportions and in fitting ways.
-
1 M.J. Cartwright & D. Greiling, ‘Court-Connected Online Dispute Resolution: Outcomes from Family, Civil, and Traffic Cases in the United States’, International Journal of Online Dispute Resolution, Vol. 5, No. 1-2, 2018, pp. 4, 5.
-
2 X. Fang, ‘Recent Development of Internet Courts in China’ International Journal of Online Dispute Resolution, Vol. 5, No. 1-2, 2018, p. 49.
-
3 Professor Daniel Rainey, Lecture in Online Dispute Resolution at the University of the Pacific, McGeorge School of Law, 2 January 2020.
-
4 Ibid.
-
5 Ibid.
-
6 D.E. Chamberlain, ‘Artificial Intelligence and The Practice of Law, or, Can a Computer Think Like a Lawyer?’, WL 10611682, 2016 Texas CLE Business Disputes, p. 25.
-
7 B. Lambrechts, ‘May It Please the Algorithm’, The Journal of the Kansas Bar Association, January 2020, pp. 36, 38.
-
8 Ibid.
-
9 Ibid.
-
10 Ibid.
-
11 Ibid.
-
12 The John S. McCain National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232), Section 238, as quoted in Lambrechts, 2020, pp. 36, 39.
-
13 Lambrechts, 2020, pp. 36, 38.
-
14 Chamberlain, 2016, p. 1 (referencing the robot from the film “2001: A Space Odyssey,” which develops mental illness and becomes homicidal).
-
15 Lambrechts, 2020, pp. 36, 38.
-
16 Available at: https://plato.stanford.edu/entries/chinese-room/.
-
17 ‘Comment: When Is An Algorithm Invented? The Need for a New Paradigm for Evaluating An Algorithm for Intellectual Property Protection’, Albany Law Journal of Science and Technology, Vol. 15, pp. 579, 581.
-
18 USCS Const. Art. VI, Cl2.
-
19 Available at: https://claremontreviewofbooks.com/a-government-of-laws-not-of-men/.
-
20 Hon. A.M. Kennedy, Lecture on Freedom of Expression in Eur. and the U.S. at the University of the Pacific, McGeorge School of Law Salzburg Study Abroad Program, 3 July 2018 [hereinafter “Justice Kennedy’s Lecture”].
-
21 Ibid.
-
22 Ibid.
-
23 Ibid.
-
24 Ibid.
-
25 B. Toy-Cronin, et al., ‘Testing the Promise of Access to Justice through Online Courts’, International Journal of Online Dispute Resolution. Vol. 5, No. 1-2, 2018, pp. 39, 40.
-
26 L.B. Solum, ‘Legal Personhood for Artificial Intelligences’, North Carolina Law Review, Vol. 70, 1992, pp. 1231, 1239-1240.
-
27 Available at: www.nationalreview.com/2018/03/social-media-companies-discriminate-against-conservatives/.
-
28 Ibid.
-
29 Available at: https://www.technologyreview.com/2017/07/12/150510/biased-algorithms-are-everywhere-and-no-one-seems-to-care/.
-
30 Available at: www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/.
-
31 Available at: www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.
-
32 Lambrechts, 2020, at 36, 39.
-
33 Ibid., at 36, 39-40.
-
34 Ibid.
-
35 Available at: www.ethicsforge.cc/robojudge-is-the-devil-in-the-data/.
-
36 www.nbcnews.com/tech/social-media/algorithms-take-over-youtube-s-recommendations-dhighlight-human-problem-n867596.
-
37 U.S. Const. Art. I, § 1.
-
38 See Roper v. Simmons, 543 U.S. 551, 616 (2005) (O’Connor, J., dissenting) (explaining “[t]he reason for insistence on legislative primacy is obvious and fundamental: ‘[I]n a democratic society legislatures, not courts, are constituted to respond to the will and consequently the moral values of the people (citation omitted)’”).
-
39 Ibid. at 57.
-
40 Ibid.
-
41 Justice Kennedy’s Lecture, supra note 20.
-
42 Ibid.
-
43 Lee v. State, 59 Md. App. 28, 32, 474 A.2d 537, 539 (1984).
-
44 See Alexander Hamilton, Federalist No. 78 (stating, “…[I]t will readily be conceived from the variety of controversies which grow out of the folly and wickedness of mankind, that the records of [legal] precedents must unavoidably swell to a very considerable bulk, and must demand long and laborious study to acquire a competent knowledge of them. Hence it is, that there can be but few men in the society who will have sufficient skill in the laws to qualify them for the stations of judges. And making the proper deductions for the ordinary depravity of human nature, the number must be still smaller of those who unite the requisite integrity with the requisite knowledge…”.).
-
45 P. Cortes, ‘Using Technology and ADR Methods to Enhance Access to Justice’, International Journal of Online Dispute Resolution. Vol. 5, No. 1-2, 2018, pp. 102, 111.
-
46 U.S. Const. Art. I, § 1.
-
47 Available at: www.mckinsey.com/featured-insights/artificial-intelligence/the-real-world-potential-and-limitations-of-artificial-intelligence.
-
48 Available at: www.law.cornell.edu/constitution-conan/article-1/section-1/delegation-of-legislative-power#fn57art1.
-
49 Available at: www.heritage.org/political-process/report/administrative-state-constitutional-government.
-
50 Supra note 48.
-
51 Ibid.
-
52 Supra note 49.
-
53 Supra note 48.
-
54 Ibid.
-
55 Supra note 49.
-
56 Ibid.
-
57 Available at: www.federalregister.gov/uploads/2011/01/the_rulemaking_process.pdf.
-
58 James Madison, Federalist No. 51., “Checks and Balances”, in Benjamin F. Wright (Ed.), The Federalist, Barnes and Noble Books, USA, 2004, p. 355.
-
59 Available at: www.cato-unbound.org/2012/02/06/timothy-sandefur/why-substantive-due-process-makes-sense.
-
60 Ibid.
-
61 Ibid.
-
62 U.S. Constitution.
-
63 Mathews v. Eldridge, 424 U.S. 319.
-
64 Available at: www.law.cornell.edu/constitution-conan/amendment-14/section-1/procedural-due-process-civil.
-
65 Available at: www.cod.uscourts.gov/Portals/0/Documents/Judges/JLK/Judging_Credibility_LITMAG_Spring07_kane.pdf. at 31.
-
66 Lambrechts, 2020, at 36, 39.
-
67 Available at: www.wired.com/story/ai-smart-cant-grasp-cause-effect/?itm_campaign=BottomRelatedStories_Sections_2&itm_content=footer-recirc.
-
68 Ibid.
-
69 Ibid.