1 Introduction
Panelists at a recent LexisNexis Canada webinar shared their insights, perspectives and experiences on the topic of artificial intelligence (AI) technology in law. The discussion began with a thought-provoking quotation by Forbes contributor Rob Toews, to which the panelists were asked to respond.
Among the social sciences, law may come the closest to a system of formal logic. To oversimplify, legal rulings involve setting forth axioms derived from precedent, applying those axioms to the particular facts at hand, and reaching a conclusion accordingly. This logic-oriented methodology is exactly the type of activity to which machine intelligence can fruitfully be applied.1
The reactions to this statement varied from panelist to panelist, each of whom presented a unique perspective on the application of AI to legal practice. One of the main issues with Toews’ viewpoint, which the panelists pointed to, is that it accepts legal formalism – a theory that views the law as a systematic, almost mathematical or scientific decision-making process, in which judges simply identify the relevant legal principles from various sources of legal authority, such as statutes, regulations and case law, and apply them to the facts of the case to logically deduce a rule that will govern the outcome of the dispute. Legal formalists hold that legal rules stand separate from other social and political institutions, so that once laws are created, judges must apply them to the facts of a case without regard to social interests and public policy. While legal interpretation is part of judges’ decision-making, formalism is a simplistic view of the process that has been rejected for some time.2 Instead, it is understood that judges often engage in outcome-oriented reasoning with a high degree of discretion. It is not pure formal logic at play. It would be dangerous to assume, as the quotation suggests, that legal decisions can be simplified to a system of formal logic. That would ignore the fact that, although impartial and neutral in theory, judges are ultimately people with views and biases, not robots applying rules through a systematic approach.
The shortcoming in Toews’ statement, however, does not mean that AI has no role in the legal profession. On the contrary, it demonstrates exactly why AI is so powerful. AI technology is much more than a series of if-then statements or rules in formal computer logic. AI can recognize patterns and aid legal professionals in their day-to-day practice. While there certainly are controversial areas within the AI world, its practical application is already disrupting the practice of law. This paper explores the current uses of AI in law, its role within the legal process and the ways in which it is expected to change legal practice. While the long-term future of AI may pave the way for revolutionary uses in the legal world, in its current form, it is a great tool that lawyers have at their disposal to provide better advice and services to clients in a more efficient and cost-effective manner. More specifically, Section 4 explores how AI and machine learning have introduced robot mediators that resolve disputes without human mediators. Throughout this paper, I consider common hesitations and concerns surrounding various ethical and professional issues with the use of AI in legal decision-making and the legal adjudication process. It is crucial to recognize the immeasurable benefits of utilizing technology to advance innovation in the legal profession and dispute resolution.
2 What Is AI?
AI technologies touch almost every aspect of our everyday lives. From our homes, transportation and entertainment to the health care system, AI is used to enhance and improve our lives. It is often so well integrated that many people do not realize that they are using it in their day-to-day activities. AI capabilities range from simple tasks to complicated algorithms that support complex problem-solving and decision-making.
While it may be difficult to describe exactly how AI technologies work from a technical standpoint, AI can be understood as machines or computerized systems that perform cognitive functions typically carried out by humans, such as perceiving, reasoning, learning and interacting. It has been defined as “[t]he theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.3 AI comprises automated reasoning, robotics, natural language processing (NLP) and machine learning (ML), some of which will be further explored in the following subsections of this paper.4
2.1 Machine Learning
Machine learning refers to statistical or algorithmic approaches that are used to train models so that they can learn to perform intelligent actions by analysing vast amounts of data to discover patterns.5 The models can then apply the patterns they recognize in a wide range of applications. Examples include AlphaGo’s ability to play against and beat human champions in board games, Amazon’s product recommendations, PayPal’s recognition of fraudulent activity and Facebook’s ability to translate posts into other languages.6
There are various types of ML, namely supervised learning, unsupervised learning and reinforcement learning.7 Supervised learning relies on a specified goal and set data. This model has a clear understanding of the problem to be solved and its relationship to other factors, so that it can be trained to understand relationships and predict similar situations accurately. It depends on the availability of historical data that can be categorized clearly.8 Unsupervised learning, on the other hand, refers to a model with uncategorized data, which attempts to uncover correlations between different factors to present suggestions on how best to group the data.9 Finally, reinforcement learning occurs when the model starts with a specific goal to be accomplished but is not given categorized data or explicitly told how to achieve this goal. Learning occurs through trial and error: given the specific goal to be achieved, the model adjusts its ‘approach’ and ‘assumptions’ to improve.10
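To make the distinction between these learning styles concrete, here is a minimal sketch using the scikit-learn library. The toy dataset and the choice of models are illustrative assumptions of mine, not drawn from the sources cited above: the same data is handled once with labels (supervised) and once without (unsupervised).

```python
# Minimal sketch of supervised vs. unsupervised learning (assumes scikit-learn).
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy dataset: two numeric features per example (entirely invented for illustration).
X = [[1.0, 0.9], [0.9, 1.1], [4.8, 5.2], [5.1, 4.9]]
y = [0, 0, 1, 1]  # labels are available only in the supervised setting

# Supervised learning: the model is trained on labelled historical data
# and can then predict the category of a new, unseen example.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.8, 1.0]]))  # -> [0]

# Unsupervised learning: no labels are given; the model groups the data
# by discovering structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster ids, not meaningful categories
```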
2.2 Natural Language Processing
Natural language processing (NLP) is a subfield of AI that is concerned with programming computers to process and analyse large amounts of natural language.11 It deals with understanding the way humans speak and write by mimicking these human abilities. Current applications of NLP include understanding and answering questions, recognizing speech and translating between natural languages.12 NLP can answer specific questions and solve problems by reading and processing large amounts of unstructured data.
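As a simplified illustration of how unstructured text can be searched to answer a question, the following sketch ranks documents by their similarity to a query. Real NLP systems are far more sophisticated; the toy texts and the use of TF-IDF vectors from scikit-learn are my own assumptions.

```python
# Minimal sketch of answering a question over unstructured text (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The mediation session is scheduled for Monday at the courthouse.",
    "Counselling fees of 2,000 pounds remain unpaid by the respondent.",
    "The settlement agreement must be signed by both parties.",
]
question = "How much is owed in unpaid fees?"

# Represent documents and question as TF-IDF vectors, then rank by similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(documents[scores.argmax()])  # -> the sentence about unpaid counselling fees
```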
The examples displayed thus far provide a high-level understanding of AI abilities; however, it is important to note that AI is “not confined to one or a few applications, but rather [it] is a pervasive economic, societal, and organizational phenomenon”.13 With this in mind, the following section explores the implications of treating AI as an essential tool for legal professionals.
3 AI in the Legal Profession
The natural hesitation about using AI in the legal profession is the fear that it will one day replace lawyers. This, however, is an unrealistic scenario, given the general rejection of legal formalism presented at the beginning of this paper. The legal profession largely disagrees with the idea that law is practised in a vacuum of simple rules and their application. Even though AI technologies can learn to perform legal tasks, properly aligning AI with human values remains a big challenge. Instead, a more constructive understanding of AI in legal practice is the view that AI can be used to solve traditional problems. This section discusses the valuable and practical ways in which advanced technologies have successfully performed legal tasks and the benefits they present for clients and the legal system more broadly.
The role of barristers in dispute resolution involves determining and advising clients on the likelihood that their case will be successful. AI has tremendous potential to enhance the way lawyers approach files, particularly for price-sensitive clients, who appreciate lawyers who can apply technologies and AI techniques to aid their critical thinking in processing complex legal issues. Ultimately, if lawyers have the right tools at their disposal, they will advise clients better and more efficiently. AI can inform lawyers of the expected outcome and how a court is likely to rule on a case. While experienced lawyers can properly assess a case’s probability of success, the proposition is that with the help of ML and AI capabilities, the whole process becomes more efficient and effective. For example, technology-assisted review (TAR) is becoming part of standard e-discovery practices. TAR software can analyse large amounts of data to identify, classify and prioritize documents through early review. This can help achieve accurate results and an early conclusion on the merits of a case through an early case assessment process, at a fraction of the cost. Although the technology is fairly new, it is starting to be considered an essential e-discovery tool, with more than half of U.S. corporations reporting its use.14 As with most AI technologies, TAR software can save significant amounts of time and money during the review process.
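The following is a hypothetical sketch, not a description of any commercial TAR product, of how such prioritization could work: a classifier is trained on a small human-reviewed seed set and then scores the unreviewed collection so that likely-relevant documents surface first. The documents, labels and model choice are invented for illustration.

```python
# Hypothetical sketch of TAR-style document prioritization (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A human reviewer labels a small seed set (1 = responsive, 0 = not responsive).
seed_docs = [
    "Invoice for unpaid counselling fees attached.",   # responsive
    "Payment schedule for the outstanding account.",   # responsive
    "Office holiday party is on Friday.",              # not responsive
    "Cafeteria menu for next week.",                   # not responsive
]
seed_labels = [1, 1, 0, 0]

# The model scores the unreviewed collection so reviewers see likely-relevant files first.
unreviewed = [
    "Reminder: the account remains unpaid and fees are accruing.",
    "Parking permits must be renewed by March.",
]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)
scores = clf.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")  # highest relevance score first
```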
There is a general reluctance to trust technology to do a job better than humans can. A 1985 study by Blair and Maron found that TAR software outperformed human review in accurately identifying responsive documents.15 In the study, the paralegals who performed the review found only 20% of the relevant documents, while TAR identified 75%. The study also points out that the paralegals shared a mistaken belief that they had identified the same 75% of relevant data that the TAR had, which was not the case.16 This study exemplifies the inaccuracy of human perception and demonstrates the importance of highlighting the benefits of advancing innovations that can improve our practices. With the increased use of technology since the study was conducted, it is likely that trust has grown, so that legal professionals are more comfortable relying on technology-assisted software to perform the initial legal review. Ultimately, technology is best utilized when it is designed to supplement human capabilities. TAR’s initial review can help detect patterns faster and produce a shorter list of relevant documents for humans to focus on, leading to better and more accurate decisions.
When it comes to solicitors’ work, AI is already actively used in legal automation. Various tools have been developed to conduct negotiations and automate the drafting process for legal professionals, focusing on process improvement and speed.17 Setting aside some of the more controversial uses of AI, practitioners are beginning to recognize that they can still benefit from AI to become more efficient in how they work. Software applications provide greater opportunities to enhance practitioners’ skills and improve their processes. This supports the idea that AI is not meant to replace lawyers but rather to add to their expertise and take over tedious, time-consuming tasks, supplementing their practice so that they can serve clients better. It is another tool to assist lawyers in their everyday practice.
AI presents an important opportunity to challenge and reconsider how we think about the legal profession and the use of advanced technology in everyday practice. AI has the potential to serve access-to-justice goals and help meet the legal needs of more people by reducing cost, time and complexity. This opportunity is further explored in the following sections.
4 Robot Mediators
AI can serve as a tool for mediators and their teams to support their services and increase the effectiveness of the mediation process. Awareness of these tools, and of the opportunities and challenges associated with them, is important. This section illustrates the application of AI tools to mediation through specific examples.
AI technology entered the mediation room in 2019, when a ‘robot mediator’ successfully settled a three-month dispute over £2,000 of unpaid counselling fees.18 The Canadian electronic negotiation specialists iCan Systems became the first company to resolve a dispute in a public court in England and Wales using mediation software. The company’s AI tool, Smartsettle ONE, replaced a human mediator and reached a resolution in less than an hour by bringing the parties closer together through a blind-bidding mechanism.
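The exact algorithm inside Smartsettle ONE is proprietary, but the general blind-bidding idea can be sketched as follows. The 30% closeness threshold and the midpoint settlement rule below are illustrative assumptions of mine, not the actual product logic.

```python
# Simplified, hypothetical sketch of one blind-bidding round. The actual
# Smartsettle ONE algorithm is proprietary; the 30% closeness threshold and
# the midpoint settlement rule here are illustrative assumptions only.
from typing import Optional

def blind_bid_round(claimant_demand: float, respondent_offer: float,
                    threshold: float = 0.30) -> Optional[float]:
    """Neither party sees the other's figure. If the confidential bids are
    close enough, the case settles; otherwise another round of bids follows."""
    if respondent_offer >= claimant_demand:
        return claimant_demand  # offer meets the demand: settle at the demand
    gap = claimant_demand - respondent_offer
    if gap <= threshold * claimant_demand:
        return (claimant_demand + respondent_offer) / 2  # settle at the midpoint
    return None  # too far apart; parties adjust their bids in the next round

# Example: the parties' confidential figures converge over successive rounds.
for demand, offer in [(2000, 1000), (1800, 1200), (1600, 1450)]:
    result = blind_bid_round(demand, offer)
    print(f"demand={demand}, offer={offer} -> "
          f"{'settled at ' + format(result, '.0f') if result is not None else 'no settlement'}")
```

Because neither side ever sees the other’s figure, parties can make concessions without weakening their negotiating position, which is the core appeal of the mechanism.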
Some view this as a possible threat that could replace trained human mediators. However, while this technology is useful for settling simple disputes over how much a party should pay, it falls short in resolving more complex interpersonal issues. The blind-bidding system is a quick and cost-effective tool with the potential to increase access to dispute resolution services. However, it is not built with AI capable of processing complicated conflict resolution cases, at least not in its current form.
Many practitioners argue that this new technology can be appropriate for dealing with small financial claims, but it is not ready for the mediations and arbitrations that leading law firms are involved in. When dealing with complex disputes over large amounts of money, it is difficult to trust a robot mediator to replicate the experience and skill of a persuasive negotiator. Skilled mediators help frame the issues and guide parties to a settlement because they understand and appreciate people’s motivations and worries. The reality is that disputes are rarely purely about money, even when the only point of contention is finding an agreeable amount to settle. Mediators are trained to look beyond figures to assess the parties’ interests through open dialogue and communication. There is often a power imbalance and external motivations that drive the disagreement, which may not be explicitly apparent. Mediators seek this information by reading the room and carefully observing and hearing the parties. Mediators have instinctive abilities to understand the specific needs of parties, remind them of alternative solutions to a settlement and encourage them emotionally towards reaching a deal that will be better in the long term. The human touch of mediation is essential for crossing the difficult roadblocks in the mediation process.
By design, ‘the mediation process is inherently a human one’.19 While a robot mediator, with the use of advanced AI technology, can settle disputes, it is far from replacing the benefits that traditional mediation offers. Parties are often searching for closure, which is a highly emotional process. Even if an AI-driven resolution is satisfactory from a logical perspective, parties may be left dissatisfied because they were not given the opportunity to speak their minds, which is more conducive to emotional satisfaction. Despite all that AI can do – and will be able to do in the future – mediation will always need the human touch.
Despite the general reluctance to introduce automated technology into the ADR process, it is nevertheless worthwhile to grapple with the implications of applying mediation software more broadly. The following section entertains the idea that robot mediators will one day be more widely relied on as a dispute resolution avenue using complex AI capabilities, and attempts to answer the question of how we can teach AI agents to understand human empathy, values, preferences and ethics to resolve interpersonal conflict.
5 Concerns over Value Alignment and Morality
Relying on machines to understand morality and trusting that they can make ethical decisions is a difficult proposition. Not only do we face questions about what values AI should be taught to follow, but there is also the concern that machines will pursue them at the expense of other important values. This is the challenge of value alignment in the design and use of AI systems. The notion that technology has moral consequences has been considered in a variety of disciplines.20 From a philosophical perspective, value is seen as what ought to be promoted in the world, with concepts such as autonomy, justice, care, well-being and virtue all forming part of the discussion. Friedman and Hendry have also offered a definition of value in the context of technological design as ‘what is important to people in their lives, with a focus on ethics and morality’.21 Moreover, the field of science and technology studies explores the impact of technology on norms and ways of life. Sheila Jasanoff explains that ‘far from being independent of human desire and intention, [technologies] are subservient to social forces all the way through’.22
Iason Gabriel and Vafa Ghazavi explore foundational questions about the relationship between technology and value. They assert that artificial agents are designed to pursue a specified goal or objective, which raises normative questions about what kind of goal or objective AI systems should be designed to pursue.23 They discuss three prominent approaches from the AI research community to the question of ‘alignment with what?’. The first approach, referred to by prominent AI researcher Stuart Russell as the ‘standard model’, focuses on alignment with instructions, aiming to include as much value-preserving information as possible in the orders that the AI system receives.24 The concern with this approach is that the instructions may be understood too literally by agents because they lack the requisite contextual understanding, which could have negative consequences. The second approach, reward modelling, attempts to avoid this risk by creating agents that behave in accordance with the user’s true intentions. It aims to ensure that AI understands the implied meaning of terms.25 To do so, it uses learned reward functions, trained with human oversight and monitoring, to supplement reinforcement learning.26
To conceptualize the issue of negative consequences, researchers have presented the following example. Imagine a robot learns to move boxes from one side of a room to the other, where the training environment has no obstacles.27 However, when it is deployed, there is a water vase in the middle of the robot’s path. If the robot designer did not contemplate this, so that the robot is rewarded for transferring boxes but not penalized for knocking over the vase, it will ignore the vase in its path and knock it over. If a human were given the same task, they would simply walk around the vase to avoid breaking it. Even though this was not part of the original training, humans use common-sense reasoning to modify their behaviour accordingly. This is a simplified demonstration of the difficulty of ensuring that robots can adapt to their environment across different settings. The problem of side effects arises where the agent assumes that everything not specified in the reward function is of zero value, which can lead to many negative side effects such as the one described in the box scenario. A potential and perhaps obvious solution is to penalize the robot for having an impact on the environment. This, however, would require pre-defining the behaviours that should be penalized, which could leave some unforeseen circumstances unidentified. The concern persists because it relies on humans to show the machine what is considered right or wrong. When discussing the various methods for aligning AI agents with human values, normative questions about what behaviour is considered good or allowed need to be addressed.
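The box-and-vase problem can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model: the plans, numbers and penalty value are invented, but they show how everything omitted from the reward function is implicitly treated as worthless.

```python
# Minimal, hypothetical sketch of the box-and-vase side-effect problem.
# Two candidate plans: the short path knocks over a vase, the long path avoids it.
# The numbers and the penalty are invented purely for illustration.

plans = {
    "short path (through vase)": {"boxes_moved": 10, "vase_broken": True},
    "long path (around vase)":   {"boxes_moved": 9,  "vase_broken": False},
}

def misspecified_reward(outcome):
    # The designer only rewarded box-moving; breaking the vase costs nothing,
    # so everything left out of the reward function is implicitly worth zero.
    return outcome["boxes_moved"]

def impact_penalized_reward(outcome, penalty=5):
    # One proposed fix: penalize side effects -- but this requires the designer
    # to anticipate and enumerate which impacts should be penalized.
    return outcome["boxes_moved"] - (penalty if outcome["vase_broken"] else 0)

for reward in (misspecified_reward, impact_penalized_reward):
    best = max(plans, key=lambda name: reward(plans[name]))
    print(f"{reward.__name__}: agent chooses the {best}")
# misspecified_reward: agent chooses the short path (through vase)
# impact_penalized_reward: agent chooses the long path (around vase)
```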
The third approach attempts to align artificial agents with human preferences by using inverse reinforcement learning (IRL). IRL systems do not directly tell the agent what reward function it should maximize. Instead, the agent must ascertain the optimal behaviour through the observation of datasets, environments and a set of examples of human conduct.28 The goal of this exercise is for the AI agent to infer, understand and align with human preferences, rather than pursuing an independently specified goal or outcome.29 This approach to value alignment has some challenges because, even when the agent appears to be acting morally, it is hard to identify exactly what it learned from the dataset and examples it observed. An alternative method, reward modelling, works much like a standard reward function, in that the agent interacts with the environment by observing and receiving rewards; the difference is that, in order to understand what reward a human would give to a particular action, the model receives actual human feedback. Essentially, the system relies on humans to provide feedback to the agent, which it can then use to define the task and modify its behaviour.30 This approach could be a promising avenue for robot mediators. By incorporating human feedback into the system, an AI mediator can better understand the interests of parties in various situations when conducting mediations.
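In the spirit of the human-feedback approach described above, here is a minimal sketch of fitting a reward model from pairwise human preferences. The feature encoding, the data and the Bradley-Terry-style update are my own illustrative assumptions, not the exact method of Christiano et al., who learn from comparisons of agent behaviour over time.

```python
# Minimal, hypothetical sketch of reward modelling from human feedback:
# a human compares pairs of outcomes, and a linear reward model is fitted
# so that the preferred outcome of each pair scores higher.
import numpy as np

# Each outcome is a feature vector, e.g. [money_saved, relationship_preserved].
pairs = [
    (np.array([1.0, 0.0]), np.array([0.6, 1.0])),  # human preferred the second
    (np.array([0.9, 0.2]), np.array([0.5, 0.9])),  # human preferred the second
]
preferred_second = [1, 1]

w = np.zeros(2)  # linear reward model: reward(x) = w . x
lr = 0.5
for _ in range(200):
    for (a, b), pref in zip(pairs, preferred_second):
        # Bradley-Terry: P(b preferred over a) = sigmoid(w.b - w.a)
        p_b = 1 / (1 + np.exp(-(w @ b - w @ a)))
        w += lr * (pref - p_b) * (b - a)  # gradient ascent on the log-likelihood

print(w)  # the learned weights favour the 'relationship_preserved' feature
```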
Prominent AI researcher Stuart Russell explains that a failure of value alignment arises when we ‘inadvertently, imbue machines with objectives that are imperfectly aligned with our own’.31 This concern in AI research speaks to the potential for misalignment in the design of AI agents. AI designers attempt to implement structures, such as learning algorithms and reward functions, to guide agent behaviour and achieve an intended goal. This is a challenging task, which may result in misalignment with our goals and human values. An AI agent is said to be misaligned if it chooses behaviours based on a reward function that differs from the true welfare of humans. Misalignment often manifests as accidents, situations in which a system produces harmful and unexpected results despite being designed and deployed with a particular objective in mind. Think back to the scenario where the agent is tasked with moving boxes across the room and is not instructed on what to do when an obstacle gets in the way. These accidents can be caused by negative side effects, reward hacking, limited capacity for human oversight, differences between training and deployment environments, and uncontrolled or unexpected exploration after deployment.32
Philosopher John Searle distinguishes between weak and strong AI. He asserts that strong AI requires computers to have a mind of their own, while weak AI is merely able to simulate one. From this distinction, he concludes that strong AI is not possible. Even though the computer may appear to be intelligent because of its ability to perform tasks, it lacks a deeper understanding of what it is doing.33 This is exactly the shortcoming that many practitioners and legal professionals grapple with, and it is why it is so difficult to imagine that properly aligning AI with human values is even possible. Weak AI is ‘the use of software to accomplish specific problem solving or reasoning tasks’, but with a limited sense of context and implication.34
Hadfield-Menell and Hadfield suggest that aligning AI with human values will require building the technical tools that allow a robot to replicate the human agent’s ability to read and predict the responses of human normative structures.35 Human intelligence is highly driven by our ability to read and participate in normative social structures, so for AI to be aligned with humans, it must learn to do this. Aligning AI agents with humans requires technical tools that allow AI to do what humans do naturally: import the costs associated with taking actions that are considered wrongful by human communities into their assessment of rewards.36 Other alignment problems persist in representing and implementing human values, such as problems of fairness and bias in ML algorithms. Aleksander argues that because robots and machines operate in an algorithmic way, and not in a truly cognitive and conscious human way, AI in general can pose serious threats to humanity if its algorithms are biased.37
The question then becomes what data should be used to train these systems, how to justify that choice, and how to ensure that what the model learns is free from unjustifiable bias, so that the result is a reward model similar to the one that humans really want.
One of the central challenges is the potential for technology to identify and follow a particular set of values, imposing what is referred to as algorithmic bias.38 Algorithmic tools and models may reflect forms of bias because of the data they were trained on or the way it was curated or labelled. Given its reliance on historical data, predictive software inherently contains human flaws and systemic bias that are hard to override. For example, in one case using natural language processing, algorithms learned to associate certain job types with gender stereotypes, leading to biased predictions that disadvantaged women.39 Another example of how machines can overlook changing values in society, given their reliance on past data, arises in the criminal justice system, where tools such as parole recommendations and predictive policing can lead to racially biased recommendations.40
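A hypothetical sketch of how historical bias propagates: a model trained on invented, biased hiring outcomes reproduces the bias even for identically qualified candidates. The data and the deliberately crude use of a protected attribute as a feature are assumptions for illustration only; in practice such attributes usually leak in indirectly via proxy fields.

```python
# Hypothetical sketch of how historical bias propagates into predictions.
# The data is invented: past decisions correlate with a protected attribute,
# and a model trained on them reproduces that correlation.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group] where group is 0 or 1.
X_hist = [[5, 0], [6, 0], [7, 0], [5, 1], [6, 1], [7, 1]]
y_hist = [1, 1, 1, 0, 0, 0]  # biased past outcomes: group 1 was always rejected

model = LogisticRegression().fit(X_hist, y_hist)

# Two identically qualified candidates differ only in the protected attribute.
print(model.predict([[6, 0], [6, 1]]))  # -> [1 0]: the historical bias is reproduced
```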
These value-alignment problems certainly pose challenges in the context of dispute resolution specifically. For example, it may be difficult to eliminate algorithmic bias when using TAR software, or to teach AI mediators how to quantify qualitative factors, such as relationships, emotions and illogical human responses, when conducting a mediation. However, this does not mean that the opportunities AI introduces should not be seriously considered. In the realm of dispute resolution, AI’s added value has a lot of potential. Legal professionals should work in conjunction with the AI technologies and innovative dispute resolution tools available to advance their services, while monitoring for value-alignment problems through integrated human feedback in IRL systems.
5.1 Other Applications
In addition to the concept of a robot mediator, there are various other applications of AI to mediation, such as knowledge management and conflict analysis. This section explores these applications in greater detail and introduces ways in which the use of technology and AI can increase inclusivity and promote access to justice.
Mediators seek information and learning tools from a variety of sources. While there is an immense amount of information about the process, guidelines and best practices of mediation, it is often not readily accessible to mediators and their teams, because traditional search methods are not highly effective when the data is mostly unstructured and spread across a variety of locations rather than organized according to predefined categories.
While often no longer referred to as AI, smart searches have helped make accumulated data more readily available and easier to search. Traditional searches focus on looking for keywords in a repository and drawing connections between documents relevant to mediation. However, when the information is presented in an unstructured form, traditional searches will not yield all the results. This is where NLP comes in. Introducing AI that uses NLP would help improve access to and analysis of the available data by discovering patterns and drawing connections between documents that human researchers might not otherwise have discovered.41 This saves mediators time on research and on preparing for mediations. AI is also highly useful elsewhere in alternative dispute resolution. AI tools, such as clustering technologies, help lawyers look through disclosure documents at an early stage to identify, prioritize and group them quickly, which helps inform the direction of a case and prepare relevant folders for mediation or litigation (a minimal clustering sketch follows below). Still, some mediators maintain that humans continue to outperform AI in many aspects of dispute resolution. For example, AI would not be helpful in preparing a convincing position statement if it simply reiterates the pleadings, especially considering that, often, the best mediation statements are those prepared by the clients themselves in their own words.42
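As a concrete illustration of the clustering idea mentioned above, the following minimal sketch groups a handful of invented disclosure documents by topic. The use of TF-IDF features with k-means from scikit-learn is an assumption of mine; production tools use far richer pipelines.

```python
# Minimal, hypothetical sketch of clustering disclosure documents so that
# related material can be grouped and prioritized early (assumes scikit-learn).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Invoice 441: counselling fees outstanding since January.",
    "Payment reminder: counselling account remains unpaid.",
    "Email thread: scheduling the February mediation session.",
    "Calendar note: mediation session moved to 3 pm.",
]

vectors = TfidfVectorizer().fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in sorted(zip(labels, documents)):
    print(label, doc)  # fee documents in one group, scheduling in the other
```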
6 Conclusion: Where Do We Go from Here and What Is the Role of AI in the Future of ADR?
With the benefits and improvements come issues, hesitation and worry that some of the applications of AI to the law could potentially undermine the judicial and decision-making process. Transparency, reasoning, and due process are some of the most important foundations for fairness in the legal system. The full potential of AI tools will not be realized if these concerns are not addressed. How do we reconcile the use of AI technologies with the ethical and professional concerns that it raises?
According to Russell, the ultimate goal of AI research is discovering a “method that is applicable across all problem types and works effectively for large and difficult instances while making very few assumptions”.43 Similar to the quotation introduced at the beginning of this paper, this goal supposes that the law is a system of formal logic. The law is much more complex and robust than legal formalism suggests. While AI software can be programmed to collect information and make automatic decisions, those decisions are not guaranteed to be the right ones for the individuals at hand. There is more to resolving interpersonal conflict than logic and formal reasoning. In the context of a mediation, a settlement may look suboptimal to a robot mediator while being the perfect resolution for the parties to a conflict. Ultimately, humans are imperfect and not always rational. Instead of viewing robot mediators and AI as entirely replacing human decision-making, we should view them as tools that aid the process. This paper has presented the case for taking advantage of the benefits that AI tools introduce, while being cautious and aware of the drawbacks and limitations of their use. While AI may never replace humans altogether, one cannot ignore the value that AI technologies bring to the world of dispute resolution and the legal profession as a whole.
Notes
1 R. Toews, ‘AI Will Transform the Field of Law’, Forbes, 2019.
2 M. Matczak, ‘Why Judicial Formalism Is Incompatible with the Rule of Law’, Canadian Journal of Law and Jurisprudence, Vol. 31, No. 1, 2018, pp. 61-85.
3 B. Marr, ‘The Key Definitions of Artificial Intelligence (AI) that Explain Its Importance’, Forbes, 2019.
4 T. Walsh, Machines that Think: The Future of Artificial Intelligence, Prometheus, New York, 2018.
5 I. Gabriel and V. Ghazavi, ‘The Challenge of Value Alignment: From Fairer Algorithms to AI Safety’, forthcoming in the Oxford Handbook of Digital Ethics.
6 Supra note 3.
7 K. Höne, ‘Mediation and Artificial Intelligence: Notes on the Future of International Conflict Resolution’, DiploFoundation, 2019.
8 Id.
9 Id.
10 Id.
11 Diplo, ‘Cybermediation: What Role for Blockchain and Artificial Intelligence?’, Diplo Blog [web blog], 12 October 2018, www.diplomacy.edu/blog/cybermediation-what-role-blockchain-and-artificial-intelligence.
12 K. Höne et al., ‘Mapping the Challenges and Opportunities of Artificial Intelligence for the Conduct of Diplomacy’, DiploFoundation, 2019.
13 N. Berente et al., ‘Managing AI’, MIS Quarterly, 2019, p. 1, https://misq.org/skin/frontend/default/misq/pdf/CurrentCalls/ManagingAI.pdf.
14 Thomson Reuters, ‘Myths and Facts about Technology-Assisted Review’, Thomson Reuters, https://legal.thomsonreuters.com/en/insights/articles/myths-and-facts-about-technology-assisted-review.
15 Id.
16 Id.
17 See e.g. www.arteria.ai/, www.cybersettle.com/, www.smartsettle.com/.
18 N. Hilborne, ‘Robot Mediator Settles First Ever Court Case’, Legal Futures, 2019, www.legalfutures.co.uk/latest-news/robot-mediator-settles-first-ever-court-case.
19 A. Davis, ‘The Future of Law Firms (and Lawyers) in the Age of Artificial Intelligence’, American Bar Association, 2020, www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/the-future-law-firms-and-lawyers-the-age-artificial-intelligence/.
20 Supra note 5.
21 Id.
22 Id.
23 Id.
24 Supra note 5, p. 13.
25 D. Hadfield-Menell and G. Hadfield, ‘Incomplete Contracting and AI Alignment’, AIES ’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Hawaii, United States, 2019, pp. 417-422.
26 Supra note 5.
27 Supra note 25.
28 Supra note 25.
29 Supra note 5.
30 P. Christiano et al., ‘Deep Reinforcement Learning from Human Preferences’, NIPS, 2017.
31 Supra note 5.
32 Supra note 25.
33 Supra note 12.
34 Id.
35 Supra note 25.
36 Id.
37 S. Han et al., ‘Reflections on Artificial Intelligence Alignment with Human Values: A Phenomenological Perspective’, European Conference on Information Systems (ECIS), 2020, p. 4.
38 Supra note 12.
39 Id.
40 Supra note 5.
41 Supra note 11.
42 J. Player, ‘Could Robots Replace Humans in Mediation?’, IPOS Mediation [web blog], 11 August 2020, https://mediate.co.uk/blog/mediation-and-ai-and-robots/.
43 Supra note 5, p. 11.
International Journal of Online Dispute Resolution
Article | AI in the Legal Profession: Teaching Robot Mediators Human Empathy
Keywords | ADR, AI, ML, mediation, digital technology, value alignment
Authors | Linda Mochon Senado
DOI | 10.5553/IJODR/235250022021008002006
Citation | Linda Mochon Senado, ‘AI in the Legal Profession’, International Journal of Online Dispute Resolution, Vol. 2, 2021, pp. 155-166.
Abstract | What benefits do AI technologies introduce to the law and how can lawyers integrate AI tools into their everyday practice and dispute resolution? Can we teach robot mediators to understand human empathy and values to conduct a successful mediation? While the future of AI in the legal profession remains somewhat unknown, it is evident that it introduces valuable tools that enhance legal practice and support lawyers to better serve their clients. This paper discusses the practical ways in which AI is used in the legal profession, while exploring some of the major concerns and hesitation over value alignment, morality and legal formalism.