1 The Coronavirus Pandemic – Friend or Foe of ODR Development?
Over the last 20 years or so, since the first International Forum on Online Dispute Resolution (ODR) was held in Geneva, there have been a number of innovative developments in ODR. However, prior to the COVID-19 pandemic, there had not, to be fair, been a huge rush by lawyers, mediators, arbitrators and other dispute resolution professionals to make use of such systems beyond the sort of case management platforms used by the courts, ombudsmen and dispute resolution organizations generally. Apart from the convenience of online filing for those with a dispute or complaint, case management primarily delivers benefit through efficiency, in both costs and administration, for the courts or other dispute handling organizations. It does not necessarily deliver proportionally more resolutions, or even better resolutions, which, as ever, will depend on the skills of the dispute resolution practitioners and the details of the process, as well as basic elements such as time.
The most innovative ODR-specific tools of the early noughties lay in blind bidding, through which the parties to a dispute over the amount to be paid by one party to the other could submit offers and counter-offers that were not seen by the other party. Since the machine saw the secret offers from both sides, it could identify where there was agreement and declare it. At a stroke, these systems made the practice of sitting on unaccepted offers, in an attempt to pressure the other party into accepting less than its top or bottom line, somewhat redundant. The key benefit of blind bidding systems was that, if a secret offer was not accepted, the negotiation strategy of the party making the offer was not compromised. A number of competing blind bidding systems were launched in the early years of the century, namely Cybersettle, Inter-Settle, e-Settle, Click N’Settle and two by the author, WeCanSettle and TheClaimRoom. However, once Cybersettle obtained patent rights to its version, which, incidentally, followed an unusual path in requiring three bids to be made at the outset by each party, it issued cease and desist letters to all other blind bidding services. Since there had not by that time been a rush to use these services, it being very early days for ODR, and since the owner of Cybersettle at the time, a major insurance company, was known to have a deep pocket for legal fees, all other services withdrew their products. At a stroke, the drive to innovative development in ODR was reduced, with ODR-specific developments thereafter being mainly focused (with one notable exception discussed later in this paper) on platforms for asynchronous conversation, e-filing and/or case journey assistance and management. These developments contribute significantly to the rise of ODR and to increased access to justice, and they help unrepresented parties to participate better in pursuing their claims, but they make the process of reaching a resolution, whether mutually agreed or adjudicated, more efficient rather than directly increasing the prospect of a consensual settlement.
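To make the mechanism concrete, here is a minimal sketch of the core blind-bidding rule in Python. The function name and the settle-at-the-midpoint rule are assumptions made for illustration; this is not a reconstruction of Cybersettle's patented three-bid method:

```python
# A minimal sketch of the core blind-bidding rule: neither party sees the
# other's offer; only the system compares them and declares agreement when
# they cross. The midpoint rule is an assumption made for illustration,
# not a reconstruction of Cybersettle's patented method.

def blind_bid_round(claimant_demand: float, respondent_offer: float):
    """Return a settlement figure if the secret bids cross, else None."""
    if respondent_offer >= claimant_demand:
        # The bids overlap: settle without revealing either secret figure.
        return round((claimant_demand + respondent_offer) / 2, 2)
    return None  # no overlap: both bids stay confidential, talks continue

print(blind_bid_round(10_000, 8_000))   # None - gap remains, nothing revealed
print(blind_bid_round(10_000, 11_000))  # 10500.0 - agreement declared
```

The point the prose makes falls out of the code: a rejected secret offer leaves no trace, so neither side's negotiating position is compromised by making it.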
So far as mediation is concerned, there has always been a general rumbling of disaffection among mediators with technology which, in true Ned Ludd style, was felt, or at least feared, to threaten their livelihoods. If the machine is so good at resolving disputes, what will happen to the need for human mediators and decision makers, and, even if they are still needed, will their input be reduced? If reduced, will fees also reduce? All perfectly understandable concerns. With such fears came a reluctance to take up the development of AI systems.
How that changed in 2020 with the onset of the global coronavirus pandemic. All of a sudden, with traditional face-to-face mediation becoming unavailable overnight, there was a rush, especially among mediators, to find out about and learn to use web conferencing platforms such as Zoom and Teams. These, of course, are not ODR tools, except in the most general of senses, but they do enable mediation to take place without the mediator having to meet the parties in person. The downside, of course, is that mediators suddenly felt they had finally taken that step into the world of ODR and, guess what, there was a sense of a ‘come on in, the water’s fine’ message: so far as ODR was concerned, there was nothing more to consider. Importantly, they were able to continue their practice and to do so without, I would imagine, any significant reduction in their success rates. A study by the Association of South West Mediators and Bristol Law Society found that 72% of respondents felt that mediating online was as, or more, effective than face-to-face mediation.1 The problem the pandemic has brought about with mediators generally is that they now feel themselves to be ODR practitioners, if not experts in the subject, with the result that they see no need to look at what else ODR has to offer or at any other system. Training courses appeared claiming to train in ODR when all they covered was how to use web conferencing platforms such as Zoom. The problem with web conferencing, of course, is that it does not really provide any resolution tools, nor does it change the mediation process in any significant way: mediators conduct their work just as before, through synchronous conversation. The big question is how much web conferencing will continue to be used in mediation once the pandemic is over (if it ever will be). No doubt it will continue, but to a lesser degree, and not when the value in dispute, and the ability of the parties to travel, justifies the mediation taking place in person. Will mediators feel they have tried ODR and that there is now no need to experiment with its different forms?
2 Algorithms – Curse or Cure?
One problem with ODR is that, as with all acronyms, it can stand for more than one phrase. In the context of dispute resolution it is not the most fortunate choice because, for Amazon sellers, the acronym refers to Order Defect Rate. The number of complaints about an Amazon seller is scored and, once it reaches a certain figure, Amazon applies various punishments, such as requiring a larger sum of money to be held permanently on deposit against complaints or, more generally, delaying the payment of monies to the seller.
The other acronym that is possibly keeping the brakes on ODR use and development is AI, together with its connected term, algorithm. If ever a word needed a media advisor, it is algorithm. Although it simply means, in one broad sense, a rule followed in a process, albeit most frequently in the context of ICT, it has taken on a rather negative image in wider society. The worst examples of AI are in areas where the machine is left to its own devices to make decisions that impact on humans.
For example, China in 2018 provided a good example of AI going wrong. The authorities had erected large screens in the streets to display the faces and names of people caught on camera jaywalking. The cameras would detect anyone seen committing the offence, with facial recognition software interrogating a database to identify the person concerned. On one occasion, the system seemed to work: it identified the face of a lady caught on camera in the road where she should not have been, displayed her face together with her name and details, and promptly fined her for the offence. Unfortunately, whilst the system correctly identified the lady in question, the head of a major air conditioning company, it was not clever enough to distinguish between the flesh and blood of a real human face and an image portraying that face. What the system had seen through the camera was the lady’s face in the road as a bus passed by. That was all the information the system needed under its algorithm to identify the person concerned. Whilst it successfully identified her name, she was in point of fact not there at all: her face was on the side of the bus, in an advertisement. The authorities responded by saying they would reprogram the system to distinguish between a real face and an advert.2
A more serious example of the many ways in which artificial intelligence (AI) and the algorithms it employs can fail in unanticipated and disturbing ways arose in the UK in 2020. Every year in the UK, final year secondary school pupils sit advanced-level qualification exams, known as A-levels. The results are used to help determine each pupil’s university place. As a result of the coronavirus pandemic, the government cancelled all A-level examinations. As an alternative, it brought in a system under which teachers would estimate how they thought their students would have performed had they sat the formal exams. So far so good, and a reasonable approach (although unavoidably unfair, perhaps, on those children who mess about all year in class but then, being good at handling pressure, perform brilliantly in the exams). Those teacher estimates were then adjusted by Ofqual, the examinations regulator, using an algorithm that moderated the teachers’ assessments in accordance with the past performance of each pupil’s school. This meant that pupils could be effectively penalized by something over which they had no control: the performance of the school as a whole. Ofqual’s objective had been to restrict the impact of any attempt, intentional or otherwise, by schools to give their pupils inflated assessments; the thinking was that schools whose assessments were suddenly far higher than their previous exam results had not been acting objectively, despite there being no evidence on which to base such an opinion. The result was huge disappointment for the many pupils who were marked down and so lost the places at their selected universities. The percentage of pupils downgraded in this way was very high, at around 40%. Whilst the intention had been to see some balance, with the assessments of certain pupils at high-achieving schools actually increasing, the percentage of pupils whose assessments were increased was only 2%. The anger was inflamed by the realization that those whose grades increased were from wealthier schools and communities while those downgraded were from poorer communities, suggesting that a rich-poor divide was being exacerbated. To make matters worse, the damage was seen to have been caused by one of those ‘dreadful’ algorithms. This became a major news story, giving the word ‘algorithm’ a prominence it had not previously held, and a highly negative one at that. The result was that the UK Government abandoned the plan and agreed that pupils would simply be assessed by their teachers, with no adjustments.
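A toy sketch may make the mechanism, and its unfairness, concrete. The capping rule below is an invented simplification, emphatically not Ofqual's actual model, but it shows how a cohort-level prior can override an individual teacher's judgment:

```python
# A toy illustration (not Ofqual's actual model) of how anchoring
# individual grades to a school's historical results can downgrade a
# strong pupil for reasons entirely outside their control.

def moderate(teacher_estimates: list[int], school_history: list[int]) -> list[int]:
    # Cap the cohort at the school's historical mean grade: any pupil
    # estimated above it is pulled down, whatever their teacher judged.
    historical_mean = round(sum(school_history) / len(school_history))
    return [min(grade, historical_mean) for grade in teacher_estimates]

# A pupil their teacher estimated at grade 8, in a school that
# historically averaged grade 5, is cut to 5 by the cohort-level prior.
print(moderate([8, 6, 5], school_history=[5, 4, 6, 5]))  # [5, 5, 5]
```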
One can readily imagine a similar scenario in the justice system, civil or criminal, wherever AI is used. In mediation, one might anticipate systems to help the mediator identify the BATNA (Best Alternative to a Negotiated Agreement), but is that to be programmed with an algorithm that looks purely at the law as applied to the facts, or does it go further and take into account any bias against groups in society identified in past case decisions? If the courts to which the dispute is heading in the absence of a resolution, or in which the case may already have been filed, have demonstrated bias against a community to which one of the parties belongs, should the ODR tool seeking to identify the BATNA take such bias into account, advising in effect that the party at risk of bias should make more effort to settle, i.e. reduce the amount claimed? There is already fast-growing concern at bias within decision systems as a whole, e.g. in employment and health, and recognition of the need to require the workings of algorithms to be audited. The lack of regulation, however, is at the real core of the problem:
And that is the problem with algorithmic auditing as a tool for eliminating bias: Companies might use them to make real improvements, but they might not. And there are no industry standards or regulations that hold the auditors or the companies that use them to account.3
So AI has something of a bad reputation, with dispute resolution professionals expressing offence, I am sure, at the mere suggestion that the machine can do their job better than they can. There is some merit in this view that goes beyond the anticipated Luddite negativism towards any change and disruption to the environment in which they practise. If a machine is going to make a decision more quickly and more reliably than a human, then they will rightly be concerned as to how this will affect their inflow of work.
The general problem with technology is that it can very quickly produce a promise of what it can achieve, but more often than not the value of that promise will not be realized without considerable human effort to reinforce the accuracy of the processes by which it is programmed.
However, where decision-making on justice is a relatively simple task that can be achieved objectively through well-designed algorithms, say parking fines, then surely good outcomes can be achieved with AI. The problem is that, when the design is not optimal, it may take a fair number of clearly bad decisions before the problem is recognized.
3 Algorithms Going Wrong in the Courts
There is now extensive experience in the United States of using algorithms to assist the court in deciding sentences in criminal cases. However, criticism has been equally extensive, particularly as focused on the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, used to assist courts in deciding whether someone before them for sentencing is likely to reoffend. ProPublica conducted a study using the risk scores generated by COMPAS for 7,000 people arrested in a single county in Florida, checking to see how many were charged with crimes over the following two years, that being the time frame used by COMPAS.
Of the people predicted by COMPAS to commit further crimes of violence in the following two years, and who would, therefore, on the strength of that prediction, have been given longer sentences than might otherwise be the case, four out of five did not commit any crime. When the study followed those committing a much wider range of offences, down to misdemeanours such as driving without a licence, the proportion who actually committed any crime within the two years improved but was still an unacceptably low 61%; 39% were treated more harshly on the basis that they might commit a crime within the two years, yet did not do so. The study also identified racial disparities in the predictions of COMPAS:
In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.4
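A little confusion-matrix arithmetic shows how this pattern arises. The numbers below are invented for illustration, not ProPublica's data; the point is that two groups can share the same overall accuracy while suffering very different kinds of error:

```python
# Illustrative confusion-matrix arithmetic (invented numbers, not
# ProPublica's data) showing how two groups can share the same overall
# accuracy while suffering very different kinds of error.

def rates(tp: int, fp: int, tn: int, fn: int):
    false_positive = fp / (fp + tn)  # flagged as high risk, did not reoffend
    false_negative = fn / (fn + tp)  # flagged as low risk, did reoffend
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return false_positive, false_negative, accuracy

groups = {
    "A": rates(tp=300, fp=250, tn=350, fn=100),  # errs towards false positives
    "B": rates(tp=150, fp=100, tn=500, fn=250),  # errs towards false negatives
}
for name, (fpr, fnr, acc) in groups.items():
    print(f"group {name}: FPR={fpr:.0%} FNR={fnr:.0%} accuracy={acc:.0%}")
# Both groups show 65% accuracy, yet group A is far more likely to be
# wrongly flagged as a future criminal - the shape of error ProPublica
# reported.
```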
4 The Other Side of the Coin
Notwithstanding the problems with COMPAS, a supportive argument for a ‘robot judge’ making determinations in criminal sentencing is set out in a paper published in 2017, co-authored by Dr Nigel Stobbs of the Queensland University of Technology, Australia, and Dan Hunter (Dean) and Mirko Bagaric of the Swinburne Law School, Australia, in which, whilst recognizing the risk of bias in algorithms, the authors argued that algorithms could help eliminate the subconscious bias of human judges.
In contrast to humans, computers have no instinctive or subconscious bias, are incapable of inadvertent discrimination and are uninfluenced by extraneous considerations or by assumptions and generalisations that are not embedded in their programs. They operate simply by applying variables that have been preprogrammed. Bias can infiltrate computerised sentencing only if an algorithm incorporates existing variables that result in disproportionately harsh sentences being imposed on offenders from minority groups. Consequently, for computerised sentencing to eliminate bias from sentencing decisions, the algorithm itself must be free of the discrimination that permeates the present sentencing regime. Once the programs and algorithms have been developed, there would be no scope for extraneous, racial considerations to have an impact on computerised sentencing decisions.5
On a more general note, Estonia is a country far ahead of most in its adoption of AI. For example, inspectors no longer have to spend time visiting farms to ensure that government subsidies for cutting fields are being correctly claimed. Satellite images are taken and fed into a database, and the algorithm applied assesses each pixel in the images to identify whether that patch of the field has been cut. Two weeks before the mowing deadline, the automated system notifies farmers by text or email with a link to the satellite image of their field. The system saved €665,000 in its first year because inspectors made fewer site visits and could focus on other enforcement actions. The problem with leaving decision-making to algorithms is that the developers may not identify all the potential failures until testing, so care has to be taken and the system challenged. The Estonians learnt that any cattle on the land can make the image processing less reliable, and so, in those cases, an inspector now visits the farm to check.
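The pattern, per-pixel classification with escalation to a human when a known confounder is present, can be sketched as follows. The thresholds and the cattle flag are illustrative assumptions, not the Estonian system's actual design:

```python
# A toy sketch of the pattern described: classify each pixel of a field
# image as mown or not, but refer the case to a human inspector whenever a
# known confounder (cattle) makes the imagery unreliable. Thresholds and
# the cattle flag are illustrative assumptions, not the Estonian system.

def assess_field(pixel_scores: list[float], cattle_present: bool,
                 mown_share_required: float = 0.8) -> str:
    if cattle_present:
        return "refer to inspector"  # known failure mode: escalate to a human
    mown_share = sum(score > 0.5 for score in pixel_scores) / len(pixel_scores)
    return "mown" if mown_share >= mown_share_required else "notify farmer"

print(assess_field([0.9, 0.8, 0.7, 0.95], cattle_present=False))  # mown
print(assess_field([0.2, 0.3, 0.9, 0.1], cattle_present=False))   # notify farmer
print(assess_field([0.9, 0.9, 0.9, 0.9], cattle_present=True))    # refer to inspector
```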
I think the message here is that algorithms and AI are not bad per se, and certainly should not be judged by bad applications, but they are dangerous to use without an enormous amount of hard work to anticipate when they can go wrong. Another initiative in Estonia, this time in AI-driven court decisions, came when the Ministry of Justice launched its robot judge to adjudicate on small claims disputes under €7,000. In appropriate cases, that is, those not involving challenges to oral evidence and which can largely be determined on the documents, the robot adjudicates. The protection against clear error is that decisions can be appealed to a human judge. Having a robot resolve simple disputes, with appeals where appropriate, frees up the human judges to focus on the more complex cases.
If one is looking for reasons why Estonia, a small country of only 1.3 million citizens, is so far ahead of the rest of the world in the application of AI, it probably has much to do with its appointment of a 28-year-old, Ott Velsberg, as the country’s Chief Data Officer.6 The role itself is groundbreaking, even before entrusting it to a 28-year-old. In the UK, the minister responsible for the digital world must spread his responsibilities to embrace, at the same time, culture, media and sport, whereas in Estonia the Minister for IT has only foreign trade to divert his attention.
5 Augmented Intelligence
The author believes that, whilst allowing machines to make decisions impacting people and society without human intervention should only be undertaken after considerable stress testing to identify the limitations, the real future for ODR is still in ‘AI’. However, the AI referred to here is not artificial intelligence but augmented intelligence. There is a huge difference, one that goes to the heart of what we in society want from technology. The former has the potential to significantly reduce the flow of work for which human dispute resolvers will be needed, whereas the latter requires the human element in order to function properly. Augmented intelligence uses machine learning technologies similar to those of AI, but does so in partnership with humans to support problem-solving, whilst AI seeks to bypass humans altogether.
Unlike AI, which, in my definition, does not require any form of human-generated action before making a decision, effectively replacing human cognitive functions, augmented intelligence, in the form I would say is most appropriate to dispute resolution, requires the human to make the decisions, with the technology augmenting the intelligence of the human and making him or her more productive in speed, quantity, consistency and value. In effect, augmented intelligence systems will give more work to human mediators and dispute resolvers, empowering them to operate more efficiently, at a quicker pace and with better results.
Let me refer to two examples of the application of augmented intelligence in dispute resolution, each of which requires the human neutral facilitator to play the key role. This form of AI is not, therefore, a threat to the dispute resolution profession. In addition, neither system requires the mediation itself to take place online. This is an important point in marketing ODR. Quite apart from the negative view of technology as threatening to usurp the mediator’s role, there is a false belief that ODR negates in-person mediation. It is not in contention that mediators, for the most part, when working in the presence of the parties, are often able to be more successful through applying their person-to-person skills. I say ‘for the most part’ because in some cases the relationship between the parties is so negative that attendance in person may distract from and inhibit the work of the mediator. Many mediators see in ODR a binary choice between online and in person and, given the benefits of in-person mediation, that leads them to reject ODR outright. Whilst the ability to mediate without the presence of the parties certainly extends the benefits and reach of mediation to many more people, as we have seen during the pandemic, that does not mean that ODR is restricted to distant mediation. In many ways the acronym TADR (Technology Assisted Dispute Resolution) would probably be more appropriate for what we call ODR, but that boat has long since sailed and ODR it must remain. Each of the systems of augmented intelligence I refer to here can be applied to face-to-face mediation just as well as to distant mediation.
Smartsettle ONE is a visual blind bidding tool developed by iCan Systems Inc. of British Columbia, Canada. Its application is in disputes where, apart from one or two conditions that can be pre-agreed, the only issue in dispute is a number, usually the sum of money to be paid by one party to the other. The number could, however, be the number of months over which an agreed amount is to be paid, or the number of items to be delivered in a business-to-business supply chain dispute. The essence of the system, and its chief USP, is that it enables the parties to make offers that are not seen by the other party.
Smartsettle ONE enables the parties to make offers by moving a flag along a horizontal bar comprising a range of numbers from 0 to the amount claimed in the proceedings. Two flags are provided, one green and one yellow. The position of the green flag is seen by the other party, but the position at any time of the yellow flag is not, and it is effectively a blind bid. These flags can be moved up or down any number of times through a series of rounds until a resolution is reached.
The augmentation of the mediator’s intelligence in Smartsettle ONE can be seen in a number of ways. First, the process enables the secret bids to have a direct effect in bringing about a resolution, in a way the mediator could not achieve without the system. A mediator may be given offers to put to the other party, but if he is not authorized to declare them, what he is told in private discussion cannot be used, and he has to be very careful about how he encourages a bid from the other party without breaching the first party’s confidentiality. However well he handles this pressure, in face-to-face mediation the parties may naturally be more cautious about declaring an offer they do not want revealed, save as a settlement, out of fear that confidentiality may in some way be breached. And if the mediator is authorized to declare an offer to the other party, then it is no longer secret. The fact that a settlement can be declared without either party knowing anything of the secret bids of the other, and without having to consider any actual offer other than the open offers, introduces a dynamic otherwise not available.
Importantly, it encourages the parties, confident in the confidentiality of their offers, to make more effort to settle. Another element of confidence with Smartsettle ONE is the knowledge that, if a secret offer, let us say by the paying party, the respondent to the claim, does not bring about a settlement, that offer can be withdrawn and replaced with a lesser amount. This gives the respondent an opportunity to settle early, when costs are low (and when the total liability including costs would likely be less than much later down the line, when, despite a lower payment on the claim, there is a much higher costs burden), without damaging his negotiating strategy if the offer is not accepted and has to be withdrawn. This provides an extra dimension to the impact of the mediator’s work; hence the technology can be said to augment the intelligence applied by the mediator.
The second factor is that, when the rules of the algorithms identify a ZOPA (Zone of Potential Agreement), whether through an overlap of offers or where the gap can be closed under the secret instructions fed into the system by the parties, a calculation is undertaken to identify which party made more effort to settle; that party is then rewarded with a settlement figure, declared by the system, more favourable than a simple splitting of the difference (overlap or gap). The impact of this rule is to encourage early moves to settle. Once again we see the positive impact of the augmentation of the mediator’s work.
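A minimal sketch of these two rules, detecting the ZOPA from the secret bids and rewarding the party who conceded more, might look as follows. The effort-weighting formula is an assumption made for illustration, not a reconstruction of Smartsettle's actual algorithm:

```python
# Illustrative sketch: detect a ZOPA from the secret (yellow-flag) bids,
# then reward the party who moved further from their opening figure with a
# better-than-midpoint share of the overlap. The weighting rule is an
# assumption for illustration, not Smartsettle's actual algorithm.

def settle(claimant_open: float, claimant_secret: float,
           respondent_open: float, respondent_secret: float):
    """Return a settlement figure if the secret bids cross, else None."""
    if respondent_secret < claimant_secret:
        return None  # no zone of potential agreement yet
    overlap = respondent_secret - claimant_secret
    # 'Effort' = how far each side has conceded from its opening position.
    claimant_effort = claimant_open - claimant_secret
    respondent_effort = respondent_secret - respondent_open
    total_effort = claimant_effort + respondent_effort
    claimant_share = claimant_effort / total_effort if total_effort else 0.5
    # The party who conceded more is rewarded within the overlap.
    return round(claimant_secret + overlap * claimant_share, 2)

# The claimant opened at 100k but secretly accepts 60k; the respondent
# opened at 40k but secretly offers 70k. The claimant conceded 40k, the
# respondent only 30k, so the figure lands above the plain midpoint of 65,000.
print(settle(100_000, 60_000, 40_000, 70_000))  # 65714.29
```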
Smartsettle Infinity is an augmented intelligence platform for resolving disputes with multiple elements, not just the single issue of the payment of a sum of money or agreement on another number. It does not seek to declare a settlement automatically but develops its intelligence from a combination of information fed in by the mediator or, in direct inter-party negotiation, by the negotiators, as to the comparative preferences and the level of importance each party attaches to the different elements of the dispute, as well as from what it learns from proposals put together by the parties. The intelligence in the system produces suggestions for settlement based on the information fed into it. How important is it, for example, in an employment dismissal dispute, that a positive reference be given? It might be that the employee is not concerned about receiving a good reference, perhaps because he has already arranged alternative employment, and much prefers to receive as high a monetary amount as he can obtain. So the machine’s ability to come up with suggestions that are more likely to be acceptable to the other party depends on the input from the intelligence of the mediator or negotiator.
In addition to its ability to put together proposals, the very fact that they are generated by the machine rather than by the other party helps the receiving party to look at them more objectively. Infinity also allows parties to submit their own proposals but presents them as if they had been generated by the machine. The machine further aids the parties by assessing a value for each proposal they generate, based on that party’s underlying interests as fed into the system by the mediator or negotiator. At any time a party can share a proposal with the other party and, once a package has been agreed by both, a provisional agreement is declared. However, Infinity does not stop at that point: as a result of its machine learning, it will consider submitting an improved proposal that it considers better for both parties, by finding ‘value left on the table’.
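A toy sketch of this kind of preference-weighted scoring follows. The issues, weights and outcome scores are invented for illustration and are not Infinity's actual model; the point is how mediator-entered preferences let the system value packages for each side and find an improvement for both:

```python
# Toy sketch of preference-weighted package scoring of the kind described:
# the mediator feeds in each party's issue weights and outcome scores, the
# system values any package for each side, and it can then test whether an
# alternative package finds 'value left on the table'. All numbers here
# are invented assumptions; this is not Infinity's actual model.

WEIGHTS = {
    "employee": {"payout": 0.8, "reference": 0.2},  # cares mostly about money
    "employer": {"payout": 0.5, "reference": 0.5},
}

# How each side scores the possible outcomes on each issue (0 worst, 1 best
# from that side's point of view). A reference costs the employer little.
SCORES = {
    "employee": {"payout": {20_000: 0.3, 30_000: 0.7},
                 "reference": {"none": 0.0, "positive": 1.0}},
    "employer": {"payout": {20_000: 0.8, 30_000: 0.4},
                 "reference": {"none": 0.5, "positive": 0.6}},
}

def value(party: str, package: dict) -> float:
    """Weighted value of a settlement package for one party."""
    return sum(WEIGHTS[party][issue] * SCORES[party][issue][outcome]
               for issue, outcome in package.items())

agreed = {"payout": 20_000, "reference": "none"}        # provisional agreement
improved = {"payout": 20_000, "reference": "positive"}  # machine's suggestion
for party in WEIGHTS:
    print(party, round(value(party, agreed), 2), "->",
          round(value(party, improved), 2))
# Both values rise (employee 0.24 -> 0.44, employer 0.65 -> 0.7), so the
# system would put the improved package forward as better for both sides.
```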
One can see how the ability of Infinity to learn the preferences and interests of the parties, so as to evaluate packages, to enable proposals to be put forward anonymously and to suggest proposals itself, fits fully within augmented intelligence. At all times the mediator and negotiators remain in charge, and no settlement is reached save one agreed to by both parties. The dependence on the quality of the preference information fed in by the mediator shows the importance of a skilled mediator. These algorithms will not usurp the role of the mediator.
6 Summary
Whilst innovative technologies were being developed around 20 years ago to assist people in dispute to reach resolution more easily, there has not in reality been significant uptake by mediators beyond platforms to manage cases for the courts, ombudsmen and similar dispute handling organizations. The focus on processes that actively bring about resolution has been limited, with blind bidding systems, which at one time were on the increase, halted in their commercial development by the issue of a patent to one such provider.
The coronavirus pandemic suddenly increased the use of online mediation, if not of ODR itself, on non-specific web conferencing platforms, but such use merely replicates existing practices and does not innovate. The result is that many mediators believe they are now operating ODR when they are simply mediating at a distance. The true potential of ODR may take longer to achieve for so long as mediators believe they have already reached that point. Once mediators are able to meet the parties in person again, what interest there has been in ODR may begin to reduce. If mediators believe that Zoom and the like are ODR and that they have ‘done ODR’, judging it on their web conferencing experience, then, when they cease to use web conferencing, will there be any interest in what ODR really has to offer? At the same time, and certainly in the field of decision-making, AI and the very word ‘algorithm’ have been developing a negative image in the mind of the general public as a result of public discussion of poor implementations. It is the contention of the author that the direction to be taken with ODR lies in the development of technology focused not just on the medium for exchanging messages or managing caseload but on tools that will themselves be seen to encourage and facilitate agreement.
This will be better focused, the writer believes, by recognizing a distinction between artificial intelligence and augmented intelligence, the former giving control of the decision-making entirely over to the machine, whereas the latter retains control and decision-making with the human whilst significantly assisting the human in that role in a way that could not be achieved without the technology. Two examples of such systems developed by Smartsettle Resolutions Inc. of British Columbia, Canada, namely Smartsettle ONE and Smartsettle Infinity, show the way forward with what one might be tempted to call ‘Real ODR’. Such developments will help remove the fear among mediators that technology is a threat to their work, as these systems underline the necessity not just for mediators but for mediators with skills sufficient to gain significant benefit from the technology.
1 www.bristollawsociety.com/new-study-shows-online-legal-mediation-could-be-the-future/.
2 “Chinese AI Caught Out by Bus Ad”, BBC News, 27 February 2018, available at: www.bbc.co.uk/news/technology-46357004.
3 Alfred Ng, “Can Auditing Eliminate Bias from Algorithms?”, The Markup, 23 February 2021, available at: https://themarkup.org/ask-the-markup/2021/02/23/can-auditing-eliminate-bias-from-algorithms.
4 Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias”, ProPublica, 23 May 2016, available at: www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
5 https://eprints.qut.edu.au/115410/10/CLJprooffinal25Nov2017.pdf.
6 “Can AI Be a Fair Judge in Court? Estonia Thinks So”, Wired, 25 March 2019, available at: www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.
Article: What’s Good for ODR? AI or AI
Keywords: Augmented Intelligence, Artificial Intelligence, algorithms, ODR
Authors: Graham Ross
DOI: 10.5553/IJODR/235250022021008001002
Citation: Graham Ross, “What’s Good for ODR?”, International Journal of Online Dispute Resolution, 1 (2021): 20-30.
Abstract: Whilst the coronavirus pandemic saw mediators turn to web conferencing in numbers to ensure mediations continued to take place, it is believed that the rate at which individual mediators, as opposed to organizations handling volumes of disputes, began to use online dispute resolution (ODR)-specific tools and platforms remained comparatively slow. Mediators may have felt that, in using web conferencing, they had made the move to ODR. Another hurdle standing in the way of generating confidence in ODR-specific tools is that the more exciting, though less used, developments were powered by artificial intelligence (AI), and yet the mere mention of AI and algorithms creates its own barrier, in no small part due to examples of shortcomings with AI and algorithms outside of ODR. The writer feels that the future lies in developments in ODR that benefit from AI, though less in the traditional sense of the acronym, Artificial Intelligence, and more as Augmented Intelligence. The paper explains the difference: Artificial Intelligence leaves the machine in control, whilst Augmented Intelligence retains control and decision-making with the human, assisted by the machine to a degree or in a format not possible by the human alone. The paper highlights two examples of ODR systems applying Augmented Intelligence.
Whilst the coronavirus epidemic saw mediators turn to web conferencing in numbers to ensure mediations continued to take place, it is believed that the rate at which individual mediators, as opposed to organizations handling volumes of disputes, began to use online dispute resolution (ODR)-specific tools and platforms remained comparatively slow. Mediators may have felt that, in using web conferencing, they had made the move to ODR. Another hurdle standing in the way of generating confidence in ODR-specific tools is that exciting developments used the less were powered by artificial intelligence (AI) and yet mention of AI and algorithms would create its own barrier, in no small part due to examples of shortcomings with AI and algorithms outside of ODR. The writer feels that the future lies in developments in ODR that benefit from AI. However that is less the traditional meaning of the acronym being Artificial Intelligence but more as Augmented Intelligence. The paper explains the difference with Artificial Intelligence leaving the machine in control whilst Augmented Intelligence retains control and decision-making with the human but assisted by the machine to a degree or in a format not possible by the human alone. The paper highlights examples of two ODR systems applying Augmented Intelligence. |