A Introduction
In recent years, the European Union’s (EU) ‘regulatory lens’ zoomed in on artificial intelligence (AI).1x In the AI Act (COM(2021)206 final). As per Ann. I, AI as a field encompasses “(a) machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning, (b) logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems, [and] (c) statistical approaches, Bayesian estimation, search and optimization methods”. Alternatively, AI systems are defined as software developed with one or more of these techniques and approaches that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3.1, AI Act). In April 2018, the European Commission (EC) presented its AI for Europe plan,2x Communication from the Commission: Artificial Intelligence for Europe, COM(2018) 237 final. which set out three key objectives: boosting the EU’s technological and industrial capacity and AI uptake across the economy, preparing for AI-driven socio-economic changes and ensuring an appropriate ethical and legal framework. These principles paved the way for the work of the EC’s independent high-level expert group on AI (HLEG), whose key contribution is the Ethics Guidelines (Guidelines), from April 2019,3x Independent High-Level Expert Group on AI (HLEG), Ethics Guidelines for Trustworthy AI, 8 April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. endorsed by the EC in its plan to build trust in human-centric AI.4x Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Building Trust in Human-Centric Artificial Intelligence, COM(2019)168 final. In the Guidelines, the HLEG suggested a four-pillar ethical framework for AI regulation, namely prevention of harm, respect of human autonomy, fairness and explainability. Building on this framework, in July 2020 the HLEG published a revised (and final) version of an assessment list for trustworthy AI (ALTAI).5x HLEG, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, 17 July 2020, https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment. This list mentions seven requirements that particularize the above-mentioned ethical principles, i.e. human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being and accountability.
Against the backdrop of the work accomplished by the HLEG, in October 2020, the European Parliament (EP) adopted a Resolution, recommending that the EC submit a legislative proposal on the ethical aspects of AI.6x European Parliament Resolution, of 20 October 2020, with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)). In response to the EP’s recommendation, on 21 April 2021 – marking a culminating point of a nearly five-year build-up – the EC submitted a Regulation proposal, laying down harmonized rules on AI and amending certain Union legislative acts (AI Act).7x Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts, COM(2021) 206 final. The AI Act was accompanied by a report assessing the proposed regulatory options.8x Commission Staff Working Document, Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021, COM(2021) 206 final. The same day, the EC published a Communication on fostering a European approach to AI,9x Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European approach to Artificial Intelligence, COM(2021) 205 final. stressing that the AI Act follows a risk-based approach that is “future-proof and innovation-friendly” and is designed to “intervene only where this is strictly needed”.10x Ibid.
There is little doubt that the AI Act is a laudable first step in AI regulation, in line with the EU’s ambition to get ahead in the global ‘AI race’. However, bearing in mind the objective, expressed in the Better Regulation Agenda, that “the principles of better regulation will ensure that measures are evidence-based”,11x Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Better Regulation for Better Results – An EU Agenda’, COM(2015) 215 final at 3. two questions remain open: first, did the EC launch an evidence-finding procedure in view of ‘tailoring’ the regulatory framework established by the AI Act? Second, did the evidence gathered and assessed by the EC warrant that regulatory framework? This article will address these questions by critically assessing the correspondence between, on the one hand, the available evidence on AI-related risks and their taxonomy and, on the other hand, the regulatory choices made by the EC on these points, as reflected in the AI Act’s provisions.
Conceptually speaking, the link between evidence and policy can appear counterintuitive, as evidence is usually associated with judicial or non-judicial dispute resolution proceedings. The evidence-policy interplay became a point of focus specifically in risk regulation, the legitimacy of which largely depends on proving the odds of an event (typically a harm) occurring.12x H. Riesch, ‘Levels of Uncertainty’, in S. Roeser, R. Hillerbrand, P. Sandin, M. Peterson (eds), Essentials of Risk Theory, Springer (2013), 29-56, at 30; D. J. Spiegelhalter, H. Riesch, ‘Don’t Know, Can’t Know: Embracing Deeper Uncertainties When Analysing Risks’, Phil. Trans. R. Soc. A, vol. 369 (2011), 4730-4750, at 4734. Consequently, regulators – including those of the EU – often engage in discovery procedures, akin to those applied in the context of litigation. However, in policy, evidence (in the form of, say, scientific data) is seldom the sole grounds for regulatory address. Risks are subject to “selective hearing”13x P. R. Brown, A. Olofsson, ‘Risk, Uncertainty and Policy: Towards a Social-Dialectical Understanding’, J. Risk Res., vol. 17, no 4 (2014), 425-434, at 427. from public authorities, given the diverging data offered by scientists and competing (economic, legal, social, etc.) interests at stake.14x Ibid., at 428, emphasis added. Consequently, risk regulation translates to a search for balance between, on the one hand, the careful selection of trustworthy and probative evidence and, on the other hand, regulatory efficiency, understood as “the measure of the capacity of chosen legislative patterns in obtaining results that are as close as possible to realizing the ideal expressed by the political actors, considering the context of operation”.15x M. Zamboni, ‘Legislative Policy and Effectiveness: A (Small) Contribution from Legal Theory’, Eur. J. Risk Reg., no 9 (2018), 416-430, at 420.
Some argue that this balance is rarely struck, given that the policy cart is frequently leading the scientific horse.16x J. A. Curry, Statement to the Committee on Science, Space and Technology of the United States House of Representatives, Hearing on ‘The President’s UN Climate Pledge’ 15 April 2015 cit. in L. Bergkamp, ‘The Reality of Risk Regulation’, Eur. J. Risk Reg., no 8 (2017), 56-63, at 59. Yet, as per the Better Regulation Agenda, evidence seems paramount if policy is to “deliver tangible and sustainable benefits for citizens, business and society as a whole”.17x COM(2015) 215 final, cit. supra, at 3.
In this context, it is not self-evident that the EC’s risk-based approach to AI is evidence-based at all. The EC seems to rely on the assumption that, while all AI systems are potentially harmful, some pose a greater threat of causing harm than others. This is presumably one of the reasons why the AI Act zooms in on so-called high-risk AI,18x (2020/2012(INL)) cit. supra, para. 13. i.e. systems that through their development, deployment and use “entail a significant risk of causing injury or harm to individuals or society, in breach of fundamental rights and safety rules”.19x Ibid., Art. 4(e) of the text of the legislative proposal requested by the European Parliament. Based on this definition of ‘high-risk’, the AI Act integrates a four-level taxonomy of risks (ranging from non-high to unacceptable), with corresponding requirements for compliance. AI systems presenting unacceptable risks are, in principle, prohibited (Art. 5). This ban concerns, inter alia, systems that either use subliminal manipulation of a natural person’s consciousness (Art. 5(1)(a)) or exploit vulnerabilities of a specific group of persons due to their characteristics, e.g. age, physical or psychological disability (Art. 5(1)(b)), in order to distort people’s behaviour in a way that is likely to cause physical or psychological harm. High-risk AI systems are not subject to an outright ban, but to mandatory compliance requirements, such as transparency (Art. 13) and human oversight (Art. 14). Two classes of AI technologies can qualify as ‘high risk’. First, systems used as safety components of products, subject to third party ex ante conformity assessment.20x AI Act, cit. supra, Art. 3(14). Second, stand-alone systems with mainly fundamental rights implications, listed in Annex III of the AI Act.21x Ibid. This Annex contains eight categories which include biometric identification and categorization of natural persons, management and operation of critical infrastructure such as road traffic or energy supply, education and vocational training, recruitment of workers, access to essential private services and public services and benefits, law enforcement, migration, asylum and border control and factual and legal research and interpretation tools used by judicial authorities. Beyond this twofold definition, AI systems intended to interact with natural persons, emotion recognition or biometric categorization systems used in a context other than criminal offences detection, prevention and investigation permitted by law, and deep fake AI systems generating or manipulating image, audio or video content are deemed to raise limited risks and are therefore solely subject to an obligation of transparency (Art. 52). Finally, for non-high-risk AI systems, there are no mandatory compliance requirements. However, developers and users are encouraged to voluntarily apply codes of conduct integrating the ‘high-risk requirements’ (Art. 69).
Bearing in mind the Better Regulation Agenda,22x COM(2015) 215 final, cit. supra. one would expect that both the definition of AI-related risks and their taxonomy in the AI Act be backed by evidence. Yet, the AI Act is not particularly enlightening on how the evidence-to-law leap was made by the EC. To shed more light on this point, Section B will retrace the preparatory stages of the AI Act, analysing the discovery procedures launched by the EC, as well as the assessment it performed of the evidence gathered. The analysis of that assessment in Section C will lead to the conclusion that the AI Act does indeed appear to be primarily rooted in policy considerations, due to the insufficient correspondence between the type of regulation that was prima facie warranted by the evidence gathered by the EC and the type of regulation ultimately established through the AI Act.
For the purpose of pursuing said analysis, the operative concepts of this study should be clarified. Facts will be defined as tangible and verifiable points of experience which provide the basis for addressing an enquiry on the application of a legal standard.23x F. J. Hibberd, ‘Situational Realism, Critical Realism, Causation and the Charge of Positivism’, Hist’y H.S., vol. 23, no 4 (2010), 37-51, at 40; G. A. Nunn, ‘Law, Fact and Procedural Justice’, Emory L.J., vol. 70, no 6 (2021), 1273-1324, at 1288 seq. Evidence will be understood as a knowledge-seeking process which includes “[an] initial body of data or ‘facts’ used as premises, the inferences used to draw conclusions from these premises, and the use of a chain of such inferences to draw out a line of reasoning (chain of inferences) that makes more (or less) probable some proposition that is in doubt, and which needs to be settled”.24x D. Walton, Legal Argumentation and Evidence, Penn. State. Univ. Press (2002), at 106.
To delineate the ‘chains of proof’, evidence scholars typically distinguish between factum probans (that which proves) and factum probandum (that which is proven).25x Cf. inter alia J. Michael, M. J. Adler, ‘The Trial of an Issue of Fact: II’, Colum. L. Rev., vol. 34, no 8 (1934), 1462-1493, at 1279. In the context of a trial, the facta probantia are facts gathered (in the course of so-called fact-finding, investigation or discovery procedures) or offered as evidence by litigants in support of their claims. From the perspective of the ‘logic of proof’, the facta probantia act as premises aimed at allowing a competent authority to infer the existence or non-existence of a probandum. Though originally tailored for dispute resolution, these concepts can mutatis mutandis be transposed into the context of policy, given that investigation or discovery procedures (like expert opinions or public consultations) are launched by regulators in view of gathering – the equivalent of – facta probantia, for the purpose of establishing ‘that for which evidence is sought’ (a probandum), i.e. a response to the enquiry of what a given regulatory address ought to be.
Regulation will be defined as “government intervention in the private domain or a legal rule that implements such intervention [which is] a binding legal norm created by a state organ that intends to shape the conduct of individuals and firms”.26x B. Orbach, ‘What Is Regulation’, Yale J. Reg., vol. 30, no 1 (2012), at 6.
It will be understood as a generic term which includes, but is not limited to, EU legislative instruments, which will be the point of focus in this study, given that the AI Act’s adoption follows the ordinary legislative procedure.27x Art. 294, Treaty on the Functioning of the EU (TFEU).
B AI Regulation, Formally Fact-based
I The Commission’s Evidence-Finding Procedure in Characterizing High-Risk AI
After the publication of the White Paper on AI,28x COM(2020) 65 final. the EC launched a public consultation, with the aim of gathering stakeholders’ opinions on the policy options suggested by the EC.29x Ibid., at 9. Five policy options were suggested. Option 1: EU legislative instrument setting up a voluntary labelling scheme; Option 2: a sectoral ‘ad hoc’ approach; Option 3: horizontal EU legislative instrument establishing mandatory requirements for high-risk AI applications; Option 3+: horizontal EU legislative instrument establishing mandatory requirements for high-risk applications + co-regulation through codes of conduct for non-high-risk applications; Option 4: horizontal EU legislative instrument establishing mandatory requirements for all applications, irrespective of the risk they pose. This consultation, which ran from 19 February to 14 June 2020, included AI developers and deployers, companies and business organizations, small and medium-sized enterprises (SMEs), public administrations, civil society organizations, academics and citizens.30x EC, Public consultation on the AI White Paper: Final Report, November 2020, available at: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68462. A total of 1,216 respondents31x There are two differences between the information published online and in the EC’s final report of the public consultation on the AI White Paper. First, the information available online numbered 1,216 respondents. The same goes for the .csv document that contains raw data. However, the final report states that “the public consultation attracted 1,215 contributions” (at 3). Also, according to the online information, 131 business associations responded in the public consultation; 130 according to the final report. Second, the consultation ran from 20 February to 14 June 2020 according to the information available online, and from 19 February to 14 June 2020 according to the final report (at 3). Cf. EC, Public consultation on the AI White Paper: Final Report, cit. supra. answered 60 questions,32x These questions were either ‘closed’ questions with suggested answers, which allow a quantitative analysis, or open questions with free text answers, allowing a qualitative analysis. Unfortunately, the authors of this study did not have access to the 450 written answers attached to some responses. Their observations were completed on the basis of the EC’s final report of the public consultation on the AI White Paper cit. supra, and the EC’s Staff Working Document Impact Assessment Accompanying the AI Act, SWD(2021) 84 final, Part 2/2. 42 of which contained predefined answers. These questions fell into four main categories, namely achieving an ecosystem of excellence (as per Section 4 of the White Paper on AI); achieving trustworthy AI; the desirability of an AI regulation focused on high-risk AI systems; and the desirability of updating the existing EU legislation on product safety and liability.
Though the answers received will not be detailed here, focus will be put on two points which were presumably key in shaping the EU’s regulatory address of AI. The first point pertains to the need for a new regulatory framework. In this regard, over 37% of the respondents considered that such a framework was, indeed, needed; 29.19% considered that existing EU legislation may have some gaps and 2.63% viewed the latter as sufficiently apt to regulate AI systems. It should be noted that the respondents’ free text answers show relative consensus in their preference for a domain/sector-specific rather than a blanket regulatory framework.
The second key point pertains to placing the primary focus on regulating so-called high-risk AI systems. Here, 37.66% of the respondents agreed, 27.22% disagreed and 18.17% expressed another opinion. In a similar vein, the EC enquired whether the respondents endorsed the definition of ‘high-risk’ contained in the previously published White Paper on AI.33x COM(2020) 65 final at 17. The EC stressed two cumulative criteria for qualifying an AI system as high-risk. First, that the AI application is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur; second, the AI application in the sector in question is, moreover, used in such a manner that significant risks are likely to arise. 21.63% of the respondents agreed with said definition, 4.52% disagreed and 63.08% refrained from answering. Respondents also had the opportunity to point out the AI applications or uses which they viewed as most concerning. These included inter alia lethal autonomous weapons, remote biometric identification, autonomous vehicles, AI systems dedicated to critical infrastructure (e.g. electricity), health, human resources and employment (e.g. AI-powered recruitment tools), analysing and manipulating human behaviour, predictive policing, mass surveillance, political (dis)information and law enforcement.
Additionally, the EC launched another consultation related to the Inception Impact Assessment, which proposed different policy options and regulatory instruments.34x EC, ‘Artificial Intelligence – Ethical and Legal Requirements’ available at: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/feedback_en?p_id=8242911. There is – here again – an inconsistency regarding the number of respondents, as the EC’s Staff Working Document Impact Assessment accompanying the AI Act displays only 130 different stakeholders. Cf. SWD(2021) 84 final, Part 2/2, at 19. There are 55 displayed business associations in the Staff Working Document and 54 online; 28 company or business organizations in the Staff Working Document, 31 online; 15 NGOs in the Staff Working Document, 16 online; 7 EU citizens in both the Staff Working Document and online; 7 academic or research institutions in both the Staff Working Document and online; 6 other respondents in the Staff Working Document, 5 online; 5 consumer organizations in the Staff Working Document and 4 online; 4 trade unions in both the Staff Working Document and online, and 3 public authorities in both the Staff Working Document and online. Assuming that the online information was regularly updated, the authors consider that that information is more accurate. Five policy options were suggested,35x Cf. supra, note no 29. all of which were assessed in terms of their economic impacts, costs for public authorities, social impact, and impacts on safety, on fundamental rights and on the environment. The EC received 131 feedback instances (open comments). These were summarized in a Staff Working Document,36x COM(2021) 206 final, SWD(2021) 84 final, at 19-20. which stressed that “most of the respondents are explicitly in favour of a risk-based approach [rather] than blanket regulation of all AI applications”.37x Ibid., at 19. Though the EC did not disclose any official statistics for this consultation,38x Except statistics by category of respondents and by country which are of lesser importance here (https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/feedback_en?p_id=8242911). an overview39x This analysis was conducted by the authors of this article and is not to be viewed as an official source of statistical data. of its results shows that less than 5% of respondents considered that there was no need for a new regulatory instrument on AI. Around 26% of the respondents considered that soft law was a good starting point, while approximately 45% stated that mandatory requirements ought to be limited to high-risk AI systems. Finally, 13% of the respondents favoured a combination of the soft-law and high-risk options.
Amongst the respondents who commented on the enforcement of the AI Act (the number of whom is unknown), over 50% were in favour of a combined ex ante risk self-assessment and ex post enforcement for high-risk AI systems. Most highlighted the importance of clearly defining ‘high-risk’, recommending that the EC carefully map out the gaps in existing legislation before moving forward with new legislation.40x This is acknowledged several times in the EC’s Staff Working Document, cit. supra, at 16. Here again, preference was given to a sectoral rather than a horizontal regulatory approach.41x Ibid., at 19-20.
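For readers who wish to reproduce a tally of this kind, the sketch below illustrates, in Python, how percentage shares such as those reported above could be computed once each of the 131 open comments has been manually coded with a position label. It is a minimal, hypothetical illustration only: the file name, column name and category labels are assumptions made for the example and do not reproduce the authors’ actual data or coding scheme.

```python
import csv
from collections import Counter

def tabulate_feedback(path: str) -> dict[str, float]:
    """Return the percentage share of each manually coded position label."""
    with open(path, newline="", encoding="utf-8") as f:
        # 'position' is an assumed column holding one coded label per comment,
        # e.g. "no new instrument", "soft law", "high-risk only", "combined".
        positions = [row["position"] for row in csv.DictReader(f)]
    counts = Counter(positions)
    total = sum(counts.values())
    return {label: round(100 * count / total, 2) for label, count in counts.items()}

if __name__ == "__main__":
    # Assumed file name: a CSV export of the coded feedback instances.
    shares = tabulate_feedback("inception_ia_feedback_coded.csv")
    for label, pct in sorted(shares.items(), key=lambda item: -item[1]):
        print(f"{label}: {pct}%")
```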
The cited consultations can, indeed, be viewed as discovery procedures in the sense that their results were meant to uncover (or prove) prevailing trends amongst interested milieus regarding the definition of high-risk AI, the desirability of a new regulatory framework and the type of regulation (sectoral/horizontal) to be enforced. It is reasonable to assume that said results were prima facie seen by the EC as trustworthy and probative. This can be inferred from the manner in which the consultations were organized. In particular, in the public consultation set up after the White Paper on AI, the EC took the precaution of drafting predefined answers to certain questions. An evidence scholar might interpret this as delineating the object of the probans (evidence to be gathered), hinting at the EC’s intention to receive feedback on specific – rather than general – points considered as relevant in proving the probandum, i.e. the design of the future AI regulatory framework. This being said, the conclusions drawn by the EC from the answers received42x Cf. infra, Section C. give the impression that the public consultations were also intended to validate the policy choice previously expressed in the White Paper on AI: though a discovery procedure was formally launched, the regulatory framework ultimately established was not going to rely on evidence alone, as priority seemed to already be given to a pre-established policy strategy on AI.
II The Commission’s Sense-Making of the Evidence Gathered
Lidskog and Sundqvist observe that “fact finding and sense making are seen as different and discrete spheres of activity”.43x R. Lidskog, G. Sundqvist, ‘Sociology of Risk’, in S. Roeser, R. Hillerbrand, P. Sandin, M. Peterson (eds), Essentials of Risk Theory, Springer (2013), 75-105, at 83, emphasis added. These are separate (epistemic) endeavours:44x Cf. definition of evidence supra, Section A. sense-making implies inferential reasoning on the basis of the probans, allowing one to conclude whether the probandum is sufficiently established. In the context of dispute resolution, assessment is generally a regulated process. When codified, a law of evidence usually frames the adducing and interpretation of evidentiary facts by, inter alia, distributing the burdens of proof, defining the relevance, admissibility and probative value of various types of evidence (documentary, witness, hearsay, expert reports), requiring impartiality from the evidence-assessing authority, etc. When evidence serves as a basis for policy, evidence-assessment is, in principle, not regulated through rigid procedures. Regulators enjoy a level of discretion in choosing the type of discovery procedures and in their interpretation of the evidence thus gathered. Though evidence-gathering and assessment guidelines are mentioned in the Better Regulation Agenda,45x COM(2015) 215 final, cit. supra, at 3-4. there is no unique or uniform procedural framework that regulates – as would a law of evidence – the above-mentioned stages of evidence production and assessment.
As a general rule of thumb, evidence for the purpose of policy – as is the case with litigation – should aspire to accuracy.46x For the purpose of this study, accuracy will be understood as an adequate representation of some part of reality. In evidence theory, accuracy is associated with the degree of probability tied to a probandum. The higher the level of probability, the more accurate the evidence used is presumed to be. Cf., inter alia, L. Zagzebski, ‘Intellectual Motivation and the Good of Truth’, in M. DePaul, L. Zagzebski (eds), Intellectual Virtue. Perspectives from Ethics and Epistemology, Oxford, Oxford Univ. Press (2004), 135-154, at 135-136. This is particularly important in the context of risk regulation, given that under- or overstating risks would push regulators to “err on the side of caution”.47x C. L. Silva, H. C. Jenkins-Smith, ‘The Precautionary Principle in Context: U.S. and E.U. Scientists’ Prescriptions for Policy in the Face of Uncertainty’, Soc. Sci. Quart’y, vol. 88, no 9 (2007), 640-664, at 649. To attain a plausible level of accuracy in representing risks, the 2016 EFSA guidance, for example, encourages a systematic and structured approach to discovery and assessment, as a way to avoid risk misrepresentation.48x EFSA Scientific Committee, Guidance on Uncertainty in EFSA Scientific Assessment, EFSA Journal (2016), at 54. The analogy with evidence law on the point of accuracy is also visible in the Monsanto case,49x CJEU, 8 September 2011, Monsanto SAS et al., joined cases C-58/10 to C-68/10, EU:C:2011:553. See also ECJ, 5 February 2004, Commission v. France, case C-24/00, EU:C:2004:70, para. 55. Cf. also COM(2015) 215 final, cit. supra, at 6. in which the Court of Justice of the European Union (CJEU) stated that, in the absence of conclusive evidence of possible harm, regulators should rely on the best available evidence.50x This is a mutatis mutandis application of the best evidence rule, seminally mentioned in J. Morgan, Essays upon the Law of Evidence, New Trials, Special Verdicts, Trials at Bar and Repleaders, vol. 1, London, Johnson (1779), at 2-3. Though the selection of the ‘best available evidence’ is left to the regulators’ discretion, accuracy as an end goal – in principle – requires that the ‘best evidence’ be selected based on the properties of the facts discovered, not the preferences of the evidence-finding body.51x ECJ, 5 May 1998, UK v. Commission, case C-180/96, EU:C:1998:192, para. 42.
In the context of the AI Act, it is precisely from the perspective of accuracy (as transposed from evidence theory) that one can discern a dissonance in the evidence/law correspondence: assuming that the answers gathered through the public consultations give an ‘accurate portrayal’ of the stakeholders’ views on AI regulation, they do not seem to have played a decisive role in the regulatory choices expressed in the AI Act. First, the EC did not seem to follow the answers of the respondents as regards the type of framework, given that most respondents were, in fact, in favour of a sectoral rather than a horizontal regulation. An evidence scholar might find this surprising: when a majority of respondents shares an opinion, that opinion is usually a strong indicium of the direction that a future instrument ought to take. Second, only a minority of respondents (21.63%) agreed with the definition of high risk given in the White Paper on AI.52x Cf. supra, Sub-section B.I. In spite of this absence of consensus, the EC assumed that said definition was to be integrated into the AI Act after all. Based on this definition, the EC then specified the criteria – listed in the AI Act – used in classifying high-risk AI systems. A point where the EC did factor in the results of its ‘discovery’ is the regulatory focus on high-risk AI. Amongst the 131 feedback instances of the public consultation on the Inception Impact Assessment, those who expressed their views on high-risk AI systems (the number of whom remains unknown) did, indeed, consider that the new mandatory requirements ought to primarily address those systems.53x EC Staff Working Document, cit. supra, at 19-20.
C AI Regulation, (Exclusively?) Policy Driven
That policy objectives influence discovery is neither surprising nor unprecedented in EU risk regulation. However, attaining those objectives should be supported by the – best – available evidence of the risks addressed through regulation. In the field of the precautionary principle, for example, though “what is an ‘acceptable’ level of risk for society is an eminently political responsibility”,54x COM(2000) 1 final, at 3. decision makers faced with an unacceptable risk, scientific uncertainty and public concerns “have a duty to find answers”.55x Ibid.
The role of policy objectives in addressing various types of risks is essentially exclusionary: while there might be data on a variety of risks of harm, only those falling under a pre-established standard of protection will – presumably – be priority candidates for regulatory address. The CJEU confirmed that “policy is to aim at a high level of protection and is to be based in particular on the principles of preventive action […] integrated into the definition and implementation of other Community policies”.56x ECJ, 5 May 1998, National Farmers’ Union et al., case C-157/96, EU:C:1998:191, para. 64.
Similarly, the EC’s Communication on the precautionary principle57x COM(2000) 1 final. recalls the requirement of consistency with existing EU legislation, the important specification being that this requirement be observed when ‘new’ regulation is enacted in areas “where scientific data are available”.58x Ibid., at 4. See also G. Majone, ‘Foundations of Risk Regulation: Science, Decision-Making, Policy Learning and Institutional Reform’, E. J. Risk Reg., no 1 (2010), 5-19, at 6. It follows that consistency with policy objectives should not be interpreted as creating a discharge from gathering and assessing evidence. Treating those objectives and the available evidence as equally important – rather than letting policy supersede evidence – reflects the idea of balance, mentioned above.59x Cf. supra, ‘Introduction’. Granted, the evidence gathered for the purpose of policy may not – because it sometimes cannot – be conclusive. The CJEU has stated that, for the sake of effective prevention, weak or (statistically) insufficient evidence can suffice for a regulatory response to be justified.60x ECJ, 5 May 1998, National Farmers’ Union et al., case C-157/96, cit. supra, para. 63. In such cases, policy objectives may indeed primarily – but never exclusively – warrant the enactment of new risk regulation: gathering and assessing evidence – however probative – remains required.
In this context, the fact-to-policy leap in the AI Act seems peculiar. If policy is to be evidence-based, it must – to some degree at least – reflect the evidence gathered, its key role being to inform the policymakers of a specific understanding of the causal processes, stemming from statistical associations in the data, or the lack of data itself.61x M. Neil, N. Fenton, M. Osman, D. Lagnado, ‘Causality, the Critical but Often Ignored Component Guiding us Through a World of Uncertainties in Risk Assessment’, J. Risk Res., vol. 24, no 5 (2021), 617-621. The EC’s shift from the results of the public consultations to the AI Act remains opaque. A comparison between, on the one hand, the content of the views assembled from said consultations62x Cf. supra, Sub-Section B.I. and, on the other hand, the provisions of the AI Act shows that the latter does not seem evidence-based on three points: 1. that horizontal regulation was the way to go; 2. that the definition of ‘high-risk AI’ in the White Paper on AI was to be the cornerstone for the four-level taxonomy of risks contained in the AI Act;63x Ibid. 3. that in practice, there is indeed a four-stage crescendo in the threats of harm AI systems do, or are likely to, raise. This gives the impression that the AI Act is minimally evidence-based, which is at odds with the ambition for evidence-based regulation as a means for ‘better regulation’ and, more generally, with the EC’s duty to state reasons.64x Art. 296(2) TFEU. It remains to be seen if, during the ordinary legislative procedure, the AI Act will, perhaps, be amended in the sense of reinforcing the EC’s compliance with this duty.65x Ibid. Considering that the regulatory framework in the AI Act seems to have been created in spite of – rather than based on – the evidence collected by the EC, it is, at this stage, possible to argue that the requirement of evidence/policy correspondence (expressed inter alia in the Monsanto case66x CJEU, 8 September 2011, Monsanto SAS et al., joined cases C-58/10 to C-68/10, cit. supra.) is not fully met.
D Concluding Remarks
Does the policy cart lead the scientific horse in the context of the AI Act? It certainly seems so. Due to the ever-evolving nature of AI as a class of new technologies, the EU legislature – no doubt, rightfully – chose to focus on the gravity of the risk and create a fourfold categorization of AI systems, i.e. unacceptable, high-risk, limited-risk and non-high-risk. However, from the perspective of the evidence/policy correspondence, the key issue is whether the evidence gathered by the EC through the two public consultations cited in this study had any impact at all in designing the regulatory framework that the AI Act creates. The analysis of the results of said consultations, and a comparison between those results and the provisions of the AI Act, suggests that the evidence-gathering undertaken by the EC was, in the end, not crucially informative on the appropriate or desirable regulatory approach, but was merely ancillary to an approach the EC already seemed to have chosen. It follows that policy concerns (namely ‘updating’ the EU’s legislation relative to the Digital Internal Market) almost exclusively make up the rationale underlying the AI Act. Taking such concerns into account in the EU’s risk regulation is – as argued in this article – not unprecedented. However, it follows from the Better Regulation Agenda, as well as from the CJEU’s case law, that the attainment of policy objectives should not be interpreted as a discharge from engaging in evidence-gathering and assessment, in view of tailoring a properly evidence-based regulatory approach. At the time of the drafting of this article, the AI Act is still undergoing the ordinary legislative procedure. It remains to be seen whether any amendments will be suggested in view of making the evidence gathered by the EC more prominent in the reasons stated for the drafting and submission of the AI Act proposal.
Article | Of Hypothesis and Facts: The Curious Origins of the EU’s Regulation of High-Risk AI
Keywords | AI Act, artificial intelligence, evidence, regulation, risk, risk-based approach |
Authors | Ljupcho Grozdanovski and Jérôme De Cooman
DOI | 10.5553/EJLR/138723702022024001008 |
Ljupcho Grozdanovski and Jérôme De Cooman, 'Of Hypothesis and Facts', (2022) European Journal of Law Reform 123-134
In the spirit of the European Commission’s (EC) risk-based approach to artificial intelligence (AI), the AI Act (COM(2021) 206 final) contains a four-level taxonomy of AI-related risks, ranging from non-high to unacceptable. For so-called high-risk AI, it sets out a priori technical standards, the observance of which is meant to prevent the occurrence of various types of harm. However, based on a quantitative/qualitative analysis of the results from two public consultations conducted by the EC, this study shows that the views gathered by the EC are not reflected in the AI Act’s provisions. Although in ‘standard’ EU risk regulation, the objective of attaining a desired level of protection can justify a regulatory address, evidence remains required, for the purpose of avoiding risk misrepresentations. Bearing in mind the requirement for evidence-based policy, expressed in the 2015 Better Regulation Agenda, this study argues that the AI Act, as it currently stands, is not based on the evidence gathered and analysed by the EC, but that a pre-existing policy strategy on AI seems to primarily – if not exclusively – constitute the grounds on which the EC based the regulatory framework which took shape in the AI Act.