Right to Access Information as a Collective-Based Approach to the GDPR’s Right to Explanation in European Law
1 Introduction
The discriminatory potential of automated decision-making solutions has been debated for some time now. Yet, it has only recently received more attention because of the growing, and sometimes contentious, capacities of algorithmic solutions. Publications such as Weapons of Math Destruction 1xC. O’Neil, Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy (2016). or Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor 2xV. Eubanks, Automating Inequality. How High-Tech Tools Profile, Police, and Punish the Poor (2017). inform the broader audience about the threats that profiling and algorithms pose to the most vulnerable groups in society. The fact that algorithms often tend to reproduce human biases and, therefore, to repeat existing discriminatory mechanisms inspires the search for solutions that could guarantee the transparency of automated decision-making processes.
One of these solutions is the right to explanation. The controversy concerning the right to explanation was sparked by colliding opinions on the existence of this right in the General Data Protection Regulation3xRegulation 2016/679, OJ 2016 L 119/1. (hereinafter GDPR) and on the scope of the GDPR’s provisions. The right to explanation can be briefly described as a set of tools that allow the person who is subjected to automated decision-making to be informed about this fact and about the reasoning behind the decision. Its function is to provide an individual with instruments that would allow him or her to avoid the discriminatory potential of automated decision-making solutions. The boundaries of this concept’s embodiment in the GDPR provoke discussion among scholars, triggering the need to search for other solutions that may address the threats and challenges posed by the discriminatory potential of automated decision-making solutions.4xIn favour of a presence of the right to explanation in the GDPR: B. Goodman and S. Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”’, 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016) https://bit.ly/2wchh2x (last visited 4 May 2018); against such a possibility: S. Wachter, B. Mittelstadt & L. Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’, 7 International Data Privacy Law 76 (2017). In this article, I present an alternative approach based on perceiving algorithms as information.
I argue that the right to access information could be considered a more collective-based5xUnder the terms ‘collective-based’ and ‘collective’, I understand (1) the special role of media and NGOs, which has been recognised especially by the European Court of Human Rights when realising the right to access information; (2) the character of explanation, which does not just refer to a particular individual, but rather offers a model-centric explanation, thus referring to the system, not to the particular decision. alternative to the right to explanation. The motivation for seeking such an alternative results from the limited scope of the right to explanation implemented in the GDPR. I examine the legal possibilities of achieving a model-centric explanation.6xFor the explanation of the model-centric approach: L. Edwards and M. Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions?”’, 16 IEEE Security & Privacy 46 (2018). Under this term, I understand solutions that would allow one to infer how a system of automated decision-making is structured, for example, to learn all the factors that are taken into consideration in a certain automated decision-making system, their weights, the method of assessing the results and so forth. The article is an attempt to examine the possibilities and the limits of applying the right to access information as a way to realise the right to explanation. This would allow us to avoid, to a certain extent, the discriminatory treatment that could result from automated decision-making implemented by the state. The current analysis is strictly focused on automated decision-making solutions that are linked to the state’s operations and constitute examples of the state’s ‘monopoly of information’. Such a limitation is warranted by the case law of the European Court of Human Rights (hereinafter ECHR) on which I base my arguments. Even though the ECHR broadened the interpretation of Article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms (hereinafter European Convention),7xConvention for the Protection of Human Rights and Fundamental Freedoms, 4 November 1950, ETS No. 005. it is debatable if and to what extent the said article is applicable to private entities. Although I do not intend to exclude the possibility of using the approach based on the right to access information in a broader range of situations (e.g. concerning horizontal relations), this article focuses specifically and solely on automated decision-making that may occur in the state’s operations. In this vein, the article aims primarily to present the reasoning justifying the usage of the right to access information so that a model-centric explanation of automated decision-making solutions used by states is made available for scrutiny.
In order to achieve this goal, the article is structured as follows. The second section starts with some initial remarks on the potential sources of discriminatory treatment in the case of automated decision-making. It presents the characteristics of the prohibition on discrimination in EU law and, by doing so, the scope of application of the reasoning developed in the article: this includes exploring the relation between, on the one side, the European Convention and its interpretation and, on the other side, the Charter of Fundamental Rights of the European Union (hereinafter Charter)8xCharter of Fundamental Rights of the European Union, OJ 2012 C 326. and its impact on European law. The third section provides an overview of the possible limitations arising from the approach based on the right to explanation as set out in the GDPR. This stresses the need for further legal means in order to achieve a higher level of transparency in automated decision-making. The section ends with the reasons why there is a need to approach the discriminatory potential of automated decision-making from a more collective perspective. The fourth section discusses the right to access information in European law. In this section, the evolution of the interpretation of Article 10 of the European Convention is presented. Its aim is to assess the possibility of using the right to access information where state institutions employ automated decision-making, for example, when providing health services, benefits for the unemployed or recruitment processes in public education. The goal of this section is to present the reasoning behind the argument that the right to access information can, to a certain extent, constitute an alternative to the right to explanation. The fifth section concludes.
2 Discriminatory Potential of Solutions Using Automated Decision-Making
2.1 Technological Perspective on Discriminatory Potential of Automated Decision-Making Solutions
It is important to note that the discriminatory potential of automated decision-making has several sources. There are two main sources of concern, both resulting from the methods used while preparing solutions that allow automated decision-making. The first one is the character of the data used to develop the algorithms. The second one is the choices that are made when deciding which of the collected data should be perceived as important factors influencing the final result of processing.9xThough these two reasons differ, when analysing certain cases, they usually appear to be linked to each other. Automated decision-making is – paradoxically – resistant to social changes. Firstly, the input is historically biased: as the data on which decisions are based are historical, they can be inherently burdened with prejudice against minorities.10x‘However, when the input data used by the algorithms are generated by human beings, even algorithms become susceptible to human biases.’ – M. Ahsen, M. Ayvaci & S. Raghunathan, ‘When Algorithmic Predictions Use Human-Generated Data: A Bias-Aware Classification Algorithm for Breast Cancer Diagnosis’, forthcoming in Information Systems Research, at 2 (2017) https://bit.ly/2LQXzj6 (last visited 30 July 2018). Secondly, the decision as to which of the analysed data should be considered important is a matter of choice. Machine bias,11xThis has been subjected to research as early as 1980. The conclusion of T. Mitchell’s study was, ‘If biases and initial knowledge are at the heart of the ability to generalize beyond observed data, then efforts to study machine learning must focus on the combined use of prior knowledge, biases, and observation in guiding the learning process. It would be wise to make the biases and their use in controlling learning just as explicit as past research has made the observations and their use.’ T. Mitchell, ‘The Need For Biases in Learning Generalizations’, Techreport, at 3 (1980) https://bit.ly/2IkB6t0 (last visited 4 May 2018). which results from the choices necessarily made when developing the program, stems from the necessity of subjecting data to generalisation in order to achieve any meaningful results. Categorisation and segmentation are therefore necessary when trying to create automated decision-making solutions. However, it must not be forgotten that the choice of criteria used for the categorisation is not neutral. Allowing artificial intelligence to analyse the discriminatory present, in order to make automated decisions that determine the future, creates an impression of objectivity. The lack of human input into this process could be perceived as a tool for making it fairer. However, one should not forget who provides the data and tools for analysis.12xAs V. Eubanks puts it, ‘Once the big blue button is clicked and the AFST [Allegheny Family Screening Tool] runs, it manifests a thousand invisible human choices. But it does so under a cloak of evidence-based objectivity and infallibility’, above n. 2, at 316 [epub edition].
Referring to the example of algorithms intended to support crime prevention, one can say that the selection of a post code as a meaningful variable illustrates the problem of machine bias.13xMore on the discriminatory character of automated decision-making solutions in the context of crime prevention: ‘Profiling and data mining may seem to work up to a point, but inevitably lead to actions against very large numbers of innocent people, on a scale that is both unacceptable in a democratic society…’ – D. Korff, ‘New Challenges to Data Protection Study’, Working Paper No. 2: Data Protection Laws in the EU: The Difficulties in Meeting the Challenges Posed by Global Social and Technical Developments 2010: 52; study conducted by ProPublica: J. Angwin, J. Larson, S. Mattu & L. Kirchner, ‘Machine Bias. There’s Software used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks’, ProPublica (2016) https://bit.ly/1XMKh5R (last visited 4 May 2018). As it is known that certain districts are inhabited mostly by people of colour, using this variable to assess the risk that an individual may pose in the future has highly discriminatory potential.14xAbovementioned mechanisms allow scholars to claim, ‘The use of algorithmic profiling for the allocation of resources is, in a certain sense, inherently discriminatory: profiling takes place when data subjects are grouped in categories according to various variables, and decisions are made on the basis of subjects falling within so-defined groups. It is thus not surprising that concerns over discrimination have begun to take root in discussions over the ethics of big data’ – B. Goodman and S. Flaxman, above n. 4, at 3. Another example is the usage of automated decision-making technology to determine what kind of support an unemployed person should receive: the variables that are taken into account might affect the kind of help that one gets.15xFor more information on this topic: J. Niklas, K. Sztandar-Sztanderska & K. Szymielewicz, Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making (Warsaw 2015) https://bit.ly/1PrMorh (last visited 7 May 2018). Arbitrary selection of the meaningful variables may lead to the discrimination of certain groups in society based on their ethnicity, gender and so forth, thus repeating discriminatory mechanisms that already exist.
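To make the proxy mechanism concrete, consider a minimal sketch in Python, with entirely hypothetical variables, weights and post codes: a scoring rule that never uses ethnicity as an input may still produce systematically different outcomes for otherwise identical individuals, because the post code correlates with ethnicity in the underlying population.

```python
# Hypothetical illustration of machine bias through a proxy variable.
# Ethnicity is never an input, yet the post-code weight - an arbitrary
# design choice informed by historically biased data - reproduces the bias.

def risk_score(prior_offences: int, post_code: str) -> float:
    # The designer decided that residents of district "00-950" (in this
    # hypothetical, a district inhabited mostly by a minority) carry
    # additional risk. This choice is not neutral.
    district_penalty = 2.0 if post_code == "00-950" else 0.0
    return 1.0 * prior_offences + district_penalty

# Two individuals with identical records receive different scores
# solely because of where they live:
print(risk_score(prior_offences=1, post_code="00-001"))  # 1.0
print(risk_score(prior_offences=1, post_code="00-950"))  # 3.0
```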
The described mechanisms refer to groups of individuals who share a common characteristic. The discriminatory potential of automated decision-making solutions may therefore have an impact on whole groups, posing a potential threat of collective discrimination. However, it can be questioned whether the concept of dividing individuals into groups must necessarily involve discrimination. One could argue that the mechanisms that caused the segmentation of individuals and led to differentiated treatment have always been present in some form. Therefore, collective discrimination – which can be the result of the above-mentioned mechanisms – is not a unique phenomenon that appears only when applying automated decision-making solutions. Moreover, one could argue that it is too early to accuse the technologies that are being developed of discriminatory potential. However, what makes segmentation in the digital space different from that in the traditional services sector are the numerous obstacles to the transparency of the divisions that are implemented, for example, intentional concealment by states and corporations or the lack of adequate technical and digital literacy among individuals. From the legal perspective, the obstacles to reaching transparency include, for example, regulations that ensure the protection of intellectual property and trade secrets, which are necessary to protect the profits of companies developing such solutions.16xFor elaboration on some of the obstacles regarding the transparency of automated decision-making: J. Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’, 3 Big Data & Society 1 (2016). The conflict of interest between those who profit – both in monetary terms and in terms of the efficiency of the processes – from the use of databases and algorithms and those who are subjected to decisions based on big data analysis will have an impact on the process of spreading automated solutions. As the number of areas in which algorithms are used grows,17xFor a complex enumeration of such branches and an analysis of the algorithms’ impact on society in popular science: O’Neil, above n. 1. so grows the disproportion in knowledge about automated decision-making between the broader public and narrow groups of specialists. As a result, these processes produce the need for a regulatory framework that would ensure compliance of automated decision-making solutions with the general prohibition on discrimination.
2.2 Discriminatory Potential of Automated Decision-Making Solutions and the Prohibition on Discrimination in European Law
The above-described discriminatory potential of automated decision-making solutions may be perceived as – to a certain extent – a threat to the prohibition on discrimination in European law. This section presents the character of the prohibition on discrimination in European law. In doing so, it also delineates the scope of application of the reasoning that I present in the article.
On the basis created by the European Convention, the prohibition on discrimination on the grounds indicated in Article 14 refers to the enjoyment of the substantive rights that are guaranteed by the European Convention itself. To a certain extent, the scope of the prohibition was expanded by Protocol 12 to the European Convention.18xProtocol No. 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms, 4 November 2000, ETS No. 177. According to Protocol 12, the ban on discrimination covers any right that is guaranteed at the national level, even where it does not fall within the scope of the European Convention.19xEuropean Union Agency for Fundamental Rights/Council of Europe, Handbook on European Non-Discrimination Law. 2018 edition at 18 (2018). As only a few countries have ratified Protocol 12, the level of protection against discrimination differs across Europe. The consequences for the possible usage of the right to access information in cases referring to automated decision-making are as follows. In countries that are parties to the European Convention, the case would have to refer to the right to access information on the functioning of a discriminatory automated decision-making system in an area covered by the substantive rights guaranteed by the European Convention. A hypothetical example could be the usage of the right to access information on the functioning of the automated distribution of cases between judges in relation to a possible threat to the realisation of the right to a fair trial.20xFor a similar argument see M. Matczak, ‘List do Trybunału Sprawiedliwości Unii Europejskiej ws. praworządności w Polsce’ (2018) https://bit.ly/2Fw6pRz (last visited 4 November 2018). In countries that have ratified Protocol 12, the case could additionally refer to rights guaranteed at the national level. In both scenarios, the right to access information would serve as a tool to effectively realise another right endangered by the possible discriminatory treatment.
In terms of the prohibition on discrimination in the EU, the relevant provision is set out in Article 21 of the Charter. The scope of the prohibition on discrimination covers the actions of the EU’s institutions and bodies and the actions of the Member States when implementing EU law.21xArt. 51, Charter of Fundamental Rights of the European Union, above n. 8. It is necessary to note that, according to the Charter, the content of rights should be understood in accordance with those guaranteed by the European Convention.22xArt. 52, ibid. This is also the reason why in this article I focus on the analysis of the ECHR’s case law referring to the relevant article. Additionally, selected areas and grounds of potential discrimination are covered by the equality directives: the Employment Equality Directive,23xThe Directive prohibits discrimination on the basis of sexual orientation, religion or belief, age and disability, in the area of employment: Council Directive 2000/78/EC, OJ 2000 L 303. the Racial Equality Directive,24xThe Directive prohibits discrimination on the basis of race or ethnicity in the context of employment. Moreover, it refers also to access to the welfare system, social security, and goods and services: Council Directive 2000/43/EC, OJ 2000 L 180. the Gender Goods and Services Directive25xCouncil Directive 2004/113/EC, OJ 2004 L 373. and the Gender Equality Directive.26xThe Directive refers to equal treatment in relation to social security: Council Directive 2006/54/EC, OJ 2006 L 204. The importance of the prohibition of discrimination for EU law is also underlined by its recognition as a general principle of EU law: ‘The principle of equal treatment is a general principle of EU law, enshrined in Article 20 of the Charter, of which the principle of non-discrimination laid down in Article 21(1) of the Charter is a particular expression.’27xPara. 43, Case C-356/12, Wolfgang Glatzel v. Freistaat Bayern, [2014], ECLI:EU:C:2014:350. However, it must be noted that the overall material scope of the prohibition on discrimination in EU law remains limited:
the material scope of specific non-discrimination provisions in EU law is often quite limited and uneven. For example, whilst Directive 2000/78/EC only applies in the field of employment and occupation, the material scope of Directive 2000/43/EC is considerably broader, also including e.g. employment-related social security, further access and supply of goods and services, and other matters such as education and social advantages. The only exception to this is the prohibition of discrimination on grounds of nationality, which applies in the full scope of EU law.28xCh. Tobler, ‘Equality and Non-Discrimination under the ECHR and EU Law: A Comparison Focusing on Discrimination against LGBTI Persons’, 74 Zeitschrift für ausländisches öffentliches Recht und Völkerrecht at 532 (2014).
As a result of this character of the prohibition on discrimination in EU law, the reasoning presented in the article might be used in the case of automated decision-making implemented by the EU’s institutions and bodies. Moreover, it could be used in the case of the EU’s Member States in the areas covered by EU law. The scope of the possible discriminatory treatment resulting from the usage of automated decision-making solutions would have to refer to the grounds on which discrimination is prohibited in the given area. The exception would be, as indicated in the quote above, discrimination on grounds of nationality, the prohibition of which is more general in character. If interpreted in accordance with the case law analysed in this article, the right to access information might provide a tool to check whether an automated decision-making solution implemented by the state falls within the area of EU law. The right to access information could provide an insight into the question whether automated decision-making solutions implemented by the state and concerning, for example, employment or access to vocational training, as covered by Directive 2000/78/EC, are a source of discriminatory treatment on the basis of sexual orientation, religion or belief, age or disability.
Before presenting the arguments that support such a hypothesis, it is necessary to present the regulatory solutions proposed so far to deal with the issue of potential discrimination resulting from automated decision-making. Such a solution is the right to explanation as implemented in the GDPR. The analysis of the said right is at the heart of the next section.
3 Right to Explanation: An Approach Based on the Data Protection Framework
3.1 Right to Explanation in the GDPR
The term and concept of the right to explanation have been developed as a tool to ensure privacy protection and should – for now – be understood mainly as an element of data protection law. The discriminatory character of automated decision-making procedures is to a certain extent addressed at the EU level by the GDPR. The data subject, according to Articles 13-15 of the GDPR, should be informed about:
the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.29xArts. 13-15, above n. 3.
Articles 13-15 of the GDPR refer to, respectively, the information that is to be provided where personal data are collected from the data subject, the information that is to be provided where personal data have not been obtained from the data subject, and the data subject’s right of access. The common provision regarding ‘meaningful information’, which should be delivered to the data subject, can be perceived as a step towards increasing the level of awareness of individuals in the area of automated decision-making. To a certain extent, these obligations may address the above-mentioned issue of insufficient digital literacy. However, the lack of precision regarding the scope of ‘meaningful information about the logic involved’ leads to a broad informational obligation that seems difficult to pin down.
Moreover, the possibility of combating online discrimination on the basis created by the GDPR is weakened by the fact that – as a general rule – the GDPR allows both automated individual decision-making and profiling.30xProfiling in the GDPR is presented as a special category of individual decision-making: Art. 22, ibid. According to the GDPR, the data subject is granted the right ‘not to be subjected to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’.31x Ibid. The threshold set for the possibility of opposing automated decision-making is relatively high. Firstly, this right refers to a decision, not to the processing itself. Therefore, it allows the development of technologies that may be discriminatory and introduces control only at the last stage of the process, when the decision in question has already been made. From the perspective of the individual, the adopted form of the GDPR does not address the problems that result from the lack of transparency of automated decision-making technologies. Secondly, Article 22 of the GDPR refers to a decision based solely on automated processing. As a result of such phrasing, decisions predominantly based on automated processing would be excluded from its scope.32xThe authors of ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ point out the evolution of the proposed scope of Art. 22. The legislative process led to the exclusion of the word ‘predominantly’ from the final version of this legal act: S. Wachter, B. Mittelstadt & L. Floridi, above n. 4, at 92. This may significantly limit the number of decisions that may be questioned on the basis guaranteed by the GDPR. Thirdly, doubts should be raised with regard to the understanding of the phrase ‘similarly significantly affects’. The impact of a decision may differ depending on the individual’s conditions of, for example, an economic or social character. The phrasing implemented in the GDPR can strengthen the role of discretion in the process of assessing the decision’s character. Moreover, there are three grounds on which automated individual decision-making can be justified33xArt. 22(2), above n. 3. – including the user’s consent – which make it even more difficult to envisage Article 22 as a threat to the practices of automated decision-making and profiling on the web. Even though the GDPR provides grounds to debate the right to explanation and its character, it seems to offer limited possibilities to effectively address the challenges linked to the discriminatory potential of automated decision-making technologies.
Having said that, it is necessary to note two additional factors that provide motivation for seeking alternative legal means to ensure a non-discriminatory character of the digital space. The first is the extent to which the logic involved in automated processing should be revealed to the data subject. As is stated in recital 63 of the GDPR, ‘that right should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software’.34x Ibid., Rec. 63. The unrestrained development of the data-driven35xM. Mandel, ‘Beyond Goods and Services: The (Unmeasured) Rise of the Data-Driven Economy’, Progressive Policy Institute: Policy Memo (2012) https://bit.ly/2FLBcVk (last visited 4 May 2018). economy and a high level of personal data protection are hardly achievable simultaneously, which can be illustrated by the above-mentioned example of limiting the initial scope of the GDPR’s Article 22: the protection against automated decision-making refers to decisions based solely on automated processing, which leaves aside decisions based predominantly on automated processing. On the one hand, this does not impede the possibility of developing solutions that use automated decision-making as a vital factor influencing a certain decision. On the other hand, due to such phrasing, the individual’s right to explanation may cease to have any real effect.
The second problem is the predominantly individual character of the right to explanation included in the GDPR. Even the phrasing, namely the term ‘automated individual decision-making’, shows its focus on an individual perspective: it is the individual who is subjected to the decision in question, and it is the individual who can object to the decision based on automated decision-making. Such an approach leaves aside the question of the possible collective character of discriminatory practices based on big data analysis. Simultaneously, the so-far-identified and described impact of machine bias in implementing automated decision-making solutions shows that it affects minorities and the most vulnerable groups in society.36xFor a detailed case study, see: Eubanks, above n. 2. The possibility of collective discrimination resulting from automated decision-making should provoke questions about the legal means in the GDPR that can allow such threats to be combated.
3.2 Doubts Concerning the Collective Dimension of the Right to Explanation in the GDPR
In the case of automated decision-making one should ask: what if ‘I’ is also a ‘we’? What if the particular decision taken in one case is in fact representative of a whole group in society, defined on the basis of big data analysis? The tension between personalisation and big data–based technologies becomes more evident nowadays: the individualisation of content presented to individuals is only possible due to the analysis of the data of millions. Defining common characteristics allows actions to be undertaken on a scale of millions of individuals. The effectiveness of profiling is the result of algorithms being fed enormous data collections. Therefore, one could wonder what law can offer in terms of applying the right to explanation to address the collective dimension of the discriminatory potential and risks posed by automated decision-making technologies. In terms of the GDPR’s provisions, one could invoke Article 35. It refers to carrying out a data protection impact assessment if the processing is likely to result in a high risk to the rights and freedoms of natural persons.37xArt. 35, above n. 3. Art. 35(3) includes a list of three cases in which an impact assessment shall be required. However, it must be noted that the impact assessment is not addressed to the broader public. It does not empower users or groups of users, nor does it allow them to take any control over the process of assessing the potential impact of data processing. As such, it does not constitute an element of the right to explanation.
Considering the GDPR’s collective dimension, it is necessary to examine Article 80.38xArt. 80, ibid. It allows the data subject to mandate a not-for-profit body, organisation or association to lodge a complaint on his or her behalf. Moreover, Article 80(2) provides the Member States of the EU with the opportunity to grant any body, organisation or association referred to in Article 80(1), independently of the data subject’s mandate, the right to lodge a complaint and to exercise certain rights included in the GDPR.39xFor a detailed analysis of this issue: L. Edwards and M. Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’, 16 IEEE Security & Privacy 46 (2018) https://bit.ly/2IDsBcO (last visited 12 February 2019). However, this representation refers to the rights granted in the GDPR, and therefore the limits to the right to explanation apply to proceedings initiated on the basis of Article 80: they focus on the particular decision referring to the individual. Abstract control, understood as a legal equivalent of the above-described model-centric explanation and potentially performed by an NGO, may, but does not have to, be allowed by the Member States. This leads to the conclusion that the GDPR contains no obligatory legal means that ensure transparency of the overall mechanisms behind automated decision-making solutions. There is only a slight possibility for single individuals to receive information on the grounds of a decision about their own individual case. However, it is not possible for a potentially discriminated group to examine in abstracto the systemic dimension of automated decision-making solutions and their discriminatory potential. The discretionary power of the Member States on this matter could prevent the potential development of tools that would allow wide engagement of civil society organisations in issues related to the right to explanation. Therefore, I propose to analyse to what extent the right to access information may fill the GDPR’s shortcomings. Could focusing not on ‘data’ itself but on ‘information’ strengthen the users’ position? Could it provide individuals with an insight into the logic behind an automated decision-making solution? Could it be a tool for receiving a model-centric explanation instead of one focused on a particular decision?
3.3 Right to Explanation in the GDPR and Right to Access Information: The Necessity of Shifting from Individual- to Collective-Based Approach
It is necessary to note that the above-mentioned right to explanation in the GDPR could technically refer both to the overall system functionality focused on a certain group (model-centric explanation)40xL. Edwards and M. Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’, 16 Duke Law & Technology Review 18 (2017). and to the specific decisions concerning an individual.41xWachter, Mittelstadt & Floridi, above n. 4, at 78. The term used in Articles 13-15 of the GDPR, namely ‘logic involved’, could – if interpreted broadly – provide the user with more general information on the system that allows automated decision-making. However, it might as well refer solely to the elements of the system that had an impact on the decision concerning the individual in the particular case. As the approach presented in the GDPR seems to suggest, the information on the logic involved in the processing should predominantly help to understand why this particular ‘one’ was subjected to a certain decision. This approach – more probable when one takes into account the valuable character of the programmes used to perform activities leading to automated decision-making – contradicts the attitude presented by some scholars regarding the specific character of big data analysis: to a certain extent, collecting and processing data may lead to ‘learning nothing about an individual while learning useful information about a population’.42xC. Dwork and A. Roth, ‘The Algorithmic Foundations of Differential Privacy’, 9 Theoretical Computer Science 211, at 215 (2013). Similarly: ‘We should acknowledge the change, and accept that privacy is a public and collective issue’ – P. Casanovas, L. De Koker, D. Mendelson & D. Watts, ‘Regulation of Big Data: Perspectives on Strategy, Policy, Law and Privacy’, 7 Health and Technology 1, at 13 (2017); and ‘predictions based on correlations do not only affect individuals, which may act differently from the rest of the group to which have been assigned, but also affect the whole group and set it apart from the rest of society’ – A. Mantelero, ‘Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection’, 32 Computer Law & Security Review 238, at 239 (2016). Far from espousing such a one-sided approach, I would argue that big data–based technologies cause a feedback loop effect: as the growing collection of data on individuals increases the possibilities of identifying group characteristics, the detailed characteristics of a group allow an individual profile to be completed on the basis of information about the group to which one seems to belong. Referring to the term used by M. Hildebrandt,43xM. Hildebrandt, ‘Profiling: From Data to Knowledge. The Challenges of a Crucial Technology’, 30 Datenschutz and Datensicherheit at 548 (2006). this can lead to the creation of ‘non-distributive group profiles’: assigning one to a certain group on the basis of selected characteristics of an individual (selected personal data). Even though there may be significant determinants that are not taken into account, and which could change the way in which one is classified, they are not considered valid for such a classification.44xWhich is the effect of the above-mentioned source of potential discrimination, namely the choices made during the selection of meaningful variables.
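The mechanics of a non-distributive group profile can be illustrated with a minimal sketch in Python, using hypothetical attributes and thresholds: the classification rule consults only the characteristics its designers selected, so a determinant that could change the outcome is collected but never used.

```python
# Hypothetical sketch of a 'non-distributive group profile': the individual
# is classified on the basis of selected personal data, while other
# determinants - although collected - cannot influence the outcome.

from dataclasses import dataclass

@dataclass
class Person:
    age: int
    months_unemployed: int
    completed_retraining: bool  # collected, but not part of the profile

def assign_group(p: Person) -> str:
    # Only age and unemployment duration were chosen as 'valid' variables.
    if p.age > 50 and p.months_unemployed > 12:
        return "low-activation-potential"  # group receiving reduced support
    return "standard-support"

# A recent retraining certificate might be highly relevant, yet it cannot
# change the classification, because the designers excluded it:
print(assign_group(Person(age=54, months_unemployed=14,
                          completed_retraining=True)))
# -> "low-activation-potential"
```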
The limitations of the approach based on personal data protection can be stressed by invoking the case law of the Court of Justice of the European Union (hereinafter ECJ) concerning personal data. In the case YS v. Minister voor Immigratie, Integratie en Asiel the ECJ noted that ‘the data in the legal analysis contained in that document, are “personal data” within the meaning of that provision, whereas, by contrast, that analysis cannot in itself be so classified’.45xJoined Cases C-141/12 and C-372/12, YS v. Minister voor Immigratie, Integratie en Asiel and Minister voor Immigratie, Integratie en Asiel v. M and S., [2014] ECLI:EU:C:2014:2081. The analogy with an automated decision-making system shows that the individual may receive access to the personal data used to make a decision and to the decision itself; however, the analysis remains outside the scope of the term ‘personal data’ and therefore cannot be subject to such access. The concept of personal data involves the possibility of linking certain information with a particular individual, for example, one’s name and surname, an e-mail address containing one’s surname and place of work, or an IP address.46xCase C-582/14, Patrick Breyer v. Bundesrepublik Deutschland, [2016], ECLI:EU:C:2016:779. As explained earlier, the source of potential discrimination in the case of an automated decision-making solution may not be linked to the individual and his or her personal data: it may be the result of how the particular automated decision-making system was structured.
In order to achieve effective protection against possible discrimination, it is necessary to shift from the perspective focused on an individual and personal data to one that focuses on a group and on information about how the automated decision-making system works. The advantage of the solution based on the right to access information is its more systemic approach towards the prohibition on discrimination. Taking into consideration the material scope of the non-discrimination provisions in EU law explained earlier, its possible usage might be illustrated with the following example of potential discrimination on grounds of nationality. The approach based on the right to access information would allow one to check, for example, whether an automated decision-making solution implemented by the state is designed in a way that results in unequal treatment of the country’s own citizens and the nationals of other Member States, due to the factors that are taken into account when analysing data. It would make it possible to determine whether systemic solutions based on automated decision-making and implemented by a Member State are in accordance with the prohibition on discrimination.
The next section presents the reasoning behind the hypothesis that the right to access information might be a tool to achieve such a model-centric explanation, focused on exploring the discriminatory potential of an automated decision-making solution, instead of on the protection of the individual’s personal data, which in fact only fuel the automated decision-making solution.
4 Right to Explanation: An Approach Based on the Right to Access Information
4.1 Right to Access Information as a Human Right: Evolution of Interpretation of the European Convention’s Article 10
Recognising the right to access information as a human right is not obvious. Even though Article 10 of the European Convention and Article 11 of the Charter provide individuals with the ‘right … to receive and impart information and ideas without interference by public authority and regardless of frontiers’,47xThe phrasing of the European Convention and the Charter is in this regard the same. The content of the Articles is similar to Art. 19 of the Universal Declaration of Human Rights (‘to seek, receive and impart information and ideas through any media and regardless of frontiers’) – Universal Declaration of Human Rights, 10 December 1948, General Assembly resolution 217 A; and Art. 19(2) of the International Covenant on Civil and Political Rights (‘this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice’) – International Covenant on Civil and Political Rights, 16 December 1966, General Assembly resolution 2200A (XXI). The lack of the verb ‘seek’ in the European Convention results in doubts concerning the possibility of interpreting Art. 10 as containing the right to access information. These doubts are illustrated by the evolution of the case law presented in the article. it was only in 2006 that the ECHR began to interpret Article 10 of the European Convention broadly. The ECHR’s judgements stress the conditionality of the right to access information and therefore lag behind those of other human rights bodies, for example, the Inter-American Court of Human Rights, which have already recognised a self-standing right to access information.48x‘…the Court finds that, by expressly stipulating the right to “seek” and “receive” “information,” Article 13 of the Convention protects the right of all individuals to request access to State-held information, with the exceptions permitted by the restrictions established in the Convention’ – Inter-American Court of Human Rights, Claude Reyes et al. v. Chile, Judgment, 19 September 2006, para. 77. The reason for such temperance lies in the grounds on which the broad interpretation of Article 10 is based. The ECHR’s interpretation does not result from a literal reading of the European Convention; it is mostly the result of the broad consensus regarding the right to access information that can be observed both at the international level and in the domestic laws of the overwhelming majority of Council of Europe Member States.49x‘The Convention cannot be interpreted in a vacuum and must, […], be interpreted in harmony with other rules of international law, of which it forms part. Moreover, […] the Court may also have regard to developments in domestic legal systems indicating a uniform or common approach or a developing consensus between the Contracting States in a given area’ – Magyar Helsinki Bizottság v. Hungary (2016) No. 18030/11, para. 138. In this section, I present selected case law that illustrates the change in the ECHR’s approach towards the right to access information and the general tendencies in the ECHR’s interpretation of this right that can be drawn from the analysed cases.
The recognition of a right to access information in the ECHR’s case law dates back to 2006. The case Sdružení Jihočeské Matky v. Czech Republic 50x Sdružení Jihočeské Matky v. Czech Republic, ECHR (2006) No. 19101/03. concerned information about a nuclear power plant demanded by a non-governmental organisation. Even though the ECHR decided that the essentially technical information about the nuclear power station51xIt is worth noticing that the roots of the direct recognition of the right to access information can be linked to the protection of the environment. It has been implemented in Art. 4 of the Convention on Access to Information, Public Participation in Decision-Making and Access to Justice in Environmental Matters, 25 June 1998, UNTS 2161 at 447. did not reflect a matter of public interest, it opened the possibility of interpreting Article 10 of the European Convention as a basis for demanding access to administrative documents from public institutions. The shift that came with Sdružení Jihočeské Matky v. Czech Republic is unprecedented. Even though Article 10 offers several reasons for which the scope of information shared publicly may be limited,52xAnalysed in detail below. the overall attitude towards the right to access information has changed. The right to access information has been recognised as an element of Article 10: as a rule – under certain conditions – the public should be given access to the relevant information, and the limitations to this rule can be invoked only as an exception.
The confirmation of such a notion can be found in Társaság a Szabadságjogokért v. Hungary.53x Társaság a Szabadságjogokért v. Hungary, ECHR (2009) No. 37374/05. The Hungarian NGO requested that the Constitutional Court grant it access to a complaint pending before it. The Constitutional Court denied the request, explaining that a complaint could not be made available to outsiders without the approval of its author, on the basis of the protection of the Member of Parliament’s personal data. The ECHR explicitly stated, ‘The Court has recently advanced towards a broader interpretation of the notion of freedom to receive information and thereby towards the recognition of a right of access to information’.54x Ibid., para. 35. Due to the public character of the information requested by the NGO, the ECHR confirmed that denying access to the complaint was a violation of Article 10.
The occasion to strengthen the trend of broad interpretation of Article 10 resulted from proceedings initiated by an Austrian non-governmental organisation demanding access to decisions regarding transfers of ownership of agricultural and forest land in Tirol: Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria.55x Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, ECHR (2013) No. 39534/07. According to the judgement,
the applicant association was therefore involved in the legitimate gathering of information of public interest. Its aim was to carry out research and to submit comments on draft laws, thereby contributing to public debate.56x Ibid., para. 36.
The ECHR stated that the reasoning behind such an interpretation can be based on the fact that the state’s monopoly on information actually interferes with the activities performed by NGOs as social ‘watchdogs’.57x‘…stating that the most careful scrutiny was called for when authorities enjoying an information monopoly interfered with the exercise of the function of a social watchdog’ – ibid., para. 41.
When explaining the threshold criteria that need to be fulfilled in order to invoke the right to access information, in the case Magyar Helsinki Bizottság v. Hungary the ECHR enumerated four conditions. Firstly, ‘the purpose of the person in requesting access to the information held by a public authority is to enable his or her exercise of the freedom to “receive and impart information and ideas” to others’.58x Magyar Helsinki Bizottság v. Hungary, above n. 49, para. 158. This illustrates the subsidiary character of the right to access information as a provision included in the Article that reflects the freedom of expression. Therefore, as explained in Sub-Section 4.3, the special role of media and NGOs in exercising the right to access information must be stressed. Secondly, the information, data or documents to which access is sought must meet a public interest test.59x Ibid., para. 161. The ECHR does not elaborate on the conditions that must be fulfilled in order to comply with this test, stating that this definition ‘depend[s] on the circumstances of each case’.60x Ibid., para. 162. I hypothesise on the possible meaning of this test with regard to algorithms in Sub-Section 4.2. Thirdly, ‘an important consideration is whether the person seeking access to the information in question does so with a view of informing the public’. This functional approach towards the information requested was envisioned in the above-mentioned case law. It also strengthens the position of media and NGOs as natural candidates for seeking access to information for the purpose of informing the public (see Sub-Section 4.3). Fourthly, the ECHR notes that
the fact that the information requested is ready and available ought to constitute an important criterion in the overall assessment of whether a refusal to provide the information can be regarded as an ‘interference’ with the freedom to ‘receive and impart information’ as protected by that provision.61x Ibid., para. 170.
I refer to this condition in Sub-Section 4.2.
Such conditions provide an argument that is crucial when analysing the possibility of using the right to information as an alternative to the right to explanation. The role of the state as a guarantor of the right to information – seen from the perspective of the ECHR’s judgements – has evolved. From being viewed as a purely passive actor, whose function was not to disturb the flow of information,62xThe example of such an approach: ‘The Court observes that the right to freedom to receive information basically prohibits a Government from restricting a person from receiving information that others wish or may be willing to impart to him’ – Leander v. Sweden, ECHR (1987) No. 9248/81, para. 74; or: ‘That freedom cannot be construed as imposing on a State, in circumstances such as those of the present case, positive obligations to collect and disseminate information of its own motion’ – Guerra and Others v. Italy, ECHR (1998) No. 14967/89, para. 53. The fact that the state is under no circumstances obliged to disseminate information of its own motion has been confirmed in Magyar Helsinki Bizottság v. Hungary, above n. 49, para. 156. The tension between the lack of positive obligations on the state’s side and the more active role promoted by the above-mentioned judgements will probably result in a continuation of the case law explaining the conditions that should be met when using the right to access information, for example: what is information of public interest? How should the state’s monopoly of information be addressed? the state may be considered a more active player where a state monopoly of information is under consideration.63xSimultaneously not being obliged to perform information activities of its own motion, see above n. 62. The shift in the ECHR’s interpretation of Article 10 of the European Convention and of the relationship between the state and the guardians of democratic values embodied by the media and NGOs could have an impact on the right to access information with regard to the digital space. However, the possibilities and limits of such a concept with regard to algorithms need to be explored. In the next sub-section, I present the issues that should be considered in order to apply Article 10 to scrutinise or prevent discriminatory treatment when applying automated decision-making technologies.
4.2 Right to Access Information in Digital Space: Algorithms as Information of Public Interest
In order to examine the legal viability of applying the right to access information to issues resulting from the development of the digital economy, three issues must be considered. Firstly, I analyse whether the algorithms on which automated decision-making is based can be viewed as information. Secondly, I examine the condition of being information of public interest, as it may limit the extent to which Article 10 can apply with regard to automated decision-making. Thirdly, the character of the information that could potentially be received in the case of automated decision-making technologies should be identified.
The possibility of understanding an algorithm as information is based on the view that algorithms, in their broad – and original – meaning, are chains of commands or, as Robin K. Hill briefly puts it, a ‘finite, abstract, effective, compound control structure’.64xR.K. Hill, ‘What an Algorithm Is’, 29 Philosophy & Technology 35, at 44 (2016). Their characteristics include ‘accomplishing a given purpose under given provisions’.65x Ibid., at 47. This understanding of algorithms implies that they do not even have to be digitised: ‘Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations’ – T. Gillespie, ‘The Relevance of Algorithms’, in T. Gillespie, P. Boczkowski & K. Foot (eds.), Media Technologies, Essays on Communication, Materiality, and Society (2014) 167, at 167. However, nowadays a semantic shift from this purely theoretical sense towards a more pragmatic meaning is taking place. In public discourse, the term algorithm usually refers to ‘the implementation and interaction of one or more algorithms in a particular program, software or information system’.66xB.D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, & L. Floridi, ‘The Ethics of Algorithms: Mapping the Debate’, Big Data & Society at 2 (2016). In both cases – the mathematical approach and the one represented in public discourse – an algorithm can be presented as a nexus: it allows data to be analysed and meaningful results to be obtained. Therefore, it may be perceived as information on how the process is organised. The key element of applying Article 10 to automated decision-making technologies is to disenchant algorithms and view them simply as information on how the architecture of automated decision-making processes – irrespective of the level of their complexity – has been designed, that is, which variables are considered meaningful. This perspective on the algorithm complies with the above-described condition of the requested information being ‘ready and available’. On the basis of the relevant case law, it is impossible to argue that the state should provide an analysis of how an automated decision-making solution works. Nevertheless, it could be obliged to provide access to the raw algorithm itself. This might be perceived as a path to ensuring a model-centric explanation of automated decision-making solutions for the broader public.
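What disclosure of a ‘raw algorithm’ could amount to may be illustrated with a minimal sketch in Python, with entirely hypothetical variables, weights and threshold: even without any analysis of individual decisions, the disclosed structure itself reveals which variables are considered meaningful and how they are weighted, which is precisely what a model-centric review would scrutinise.

```python
# Hypothetical sketch of a disclosed 'raw algorithm' for a public service.
# The disclosure reveals the model's structure, not any personal data:
# the variables deemed meaningful, their weights and the decision threshold.

BENEFIT_MODEL = {
    "variables": {
        "household_income": -0.6,      # higher income lowers the score
        "dependent_children": 0.3,
        "months_unemployed": 0.4,
        "post_code_risk_index": -0.2,  # a watchdog could flag this as a proxy
    },
    "threshold": 0.5,                  # scores above this grant the benefit
}

def decide(applicant: dict) -> bool:
    """Apply the disclosed rule to an applicant's (hypothetical) attributes."""
    score = sum(
        weight * applicant[name]
        for name, weight in BENEFIT_MODEL["variables"].items()
    )
    return score > BENEFIT_MODEL["threshold"]

print(decide({"household_income": 0.2, "dependent_children": 2,
              "months_unemployed": 6, "post_code_risk_index": 1.0}))  # True
```

On the basis of such a disclosure, an NGO could examine in abstracto whether a variable like the hypothetical post_code_risk_index operates as a proxy for a protected characteristic, without ever accessing any individual’s personal data.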
Considering the second issue, this analysis is limited to information of public interest, even though automated decision-making processes can refer to an infinite number of issues. Due to, for example, the dominant character of the ECHR’s case law regarding the right to access information, as well as the above-mentioned conflict between intellectual property rights and the right to access information, my argument here is strictly limited to the automated decision-making technologies used by the state’s institutions (the algorithms that underpin the operations of the state).67xIt might be possible to broaden the scope of the right to access information: ‘The Court has further emphasised the importance of the right to receive information also from private individuals and legal entities. While political and social news might be the most important information protected by Article 10, freedom to receive information does not extend only to reports of events of public concern, but covers cultural expressions and entertainment as well….’: European Court of Human Rights, Internet: Case-Law of the European Court of Human Rights, 2011 (update: 2015), at 43 https://bit.ly/2HYhITm (last visited 7 May 2018). Following the case law of the ECHR, the condition that would have to be fulfilled when demanding access to the information in question is the existence of the state’s ‘monopoly of information’,68x‘The Constitutional Court’s monopoly of information thus amounted to a form of censorship.’ – Társaság a Szabadságjogokért v. Hungary, above n. 53, para. 28. which is described in the ECHR’s case law as a form of censorship. The logic presented in the ECHR’s case law runs as follows: by refusing access to the information on how the system works, the state that possesses the ‘monopoly of information’ would limit the ability of media and NGOs to exercise their function of conducting informed public debate. Therefore, the hypothesis of this article could be applied to algorithms that determine the knowledge about issues that constitute matters of public interest, as their importance for public debate may not be questioned.
I would suggest that the automated decision-making technologies used to determine access to social benefits or to automatically allocate court cases could serve as possible examples. In the case of automated decision-making solutions used to provide public services, such as public insurance, public education or public health services, the relevant algorithms could, I would argue, be subjected to the more proactive interpretation of the right to information that has been developed by the ECHR. Not only do states exercise an information monopoly in these areas, but the impact of these systems on public matters of special interest to society could also be considered a reason for ensuring the transparency of how the process is organised.
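To illustrate what a model-centric disclosure might look like for the public-service examples just mentioned, the sketch below loosely echoes the three-group profiling of the unemployed discussed later in this article (see n. 73); every variable name and cut-off is invented, and the sketch claims nothing about how any real system is built:

# Hypothetical sketch of a profiling procedure that assigns an
# unemployed person to one of three support groups. All scoring
# factors and cut-offs are invented; a disclosed model of this kind
# would let media and NGOs verify which factors drive the assignment.

def employability_score(person: dict) -> int:
    """Aggregate invented factors into a single score."""
    score = 0
    score += 10 * person.get("years_of_experience", 0)
    score -= 5 * person.get("months_unemployed", 0)
    score += 15 if person.get("completed_training") else 0
    return score


def support_group(person: dict) -> str:
    """Map the score onto three support categories (invented cut-offs)."""
    score = employability_score(person)
    if score >= 50:
        return "group I: job placement assistance only"
    if score >= 0:
        return "group II: training and subsidised employment"
    return "group III: intensive social support"


print(support_group({"years_of_experience": 4, "months_unemployed": 2}))

Disclosure of such a procedure would reveal, for example, whether a factor serving as a proxy for a protected characteristic pushes individuals into the least-supported group – precisely the kind of collective scrutiny that the right to access information enables.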
This characteristic of the right to access information shows the differences between approaching the right to explanation from the perspective of data protection and from the perspective of the right to access information. Contrary to the GDPR-based approach, which is ultimately focused on the effects of automated decision-making for a particular individual, the approach based on the right to information would allow for a more abstract and general control of the mechanisms determining automated decision-making. Firstly, it could justify access to the documents that regulate decision-making procedures concerning groups of people, allowing a more collective perspective to be applied than the one focused solely on the individual, as is the case with the GDPR.69xEven though data protection may provide tools that to a certain extent allow auditing the processes standing behind automated decision-making, they are mostly of a voluntary or self-regulatory character: the above-mentioned data protection impact assessments and codes of conduct, or the possibility of establishing certification mechanisms, the latter two not having an obligatory character. For a presentation of these possibilities see: B.W. Goodman, ‘A Step Towards Accountable Algorithms? Algorithmic Discrimination and the European Union General Data Protection’, 29th Conference on Neural Information Processing Systems (NIPS 2016), at 4-5 https://bit.ly/2rlBzSf (last visited 7 May 2018). Secondly, the collective dimension of the right to access information is strictly linked to the special position of the media and NGOs in exercising the freedoms and rights guaranteed in Article 10 of the European Convention, to which the next sub-section is dedicated.
-
4.3 Who Is a ‘We’? Media and Non-Governmental Organisations as Citizens’ Representatives
It should not be overlooked that the processing of big data is based on mechanisms that allow individuals to be divided into groups sharing certain common characteristics. The collective character of the potential discrimination seems to be an inescapable argument, tilting the scales in favour of recognising the right to information as an alternative to the tightly restricted right to explanation implemented in the GDPR. The special position of the media and NGOs has been stressed by the ECHR in numerous judgements and has been approached from a functional perspective:
However, the function of creating forums for public debate is not limited to the press. That function may also be exercised by non-governmental organisations, the activities of which are an essential element of informed public debate.70x Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55, para. 34. See also: ‘However, the realisation of this function is not limited to the media or professional journalists. In the present case, the preparation of the forum of public debate was conducted by a non-governmental organisation. The purpose of the applicant’s activities can therefore be said to have been an essential element of informed public debate’ – Társaság a Szabadságjogokért v. Hungary, above n. 53, para. 27.
The unique position of the media and NGOs with regard to the right to access information is firmly embedded in the relevant case law. As actors whose function is enabling and participating in informed debate, their primary task is to provide information to the broader public. Therefore, they fulfil the conditions set out by the ECHR with regard to recognition of the right to access information. The ECHR’s case law consistently recognises the special role of the media and non-governmental organisations as guardians of democracy and as somewhat privileged actors in exercising the rights included in Article 10 of the European Convention.71xFor an analysis of the role of the media in the ECHR’s case law concerning Art. 10 see: T. Mendel, A Guide to the Interpretation and Meaning of Article 10 of the European Convention on Human Rights (2017), at 14-17 https://bit.ly/2OwOACd (last visited 30 July 2018). Not only are they perceived by the ECHR as actors whose mission is to inform the public on the most important issues, but they are also legitimised to demand access to public information from governmental institutions in order to inform the broader public. They seem to be the actors best suited to this function: as representatives of civil society, their actions should prove more fruitful than legal actions undertaken solely by individuals. Moreover, one of the obstacles mentioned in the introduction to this article that limits the transparency of the implemented solutions is the lack of adequate digital literacy among individuals. Specialised NGOs72xIt is worth noting that in Art. 80 of the GDPR the conditions that an organisation representing an individual has to meet include: ‘…and is active in the field of the protection of data subjects’ rights and freedoms with regard to the protection of their personal data’. This element may limit the number of organisations that would be allowed to represent individuals in cases initiated in order to ensure the execution of the right to explanation based on the GDPR’s provisions – Art. 80(1), above n. 3. or well-informed journalists could instead act as intermediaries between the individual and the decision makers (or, shall we say, the automated decision-making solutions).
The presence of such representatives as NGOs is crucial for ensuring fairness and non-discriminatory treatment when automated decision-making is applied.73xMoreover, it should be acknowledged that this analysis is partly inspired by a ruling of the Polish Voivodship Administrative Court in Warsaw, initiated by the Polish non-governmental organisation Panoptykon, in which the court decided that algorithms could be treated as public information. The case concerned algorithms involved in providing services for the unemployed, which divided them into three groups determining the scope of support granted to each individual. The administrative court decided that the mechanism that formed the basis for the classification should be revealed in accordance with the regulations concerning public information. Judgement of WSA in Warsaw, II SAB/Wa 1012/15, 5 April 2016. Recently, a case concerning access to the algorithm underlying the System of Random Allocation of Cases has also been initiated: K. Izdebski, ‘Algorithms of Fairness’, Medium, 15 February 2018 https://bit.ly/2GeO7zH (last visited 30 July 2018). At the time of writing, the outcome of those proceedings was unknown. The traditional importance of the media and NGOs with regard to the right to access information allows, as I argue, the issue of automated decision-making to be brought into the collective dimension, understood as a right to model-centric explanation. Instead of focusing on the explanation of a decision concerning one particular individual, it could focus on the architecture of the system used to determine the automated decision-making rules. This answers the systemic challenges created by automated decision-making solutions and provides organisations representing certain groups with the power to question the fairness of the systems that determine automated decisions. However, even their privileged position should be subjected to certain limitations, which I examine in the next sub-section.
-
4.4 Limits of Right to Access Information in Digital Space
The consequences of applying Article 10 to algorithms that determine automated decision-making in the case of state operations make it necessary to analyse the limitations imposed on the right to information by the European Convention itself. I would argue that the right to access information, as understood by the ECHR, can refer to the state’s areas of activity. Examples of operations within the scope of this article’s hypothesis include automated decision-making systems that determine access to public services (e.g. unemployment benefits).74xNiklas, Sztandar-Sztanderska & Szymielewicz, above n. 15.
However, according to Article 10(2), the exercise of the freedoms guaranteed by Article 10 may be subject to restrictions prescribed by law and necessary in a democratic society, among others, in the interests of national security and public safety, for the prevention of disorder or crime, for the protection of health, and for preventing the disclosure of information received in confidence.75xArt. 10(2), above n. 7. In the above-mentioned case Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria,76x Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55. the ECHR analysed in detail whether the interference with the applicant association’s right to receive and to impart information as enshrined in Article 10(1) was justified on the grounds offered by Article 10(2), namely, prescribed by law, pursuing one or more of the legitimate aims set out in that paragraph77xThe catalogue of the legitimate aims is included in Art. 10(2) – ‘The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.’: Art. 10(2), above n. 7. and necessary in a democratic society. The conclusion of the judgement in this respect may be read as a test of the conditions that have to be met in order to lawfully refuse to provide information: according to the ECHR, the refusal was prescribed by law and pursued legitimate aims. However, it was not considered by the ECHR to be necessary in a democratic society:

the reasons relied on by the domestic authorities in refusing the applicant association’s request for access to the Commission’s decisions – though ‘relevant’ – were not ‘sufficient’. While it is not for the Court to establish in which manner the Commission could and should have granted the applicant association access to its decisions, it finds that a complete refusal to give it access to any of its decisions was disproportionate.78x Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55, para. 47.
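Read schematically, the reasoning in Österreichische Vereinigung can be decomposed into a conjunction of conditions, with ‘necessary in a democratic society’ further split into the sufficiency of the reasons given and the proportionality of the refusal. The sketch below is an expository simplification of that structure – the ECHR’s assessment is evaluative rather than mechanical – but it makes visible why the Austrian refusal failed on the third condition alone:

# A schematic rendering of the Art. 10(2) test as applied in
# Österreichische Vereinigung. Expository simplification only:
# the Court weighs these conditions, it does not compute them.

from dataclasses import dataclass


@dataclass
class Refusal:
    prescribed_by_law: bool       # condition 1
    pursues_legitimate_aim: bool  # condition 2: the Art. 10(2) catalogue
    reasons_sufficient: bool      # condition 3a: 'relevant and sufficient'
    proportionate: bool           # condition 3b: e.g. no blanket refusal


def refusal_is_justified(r: Refusal) -> bool:
    """All three conditions must hold for a refusal of access to stand."""
    necessary_in_democratic_society = r.reasons_sufficient and r.proportionate
    return (r.prescribed_by_law
            and r.pursues_legitimate_aim
            and necessary_in_democratic_society)


# The Austrian refusal in this scheme: lawful and aimed at legitimate
# goals, yet with reasons that were 'relevant' but not 'sufficient',
# and a disproportionate complete denial of access.
print(refusal_is_justified(Refusal(True, True, False, False)))  # False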
Tensions between the technological possibilities offered in areas such as health insurance (e.g. adjusting an offer) or crime prevention and the exercise of the right to information are impossible to avoid. Time will show how the ECHR resolves the issue of setting the boundaries between the right to access information and the state’s justified interests in protecting its activities. Nevertheless, the outcome of the test applied in Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria 79x Ibid. shows that the condition of being ‘necessary in a democratic society’ included in Article 10(2) may lead to possible restraints on the right to access information.
-
5 Conclusions: The Right to Access Information and the Rule of Law in the Digital Space
The necessity to rethink what information is and how it should be treated is growing because of the spread of automated decision-making technologies and big data analyses. Moreover, the datasets used for such analyses are constantly growing, and ‘the Big Data of today can easily become the little data of tomorrow.’80xP. Casanovas, L. De Koker, D. Mendelson & D. Watts, ‘Regulation of Big Data: Perspectives on Strategy, Policy, Law and Privacy’, 7 Health and Technology 335, at 337 (2017). There is a strong need to confront the methods applied in such analyses with the general prohibition on discrimination, which is crucial to ensuring the democratic foundations of European countries. As Hildebrandt claims,
The Rule of Law aims to create an institutional environment that enables us to foresee the legal effect of what we do, while further instituting our agency by stipulating that such effect is contestable in a court of law – also against big players (…) Such a – procedural – conception of the Rule of Law implies that both automation and autonomics should be constrained in ways that open them up to scrutiny [emphasis added] and render their computational judgements liable to being nullified as a result of legal proceedings.81xM. Hildebrandt, ‘The New Imbroglio – Living with Machine Algorithms’, in L. Janssens (ed.), The Art of Ethics in the Information Society. Mind You at 56 (2016) https://bit.ly/2wn0b1I (last visited 8 May 2018).
The use of the right to access information could ‘open up to scrutiny’ at least certain automated decision-making solutions and provide citizens with answers as to whether the decisions made in their cases were taken on grounds that include potentially discriminatory criteria. The ongoing digital transformation seems to leave no time for adequate lex specialis regulatory solutions to develop. Therefore, it is worth considering whether the instruments that already exist can, through dynamic interpretation, provide us with innovative answers to the new challenges. I argue that, when facing the challenges created by automated decision-making solutions, the existing right to information can serve as a way of improving the current state of affairs. Rethinking the character of the right to access information in the light of the debate on the right to explanation may be seen as a step towards an updated, dynamic interpretation of a well-known human rights provision. In the absence of solutions focused strictly on automated decision-making technologies, the right to access information lays the foundations for a technologically neutral regulatory framework that may prove useful in preventing discriminatory treatment by technological solutions that few seem to understand whilst all may be subjected to their decisions.
Notes
- * This research was supported by National Science Centre, Poland: Project number 2018/29/N/HS5/00105 titled Automated decision-making versus prohibition of discrimination in the European law.
-
1 C. O’Neil, Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy (2016).
-
2 V. Eubanks, Automating Inequality. How High-Tech Tools Profile, Police, and Punish the Poor (2017).
-
3 Regulation 2016/679, OJ 2016 L 119/1.
-
4 In favour of a presence of the right to explanation in the GDPR: B. Goodman and S. Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”’, 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016) https://bit.ly/2wchh2x (last visited 4 May 2018); against such a possibility: S. Wachter, B. Mittelstadt & L. Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’, 7 International Data Privacy Law 76 (2017).
-
5 Under the term ‘collective-based’ and ‘collective’, I understand (1) the special role of media and NGOs, which has been recognised especially by the European Court of Human Rights when realising the right to access information; (2) the character of explanation, which not just refers to a particular individual, but rather offers a model-centric explanation, thus referring to the system, not to the particular decision.
-
6 For the explanation of model-centric approach: L. Edwards and M. Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions?”’, 16 IEEE Security & Privacy 46 (2018).
-
7 Convention for the Protection of Human Rights and Fundamental Freedoms, 4 November 1950, ETS No. 005.
-
8 Charter of Fundamental Rights of the European Union, OJ 2012 C 326.
-
9 Though these two reasons differ, when analyzing certain cases, they usually appear to be linked to each other.
-
10 ‘However, when the input data used by the algorithms are generated by human beings, even algorithms become susceptible to human biases.’ – M. Ahsen, M. Ayvaci & S. Raghunathan, ‘When Algorithmic Predictions Use Human-Generated Data: A Bias-Aware Classification Algorithm for Breast Cancer Diagnosis’, forthcoming in Information Systems Research, at 2 (2017) https://bit.ly/2LQXzj6 (last visited 30 July 2018).
-
11 This has been the subject of research since as early as 1980. The conclusion of T. Mitchell’s study was: ‘If biases and initial knowledge are at the heart of the ability to generalize beyond observed data, then efforts to study machine learning must focus on the combined use of prior knowledge, biases, and observation in guiding the learning process. It would be wise to make the biases and their use in controlling learning just as explicit as past research has made the observations and their use.’ T. Mitchell, ‘The Need for Biases in Learning Generalizations’, Techreport, at 3 (1980) https://bit.ly/2IkB6t0 (last visited 4 May 2018).
-
12 As V. Eubanks puts it, ‘Once the big blue button is clicked and the AFST [Allegheny Family Screening Tool] runs, it manifests a thousand invisible human choices. But it does so under a cloak of evidence-based objectivity and infallibility’, above n. 2, at 316 [epub edition].
-
13 More on the discriminatory character of automated decision-making solutions in the context of crime prevention: ‘Profiling and data mining may seem to work up to a point, but inevitably lead to actions against very large numbers of innocent people, on a scale that is both unacceptable in a democratic society…’ – D. Korff, ‘New Challenges to Data Protection Study’, Working Paper No. 2: Data Protection Laws in the EU: The Difficulties in Meeting the Challenges Posed by Global Social and Technical Developments (2010), at 52; study conducted by ProPublica: J. Angwin, J. Larson, S. Mattu & L. Kirchner, ‘Machine Bias. There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks’, ProPublica (2016) https://bit.ly/1XMKh5R (last visited 4 May 2018).
-
14 Abovementioned mechanisms allow scholars to claim, ‘The use of algorithmic profiling for the allocation of resources is, in a certain sense, inherently discriminatory: profiling takes place when data subjects are grouped in categories according to various variables, and decisions are made on the basis of subjects falling within so-defined groups. It is thus not surprising that concerns over discrimination have begun to take root in discussions over the ethics of big data’ – B. Goodman and S. Flaxman, above n. 4, at 3.
-
15 For more information on this topic: J. Niklas, K. Sztandar-Sztanderska & K. Szymielewicz, Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making (Warsaw 2015) https://bit.ly/1PrMorh (last visited 7 May 2018).
-
16 For elaboration on some of the obstacles regarding the transparency of automated decision-making: J. Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’, 3 Big Data & Society 1 (2016).
-
17 For a comprehensive enumeration of such branches and an analysis of the algorithms’ impact on society in popular science: O’Neil, above n. 1.
-
18 Protocol No. 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms, 4 November 2000, ETS No.177.
-
19 European Union Agency for Fundamental Rights/Council of Europe, Handbook on European Non-Discrimination Law, 2018 edition, at 18 (2018).
-
20 For a similar argument see M. Matczak, ‘List do Trybunału Sprawiedliwości Unii Europejskiej ws. praworządności w Polsce’ (2018) https://bit.ly/2Fw6pRz (last visited 4 November 2018).
-
21 Art. 51, Charter of Fundamental Rights of the European Union, above n. 8.
-
22 Art. 52, ibid. This is also the reason why in the article I focus on the analysis of the content of the ECHR’s case law referring to the relevant article.
-
23 Which prohibited discrimination on the basis of sexual orientation, religion or belief, age and disability, in the area of employment: Council Directive 2000/78/EC, OJ 2000 L 303.
-
24 The Directive prohibits discrimination on the basis of race or ethnicity in the context of employment. Moreover, it refers also to the access to the welfare system, social security, and goods and services: Council Directive 2000/43/EC, OJ 2000 L 180.
-
25 The Directive refers to equal treatment between men and women in the access to and supply of goods and services: Council Directive 2004/113/EC, OJ 2004 L 373.
-
26 The Directive refers to equal treatment of men and women in matters of employment and occupation, including occupational social security schemes: Council Directive 2006/54/EC, OJ 2006 L 204.
-
27 Para. 43, Case C-356/12, Wolfgang Glatzel v. Freistaat Bayern, [2014], ECLI:EU:C:2014:350.
-
28 Ch. Tobler, ‘Equality and Non-Discrimination under the ECHR and EU Law: A Comparison Focusing on Discrimination against LGBTI Persons’, 74 Zeitschrift für ausländisches öffentliches Recht und Völkerrecht at 532 (2014).
-
29 Arts. 13-15, above n. 3.
-
30 Profiling in the GDPR is presented as a special category of individual decision-making: Art. 22, ibid.
-
31 Ibid.
-
32 The authors of ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ point out the evolution of the proposed scope of Art. 22. The legislative process led to the exclusion of the term ‘predominantly’ from the final version of this legal act: S. Wachter, B. Mittelstadt & L. Floridi, above n. 4, at 92.
-
33 Art. 22(2), above n. 3.
-
34 Ibid., Recital 63.
-
35 M. Mandel, ‘Beyond Goods and Services: The (Unmeasured) Rise of the Data-Driven Economy’, Progressive Policy Institute: Policy Memo (2012) https://bit.ly/2FLBcVk (last visited 4 May 2018).
-
36 For detailed case study, see: Eubanks, above n. 2.
-
37 Art. 35, above n. 3. Art. 35(3) includes a list of three cases in which an impact assessment shall be required.
-
38 Art. 35, ibid.
-
39 For detailed analysis of this issue: L. Edwards and M. Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’, 16 IEEE Security & Privacy 46 (2018) https://bit.ly/2IDsBcO (last visited 12 February 2019).
-
40 L. Edwards and M. Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’, 16 Duke Law & Technology Review 18 (2017).
-
41 Wachter, Mittelstadt & Floridi, above n. 4, at 78.
-
42 C. Dwork and A. Roth, ‘The Algorithmic Foundations of Differential Privacy’, 9 Foundations and Trends in Theoretical Computer Science 211, at 215 (2013). Similarly: ‘We should acknowledge the change, and accept that privacy is a public and collective issue’ – P. Casanovas, L. De Koker, D. Mendelson & D. Watts, ‘Regulation of Big Data: Perspectives on Strategy, Policy, Law and Privacy’, 7 Health and Technology 1, at 13 (2017); and ‘predictions based on correlations do not only affect individuals, which may act differently from the rest of the group to which have been assigned, but also affect the whole group and set it apart from the rest of society’ – A. Mantelero, ‘Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection’, 32 Computer Law & Security Review 238, at 239 (2016).
-
43 M. Hildebrandt, ‘Profiling: From Data to Knowledge. The Challenges of a Crucial Technology’, 30 Datenschutz und Datensicherheit at 548 (2006).
-
44 This is the effect of the above-mentioned source of potential discrimination, namely the choices made when selecting the meaningful variables.
-
45 Joined Cases C-141/12 and C-372/12, YS v. Minister voor Immigratie, Integratie en Asiel and Minister voor Immigratie, Integratie en Asiel v. M and S., [2014] ECLI:EU:C:2014:2081.
-
46 Case C-582/14, Patrick Breyer v. Bundesrepublik Deutschland, [2016], ECLI:EU:C:2016:779.
-
47 The phrasing of the European Convention and the Charter is in this regard the same. The content of the Articles is similar to Art. 19 of the Universal Declaration of Human Rights (‘to seek, receive and impart information and ideas through any media and regardless of frontiers’) – Universal Declaration of Human Rights, 10 December 1948, General Assembly resolution 217 A; and Art. 19(2) of the International Covenant on Civil and Political Rights (‘this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice’) – International Covenant on Civil and Political Rights, 16 December 1966, General Assembly resolution 2200A (XXI). The lack of the verb ‘seek’ in the European Convention results in doubts concerning the possibility of interpreting Art. 10 as containing the right to access information. These doubts are illustrated by the evolution of the case law presented in the article.
-
48 ‘…the Court finds that, by expressly stipulating the right to “seek” and “receive” “information,” Article 13 of the Convention protects the right of all individuals to request access to State-held information, with the exceptions permitted by the restrictions established in the Convention’ – Inter-American Court of Human Rights, Claude Reyes et al. v. Chile, Judgment, 19 September 2006, para. 77.
-
49 ‘The Convention cannot be interpreted in a vacuum and must, […], be interpreted in harmony with other rules of international law, of which it forms part. Moreover, […] the Court may also have regard to developments in domestic legal systems indicating a uniform or common approach or a developing consensus between the Contracting States in a given area’ – Magyar Helsinki Bizottság v. Hungary (2016) No. 18030/11, para. 138.
-
50 Sdružení Jihočeské Matky v. Czech Republic, ECHR (2006) No. 19101/03.
-
51 It is worth noting that the roots of the direct recognition of the right to access information can be linked to the protection of the environment. It was implemented in Art. 4 of the Convention on Access to Information, Public Participation in Decision-Making and Access to Justice in Environmental Matters, 25 June 1998, UNTS 2161 at 447.
-
52 Analysed in detail below.
-
53 Társaság a Szabadságjogokért v. Hungary, ECHR (2009) No. 37374/05.
-
54 Ibid., para. 35.
-
55 Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, ECHR (2013) No. 39534/07.
-
56 Ibid., para. 36.
-
57 ‘…stating that the most careful scrutiny was called for when authorities enjoying an information monopoly interfered with the exercise of the function of a social watchdog’ – ibid., para. 41.
-
58 Magyar Helsinki Bizottság v. Hungary, above n. 49, para. 158.
-
59 Ibid., para. 161.
-
60 Ibid., para. 162.
-
61 Ibid., para. 170.
-
62 The example of such an approach: ‘The Court observes that the right to freedom to receive information basically prohibits a Government from restricting a person from receiving information that others wish or may be willing to impart to him’ – Leander v. Sweden, ECHR (1987) No. 9248/81, para. 74; or: ‘That freedom cannot be construed as imposing on a State, in circumstances such as those of the present case, positive obligations to collect and disseminate information of its own motion’ – Guerra and Others v. Italy, ECHR (1998) No. 14967/89, para. 53. The fact that a state is under no circumstances obliged to disseminate information of its own motion has been confirmed in Magyar Helsinki Bizottság v. Hungary, above n. 49, para. 156. The tension between the lack of positive obligations on the state’s side and the more active role promoted by the above-mentioned judgements will probably result in further case law explaining the conditions that should be met when exercising the right to access information, for example: what is information of public interest? How should the state’s monopoly of information be addressed?
-
63 The state is simultaneously not obliged to perform information activities of its own motion; see above n. 62.
-
64 R.K. Hill, ‘What an Algorithm Is’, 29 Philosophy & Technology 35, at 44 (2016).
-
65 Ibid., at 47. This understanding of algorithms implies that they do not even have to be digitised: ‘Algorithms need not be software: in the broadest sense, they are encoded procedures for transforming input data into a desired output, based on specified calculations’ – T. Gillespie, ‘The Relevance of Algorithms’, in T. Gillespie, P. Boczkowski & K. Foot (eds.), Media Technologies, Essays on Communication, Materiality, and Society (2014) 167, at 167.
-
66 B.D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, & L. Floridi, ‘The Ethics of Algorithms: Mapping the Debate’, Big Data & Society at 2 (2016).
-
67 It might be possible to broaden the scope of the right to access information: ‘The Court has further emphasised the importance of the right to receive information also from private individuals and legal entities. While political and social news might be the most important information protected by Article 10, freedom to receive information does not extend only to reports of events of public concern, but covers cultural expressions and entertainment as well….’: European Court of Human Rights, Internet: Case-Law of the European Court of Human Rights, 2011 (update: 2015), at 43 https://bit.ly/2HYhITm (last visited 7 May 2018).
-
68 ‘The Constitutional Court’s monopoly of information thus amounted to a form of censorship.’ – Társaság a Szabadságjogokért v. Hungary, above n. 53, para. 28.
-
69 Even though data protection may provide tools that to a certain extent allow auditing the processes standing behind automated decision-making, they are mostly of a voluntary or self-regulatory character: the above-mentioned data protection impact assessments and codes of conduct, or the possibility of establishing certification mechanisms, the latter two not having an obligatory character. For a presentation of these possibilities see: B.W. Goodman, ‘A Step Towards Accountable Algorithms? Algorithmic Discrimination and the European Union General Data Protection’, 29th Conference on Neural Information Processing Systems (NIPS 2016), at 4-5 https://bit.ly/2rlBzSf (last visited 7 May 2018).
-
70 Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55, para. 34. See also: ‘However, the realisation of this function is not limited to the media or professional journalists. In the present case, the preparation of the forum of public debate was conducted by a non-governmental organisation. The purpose of the applicant’s activities can therefore be said to have been an essential element of informed public debate’ – Társaság a Szabadságjogokért v. Hungary, above n. 53, para. 27.
-
71 For an analysis of the role of the media in the ECHR’s case law concerning Art. 10 see: T. Mendel, A Guide to the Interpretation and Meaning of Article 10 of the European Convention on Human Rights (2017), at 14-17 https://bit.ly/2OwOACd (last visited 30 July 2018).
-
72 It is worth noting that in Art. 80 of the GDPR the conditions that an organisation representing an individual has to meet include: ‘…and is active in the field of the protection of data subjects’ rights and freedoms with regard to the protection of their personal data’. This element may limit the number of organisations that would be allowed to represent individuals in cases initiated in order to ensure the execution of the right to explanation based on the GDPR’s provisions – Art. 80(1), above n. 3.
-
73 Moreover, it should be acknowledged that this analysis is partly inspired by a ruling of the Polish Voivodship Administrative Court in Warsaw, initiated by the Polish non-governmental organisation Panoptykon, in which the court decided that algorithms could be treated as public information. The case concerned algorithms involved in providing services for the unemployed, which divided them into three groups determining the scope of support granted to each individual. The administrative court decided that the mechanism that formed the basis for the classification should be revealed in accordance with the regulations concerning public information. Judgement of WSA in Warsaw, II SAB/Wa 1012/15, 5 April 2016. Recently, a case concerning access to the algorithm underlying the System of Random Allocation of Cases has also been initiated: K. Izdebski, ‘Algorithms of Fairness’, Medium, 15 February 2018 https://bit.ly/2GeO7zH (last visited 30 July 2018). At the time of writing, the outcome of those proceedings was unknown.
-
74 Niklas, Sztandar-Sztanderska & Szymielewicz, above n. 15.
-
75 Art. 10(2), above n. 7.
-
76 Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55.
-
77 The catalogue of the legitimate aims is included in the Art. 10(2) – ‘The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.’: Art. 10(2), above n. 7.
-
78 Österreichische Vereinigung zur Erhaltung, Stärkung und Schaffung Eines Wirtschaftlich Gesunden Land- und Forst-Wirtschaftlichen Grundbesitzes v. Austria, above n. 55, para. 47.
-
79 Ibid.
-
80 P. Casanovas, L. De Koker, D. Mendelson & D. Watts, ‘Regulation of Big Data: Perspectives on Strategy, Policy, Law and Privacy’, 7 Health and Technology 335, at 337 (2017).
-
81 M. Hildebrandt, ‘The New Imbroglio – Living with Machine Algorithms’, in L. Janssens (ed.), The Art of Ethics in the Information Society. Mind You at 56 (2016) https://bit.ly/2wn0b1I (last visited 8 May 2018).