Turkish Law Blog

Algorithms Meet Transparency: Why There Is a GDPR Right to Explanation

Öykü Dalgıç / University of Sussex
8 April 2020

In today's world, the impact of automated decision-making can be observed in the practices of many industries and governments across the globe. Indeed, automated decision-making is used in fraud detection systems, in the management of limited resources by municipalities, in ranking employee performance, in calculating credit scores, or by Amazon, Airbnb, and Uber for dynamic product pricing. [1] Private entities and governments using data-driven algorithms can reach decisions that vitally affect individuals. However, how can individuals know that applications of algorithmic decision-making systems are fair? Problematically, the workings of algorithmic decision-making systems are not transparent enough. Hence, data subjects do not have a clear idea of how much of their data has been transferred or how it is utilized and processed. Frank Pasquale captures this problem with the term "Black Box," since the working of these systems remains inscrutable. [2] Within this scope, firstly, personal data processing via opaque algorithms could exacerbate information asymmetries between data subjects and data controllers. This asymmetry could cause a feeling of helplessness and powerlessness in individuals, which is considered an intangible harm of data processing. [3] Secondly, the opaque nature of algorithmic decision-making systems makes it hard to assess these systems and to understand whether they lead to unfair discrimination. EU data protection law acknowledges discrimination as a tangible harm which could arise as a result of automated data processing and profiling activities. [4] In this context, the EU General Data Protection Regulation ("GDPR") tries to eliminate unfair discrimination and, possibly, information asymmetries in two ways: by (i) granting individuals more control over their data through a rights-based approach and (ii) limiting decision-making based solely on automated processing. Considering these tangible and intangible harms that might occur as a result of data processing, the GDPR also specifically obliges controllers to enact compliance measures.

Within this scope, through Articles 22 and 13-15 GDPR, data subjects are entitled (i) to obtain meaningful information about the logic, significance and envisaged consequences of automated decision-making; and (ii) not to be subject to automated decision-making, with several safeguards and restraints for the limited circumstances in which automated decisions meet the requirements. [5] However, the vague language of Articles 22 and 13-15 GDPR, together with technical limitations, creates problems concerning the scope and applicability of these provisions. In particular, there is a fierce dispute over whether the provisions mentioned above can provide a 'right to explanation' for data subjects, since such a right is not explicitly mentioned in Article 22(3) but only in Recital 71 GDPR. Some scholars have interpreted the related provisions as a new right to algorithmic explanation. [6] On the other hand, adopting a more skeptical approach and focusing on the technical constraints of implementing a right to explanation, Edwards and Veale concluded that only a limited right to explanation could be constructed. [7] Others argued that, based on the analytical framework of machine-learning systems, data subjects' rights are too limited to create a 'right to explanation'. [8] Lastly, through a contextual analysis of Articles 13-15 and 22, Selbst and Powles stated that the scope of these provisions is not as restricted as argued and that these articles should be interpreted functionally and flexibly so as to provide a right to explanation, a view this paper shares. [9] By considering the principles implicit in the GDPR and by analyzing the texts of Articles 13-15 and 22 contextually and semantically, this paper will attempt to show, employing a comparative method, that a 'right to explanation' can be constructed. Although this paper does not ignore possible technical limitations, it will not focus on the analytical framework of machine-learning systems.

The Legal Framework for the 'Right to Explanation' in the GDPR

Concerning automated decision-making processes, Article 22 indicates that "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". [10] However, the right does not apply in three circumstances: if the decision (a) "is necessary for entering into, or performance of, a contract between the data subject and a data controller"; (b) "is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests"; or (c) "is based on the data subject's explicit consent". [11] Pursuant to Article 22(3), where circumstances (a) or (c) apply, the data controller is obliged to "implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision." [12] Further, according to Recital 71, the suitable measures mentioned in Article 22(3) should include "specific information to the data subject" and "an explanation of the decision reached after such assessment". [13]

Further, in the event of a "decision based solely on automated processing, including profiling, which produces legal effects" regarding data subjects or similarly significantly affects them, data subjects are entitled to know of the existence of that processing and to receive meaningful information about its logic, significance, and envisaged consequences, as indicated under Articles 13(2)(f), 14(2)(g) and 15(1)(h).

However, it has been contended that Article 22 has limited applicability, since it only applies to 'decisions based solely on automated processing': where even minimal human intervention is included in the algorithmic decision-making process, neither the safeguards indicated under Article 22(3) nor a 'right to explanation' would apply. [14] Conversely, Bygrave indicates that, when a person does not actively evaluate the outcome of the algorithmic processing before it is formalized as a decision, that decision should not fall outside the scope of Article 22 and should still be considered automated processing. [15] Again, according to the A29WP, human involvement must be meaningful to count, which means that a human being should have the authority and competence to alter the decision and to 'evaluate all the input and output data' presented. [16] The UK Information Commissioner's Office ("ICO") could be considered the earliest adopter of this interpretation. According to the ICO, in the context of Article 22(1) the phrase 'solely' should be interpreted to include automated decision-making processes 'in which humans exercise no real influence on the outcome of the decision'. [17]

Moreover, Article 22(1) is conditioned by the requirement that the decision-making process 'produces legal or similarly significant effects'. However, the term 'similarly significant' is ambiguous and needs interpretation. [18] This issue might also affect the broad applicability of the safeguards, based on the argument that in some cases, such as online behavioral advertisements or online price discrimination, no legal or significant effect arises. [19] According to the A29WP, the outcome of an automated decision should fall under the scope of Article 22 if the decision significantly affects the 'interests' of the individual, irrespective of whether it directly affects his or her human rights. [20] Further, in the A29WP's opinion, since the GDPR added the word 'similarly', which was not included in Article 15 of Directive 95/46/EC ("DPD") [21], the impact of the automated decision must be similar to that of a legal effect in order to meet the significant-effect threshold. [22] Within this scope, the A29WP indicates that 'significant' refers to decisions which might 'significantly influence the circumstances, behaviour or choices of the individuals concerned' and could lead to data subjects' "exclusion or discrimination". [23] Hence, according to Edwards and Veale, price discrimination could also be regarded as significant, for instance when high prices effectively block an individual's access to specific products or services. [24] In parallel, in the context of price discrimination, the Belgian data protection authority has stated that a personalized advertisement which contains 'a reduction and therefore a price offer' will be considered to have a legal effect under Article 22(1). [25] Additionally, differently from the DPD, the A29WP accepts that some automated targeted advertisements might have a significant effect depending on, for example, the way the advert is delivered or the particular vulnerabilities of the data subjects targeted. [26] In any case, even though there is no right not to be exposed to targeted advertisements, considering that online behavioral advertisements exploit consumers' biases, it should be accepted that there is a legitimate interest in not being subjected to unfair commercial practices or digital manipulation. [27]

Furthermore, considering the recent proposal to expand human rights interpretation to neuromarketing problems, so as to include a 'right to cognitive liberty', a 'right to mental privacy' and a 'right to mental integrity', [28] it could be concluded that, on a broad interpretation by data protection authorities and national and supranational judges, not being subjected to digital marketing manipulation should be recognized as a legitimate interest.

Indeed, some Member States have adopted a wider scope. For instance, French law implements Article 22 as covering 'a decision which has legal effects or significant effect on a person' [29]; Austrian law covers 'decisions based only on automated processing, including profiling, which have detrimental consequences for the data subject or that could significantly affect them' [30]; and, according to Belgian law, 'any decision based exclusively on automated processing, including profiling, which produces adverse legal effects for the data subject or significantly affects him/her' falls under the scope of the related article. [31] Thereby, in these states the regulation of algorithmic decisions might be less fragmented and more comprehensive, and could cover a broad range of significant effects. [32]

This section has attempted to clarify the scope and applicability of Article 22, which is the precondition of the 'right to know meaningful information about decision-making'. The next section will focus on the analysis of the legal existence of the right to explanation.

Analysis of the Legal Existence of the 'Right to Explanation'

(i) Algorithmic Transparency and Accountability

Apart from the right to respect for private life, the EU Charter of Fundamental Rights provides a specific right to data protection, which mainly contains the right of access to personal data. [33] Further, it has been indicated that, while privacy is an instrument of opacity that 'designates a zone of noninterference' and protects personal liberty from intrusion, 'data protection promotes transparency and accountability'. [34] In parallel, the GDPR aims to eliminate unfair discrimination and power imbalances between data subjects and controllers by imposing transparency and accountability requirements.

In the context of the accountability principle, data controllers are not only obliged to comply with GDPR provisions but should also be able to demonstrate their compliance. The transparency principle, regulated under Article 5(1)(a), is in turn one of the foundation stones of EU data protection and is closely related to the principle of fairness. [35] The transparency requirement is also strengthened through Article 12 of the GDPR. Transparency provides both 'algorithmic accountability' [36] and controllability of the processing. [37] If data subjects are not provided with the necessary information and control, "they will be subject to decisions that they do not understand and have no control over". [38] Thus, algorithmic transparency can be considered an instrument for confronting "the opacity of algorithms". [39] It has even been argued, from a normative view of algorithmic transparency, that "algorithmic decisions that vitally affect individuals' capabilities must be formed in ways that are comprehensible and contestable" and that such decision-making systems should not be employed unless their underlying reasoning can be explained to individuals. [40] Hence, an extensive 'right to explanation' should be acknowledged, since individuals affected by algorithmic decisions are entitled to transparency over the outputs and over how they are reached. [41] Indeed, considering the possible critical outcomes of automated decision-making, the Oxford Internet Institute also believes that digital society should do a better job of creating systems that allow individuals to comprehend how algorithms are impacting their lives. [42] Again, according to IBM, it is crucial that explanations reveal how intelligent systems interpreted individuals' input and why they created a specific output. [43]

(ii) Right to Explanation in the Context of Article 22 of the GDPR

Article 22 does not explicitly contain the phrase 'a right to explanation', and on its face it reads merely as a right to stop processing unless a human is introduced to evaluate the decision upon a data subject's objection. However, Article 22(3) requires the implementation of safeguards by data controllers where the automated decision-making process meets the requirements of Article 22(2), and where sensitive data is processed based on explicit consent. Within this scope, Recital 71 could be a basis for a 'right to explanation' derived from Article 22, since it explicitly refers to a 'right to obtain an explanation' concerning algorithmic data processing as a safeguard. However, in EU legislation recitals are used to provide guidance and to help states interpret legislative provisions; they do not establish legally binding rights. [44] Hence, Wachter and others argue that, since recitals are instruments designed to help states understand the intent behind legislation and do not have an 'autonomous legal effect', Recital 71 does not establish a legally binding duty of explanation for controllers. [45] On the other hand, based on the case law of the EU Court of Justice, Malgieri indicates that, as Recital 71 neither deviates from Article 22 nor contradicts its wording, it can, in its explanatory nature, be considered a supplementary normative instrument. [46] Thereby, he argues that when algorithmic decision-making meets the requirements of Article 22(2), 'the right to obtain an explanation of the decision reached' should always be exercisable. [47] However, Edwards and Veale stress that mandating a right to an explanation based on Recital 71 might not be possible, as Article 22(3) and Recital 71 contain different texts and, because of this confused mixture of texts, member states might interpret the right differently. [48]

In sum, this paper reckons that Recital 71 does not, by itself, constitute a legally binding right to explanation. However, the safeguard measures indicated under Article 22(3) are not exhaustive and, in principle, member states should be able to guarantee a higher level of protection for their citizens provided that it conforms with the objectives of EU legislation; [49] thus, a 'right to obtain an explanation concerning algorithmic data processing' could be included as a safeguard by member states.

(iii) Right to Explanation in the Context of Articles 13-15

Irrespective of the safeguards stated under Article 22 and Recital 71, this paper argues that a 'right to explanation' can be derived from Articles 13(2)(f), 14(2)(g), and 15(1)(h). However, as a result of the ambiguous language of the GDPR, it is not clear whether data controllers are obliged to provide 'meaningful information about the logic involved' or merely an explanation of 'the existence' of automated decision-making. Again, even if it is accepted that controllers are obliged to provide 'meaningful information', there is no consensus on whether the explanation should cover only the system functionality of the algorithm's architecture or also the specific application of the algorithm that affected the decision.

Based on the aforementioned uncertainties, Wachter has argued that Article 15(1)(h) cannot provide an explanation of specific decisions but only of system functionality, as the wording of Articles 13(2)(f), 14(2)(g), and 15(1)(h) is identical. Focusing in particular on the phrases 'envisaged consequences' and 'existence' within the mentioned articles, Wachter states that, by using the same wording for the ex-ante notification duties and the right of access, legislators intended to impose the same disclosure obligation on data controllers with regard to the content of the information. [50] According to Wachter, data controllers are obliged to foresee the possible consequences of their automated decision-making systems, and the use of the phrase 'envisaged' in Articles 13-15 limits their predictions to an ex-ante explanation of system functionality only. [51] Moreover, comparing the right of access under the DPD and the GDPR, Wachter concludes from the future-oriented wording regarding the decision-making timeline, which was not included in the DPD, and from the identical terminology used both in the articles regulating the right of access and in the ex-ante notification duties, that legislators intended to limit the right of access concerning algorithmic decision-making to system functionality explanations only. Disappointingly, the A29WP also indicated that Article 15 is designed to ensure 'a more general form of oversight, rather than a right to an explanation of a particular decision'. [52] However, this approach of the A29WP has been found 'fatally damaging' to the creation of a personalized ex-post 'right to explanation' from Article 15(1)(h). [53]

Furthermore, the use of the same sentences need not mean that Articles 13-14 and Article 15 have to carry the same explanation content. As an illustration, Articles 13(1)(c) and 14(1)(c) refer to the purposes of the processing for which the personal data are 'intended', which could be considered future-oriented, while Article 15(1)(a) simply mentions the 'purposes of the processing' without any specific temporal indication. [54] Additionally, although 'envisaged consequences' could be interpreted as future-oriented, the reference to the logic 'involved' could equally be considered past-oriented. [55]

Moreover, even though the wording of these Articles is identical, Articles 13(2)(f) and 14(2)(g) impose a proactive obligation on data controllers to notify data subjects of the details of the automated processing activity before it starts, whereas Article 15(1)(h) introduces a right of access, which can be exercised for the disclosure of those details at any time. Therefore, an ex-post explanation containing details such as the rationale and circumstances of a specific automated decision could be provided to data subjects through Article 15 access rights. [56]

It is also contended that the use of the term 'existence' suggests an explanation solely of the fact that automated decision-making techniques are being used in data processing, thereby limiting the scope of the right to obtain meaningful information about the logic behind the automated decision. [57] However, this paper disagrees with this argument. It should not be ignored that Article 15(1)(h) requires two different things: (i) the right to know of the existence of automated decision-making, 'and' (ii) the right to receive meaningful information about the logic involved, the significance and the envisaged consequences. Member states and courts should therefore not treat these two rights as one, since the conjunction 'and' distinguishes the mention of 'existence' from the mention of 'meaningful information'. [58]

Moreover, according to Selbst and others, the content of 'meaningful information' should be interpreted from the perspective of data subjects. [59] Thus, instead of revealing the full code of algorithms and detailed technical descriptions of algorithmic decision-making processes, the disclosure should be pitched at a level comprehensible to an individual without specific technical expertise. [60] Additionally, in a semantic context, owing to the polysemous nature of 'meaningful', the word means both 'intended to show' and 'serious, important, useful'. [61] Hence it could be argued that, while 'meaningful' in the sense of 'significant, serious' refers to detailed information, 'meaningful' in the sense of 'intended to show the meaning' also refers to understandable information. [62] The CNIL likewise emphasizes that 'what would seem to matter…is the capacity to understand the general logic underpinning the way the algorithm works'. Hence, instead of using code lines, the logic should be explained in words in order to ensure that it is understood by everyone. [63] Article 12(1) of the GDPR also supports this interpretation.
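
To make this distinction concrete, the sketch below is offered purely as an illustration: the feature names, toy model and wording are invented for this article, and nothing in the GDPR or the CNIL guidance prescribes this format. It shows how a controller running a simple credit-scoring model might produce both kinds of disclosure discussed above, an ex-ante, system-level description of the logic in plain words and an ex-post account of the factors behind one specific decision.

```python
# Illustrative sketch only: a hypothetical credit-scoring model whose logic is
# rendered in plain words rather than code lines. All names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "existing_debt", "missed_payments"]  # hypothetical factors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # synthetic applicant data
y = (X[:, 0] - X[:, 1] - 2 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)              # stands in for the real system

def describe_system() -> str:
    # Ex-ante "system functionality" disclosure: what the model considers.
    return ("Decisions are produced by a statistical model that weighs "
            + ", ".join(FEATURES) + "; no other data is used.")

def explain_decision(x: np.ndarray) -> str:
    # Ex-post, decision-specific disclosure: which factors moved this score.
    decision = "approved" if model.predict(x.reshape(1, -1))[0] == 1 else "refused"
    contributions = model.coef_[0] * x  # signed pull of each factor on the score
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))
    lines = [f"Your application was {decision}. Main factors:"]
    lines += [f"  - {name} {'raised' if c > 0 else 'lowered'} your score"
              for name, c in ranked]
    return "\n".join(lines)

print(describe_system())
print(explain_decision(X[0]))
```

For a model this simple, both disclosures are cheap to produce, which supports the point that the ex-ante/ex-post divide is a question of legal interpretation rather than of technical impossibility.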

While implementing the right to explanation, the focus should be on the functionality of the right and on strengthening data protection as a fundamental right. [64] A functional and flexible interpretation could ensure that the benefits of an explanation are fully enjoyed. Indeed, explaining algorithms has many advantages: it could improve decision-making accuracy, encourage fair and unbiased decision-making, reduce information asymmetries between data subjects and data controllers, increase data subjects' autonomy, and promote legitimacy and trust in private and governmental institutions. [65] Hence, 'meaningful information' should not be shaped too rigidly, by focusing on ex-ante and ex-post versions of the explanation, in a way that reduces these benefits. As each algorithm is different, data controllers and governments should keep working on what can be disclosed and try to develop information disclosure models that consider how individuals could use the information disclosed. [66] Indeed, there is already research on AI interpretability, and it has confuted the claim that it is impossible to explain how an AI system arrived at a specific decision. [67]
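
One strand of that research relies on model-agnostic 'local surrogate' explanations, popularized by tools such as LIME. The sketch below is a minimal toy version of the idea, written for this article rather than drawn from any cited source: it probes an opaque classifier around one individual's data and reads a locally valid decision logic off a simple linear stand-in.

```python
# Illustrative LIME-style local surrogate: approximate an opaque model's
# behaviour near one data point with a readable linear model. A toy, not a
# production explainer.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2) > 1).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)  # the "inscrutable" system

def local_explanation(x: np.ndarray, n_samples: int = 2000, scale: float = 0.3):
    # Sample points around the individual, ask the black box for its scores,
    # and fit a linear surrogate: its coefficients are the local decision logic.
    neighbours = x + rng.normal(scale=scale, size=(n_samples, x.size))
    scores = black_box.predict_proba(neighbours)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighbours, scores)
    return surrogate.coef_  # per-feature influence near this specific decision

print(local_explanation(X[0]))
```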

On the other hand, Edwards and Veale stress that some algorithmic decisions affecting people might involve only non-personal or anonymized data, which would limit the applicability of a right to explanation derived from Article 15, as the GDPR only applies where decisions are based on personal data. [68] However, data has a dynamic nature; it can relate to an individual without being directly about him or her. [69] Further, this issue might be resolved by advanced methods of data analytics, which could turn all data into personal data in the future. [70]

Moreover, granting a 'right to explanation' would also ensure the effective exercise of the safeguards provided under Article 22(3). For instance, in order to challenge a decision, data subjects logically must first obtain an explanation of the automated decision-making mechanism. [71] Furthermore, by stating that 'the data subject will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis', the A29WP itself justifies the existence of a right to explanation. [72]

National GDPR Implementations of the 'Right to Explanation'

Finally, it is essential to look at several domestic data protection laws implementing the GDPR when analyzing automated decision-making and the 'right to explanation', because Article 22(2)(b) explicitly refers to member state law that may authorize specific cases of automated decision-making, provided that 'suitable safeguards' are adopted in those cases.

(i) French Law

The first French data protection law, in 1978, already granted individuals "the right to know and to challenge information and the reasoning used by automated processing whose results concern them". [73] In line with its previous legislation, French data protection law guarantees a right to an explanation of algorithmic decisions in implementing the GDPR. Under Article 10 of the French Law, upon request data subjects are entitled to obtain specific information about the main features of the implementation of the automated data processing concerning them and an explanation of the rules defining that processing. [74] In the case of administrative decisions, data controllers are obliged to provide an ex-ante notification when an algorithmic decision is adopted, and they should also ensure control over the algorithmic data processing and its evolutions so that it can be explained in a detailed and intelligible form. [75] Notably, French law requires from data controllers both ex-ante and ex-post information concerning the algorithmic decision-making process.

(ii) Hungarian Law

Hungarian law also provides an explicit right to explanation. According to Hungarian law, data controllers should notify the data subject of the "methods and criteria" utilized in a specific algorithmic decision-making system. [76] 'Methods and criteria' appear to include the weighting parameters employed for scoring and profiling individuals. [77] However, unlike under French law, there is no clear indication of whether the explanation should be based on the technical functionality of the algorithm (ex-ante) or on the specific rationale and circumstances behind the decision adopted (ex-post).

(iii) United Kingdom Law

Although the UK Data Protection Act 2018 does not contain a 'right to explanation' as a specific safeguard, a form of right to algorithmic explanation can be inferred indirectly. [78] While the Act includes no specific safeguard concerning individuals' understanding of the algorithmic decision, it explicitly regulates the procedure to follow in the event of an appeal against an automated decision. Within this framework, data subjects could receive more details on the algorithm's workings and on the decision adopted, since data controllers are obliged to inform data subjects about the steps taken to comply with the request and about the outcome of the appeal. [79]

Within this framework, compared to other member states, only French law and Hungarian law present an explicit legal recognition of the 'right to explanation'. These states' recognition of this micro-right further supports the argument for the existence of the right.

Conclusion

In the light of the foregoing, this paper reckons that the scope of Article 22(1) is not as narrow as some commentators have argued. Based on Article 5(1)(a), Article 12 and the rights to 'meaningful information about the logic involved' in Articles 13-15, a right to explanation does exist in the GDPR. To fulfil the objective of the transparency principle, eliminate unfair discrimination, reduce information asymmetries and secure the objective of enhanced individual control over personal data, national data protection authorities and national or supranational judges should interpret these provisions extensively. Moreover, as has been indicated, 'the right to explanation is a vital part of achieving accountability'. [80] Indeed, to demonstrate compliance, data controllers should be able to explain how personal data has been processed and what the rationale was behind a specific algorithmic decision. Although there are concerns in information societies regarding the possible effects of a right to explanation on intelligent systems, [81] the right to explanation could also improve the accuracy and effectiveness of algorithmic decisions for data controllers. The notion that data subjects should have sufficient control over their personal data is a crucial part of EU data protection law. However, it is essential to note that transparency is only one of the fundamental steps towards meaningful control; it is not an end in itself. Indeed, besides the right to explanation, use of the right to data portability, the right to be forgotten and incentives towards 'privacy by design' could also help in reaching the objective of increased data subject control and generating fairer algorithms. [82] Lastly, even though technology creates complexity and opacity, it is also a useful tool for solving these problems. Technological advancements in data analytics and improvements in AI interpretability could improve the quality of both ex-ante and ex-post explanations of algorithmic decisions, while blockchain and trusted computing systems could address the problems surrounding trade secrets. [83]


[1] Nicholas Diakopoulos, 'Accountability in Algorithmic Decision Making' (2016) 59 Communications of the ACM <https://dl-acm-org.ezproxy.sussex.ac.uk/citation.cfm?doid=2886013.2844110> accessed 22 May 2019 56.

[2] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) 3.

[3] Orla Lynskey, The Foundations of EU Data Protection Law (Oxford University Press, 2015) 211.

[4] ibid 200.

[5] Regulation (EU) 2016/679 — protection of natural persons with regard to the processing of personal data and the free movement of such data (GDPR) [2016] OJ L 119 Arts 13, 14, 15 and 22.

[6] Bryce Goodman and Seth Flaxman, 'European Union regulations on algorithmic decision-making and a "right to explanation"' (2017) 38 AI Magazine <https://search-proquest.com.ezproxy.sussex.ac.uk/docview/1967052651?rfr_id=info%3Axri%2Fsid%3Aprimo> accessed 22 May 2019 6, 7.

[7] Lilian Edwards and Michael Veale, 'Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For' (2017) 16 Duke Law & Technology Review <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855> accessed 22 May 2019 52, 65.

[8] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' (2017) 7 International Data Privacy Law 76, 96.

[9] Andrew D Selbst and Julia Powles, 'Meaningful information and the right to explanation' (2017)7 International Data Privacy Law 233,242.

[10] GDPR (n 5) Art 22(1).

[11] ibid Art 22(2).

[12] ibid Art 22(3).

[13] GDPR (n 5) Recital 71.

[14] Wachter et al (n 8) 92.

[15] Lee A Bygrave, 'Automated individual decision-making, including profiling' in Christopher Kuner, Lee Bygrave and Christopher Docksey (eds), Commentary on the EU General Data Protection Regulation (GDPR) (Oxford University Press 2019, forthcoming) 14.

[16] Article 29 Working Party (A29WP), 'Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679' (WP 251, 3 October 2017) <https://perma.cc/3X54-2DGC> accessed 22 May 2019 9, 10.

[17] Information Commissioner's Office, 'Feedback request – profiling and automated decision-making' (2017) 19 <https://ico.org.uk/media/about-the-ico/consultations/2013894/ico-feedback-request-profiling-and-automated-decision-making.pdf> accessed 15 May 2019.

[18] Wachter et al (n 8) 93.

[19] ibid 92.

[20] A29WP (n 16) 10.

[21] Michael Veale and Lilian Edwards, 'Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling' (2018) 34 Computer Law & Security Review 398, 401.

[22] AWP29 (n 16) 10.

[23] ibid.

[24] Veale et al (n 21) 401.

[25] Gianclaudio Malgieri and Giovanni Comandé, 'Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation' (2017) 7 International Data Privacy Law 243, 254.

[26] AWP29 (n 16) 11.

[27] Malgieri et al (n 25) 243, 253.

[28] ibid.

[29] Gianclaudio Malgieri, 'Automated Decision-Making in the EU Member States: The Right to Explanation and Other "Suitable Safeguards"' (2019) Computer Law and Security Review (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3233611> accessed 22 May 2019 28.

[30] ibid 25.

[31] ibid 28.

[32] ibid.

[33] Article 8 of the Charter of Fundamental Rights of the European Union (2012) ('the Charter') OJ C 326.

[34] Lynskey (n 3) 213.

[35] GDPR (n 5) Art 12.

[36] Hemant Taneja, 'The need for algorithmic accountability' (Tech Crunch, 2016) <https://techcrunch.com/2016/09/08/the-need-for-algorithmic-accountability/> accessed 22 May 2019.

[37] Lilian Mitrou, 'Data Protection, Artificial Intelligence and Cognitive Services: Is the General Data Protection Regulation (GDPR) "Artificial Intelligence-Proof"?' (ResearchGate, April 2019) <www.researchgate.net/publication/332698357_AI_and_GDPR_Study_Prof_MItrou_Mar19_FINAL> accessed 22 May 2019 59.

[38] 'Artificial Intelligence, Robotics, Privacy and Data Protection' (Room document for the 38th International Conference of Data Protection and Privacy Commissioners, October 2016) <https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf> accessed 22 May 2019 4.

[39] Mitrou (n 37) 54.

[40] Malin Eiband, Hanna Schneider and Daniel Buschek, 'Normative vs Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems' (Semantic Scholar, 2018) <www.semanticscholar.org/paper/Normative-vs.-Pragmatic%3A-Two-Perspectives-on-the-of-Eiband-Schneider/50185b0a2b2c52a8ef914715081af7d7afcd27ac> accessed 22 May 2019.

[41] Committee on Science and Technology, Algorithms in decision-making (HC 351, 15 May 2018) <https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf> accessed 22 May 2019 24.

[42] ibid 30.

[43] ibid.

[44] Wachter et al (n 8) 80.

[45] ibid.

[46] Malgieri et al (n 25) 255.

[47] ibid.

[48] Lilian Edwards and Michael Veale, 'Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?' (2018) 16 IEEE Security & Privacy <https://ieeexplore-ieee-org.ezproxy.sussex.ac.uk/document/8395080> accessed 22 May 2019 3.

[49] Malgieri (n 29) 29.

[50] Wachter et al (n 8) 84.

[51] ibid 83.

[52] A29WP (n 16) 24.

[53] Veale et al (n 21) 400.

[54] Malgieri et al (n 25) 256.

[55] ibid 256.

[56] Edwards et al (n 7) 52.

[57] Wachter et al (n 8) 84.

[58] Malgieri et al (n 25) 256.

[59] Selbst et al (n 9) 236.

[60] Christopher Kuner, Dan Jerker B Svantesson, Fred H Cate, Orla Lynskey and Christopher Millard, 'Machine learning with personal data: is data protection law smart enough to meet the challenge?' (editorial) (2017) 7 International Data Privacy Law 1, 2.

[61] Malgieri et al (n 25) 257.

[62] ibid.

[63] Mitrou (n 37) 56, 57.

[64] Selbst et al (n 9) 235.

[65] Katherine Strandburg, 'Decisionmaking, Machine Learning and the Value of Explanation' (PowerPoint presentation) <www.dsi.unive.it/HUML2016/assets/Slides/Talk%202.pdf> accessed 22 May 2019.

[66] Diakopoulos (n 1) 61.

[67] Mitrou (n 37) 58.

[68] Edwards et al (n 48) 4.

[69] Michele Finck, Blockchain Regulation and Governance in Europe (Cambridge University Press 2019) 130.

[70] ibid.

[71] Malgieri (n 29) 33.

[72] ibid 6.

[73] See French Data Protection Act 1978, Art 3.

[74] Malgieri (n 29) 22.

[75] ibid.

[76] ibid 24.

[77] ibid 35.

[78] ibid 36.

[79] Data Protection Act 2018 (UK), s 14(5).

[80] Committee on Science and Technology HC 351 (n 41) 31.

[81] Nick Wallace, 'EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence' (Techzone, 2017) < www.techzone360.com/topics/techzone/articles/2017/01/25/429101-eus-right-explanationharmful-restriction-artificial-intelligence.htm> accessed 22 May 2019.

[82] Lilian Edwards (ed) Law, policy, and the Internet (Hart Publishing 2019) 103.

[83] Tae Wan Kim and Bryan R Routledge, 'Informational Privacy, A Right to Explanation, and Interpretable AI' (2018 IEEE Symposium on Privacy) 65.
