Understanding the Right to an Explanation in GDPR and AI Decision Making

The paper examines the push for Explainable AI driven by regulations such as the GDPR, which is widely read as mandating explanations for algorithmic decisions. It discusses the debate over whether a legally binding right to explanation actually exists and the broader challenge of accommodating algorithmic machines in everyday life.



Presentation Transcript


  1. Right to an Explanation Considered Harmful
  Andy Crabtree, www.andy-crabtree.com

  2. Right to an Explanation: GDPR and explainable algorithmic machines
  "AI Will Have to Explain Itself": "The need for Explainable AI is being driven by upcoming regulations, like the European Union's General Data Protection Regulation (GDPR), which requires explanations for decisions. Under the GDPR, there are hefty penalties for inaccurate explanations, making it imperative that companies correctly explain the decisioning process of its AI and ML systems, every time." https://www.comparethecloud.net/articles/2018-ai-evolution/
  "GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design." (Goodman & Flaxman 2016) DOI 10.1609/aimag.v38i3.2741

  3. Right to an Explanation? The right is contested and contestable
  Oh no it doesn't, oh yes it does, well it should:
  The right does not exist (Wachter et al. 2017) DOI 10.1093/idpl/ipx005
  "This rhetorical gamesmanship is irresponsible" (Selbst & Powles 2017) DOI 10.1093/idpl/ipx022
  "There is a case to be made for the establishment of a legally binding right to explanation" (ATI 2017) http://data.parliament.uk/WrittenEvidence/CommitteeEvidence.svc/EvidenceDocument/Science%20and%20Technology/Algorithms%20in%20decisionmaking/written/69165.html
  Our view is that it should be considered harmful: Explainable AI is not sufficient to address the social imperative of accommodating increasingly inscrutable and non-intuitive algorithmic machines in everyday life

  4. The Basis of Our Argument [1]: The law
  Legal-tech scholars attribute the perceived right and its implications for technology development to Goodman & Flaxman's (2016) reading of GDPR, DOI 10.1609/aimag.v38i3.2741
  And specifically to the assertion that Articles 13, 14, 15 and 22 mandate a right to an explanation of an algorithm's decision
  Let's take a look and see what the Articles say, and what they might actually mandate

  5. Articles 13 & 14 Information to be provided where personal data are collected from the data subject ( user in HCI terms) OR have not been obtained from the data subject The relevant paragraphs are (f) and (g) respectively , the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. The meaningful information clause underpins the view that GDPR mandates the right to an explanation of an algorithm s, algorithmic model s, or algorithmic machine s decision

  6. Problem 1: There is no mention of the word 'explanation' in any Article in GDPR
  Only Articles create legally binding obligations
  So the 'meaningful information' required by Articles 13 & 14 is not information that explains an algorithm's decision
  What is required by GDPR then? GDPR appears to ask for a functional description of the model governing decision-making such that a data subject can vindicate her substantive rights under the GDPR and human rights law (Selbst & Barocas 2018) DOI 10.2139/ssrn.3126971

  7. Problem 2: The information required by Articles 13 & 14 is prospective in nature, not retrospective
  An ex ante account that applies to all potential cases of an algorithm's decision-making, not just the particular case to hand: "it is concerned with the operation of the model in general, rather than as it pertains to a particular outcome." (Selbst & Barocas 2018) DOI 10.2139/ssrn.3126971
  So whatever an 'explanation' might amount to as mandated by Articles 13 and 14, it has nothing to do with explaining the specific decisions made by an algorithm

  8. What About Article 15? Right of access by the data subject
  Including the right to obtain confirmation of processing, to object, to rectify errors, restrict further processing and erase personal data
  Paragraph (h) also mandates the provision of 'meaningful information' as per Articles 13 & 14 if automated decision-making, including profiling, applies
  Article 15 appears to be retrospective in nature insofar as it applies to data that are being or have been processed
  Article 15 thus seems to mandate the provision of tailored knowledge about specific decisions arrived at by algorithmic machines (Edwards and Veale 2017) DOI 10.2139/ssrn.2972855

  9. Problem 3: So it seems that GDPR does contain a right to an ex post (after the fact) explanation of specific decisions
  Well, it might do, sometimes:
  If the decision-making is based solely on automated processing (Article 22)
  If it produces legal effects (e.g., it impacts a person's legal status or their legal rights) (Article 22)
  If it has consequences that significantly affect the data subject's circumstances, behaviour or choices (e.g., refusal of credit) (Article 22)
  If the data subject requests it (Article 15)
  If that request does not affect the data controller's trade secrets and IP (Recital 63), or those of the parties who enable the controller's processing operation
  So there is no general provision for the explanation of algorithmic decisions in GDPR

  10. Problem 4: And it gets worse (for AI)
  Article 22, Automated individual decision-making, including profiling, paragraph (3): "the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision."
  Recital 71's clarification of 'obtain human intervention': any form of automated processing of personal data "should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision."
  So insofar as GDPR does mandate an explanation, it is to be provided by a HUMAN BEING

  11. Explaining Automated Decision-Making
  GDPR mandates general ex ante and specific ex post explanations, but only in certain circumstances, if certain conditions apply
  However, a specific ex post account is not required to explain how an algorithm arrived at a decision: "when we talk about an explanation for a decision we generally mean the reasons or justifications for that particular outcome, rather than a description of the decision-making process." (Budish et al. 2017) DOI 10.2139/ssrn.3064761
  "When we seek to evaluate the justifications for decision-making that relies on a machine learning model, we are really asking about the institutional and subjective choices that account for the decision-making process." (Selbst & Barocas 2018) DOI 10.2139/ssrn.3126971
  So not only does GDPR mandate that explanations for specific decisions be conditionally (if) provided by a person, not a machine, it also requires that the explaining is done with reference to factors external to algorithmic operations

  12. The Basis of Our Argument [2]: The technology
  ML methods of interpretability not sufficient to explain algorithmic decision-making in law
  But surely they can support humans (data controllers) and help them provide justifiable explanations of the decisions arrived at by automated means: "it is reasonable to suppose that any adequate explanation would, at a minimum, provide an account of how input features relate to predictions." (Goodman and Flaxman 2016) DOI 10.1609/aimag.v38i3.2741
  Is it?
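
To make the "how input features relate to predictions" criterion concrete, here is a minimal sketch using permutation feature importance. The synthetic dataset, the random-forest model and the scikit-learn method are illustrative assumptions on our part, not anything Goodman and Flaxman specify.

```python
# Hedged sketch: permutation feature importance as one reading of
# "how input features relate to predictions". Dataset and model are
# illustrative assumptions, not taken from the slides.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much test accuracy drops:
# a crude, global account of which inputs the predictions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```

Note that even such an account is general in spirit: it describes how the model behaves overall, not why any particular decision was justified.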

  13. A Survey of Methods for Explaining Black Box Models
  Over 100 ML papers reviewed by Guidotti et al. (2019) DOI 10.1145/3236009
  Explanation is construed as an interface between humans and an automated decision-maker, provided by an arsenal of global and local explanator methods
  Global methods explain the whole logic of a model and its different possible outcomes; local methods explain only the reasons for a specific decision (see the sketch below)
  "However, in the literature, very little space is dedicated to a crucial aspect: the model complexity. The evaluation of the model complexity is generally tied to the model comprehensibility, and this is a very hard task to address."
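
As a rough illustration of the global/local distinction Guidotti et al. draw, the sketch below fits a shallow 'global surrogate' tree that imitates a black-box classifier, then a LIME-style local linear model around one instance. The black-box model, the perturbation scheme and the scikit-learn calls are our own illustrative assumptions, not methods prescribed by the survey.

```python
# Hedged sketch of global vs local explanator methods (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global explanator: a shallow tree trained on the black box's own
# predictions, approximating "the whole logic of the model".
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate))

# Local explanator: perturb a single instance and fit a linear model to
# the black box's responses in that neighbourhood (the LIME-style idea),
# yielding reasons for this one decision only.
rng = np.random.RandomState(0)
x0 = X[0]
neighbourhood = x0 + 0.1 * rng.randn(200, X.shape[1])
local = Ridge().fit(neighbourhood,
                    black_box.predict_proba(neighbourhood)[:, 1])
print("local feature weights:", np.round(local.coef_, 3))
```

Neither output, of course, says anything about why it was appropriate to automate the decision in the first place, which is the point the talk goes on to make.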

  14. Problem 5: Model complexity / comprehensibility
  The rise of inscrutable algorithmic machines: "You can't just look inside a deep neural network to see how it works. A network's reasoning is embedded in the behaviour of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation ..." https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
  Not just neural nets; the same applies to complex decision trees, etc.
  "Each feature space with more than 3 dimensions is just not imaginable for humans." https://christophm.github.io/interpretable-ml-book
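
A minimal sketch of the layered computation the quotation describes, in plain NumPy with arbitrary, assumed layer sizes: each layer transforms the previous layer's outputs, so "looking inside" the trained model yields only matrices of numbers rather than anything readable as a reason.

```python
# Hedged sketch: a tiny feed-forward network, illustrative sizes only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 16)), np.zeros(16)   # layer 1 weights
W2, b2 = rng.standard_normal((16, 16)), np.zeros(16)  # layer 2 weights
W3, b3 = rng.standard_normal((16, 1)), np.zeros(1)    # output layer weights

def forward(x):
    h1 = np.maximum(0, x @ W1 + b1)            # neurons compute and re-emit
    h2 = np.maximum(0, h1 @ W2 + b2)           # ... fed to the next layer ...
    return 1 / (1 + np.exp(-(h2 @ W3 + b3)))   # overall output (a score)

x = rng.standard_normal(4)
print("decision score:", forward(x))
print("first-layer weights, i.e. what 'looking inside' gives you:")
print(W1.round(2))
```

In a real deep network there are vastly more such weights across many layers, adjusted by back-propagation during training, which is what makes direct inspection so uninformative.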

  15. Problem 6: Algorithmic models may not only be inscrutable, the ways in which they make decisions may also be non-intuitive
  "we can't know the precise structure and workings of algorithms that evolve continuously by a process of machine learning" (Walport 2017) https://www.wired.co.uk/article/technology-regulation-algorithm-control
  So even if we can understand how a decision is arrived at (a big if), the decision itself may be nonsensical (e.g., pneumonia and asthma)

  16. Problem 7: Causes vs. reasons
  Assuming (a) inscrutability and (b) non-intuitiveness can be mitigated, causal (how) explanations are still not sufficient
  Justifying automated decision-making requires (ex post) that the reasons for carrying out decision-making in the way it was carried out be made accountable to the law (why)
  So ML's explanator methods are limited in scope, even if they can deal with (a) & (b)

  17. Right to an Explanation Considered Harmful
  Legal requirements and technical challenges mean ML/AI cannot deliver legally explainable machines
  Law requires that people explain specific algorithmic decisions, and only in certain circumstances if certain conditions apply
  Technology can only (at best, if (a) and (b) can be satisfied) explain how specific decisions were arrived at, not why the decision-making process is as it is (a job for humans)
  Hence the strong suggestion that the right to an explanation be considered harmful
  It sets up unrealisable expectations for ML (it cannot deliver legally defensible explanations)
  And unrealistic expectations for society at large (the rise of inscrutable, non-intuitive machines)

  18. The Basis of Our Argument [3]: The limits of explanation
  "What crash rate might be acceptable for vehicles driven by computer systems? At one level, any rate associated with less human suffering than vehicles driven by humans would be an improvement on the present." (Walport 2017) https://www.wired.co.uk/article/technology-regulation-algorithm-control
  Social acceptability is critical to accommodating inscrutable, non-intuitive machines in our everyday lives

  19. The Social Imperative: Accommodating algorithmic machines
  Acceptability is currently confined to legal and ethical narratives: explanation and fairness (bias and discrimination)
  Assumes citizens are rational actors who will be convinced by legal and ethical assertions
  Strong need for broader societal representation in the development and application of algorithmic machines
  A people-first approach balancing science and technology drivers, putting citizens at the centre of the data revolution

  20. We Need Not Wait
  "As these new capabilities for computer systems become increasingly mundane, our relationship with and expectations of the technologies at hand will evolve. This raises questions about the long-term effects upon or expectations from people who have grown up with machine learning systems and smart algorithms in near-ubiquitous usage from an early age." (Royal Society 2017) https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf
  Do we really have to wait for people to grow up with algorithmic systems to broaden participation and understand societal expectations?
  The future has been a longstanding design problem in HCI, which offers an arsenal of human-centred methods
  Explore with citizens new kinds of interfaces that go beyond explanation and shape ML/AI around the mundane expectations and concerns of society's members

  21. Should We Dispense with Explainable AI?
  "Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners." (DARPA XAI)
  Not only warfighters, but judges, doctors, pilots, etc.
  However, we need to recognise the limits of explanation: that different kinds of explanation are required in different contexts (e.g., causes vs reasons), and that ML methods of interpretability and causal explanation are of limited use in law, and arguably in a great many mundane situations too (cars, boilers, thermostats, etc.)
  Hence the need to complement XAI with fundamental research into social acceptability and the challenges of accommodating algorithmic machines in everyday life

  22. Thank You
  Legal-tech scholars: Lachlan Urquhart & Jiahong Chen
  Questions?
