AI and Individual Rights Webinar: Ensuring Data Protection in Artificial Intelligence Systems
Explore the critical aspects of individual rights in AI systems, including human oversight, data processing transparency, and accountability. Learn how to navigate personal data rights throughout the AI lifecycle and address challenges related to data contained within AI models.
AI and Individual Rights Webinar
Ahmed Razek, Principal Technology Advisor, Technology
Abigail Hackston, Senior Policy Officer, Innovation
Alister Pearson, Senior Policy Officer, Technology
Prof Reuben Binns, Associate Professor, University of Oxford
Agenda
- How do organisations ensure individual rights in their AI systems?
- What is the role of human oversight?
- What information do organisations need to provide to individuals whose personal data they will process using AI?
Background
Selected published guidance:
- ICO and The Alan Turing Institute, Explaining decisions made with AI
- ICO, Guidance on AI and data protection
Webinars:
1. AI, accountability and governance (Sept)
2. AI, lawfulness, fairness, and transparency (Oct)
3. AI, security and data minimisation (Nov)
4. AI and individual rights
How do individual rights apply to different stages of the AI lifecycle?
Individual rights apply wherever personal data is used across the AI lifecycle. During the training stage, personal data may be converted into a form that makes it harder to link to a named individual; however, this does not necessarily take it out of the scope of data protection. When procuring AI services from another organisation, it is important to decide and make clear who is responsible for ensuring individual rights. During the inference stage, some rights (eg the right to rectification) may not apply, as model outputs are intended as prediction scores rather than statements of fact.
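One common way training data is "converted into a form which makes it harder to link to a named individual" is pseudonymisation, for example replacing direct identifiers with keyed hashes. The sketch below is illustrative only (the key, field names, and record layout are assumptions, not anything prescribed in the guidance), and note the slide's caveat: pseudonymised data can still be personal data if individuals remain identifiable.

```python
import hashlib
import hmac

# Illustrative pseudonymisation sketch: replace the direct identifier
# with a keyed hash before the record enters a training set. The key
# and record layout are hypothetical; keyed hashing is reversible in
# effect for whoever holds the key, so the output may still be
# personal data under the GDPR.
SECRET_KEY = b"rotate-and-store-this-key-securely"

def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with the 'name' field replaced by a keyed hash."""
    out = dict(record)
    out["name"] = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()
    return out

record = {"name": "Jane Doe", "age": 34, "outcome": "approved"}
print(pseudonymise(record))
```

Because the hash is deterministic for a given key, the same individual always maps to the same token, which preserves linkability within the dataset while hiding the raw identifier.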
How do individual rights relate to data contained in the model itself?
Some models (eg Support Vector Machines) retain key individual examples from the training data. Although the chance that one of those individuals makes a request is very small, it remains possible. You must consider how you would retrieve those examples and whether fulfilling the request would require re-training the model.
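The SVM point can be demonstrated concretely: in scikit-learn, a fitted classifier's `support_vectors_` attribute holds verbatim rows from the training data, so erasing one of those individuals generally means re-fitting the model without their record. The tiny dataset below is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

# A fitted SVC stores exact copies of some training rows (its support
# vectors) inside the model object. The data here is synthetic and
# illustrative only.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]])
y = np.array([0, 1, 0, 1])

model = SVC(kernel="linear").fit(X, y)

# Each row of support_vectors_ is a row of X, verbatim; to "forget"
# one of these individuals you would typically re-train without them.
print(model.support_vectors_)
```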
How should we fulfil requests about data contained in models by accident?
It may be difficult or impossible to fulfil individual rights requests where data is contained in the model by accident. You should regularly and proactively evaluate the possibility of personal data being inferred from models in light of state-of-the-art technology, so that you minimise the risk of accidental disclosure.
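One rough, commonly used proxy for "the possibility of personal data being inferred from models" is to compare a model's confidence on its training records against unseen records: a large gap suggests memorisation that a membership-inference attacker could exploit. This is a simplified sketch, not the evaluation method the guidance mandates; the synthetic data and any threshold you apply to the gap are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative membership-inference-style check: if the model is far
# more confident on training records than on held-out records, it may
# leak who was in the training set. Data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

def mean_confidence(model, X):
    """Average probability the model assigns to its own predicted class."""
    return float(np.max(model.predict_proba(X), axis=1).mean())

gap = mean_confidence(model, X_train) - mean_confidence(model, X_test)
print(f"train/holdout confidence gap: {gap:.3f}")
# A persistently large gap would merit a closer privacy review.
```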
How do we ensure individual rights relating to solely automated decisions with legal or similar effect?
Articles 13(2)(f) and 14(2)(g) say you must give individuals whose data you are processing meaningful information about the logic involved, as well as the significance and the envisaged consequences. Article 15(1)(h) says you must tell them this if they submit a subject access request. Article 22(3) gives individuals the right to obtain human intervention, to express his or her point of view, and to contest the decision.
Why could rights relating to automated decisions be a particular issue for AI systems?
Even the most statistically accurate machine learning system will occasionally reach the wrong decision in an individual case. Two particular reasons why you may need to overturn an ML system-derived decision are that:
- the individual is an outlier; and
- assumptions in the AI design can be challenged.
You should:
- consider, from the design phase, the system requirements necessary to support a meaningful human review;
- design and deliver appropriate training and support for human reviewers; and
- give staff the appropriate authority, incentives and support to address or escalate individuals' concerns and, if necessary, override the AI system's decision.
What is the role of human oversight?
Key considerations for meaningful human oversight are:
- Human reviewers should not simply apply the automated recommendation to an individual in a routine fashion; they must be actively involved in checking it.
- Reviewers' involvement must be active, not a token gesture: they must weigh up and interpret the recommendation, consider all available input data, and take into account other additional factors.
- Human oversight may not be meaningful where automation bias occurs or where there is a lack of interpretability.
- Decisions made during the design and build phase, as well as the training given to human reviewers, will affect the level of risk from automation bias and a lack of interpretability.
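One design choice that supports active rather than token review is routing logic built into the system itself: send low-confidence decisions to a reviewer, and spot-check a random sample of confident ones to counter automation bias. The threshold, audit rate, and field names below are illustrative assumptions, not ICO requirements.

```python
import random

# Illustrative routing sketch for meaningful human involvement:
# decisions below a confidence threshold, plus a random audit sample
# of confident ones, go to a reviewer who can override them.
CONFIDENCE_THRESHOLD = 0.85
AUDIT_RATE = 0.05  # spot-check confident decisions against automation bias

def route(decision: str, confidence: float, rng: random.Random) -> dict:
    """Decide whether an automated decision may be finalised without review."""
    to_human = confidence < CONFIDENCE_THRESHOLD or rng.random() < AUDIT_RATE
    return {
        "decision": decision,
        "confidence": confidence,
        "status": "pending human review" if to_human else "auto-finalised",
    }

rng = random.Random(42)
print(route("refuse", 0.62, rng))   # low confidence: always reviewed
print(route("approve", 0.97, rng))  # high confidence: occasionally audited
```

The audit sample matters: without it, reviewers only ever see borderline cases, so systematic errors in confident decisions go unchecked.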
When does information need to be provided to individuals?
- The GDPR focuses on large-scale automated processing, specifically profiling and automated decision-making.
- Specific requirements apply in these circumstances.
- Organisations need to establish the significance of the decisions made.
- Information is required under Articles 13-15 of the GDPR.
Article 13 & 14 requirements
The controller shall provide the data subject with: the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
Article 15 requirements
The data subject shall have the right to obtain from the controller: the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
How can organisations ensure they are able to provide this information?
- Think about providing information throughout the lifecycle of the AI system.
- Use simple, explainable models, or supplementary models where required.
- Treat the information provided as the start of a conversation.
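The "simple explainable models" option can be made concrete with a linear model, where each coefficient maps directly onto a human-readable factor and per-feature contributions can be reported to the individual. The feature names and synthetic data below are illustrative assumptions; this is one possible approach, not the method the guidance prescribes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative "simple explainable model": logistic regression whose
# per-feature contributions (coefficient * value) can be shown to the
# individual. Feature names and data are hypothetical.
feature_names = ["income", "existing_debt", "years_at_address"]
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    """Per-feature contributions to this decision, largest first."""
    contribs = model.coef_[0] * np.asarray(x, dtype=float)
    return sorted(zip(feature_names, contribs), key=lambda t: -abs(t[1]))

for name, contrib in explain([1.2, -0.4, 0.1]):
    print(f"{name}: {contrib:+.2f}")
```

An explanation like this is deliberately a starting point for dialogue: it names the factors that mattered most, which the individual can then query or contest.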
Call to action
We currently have two surveys out looking for your views on:
1. An AI risk toolkit to help risk practitioners assess their AI systems.
2. The usability and effectiveness of the Explaining decisions made with AI guidance.
Links to these surveys will be sent out after the webinar.