A Comparison of AI Risk Management in the EU and U.S.


A detailed analysis of AI risk management practices in the EU and U.S., showcasing key differences in regulatory approaches and their implications for technology transfer and commercial applications. The comparison covers subfields such as AI for human processes and socioeconomic decisions, AI in consumer products, and the regulatory frameworks set forth by E.O. 13859 and the EU AI Act. It examines binding agency guidance in the U.S., voluntary frameworks such as the NIST AI Risk Management Framework, and the mandatory agency regulatory plans required in the U.S. The content sheds light on specific examples of AI applications and the evolving landscape of AI governance in both regions.



Presentation Transcript


  1. EU and U.S. AI Risk Management A Comparison and Implications for the TTC Alex C. Engler Mastodon / Twitter: @alexcengler Email: aengler@brookings.edu

  2. With support from:

  3. AI Risk Management Subfield Examples
  - AI for Human Processes / Socioeconomic Decisions: AI in hiring, educational access, financial services approval
  - AI in Consumer Products: medical devices, partially autonomous vehicles
  - Chatbots: sales or customer service chatbots on commerce websites
  - Social Media Recommender & Moderation Systems: newsfeeds on TikTok, Twitter, Facebook, Instagram, LinkedIn
  - Algorithms on Ecommerce Platforms: algorithms for search or recommendation of products and vendors on Amazon or Shopify
  - Foundation Models / Generative AI: Stability AI's Stable Diffusion and OpenAI's GPT-3
  - Facial Recognition: Clearview AI, PimEyes, Amazon Rekognition
  - Targeted Advertising: algorithmically targeted advertising on websites and phone applications

  4. U.S. AI Risk Management
  - E.O. 13859, Maintaining American Leadership in AI: mandatory agency regulatory plans, but ignored by all agencies except HHS
  - AI Bill of Rights: non-binding; explicitly does not constitute government policy
  - NIST AI Risk Management Framework: voluntary suggestions and guidance (official release is tomorrow)
  All share a common set of AI principles: accuracy, safety, fairness, transparency, accountability, with mentions of explainability and privacy

  5. U.S. AI Risk Management: Binding Agency Guidance
  - EEOC requires transparency, non-discrimination, and human oversight in AI hiring processes for people with disabilities
  - CFPB requires explanations for adverse actions (rejections/denials) made by AI models in credit decisions
  - FTC can enforce some data privacy, truth-in-advertising, and commercial surveillance restrictions
  - HUD is tackling discrimination in property appraisal models

  6. EU AI Risk Management: EU AI Act
  - AI for Human Processes / Socioeconomic Decisions: high-risk AI applications in Annex III of the EU AI Act will need to meet quality standards, implement a risk management system, and perform a conformity assessment
  - AI in Consumer Products: the EU AI Act considers AI implemented within products already regulated under EU law to be high risk; standards must be incorporated into the current regulatory process
  - Chatbots: the EU AI Act will require disclosure that a chatbot is an AI (i.e., not a human)
  - Facial Recognition: the EU AI Act will include some restrictions on remote facial recognition / biometric identification; EU Data Protection Authorities have fined facial recognition companies under GDPR
  - Foundation Models / Generative AI: draft proposals of the EU AI Act consider quality and risk management requirements

  7. EU AI Risk Management: Other
  - Social Media Recommender & Moderation Systems: the EU Digital Services Act creates transparency requirements for these AI systems and also enables independent research and analysis
  - Algorithms on Ecommerce Platforms: the EU Digital Markets Act will restrict self-preferencing algorithms in digital markets; individual antitrust actions (see the Amazon case, or Google Shopping) aim to reduce self-preferencing in Ecommerce algorithms and platform design
  - Targeted Advertising: GDPR enforcement, including EDPB fines against Meta for using personal user data for behavioral ads. The Digital Services Act bans targeted advertising to children and certain types of profiling (e.g., by sexual orientation). It also requires explanations for targeted ads and gives users control over what ads they see.

  8. Emerging Challenges
  - AI in Consumer Products / Socioeconomic Decisions: EU standards bodies will have to simultaneously write standards for a variegated set of AI applications, potentially in private. U.S. and EU alignment on a risk-based approach does not resolve the mismatch between U.S. agency authority and the broad scope of the AI Act.
  - AI in Platforms/Websites: the EU has passed legislation with significant implications for AI in social media, Ecommerce, and online platforms in general, while the U.S. does not yet appear prepared to do so. More platforms are crossing international borders, including emerging platforms in education, finance, and healthcare, as well as business management software.

  9. TTC Developments: Three Workstreams in AI Risk
  - TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management
  - Pilot project on Privacy-Enhancing Technologies (PETs)
  - A report on the impact of AI on the workforce, co-written by the European Commission and the White House Council of Economic Advisers

  10. Policy Recommendations
  - U.S. should enforce E.O. 13859, creating agency AI regulatory plans
  - EU should create more flexibility in its AI definition, enabling adjustments to which systems are included
  - EU should take steps to open its standards-setting process to the world, especially w.r.t. AI with greater extraterritorial impact
  - U.S. and EU should collaborate on AI regulatory capacity (best practices, talent exchange, AI sandbox pilot, etc.)
