Comprehensive AI Legislation Overview

An outline of potential AI legislation
Explore the potential AI legislation outlined in bills like SB487 & HB747 in Virginia, focusing on defining high-risk AI systems, operating standards for developers, and the individuals subject to regulation. Learn from successful bills in other states like Colorado and Connecticut. Discover the implications for AI use and procurement policies in this regulatory landscape.

  • AI Legislation
  • Virginia
  • High-risk AI
  • Regulations
  • State Bills




Presentation Transcript


  1. An outline of potential AI legislation

  2. Context: SB487 & HB747
     • SB487: Called for the CIO to establish AI use and procurement policies and procedures, with a requirement that AI systems not unlawfully discriminate against or have a disparate impact on individuals based on specified characteristics.
     • HB747: Establishes operating standards for developers and deployers of high-risk AI systems, with a requirement that AI systems not unlawfully discriminate against or have a disparate impact on individuals based on specified characteristics.

  3. What could comprehensive AI legislation look like, based on bills proposed in Virginia and successful bills in other states?
     References: Virginia's SB487 and HB747; VITA's Utilization Policy; Colorado's SB24-205 and SB22-113; Connecticut's SB1103; Maryland's SB818

  4. Definitional approach targeting high-risk AI systems
     1.1. Defining the system
     • AI as an umbrella term: SB24-205 uses the EU AI Act definition
     • High-risk AI: AI that is used in consequential decision making
       - Exclusions listed in HB747 & SB24-205
       - Includes AI that impacts rights or safety (SB818)
     • Consequential decision: using the lists in HB747, SB24-205, and/or SB1103
     • Rights-impacting AI: using the definition in SB818
     • Safety-impacting AI: using the definition in SB818
     • Algorithmic discrimination: using the definitions from SB487 and SB24-205
       - Potential to include the use of proxies, e.g. zip code as a proxy for race
     • Disparate impact: using the same characteristics listed in the algorithmic discrimination definition and/or as specified in SB487

  5. Individuals subject to this regulation
     1.2. Operating standards for developers (HB747)
     1.2.A. Detailed documentation that must be shared (HB747, SB24-205)
     1.2.B. Cooperation with deployers for impact assessments (HB747)
     From HB747: "Developer" means any person doing business in the Commonwealth that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise provided to consumers in the Commonwealth.

  6. Individuals subject to this regulation
     1.3. Operating standards for deployers (HB747)
     1.3.A. No discrimination or disparate impact.
     1.3.B. Risk management policies required; must be at least as stringent as the NIST AI RMF (HB747, SB24-205).
     From HB747: "Deployer" means any person doing business in the Commonwealth that deploys or uses a high-risk artificial intelligence system to make a consequential decision.

  7. Individuals subject to this regulation
     1.3. Operating standards for deployers (HB747)
     1.3.C. Impact assessments: at minimum, before deployment and within 90 days of a significant system update (HB747). Includes minimum reporting requirements (HB747, SB24-205).
     1.3.D. Consumer disclosures: a list of what the consumer must be told when high-risk AI is involved in decision making and when an adverse decision is made (SB24-205).
     1.3.E. Consumer rights: the right, and procedures in place, to appeal adverse decisions; a requirement that humans review appeals; opt-out policies and procedures.
     Consideration: Rather than taking an opt-out approach, consumers must opt in before their data can be used to train future models.

  8. Additional considerations for government entities
     1.4. High-risk AI use by state government (in addition to sections 1.2 and 1.3)
     1.4.A. Procurement standards: minimum requirements (SB1103.2.1.C & VITA Policy), with justified exemptions allowed at the CIO's discretion. Mandatory approval process (VITA Policy), including ensuring no discrimination or disparate impact, using the NIST AI RMF (or similar) as a minimum (HB747).
     1.4.B. Impact assessments: at minimum, before deployment and annually thereafter (SB487), with an additional assessment within 90 days of a significant system update (HB747). The CIO may require them more frequently if there are anomalies.

  9. Additional considerations for government entities
     1.4. High-risk AI use by state government (in addition to sections 1.2 and 1.3)
     1.4.C. Annual system registration/inventory and report (SB487, SB818). Includes specifics of what should be in the report and inventory.
     1.4.D. Mandatory disclosure, appeals, and opt-out processes. Must disclose the use of AI in decision making. Includes the requirement that humans review all appealed decisions, or that consumers can skip the AI and go straight to human decision making.
     1.4.E. No system shall be used without meeting the above standards and receiving VITA approval (SB487).

  10. Enforcement & exemptions
     1.5. Enforcement
     1.5.A. Attorney General's powers
     1.6. Exemptions (HB747)
     1.6.A. Nothing requires disclosing trade secrets, etc.
     1.6.B. Nothing prohibits following other laws, etc.

  11. Definitions

  12. Consequential decisions
     From HB747: "Consequential decision" means any decision that has a material legal, or similarly significant, effect on a consumer's access to credit, criminal justice, education, employment, health care, housing, or insurance.
     From SB24-205: "Consequential decision" means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.

  13. Consequential decisions
     From SB1103: "Critical decision" means any decision or judgment that has any legal, material or similarly significant effect on an individual's life concerning access to, or the cost, terms or availability of, (A) education and vocational training, including, but not limited to, assessment, accreditation or certification, (B) employment, worker management or self-employment, (C) essential utilities such as electricity, heat, water, Internet or telecommunications access or transportation, (D) family planning services, including, but not limited to, adoption services or reproductive services, (E) financial services, including, but not limited to, any financial service provided by a mortgage company, (F) services from a creditor or mortgage broker, (G) health care, including, but not limited to, mental health care, dental care or vision care, (H) housing or lodging, including, but not limited to, any rental, short-term housing or lodging, (I) legal services, including, but not limited to, private mediation or arbitration, (J) government benefits, (K) public services, or (L) any other opportunity, program or service;

  14. Rights & safety impacting AI
     From SB818: "Rights-impacting artificial intelligence" means artificial intelligence whose output serves as a basis for a decision or action that is significantly likely to affect civil rights, civil liberties, equal opportunities, access to critical resources, or privacy.
     "Safety-impacting artificial intelligence" means artificial intelligence that has the potential to meaningfully [and] significantly impact the safety of human life, well-being, or critical infrastructure.

  15. Algorithmic discrimination
     From HB747: "Algorithmic discrimination" means any discrimination that is (i) prohibited under state or federal law and (ii) a reasonably foreseeable consequence of deploying or using a high-risk artificial intelligence system to make a consequential decision.
     From SB24-205: (a) "Algorithmic discrimination" means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law. (b) "Algorithmic discrimination" does not include: (I) the offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of: (A) the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law; or (B) expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or (II) an act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in Title II of the federal Civil Rights Act of 1964, 42 U.S.C. § 2000a(e), as amended.

  16. Excluded technology
     From SB24-205: (b) "High-risk artificial intelligence system" does not include: (I) an artificial intelligence system if the artificial intelligence system is intended to (A) perform a narrow procedural task; or (B) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or (II) the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision: (A) anti-fraud technology that does not use facial recognition technology; (B) anti-malware; (C) anti-virus; (D) artificial intelligence-enabled video games; (E) calculators; (F) cybersecurity; (G) databases; (H) data storage; (I) firewall; (J) internet domain registration; (K) internet website loading; (L) networking; (M) spam- and robocall-filtering; (N) spell-checking; (O) spreadsheets; (P) web caching; (Q) web hosting or any similar technology; or (R) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.

  17. Option: Use-case approach targeting high-risk systems
     1.1. Defining the system
     • High-risk automated decision making system: technology that is used in consequential decision making (SB1103)
       - Exclusions listed in HB747 & SB24-205
       - Includes systems that impact rights or safety (SB818)
     • Consequential decision: using the lists in HB747, SB24-205, and/or SB1103
     • Rights-impacting: using the definition in SB818
     • Safety-impacting: using the definition in SB818
     • Algorithmic discrimination: using the definitions from SB487 and SB24-205
       - Potential to include the use of proxies, e.g. zip code as a proxy for race
     • Disparate impact: using the same characteristics listed in the algorithmic discrimination definition and/or as specified in SB487
