AI in Canadian Criminal Law: A Lifecycle Analysis
The Law Commission of Ontario (LCO) is at the forefront of assessing the impacts of AI on civil/administrative and criminal law. While Canada has been slower than other jurisdictions to adopt AI tools, concerns about access to justice, due process, and Charter rights are already being raised. The LCO has identified a range of issues around the use of AI in criminal law, despite the absence of specific regulatory structures. Meanwhile, institutions such as the Toronto Police Services Board and the RCMP are taking steps toward using AI technologies in law enforcement.
Presentation Transcript
AI in Canadian Criminal Law: A Lifecycle Analysis
Ryan Fritsch, Legal Counsel, Law Commission of Ontario
June 2022
https://www.lco-cdo.org/crimai | rfritsch@lco-cdo.org
Background: The LCO's Law Reform Work on AI
The LCO is taking a forward-looking approach to assessing the distinct impacts of AI on both civil/administrative law and criminal law. Each sector shares similar fundamental concerns with AI but faces very different law reform considerations in relation to access to justice, due process, legal and regulatory frameworks, and operational and policy considerations.
The LCO is an established leader on these issues in Ontario. We have:
- Published a series of papers: The Rise and Fall of AI in American Justice (November 2020); Regulating AI: Critical Issues and Choices (April 2021); a chapter in the book Litigating AI (May 2021); AI Case Study: Probabilistic Genotyping DNA Tools Used in Canadian Courts (June 2021); and Comparing European and Canadian AI Regulation (November 2021)
- Facilitated two four-part workshop series on AI in government with the Ontario Digital Service: Legal Issues and Government AI Development (March 2021 and 2022)
- Convened Canada's first Roundtable on Artificial Intelligence in Criminal Law (March 2019) with over 40 justice sector and civil society participants
- Delivered numerous presentations on AI to the judiciary, administrative bodies, legal aid groups, and academic fora
AI Issues the LCO has Identified in Criminal Law
Canada has been slower to adopt AI tools than other jurisdictions, but there's no reason to think we will be any different. In other jurisdictions, AI is already being used to guide predictive policing, generate evidence, conduct risk assessments, engage in population profiling and automated surveillance, determine sentences, review disclosure, draft submissions, and analyze case law to predict outcomes and identify litigation strategies.
AI accordingly raises significant concerns for access to justice, due process, procedural fairness, and Charter rights, areas in which criminal law sets the highest standards for admissibility, reliability, explainability, transparency, and protection against discrimination and profiling.
AI Issues the LCO has Identified in Criminal Law
Notwithstanding these trends, there is at present no regulatory or guideline structure specifically governing the use of AI in criminal law, either federally or provincially.
Concurrently, individual institutions are beginning to act. For instance, the Toronto Police Services Board adopted a policy on the Use of Artificial Intelligence Technology (February 2022), while the RCMP announced the adoption of new AI governance technology (November 2021) and wants to use AI to decrypt data obtained in investigations (November 2021). Canadian courts are also actively hearing cases involving AI-mediated evidence, as discussed in the LCO's case study of probabilistic genotyping.
The LCO's Criminal AI Lifecycle Project
The LCO's Criminal AI Lifecycle Project proposes to chart the impact of AI at different stages of the criminal investigation and prosecution process. The goal is to 1) systematically identify the expected impacts of AI on the Canadian criminal justice system, and 2) identify a range of potential law and policy reform opportunities to guide these developments.
The lifecycle consists of five stages:
1) Police investigations & community diversion
2) Crown consideration of charges
3) First appearance, risk assessment, bail and sentencing
4) Pre-trial, trial, and appeal
5) Systemic oversight mechanisms
This will be the first study of its kind in Canada.
[Chart: "Lifecycle Stages: Artificial Intelligence through the Criminal Justice Process." Five columns, one per lifecycle stage: 1) Police Investigations & Community Diversions; 2) Crown Consideration of Charges; 3) First appearance, risk assessment, bail and sentencing; 4) Pre-Trial, Trial, & Appeal; 5) Systemic Oversight Mechanisms. The issues listed in each column are reproduced on the corresponding stage slides that follow.]
Stage 1: Police Investigations & Community Diversions
Key Issues & Questions
- Not everyone feels safer with the increased power that law enforcement gains from these technologies
- Communities most likely to be subject to these technologies may not have any input into their purpose, adoption, use, and ongoing monitoring and oversight
- Existing legal frameworks (privacy law, the Charter of Rights, human rights, data governance and practices, procurement, judicial authorizations in investigations, police accountability and discipline) may have blind spots in relation to AI, such as reliance on third-party AI-generated evidence
- Ability of police to effectively explain the use and workings of AI technologies to the Crown and at trial
- Can AI be used to improve police performance, oversight, and transparency, and thus increase public trust and confidence?
- Use of AI in community risk assessments and diversion agreements may increase the potential for coercion, surveillance, and a lack of access to justice
Police Investigations & Community Diversions
- Police AI policies, including governance, procurement, risk assessment, and reliance on third-party AI information
- Community input and oversight of AI practices
- Specific technologies, including decryption, predictive policing, facial recognition, automated surveillance, DNA analysis, and social media surveillance
- Use of warrants and other judicial authorizations in investigations related to AI technologies
- Anticipating Charter rights, human rights, privacy and data use, admissibility of evidence, etc.
- AI-enabled community risk assessment and diversion agreements leading to increased coercion, surveillance, and limited access to justice
Stage 2: Crown Consideration of Charges
Key Issues & Questions
- A range of AI tools conduct risk assessments for bail, sentencing, recidivism, and parole, as well as community assessments for risk of homelessness, child welfare investigations, mental health, and other risks
- Crown counsel may rely on such tools to interpret issues including likelihood of conviction, community safety, diversion, conditions, parole eligibility, etc.
- Crowns may also interpret AI-generated evidence for issues like admissibility, reliability, explainability, integrity of the investigative process, and other Charter rights
- Crowns may play an important gatekeeping function in setting norms and practices related to AI, but may not have a comprehensive framework for doing so
- Consistency is also a concern given that practices may vary across the roughly 68 criminal courts in Ontario
Crown Consideration of Charges
- Interpretation of AI-generated evidence and investigations
- Reliance on community and criminal risk assessment tools (diversion, bail, sentencing)
- Identification of Charter, human, and due process rights in relation to AI systems
- Consistent policies and practices in courts across the province
Stage 3: First Appearance, Risk Assessment, Bail and Sentencing
Key Issues & Questions
- Risk assessment tools may recommend whether bail ought to be allowed, with an impact on conditions, restrictions on liberties, and sureties
- Algorithmic predictions are generalized predictions with an unclear relationship to legal rights and concepts, including "reasonable grounds", hearsay, and habeas corpus
- There is inconsistent assessment, auditing, and validation of risk assessment tools for issues such as bias, discrimination, consideration of social determinants, and consistent implementation and use
- Concerns about technological deference and the technological competency of court officers to critically assess (rather than merely defer to) such tools
- There is unclear allocation of adequate resources to contest automated tools, such as supports for duty counsel and systemic challenges
- Unclear relationship between AI and community standards and values
First appearance, risk assessment, bail and sentencing
- Reliance on community and criminal risk assessment tools (diversion, bail, sentencing)
- Due process rights to transparency, explainability, fairness
- Role and resources of duty counsel in relation to AI assessments
- Bias, discrimination, technological deference and increased risk of over-representation, arbitrary detention, deprivation of liberty
- Procurement and deployment policies and practices
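To make concrete why "generalized predictions" sit uneasily beside individualized legal standards, the following is a deliberately simplified sketch of how many actuarial risk tools operate: a weighted checklist of group-level factors is converted into a score band. All factor names, weights, and cut-offs here are invented for illustration; no actual tool is depicted.

```python
# Toy actuarial risk score (invented factors and weights, for illustration only).
# The output reflects group-level correlations, not an individualized finding
# about the accused, which is the core tension with legal standards of proof.

WEIGHTS = {
    "prior_convictions": 2,   # group-level correlate, not individual proof
    "age_under_25": 1,
    "unstable_housing": 1,    # a social determinant, raising bias concerns
}

def risk_score(factors: dict) -> str:
    """Sum the weights of the factors flagged present, then band the total."""
    score = sum(WEIGHTS[name] for name, present in factors.items() if present)
    # The cut-offs are policy choices, often set by a vendor rather than a court.
    if score >= 3:
        return "high"
    if score >= 1:
        return "moderate"
    return "low"

print(risk_score({"prior_convictions": True,
                  "age_under_25": False,
                  "unstable_housing": True}))  # "high"
```

Even in this toy form, the issues the slide lists are visible: the weights encode historical data (bias), the cut-offs are unexplained policy choices (transparency), and a court officer shown only the word "high" has nothing concrete to contest (technological deference).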
Stage 4: Pre-trial, Trial, and Appeal
Key Issues & Questions
- How should AI-generated evidence, or AI-informed decisions of people, be judged admissible and reliable, and how can they be challenged and cross-examined?
- What is the role of an expert witness, and what is a justiciable threshold for AI explainability, reasons for recommendations, standards for performance audits, etc.?
- How will courts balance intellectual property protections against due process rights to transparency and disclosure of the case against the accused?
- What might qualify as fresh evidence on appeal for machine learning technology that may be constantly adapting and changing?
- What resources need to be available to ensure adequate representation and resources for defense counsel and systemic & Charter challenges?
- How will courts strike a balance between Jordan rights to a timely hearing and technological expediency?
Pre-Trial, Trial, & Appeal
- Charter and due process rights to AI-related notice, transparency, explainability, fairness
- Adequate representation, resources for defense counsel and systemic & Charter challenges
- Role of expert witnesses
- Amplification of existing concerns with plea bargains
- Technological competence of court officers
- Fresh evidence on appeal
- Balancing disclosure and intellectual property rights
- Balancing Jordan rights and technological expediency
Stage 5: Systemic Oversight Mechanisms
Key Issues & Questions
- Wrongful convictions and an Independent Case Review Commission
- Role of technology in over-representation
- Role of AI for good, for instance, assessment of systemic biases in the justice sector
- Technology review and assessment
- Adequacy of existing oversight law and institutions (privacy, tech assessment, police boards, police reviews and discipline, etc.)
- Systemic funding for test cases, adequate defense, etc.
- Role of professional bodies in regulating the technological competence of court officers
The LCO's Criminal AI Lifecycle Project
At each of these stages the project would ask:
- How AI-informed investigations and prosecutions align with or diverge from major and leading criminal case law precedents
- How AI-informed prosecutions interpret, operationalize, or confound the fundamental requirements of due process and Charter rights and freedoms
- What the potential regulatory gaps are, and what options exist to address them
This work will also have implications for civil litigation and provincial jurisdictions: Provincial Offences Act prosecutions; prosecutorial guidelines; AI-generated or AI-informed evidence, with lessons for the Evidence Act; and opportunities to modernize the Courts of Justice Act, the Statutory Powers Procedure Act, and tribunal powers on production of evidence.
The LCO's Criminal AI Lifecycle Project: Timeline
- The LCO has completed the process of retaining 11 expert co-authors to draft a paper for each of the five stages in the lifecycle review. Work on the drafts will commence in May; the timeline to complete first drafts is Fall 2022
- Working groups will support and provide input to the co-authors
- LCO staff and summer students will provide ongoing research and writing support to the authors and working groups, and will share a substantial body of background research
- Consistent with other LCO projects, an Advisory Committee will oversee the project and will ideally include representation from the defense bar, Crown, legal aid, government ministries and agencies, civil society groups, police, the judiciary, technologists, and academics
- A public consultation process will follow publication of the papers
Project Authors
Police Investigations
- Lynda Morgan, Defense Counsel, Addario Law Group, and Co-Chair, Osgoode Hall Annual TechCrime Program
- Dubi Kanengisser, Policy Advisor, Toronto Police Services Board
Crown Consideration of Charges
- Alpha Chan, Detective and CISO, Toronto Police Service
- Mabel Lai, Crown Law Office Criminal, Ministry of the Attorney General
Risk Assessments in Bail, Sentencing, and Community Diversion
- Gideon Christian, Professor of Law, University of Calgary
- Dina Zalkind, Criminal Policy Counsel & Criminal Duty Counsel, Legal Aid Ontario
Trials and Appeals
- Paula Thompson, Strategic Initiatives, Ministry of the Attorney General
- Eric Neubauer, Defense Counsel, Neubauer Law, and Co-Chair, Criminal Lawyers Association Technology Committee
Systemic Oversight Mechanisms
- Brenda McPhail, Director, Privacy, Technology & Surveillance Program, Canadian Civil Liberties Association
- Jagtaran Singh, Legal Counsel, Ontario Human Rights Commission
- Marcus Pratt, Director of Policy, Legal Aid Ontario
Project Advisory Group
- Marcus Pratt, Legal Aid Ontario
- Paula Thompson, Strategic Initiatives, MAG
- Dina Zalkind, Legal Aid Ontario
- Diana Grech, Strategic Analytics Unit, MAG
- Rosemarie Juginovic, Ontario Superior Court of Justice
- Rosanna Giancristiano, Director, Court Operations, MAG
- Gerald Chan, Stockwoods LLP Barristers
- Michelina Longo, Director, External Relations, SolGen
- Lynda Morgan, Addario Law Group
- Jessica Mahon, Policing Standards Section, SolGen
- Eric Neubauer, Neubauer Law
- Michael Swinburne, Senior Policy Advisor, CHRC
- Dubi Kanengisser, Toronto Police Services Board
- Jagtaran Singh, Legal Counsel, OHRC
- Alpha Chan, Toronto Police Service
- Prof. David Murakami Wood, Criminology, UOttawa
- Brenda McPhail, Canadian Civil Liberties Association
- Prof. Gideon Christian, Faculty of Law, UCalgary
- Jane Mallen, MAG & LCO Board of Governors
- Daniel Konikoff, Criminology & Sociolegal Studies, UToronto
- Mabel Lai, Crown Law Criminal, MAG
Appendix: Early Signals: AI in Canadian Criminal Law
- s. 9 arbitrary detention challenges to police interactions in the community, e.g. where predictive policing algorithms are in use akin to carding and profiling?
- s. 7 challenges to automated decisions that deprive individuals of life, liberty, or security of the person?
- s. 7 / s. 11(d) challenges to non-disclosure of proprietary technology needed to make full answer and defence?
- s. 12 cruel and unusual punishment challenges to algorithmic risk prediction tools in sentencing?
- s. 15 equality and discrimination challenges to predictive policing that targets socio-economically disadvantaged and/or racialized communities?
- s. 15 challenges to biased AI?
- s. 24(1) and 24(2) remedies?
Appendix: Early Signals: AI in Canadian Criminal Law
- Risk assessment: risk assessment tools may be invalid where they don't take into account identity and historical discrimination (Ewert v Canada, 2018 SCC 30)
- Increased and more pervasive surveillance: expectation of privacy in public places? (R v Spencer, 2014 SCC 43; R v Wise, [1992] 1 SCR 527); expectation of privacy where technology enables police to see inside otherwise private spaces? (R v Gomboc, 2010 SCC 55; R v Wong, [1990] 3 SCR 36)
- Digital records of online communications and movements, and GPS tracker records of physical movements (R v Mills, 2019 SCC 22; R v Marakah, 2017 SCC 59; R v Spencer, 2014 SCC 43)
- Mass aggregation and analysis of metadata: public social media posts, online shopping, GPS locations, information from IoT-connected home electronics, internet-connected medical devices. Do we have an expectation of privacy in aggregated bits of information that on their own do not reveal a biographical core of information? (mosaic theory of privacy: R v Spencer). Do we have standing to bring s. 8 challenges to aggregated data? (R v Rogers, 2016 ONSC 70)
- AI-generated DNA evidence: two cases of probabilistic genotyping (2021, not yet reported)
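The mosaic-theory concern above can be shown with a toy example: each data point alone is shared by many people, but intersecting a few of them can single out one individual. All records and attributes below are fictional, invented purely to illustrate the aggregation effect.

```python
# Toy mosaic-theory illustration: individually innocuous attributes,
# when aggregated, can identify a single person. Fictional data only.

records = [
    {"name": "A", "postal_prefix": "M5V", "gym_checkin": "06:00", "ev_charger": True},
    {"name": "B", "postal_prefix": "M5V", "gym_checkin": "18:00", "ev_charger": True},
    {"name": "C", "postal_prefix": "M4C", "gym_checkin": "06:00", "ev_charger": True},
    {"name": "D", "postal_prefix": "M5V", "gym_checkin": "06:00", "ev_charger": False},
]

def match(records, **attrs):
    """Return the names of everyone whose record matches all given attributes."""
    return [r["name"] for r in records
            if all(r[key] == value for key, value in attrs.items())]

# Each single attribute matches several people...
print(match(records, postal_prefix="M5V"))            # ['A', 'B', 'D']
# ...but the aggregate of three attributes matches exactly one.
print(match(records, postal_prefix="M5V",
            gym_checkin="06:00", ev_charger=True))    # ['A']
```

No single field here reveals a "biographical core of information", yet the intersection does, which is exactly the question the mosaic theory poses for s. 8 analysis.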
Appendix: Case Study on AI-Generated Evidence: Probabilistic Genotyping
- The LCO's recent release is a commissioned paper authored by Jill Presser and Kate Robertson, AI Case Study: Probabilistic Genotyping DNA Tools in Canadian Criminal Courts (May 2021)
- Probabilistic genotyping (PG) is a complex form of AI-based DNA evidence. It uses machine learning to interpret fragmentary samples of DNA that cannot be matched to a specific individual. Instead, PG presents a likelihood ratio that a DNA sample could belong to an accused, based on general DNA characteristics that can be measured
- PG evidence has been subject to substantial criticism in the US, including by the Obama White House's President's Council of Advisors on Science and Technology. Nonetheless, the technology is used regularly in the USA
- PG evidence is relatively new and untested in Canada. There have been only two, very recent, cases in which PG evidence has been of central importance to a verdict and conviction
- Our expectation is that these early examples of PG will raise significant questions in the profession, with lessons that can be applied to other forms of AI-generated evidence
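The likelihood ratio PG tools report is a comparison of two conditional probabilities, not a probability of guilt. The sketch below shows the bare arithmetic with made-up numbers; real PG software models allele peak heights, drop-in/drop-out, and mixture proportions with far more complex statistics, and nothing here reflects any actual tool's method.

```python
# Toy likelihood-ratio arithmetic (illustrative numbers only).

def likelihood_ratio(p_given_accused: float, p_given_unknown: float) -> float:
    """LR = P(evidence | accused is a contributor)
           / P(evidence | contributors are unknown persons)."""
    return p_given_accused / p_given_unknown

lr = likelihood_ratio(p_given_accused=1e-3, p_given_unknown=1e-9)
print(f"LR = {lr:.0f}")
# The evidence is about 1,000,000x more probable under the prosecution
# hypothesis. Note this is NOT the probability that the accused is the
# source: reading it that way is the well-known "prosecutor's fallacy".
```

Framing the output this way highlights why cross-examination matters: both input probabilities depend on modelling choices inside the software, and a court that cannot inspect them cannot test the ratio.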
AI in Canadian Criminal Law: A Lifecycle Analysis Ryan Fritsch, Legal Counsel rfritsch@lco-cdo.org https://www.lco-cdo.org/crimai https://www.lco-cdo.org/about