
Submission to the Toronto Police Services Board’s Use of New Artificial Intelligence Technologies Policy

Below is an excerpt of the joint submission by the Women’s Legal Education and Action Fund (LEAF) and the Citizen Lab to the Toronto Police Services Board. You can find the full letter here.

The Citizen Lab has conducted in-depth analysis of the human rights impacts of emerging technologies in the areas of predictive policing and algorithmic surveillance. Its findings and law reform recommendations appear in a 2020 report by the Citizen Lab and the International Human Rights Program, titled To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. Read the full report and our explanatory guide, which provides a summary of the research findings as well as questions and answers from the research team. We also provide a fact sheet of our key investigative findings here.

December 15, 2021

Dear Dr. Kanengisser:

Re: Submission to the Toronto Police Services Board’s Use of New Artificial Intelligence Technologies Policy

We write to you as a group of experts1 in the legal regulation of artificial intelligence (AI), technology-facilitated violence, equality, and the use of AI systems by law enforcement in Canada. We have experience working within academia and legal practice, and are affiliated with LEAF and the Citizen Lab, which support this letter.

Introduction

We commend the Toronto Police Services Board (TPSB) for engaging in this public consultation and welcome this opportunity to submit comments on the TPSB’s Use of New Artificial Intelligence Technologies Policy (AI Policy). In this submission, we urge the TPSB to centre precaution, substantive equality, human rights, privacy protections, transparency, and accountability in its policy on the use of AI technology by the Toronto Police Service (TPS). Further, we implore the Board to continue to seek out the guidance and expertise of AI and technology scholars and advocates; equality and human rights experts; affected communities and their members, including historically marginalized communities; and other relevant stakeholders when developing and implementing policies related to the adoption and use of AI by the TPS, today and into the future. Finally, we recommend that the TPSB place an immediate moratorium on law enforcement use of algorithmic policing technologies that do not meet minimum prerequisite conditions of reliability, necessity, and proportionality.2 We appreciate and recognize that our comments will be shared publicly.

We have reviewed the draft policy and provide comments and recommendations focused on the following key observations:

  1. Police use of AI technologies must not be seen as inevitable
  2. A commitment to protecting equality and human rights must be integrated more thoroughly throughout the TPSB policy and its AI analysis procedures
  3. Inequality is embedded in AI as a system in ways that cannot be mitigated through a policy only dealing with use
  4. Having more accurate AI systems does not mitigate inequality
  5. The TPSB must not engage in unnecessary or disproportionate mass collection and analysis of data
  6. TPSB’s AI policy should provide concrete guidance on the proactive identification and classification of risk
  7. TPSB’s AI policy must ensure expertise in independent vetting, risk analysis, and human rights impact analysis
  8. The TPSB should be aware of assessment challenges that can arise when an AI system is developed by a private enterprise
  9. The TPSB must apply the draft policy to all existing AI technologies that are used by, or presently accessible to, the Toronto Police Service

Recommendations

Recommendation 1:

We recommend that any policies used by the TPSB to govern the TPS’ procurement or use of AI-based technologies include a requirement that all AI systems meet the minimum prerequisites of reliability, necessity, and proportionality.

Recommendation 2:

We recommend that, whenever a proposed or currently used AI system cannot meet the prerequisites of reliability, necessity, and proportionality, the technology or system be either banned or severely limited in its uses.

Recommendation 3:

We recommend that there be clear language in the policy that allows for the outright rejection of certain AI systems and requires reversing course on a technology already in use if it is later found to violate the prerequisites of reliability, necessity, and proportionality.

Recommendation 4:

We recommend that under s. 5(g) of the policy, reporting to the TPSB include the identification of potential individual and systemic human rights violations and harms.

Recommendation 5:

We recommend that under s. 5(h) of the policy, reporting to the TPSB include the identification of any potential human rights violations that could be caused by an AI system on an individual level as well as the identification of any potential systemic harms that could be caused or replicated by the use of an AI system (e.g. crime prediction systems).

Recommendation 6:

We recommend that, when conducting the risk analysis for AI categorization, the policy explicitly state that privacy, equality, and human rights will be considered on both an individual and a systemic level. Such language might be included under s. 1(c)(i)(4).

Recommendation 7:

We recommend that the word “socioeconomic” be added after the word “gender” in paragraph (h) on page 7.

Recommendation 8:

We recommend that the TPSB explicitly acknowledge that limits on and oversight of AI system use are not themselves enough to ensure that TPS use of such systems will not engage constitutional and human rights. This recommendation is accompanied by the recommendations concerning vetting requirements, discussed below in sections 6 and 7, as one mechanism for addressing some of the concerns raised here.

Recommendation 9:

We recommend that the policy include a required assessment of, and justification for, TPS collection and use of data to be processed through or otherwise utilized by algorithmic policing systems.

Recommendation 10:

We recommend that the policy recognize that the accuracy of an AI system does not mean that it will necessarily be appropriate to use, and further that the policy recognize that equality, human rights, and privacy must always be prioritized.

Recommendation 11:

We recommend that the policy classify as extreme risk those AI technologies that repurpose historic police data sets for algorithmic processing in order to draw inferences that may result in the increased deployment of police resources based on those inferences.

Recommendation 12:

We recommend that s. 1(c)(i)(2) be changed to read “Where the use of the application results in mass surveillance defined as the discriminate or indiscriminate monitoring of a population or a significant component of a population”.

Recommendation 13:

We recommend that the policy identify mass collection of data itself as an extreme risk, distinct from the risk of mass surveillance and monitoring, and that this risk identification explicitly include any mass collection of data that may be characterized as ‘publicly accessible’ on the Internet or in physical public or private spaces.

Recommendation 14:

We recommend that mass data collection be defined in the policy and that there be an assessment requirement for when the TPS is collecting massive amounts of data, including but not limited to scraping content from the internet or purchasing data from data brokers.

Recommendation 15:

We recommend that the TPSB include clearer language within the ‘extreme risk’ category to explicitly limit the mass analysis of data previously collected, or collected without the use of an AI-system, in addition to the limit on the mass collection of data as noted in recommendation 13.

Recommendation 16:

We recommend that the TPSB standardize its defined terms (“moderate” vs. “medium”) so that its terminology is used consistently throughout the policy.

Recommendation 17:

We recommend that the extreme, high, and medium/moderate risk categories be more clearly defined by including factors that require proactive identification of potential risk, rather than solely relying on examples of the types of technologies that might fall into the three categories.

Recommendation 18:

We recommend that the classification of risk be based on potential for harm rather than known or proven harm caused by a technology.

Recommendation 19:

We recommend that the power imbalances between the police and communities, as well as the inability of individuals to opt out of being assessed by AI systems, both be included as factors in the proactive identification of risk of human rights impacts.

Recommendation 20:

We recommend that a requirement to consult with experts be added to the risk analysis process that is contemplated for all new AI technologies. At present, the draft policy only requires (at paragraph 1) consultation in the development of the risk assessment procedures in general.

Recommendation 21:

We recommend that a non-exhaustive list of expertise be recognized in the policy, including: members of historically marginalized communities; members of communities who have experienced, or are at risk of, biased assessments by AI; legal, racial justice, equality, and technology and human rights scholars; public interest technologists; and security researchers.

Recommendation 22:

We recommend that the TPSB provide more direction in this policy with respect to the specific content of the risk assessment process for each AI technology, in line with the recommendations in To Surveil and Predict regarding the proper content of algorithmic impact assessments.

Recommendation 23:

We recommend that the TPSB recognize that all High and Moderate/Medium risk AI technologies will need to be monitored on an ongoing and indefinite basis. As such, we recommend that the one-year limit be removed from paragraphs 5(n) and 10 of the draft policy.

Recommendation 24:

We recommend that the TPSB policy require that the TPS engage external expertise when developing the monitoring and oversight mechanisms associated with specific AI and predictive policing technologies.

Recommendation 25:

We recommend that reviews of AI systems be conducted at least annually, in addition to taking into account all expert advice on what monitoring processes will be required for a particular technology. We note that a five-year gap between reviews is an extraordinarily long time in the realm of AI research. As such, we recommend that the five-year period in paragraph 18 be replaced with a requirement that a review be conducted “at least once every year”.

Recommendation 26:

We recommend that all uses of AI technology incorporate statistical tracking and a formal documentation process for all known errors and incidents. Formalizing documentation requirements is an essential aspect of effective oversight systems that provide not just transparency but accountability as well.

Recommendation 27:

We recommend that the TPSB policy require the imposition of ongoing tracking and monitoring protocols that specifically incorporate monitoring practices attentive to patterns that reflect bias, with the content of that tracking to be developed in consultation with experts.

Recommendation 28:

We recommend that the policy require regular and independent auditing, with the content and frequency of those audits to be developed in consultation with experts.

Recommendation 29:

We recommend that the TPSB publicly release an annual report of all unintended consequences associated with AI systems so as to provide transparency and accountability about the operation of such systems.

Recommendation 30:

We recommend that, whenever the TPS proposes to enter into an agreement to purchase or procure new AI systems, the TPSB require that public interest legal standards and public sector control apply to those commercial purchases, particularly when criminal jeopardy is at stake. Companies must agree in contract to waive trade secret or other protections in pertinent circumstances, which should be well defined with a view to protecting due process and other human rights. Alternatively, the TPS may develop systems in-house, and in doing so “might follow the model of the Saskatchewan Police Predictive Analytics Lab, which is developing its predictive analytics technology in-house and in partnership with academic experts, under a university research ethics protocol.”

Recommendation 31:

We recommend that, when the TPSB is assessing the potential use of an AI system developed by a private company that could impact individual rights, the system be classified as an extreme risk (i.e. one that is prohibited) if the company cannot provide its source code, training data, or other pertinent information about the system’s development or ongoing operations that is needed to comprehensively explain the workings and operation of the system.

Recommendation 32:

We recommend that, if high-risk AI technologies are to be used at all, the TPS develop its own AI systems whenever possible in order to better ensure that the source code and related details of that AI will be publicly available in machine-readable and human-readable forms. This information should be available to the public and to researchers. Such a system must still be subject to the entire review process and requirements of the policy.

Recommendation 33:

We recommend that the TPSB apply any policy applicable to AI technologies to all AI technologies in the possession of, or accessible to, the Toronto Police Service. This must include the TPS’ use of facial recognition technology, the TPS’ collaborations with Environics Analytics, the TPS’ access to IBM’s Cognos Analytics and SPSS (Statistical Package for the Social Sciences) software, and other similar technologies that may be unknown to the public.

  1. Kristen Thomasen (co-author and signatory; Assistant Professor, Peter A. Allard School of Law, University of British Columbia); Suzie Dunn (co-author and signatory; Member of LEAF’s Technology-Facilitated Violence Advisory Committee; Assistant Professor, Dalhousie University’s Schulich School of Law); Kate Robertson (co-author and signatory; Research Fellow, Citizen Lab; criminal and regulatory litigator, Markson Law); Pam Hrick (reviewer and signatory; Executive Director & General Counsel, Women’s Legal Education and Action Fund); Cynthia Khoo (reviewer and signatory; Research Fellow, Citizen Lab); Rosel Kim (reviewer and signatory; Staff Lawyer, Women’s Legal Education and Action Fund); Ngozi Okidegbe (reviewer and signatory; Member of LEAF’s Technology-Facilitated Violence Advisory Committee; Assistant Professor of Law at Cardozo School of Law); and Christopher Parsons (reviewer and signatory; Senior Research Associate, Citizen Lab).
  2. Robertson, Khoo, & Song, To Surveil and Predict, supra at p. 5, 150-151, 154-155.