Executive Summary
This report examines algorithmic technologies that are designed for use in criminal law enforcement systems. Algorithmic policing is an area of technological development that, in theory, is designed to enable law enforcement agencies either to automate surveillance or to draw inferences through mass data processing in the hope of predicting potential criminal activity. The latter type of technology and the policing methods built upon it are often referred to as predictive policing. Algorithmic policing methods often rely on the aggregation and analysis of massive volumes of data, such as personal information, communications data, biometric data, geolocation data, images, social media content, and policing data (such as statistics based on police arrests or criminal records).
In order to guide public dialogue and the development of law and policy in Canada, the report focuses on the human rights and constitutional law implications of the use of algorithmic policing technologies by law enforcement authorities. This report first outlines the methodology and scope of analysis in Part 1. In Part 2, the report provides critical social and historical context regarding the criminal justice system in Canada, including issues regarding systemic discrimination in the criminal justice system and bias in policing data sets. This social and historical context is essential for understanding how algorithmic policing technologies present heightened risks of harm to civil liberties, and related concerns under human rights and constitutional law, for certain individuals and communities. The use of police-generated data sets that are affected by systemic bias may create negative feedback loops, in which individuals from historically disadvantaged communities are labelled by an algorithm as posing a heightened risk because of historic bias towards those communities. Part 3 of the report then provides a few conceptual building blocks to situate the discussion surrounding algorithmic policing technology, and it outlines how algorithmic policing technology differs from traditional policing methods.
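The feedback-loop dynamic described above can be illustrated with a minimal simulation. The sketch below is a toy under stated assumptions: the area names, incident rates, detection probability, and patrol counts are all hypothetical, and it does not model any system or police service discussed in this report. It shows how an allocation rule that follows past recorded incidents can entrench an initial recording bias even when the underlying incident rates in two areas are identical.

```python
# Illustrative sketch only: all names and numbers are hypothetical
# assumptions, not data from any police service discussed in this report.
import random

random.seed(0)

TRUE_RATE = 0.05      # identical underlying incident rate in both areas
DETECT_PROB = 0.5     # chance that a patrol-observed incident is recorded
POPULATION = 1_000    # people in each area
TOTAL_PATROLS = 100   # patrols allocated across areas each step

# Historic bias: area A starts with more recorded incidents than area B,
# even though the true incident rates above are identical.
recorded = {"A": 60, "B": 40}

for step in range(10):
    snapshot = dict(recorded)
    total = sum(snapshot.values())
    for area, count in snapshot.items():
        # 'Prediction' step: patrols are allocated in proportion to
        # past recorded incidents in each area.
        patrols = TOTAL_PATROLS * count / total
        # More patrols in an area means more incidents are observed there.
        observed = int(POPULATION * TRUE_RATE * patrols / TOTAL_PATROLS)
        # Some observed incidents become new records, which feed the
        # allocation in the next round.
        recorded[area] += sum(random.random() < DETECT_PROB
                              for _ in range(observed))
    print(f"step {step}: {recorded}")

# Area A's recorded count pulls further ahead in absolute terms each step,
# and the allocation never converges toward the (equal) true rates: the
# initial bias re-emerges as apparently objective predictions.
```

Because each round's records are generated partly by where patrols were sent, the data such an algorithm learns from reflects past police activity as much as underlying criminal activity.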
In Part 4, the report sets out and summarizes findings on how law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods. The report combines original research with existing research to provide a comprehensive overview of what is known to date about the algorithmic policing landscape in Canada, classifying algorithmic policing technologies into the following three categories:
- Location-focused algorithmic policing technologies are a subset of what has generally been known as ‘predictive policing’ technologies. This category of algorithmic policing technologies purports to identify where and when potential criminal activity might occur. The algorithms driving these systems examine correlations in historical police data to attempt to make predictions about a given set of geographic areas (a minimal illustrative sketch of this kind of scoring follows this list).
- Person-focused algorithmic policing technologies are also a subset of predictive policing technologies. They rely on data analysis to attempt to identify people who are more likely to be involved in potential criminal activity, or to assess an identified person for their purported risk of engaging in criminal activity in the future.
- Algorithmic surveillance policing technologies, as termed in this report, do not inherently include any predictive element and are thus distinguished from the two categories above (location-focused and person-focused algorithmic policing). Rather, algorithmic surveillance technologies provide police services with sophisticated, but general, surveillance and monitoring functions. These technologies automate the systematic collection and processing of data (such as data collected online or images taken from physical outdoor spaces). Some of these technologies (such as facial recognition technology that processes photos from mug-shot databases) may process, in new ways, data that is already stored in police files. For ease of reference, this set of technologies will be referred to simply as algorithmic surveillance technologies throughout the rest of this report. The term should be understood to be confined to the context of policing (thus excluding other forms of algorithmic surveillance technologies that are more closely tied to other contexts, such as tax compliance or surveilling recipients of social welfare).
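To make the first category concrete, the following is a minimal sketch of the kind of scoring logic at the core of many location-focused systems. It is illustrative only: the grid-cell names, incident records, and half-life parameter are hypothetical assumptions, and no specific product discussed in this report is being described. Real systems use far richer data and models, but the basic step, ranking geographic cells by time-weighted counts of past recorded incidents, reflects the correlation-driven prediction described above.

```python
# Toy hotspot scoring over hypothetical (grid_cell, weeks_ago) records.
# Real location-focused systems ingest far richer data; only the core
# scoring idea is shown here.
from collections import Counter

history = [
    ("cell_3", 0), ("cell_3", 0), ("cell_7", 0),
    ("cell_3", 1), ("cell_7", 1), ("cell_1", 2),
    ("cell_3", 2), ("cell_7", 3), ("cell_1", 4),
]

HALF_LIFE = 2.0  # weeks: an incident this old counts half as much as a new one

scores = Counter()
for cell, weeks_ago in history:
    # Exponential time decay weights recent incidents more heavily.
    scores[cell] += 0.5 ** (weeks_ago / HALF_LIFE)

# The highest-scoring cells would be flagged for additional patrols. Any
# bias in which incidents were recorded flows directly into these scores.
for cell, score in scores.most_common():
    print(f"{cell}: {score:.2f}")
```

The person-focused category replaces grid cells with individuals, but it rests on the same move: scoring entities by patterns found in historical police records.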
The primary research findings of this report show that technologies have been procured, developed, or used in Canada in all three categories. For example, at least two agencies, the Vancouver Police Department and the Saskatoon Police Service, have confirmed that they are using or are developing ‘predictive’ algorithmic technologies for the purposes of guiding police action and intervention. Other police services, such as those in Calgary and Toronto, have acquired technologies that include algorithmic policing capabilities or that jurisdictions outside of Canada have leveraged to build predictive policing systems. The Calgary Police Service engages in algorithmic social network analysis, a form of technology that law enforcement may also deploy for person-focused algorithmic policing. Numerous law enforcement agencies across the country also now rely on a range of other algorithmic surveillance technologies (e.g., automated licence plate readers, facial recognition, and social media surveillance algorithms), or they are developing or considering adopting such technologies. This report also uncovers information suggesting that the Ontario Provincial Police and Waterloo Regional Police Service may be unlawfully intercepting private communications in online private chat rooms through reliance on an algorithmic social media surveillance technology known as the ICAC Child On-line Protection System (ICACCOPS). Other police services throughout Canada may also be using or developing additional predictive policing or algorithmic surveillance technologies outside of public awareness. Many of the freedom of information (FOI) requests submitted for this report were met with responses from law enforcement authorities that claimed privilege as justification for non-disclosure; in other cases, law enforcement agencies did not provide any records in response to the submitted FOI request, or demanded exorbitant fees to process the request.
Building on the findings about the current state of algorithmic policing in Canada, Part 5 of the report presents a human rights and constitutional law analysis of the potential use of algorithmic policing technologies. The legal analysis applies established legal principles to these technologies and demonstrates that their use by law enforcement agencies has the potential to violate fundamental human rights and freedoms that are protected under the Canadian Charter of Rights and Freedoms (“the Charter”) and international human rights law. Specifically, the authors analyze the potential impacts of algorithmic policing technologies on the following rights: the right to privacy; the right to freedoms of expression, peaceful assembly, and association; the right to equality and freedom from discrimination; the right to liberty and to be free from arbitrary detention; the right to due process; and the right to a remedy. The major findings of this analysis are presented as follows:
- Implications for the Right to Privacy and the Right to Freedom of Expression, Peaceful Assembly, and Association: The increasing use of algorithmic surveillance technologies in Canada threatens privacy and the fundamental freedoms of expression, peaceful assembly, and association that are protected under the Charter and international human rights law. The advanced capabilities and heightened data requirements of algorithmic policing technologies introduce new threats to privacy and these fundamental freedoms, whether through the repurposing of historic police data, constitutionally questionable data-sharing arrangements, or the algorithmic surveillance of public gatherings and online expression, raising significant risks of violations. The Canadian legal system currently lacks sufficiently clear and robust safeguards to ensure that any use of algorithmic surveillance methods occurs within constitutional boundaries and is subject to necessary regulatory, judicial, and legislative oversight mechanisms. Given the potential damage that the unrestricted use of algorithmic surveillance by law enforcement may cause to fundamental freedoms and a free society, the use of such technology in the absence of oversight, and of compliance with limits defined by necessity and proportionality, is unjustified.
- Implications for the Right to Equality and Freedom from Discrimination: Systemic racism in the Canadian criminal justice system must inform any analysis of algorithmic policing, particularly its impacts on marginalized communities. The seemingly ‘neutral’ application of algorithmic policing tools masks the reality that they can disproportionately impact marginalized communities in a protected category under equality law (i.e., communities based on characteristics such as race, ethnicity, sexual orientation, or disability). The social and historical context of systemic discrimination influences the reliability of data sets that are already held by law enforcement authorities (such as data about arrests and criminal records). Numerous inaccuracies, biases, and other sources of unreliability are present in most of the common sources of police data in Canada. As a result, drawing unbiased and reliable inferences based on historic police data is, in all likelihood, impossible. Extreme caution must be exercised before law enforcement authorities are permitted, if at all, to use algorithmic policing technologies that process mass police data sets. Otherwise, these technologies may exacerbate the already unconstitutional and devastating impact of systemic targeting of marginalized communities.
- Implications for the Right to Liberty and to Freedom from Arbitrary Detention: It is incompatible with constitutional and human rights law to rely on algorithmic forecasting to justify interfering with an individual’s liberty. Algorithmic policing methods, by their nature, tend to produce generalized inferences. Under human rights law and the Charter, loss of liberty (such as detention, arrest, denial of bail, and punishment through sentencing) cannot be justified based on generalized or stereotypical assumptions, such as suspicion based on beliefs about an ethnic group or on the location where an individual was found. Reliance on algorithmic policing technologies to justify interference with liberty may violate Charter rights where the purported grounds for interfering with liberty are based on algorithmic predictions drawn from statistical trends, as opposed to being particularized to a specific individual. Violations may include instances where an individual would not have been detained or arrested but for the presence of an algorithmic prediction based on statistical trends, all other circumstances remaining the same.
In addition to these major findings, the report documents problems that are likely to arise with respect to meaningful access to justice and the rights to due process and remedy, given that effective accountability mechanisms for algorithmic policing technology are often lacking, and in light of the systemic challenges faced by individuals and communities seeking meaningful redress in Canadian courts for rights violations that do not result in charges. The absence of much-needed transparency in the Canadian algorithmic policing landscape animates many of the core recommendations in this report. The authors hope that this report provides insight into the critical need for transparency and accountability regarding what types of technologies are currently in use or under development and how these technologies are being used in practice. With that information clarified, policy- and lawmakers can enable the public and the government to chart an informed path forward.
In response to conclusions drawn from the legal analysis, the report ends with a range of recommendations for governments and law enforcement authorities, with a view to developing law and oversight that would establish necessary limitations on the use of algorithmic policing technologies. Part 6 provides a list of these recommendations, each accompanied by contextual information that explains its purpose and offers potential guidance for implementation. The recommendations are divided into priority recommendations, which must be implemented urgently, and ancillary recommendations, which may not apply where certain algorithmic policing technologies are banned, but which must be implemented if any such technologies are developed or adopted. The following is a condensed summary of those recommendations.
Recommendations
A. Priority recommendations for governments and law enforcement authorities that must be acted upon urgently to mitigate the likelihood of human rights and Charter violations associated with the use of algorithmic policing technology in Canada:
- Governments must place moratoriums on law enforcement agencies’ use of technology that relies on algorithmic processing of historic mass police data sets, pending completion of a comprehensive review through a judicial inquiry, and on use of algorithmic policing technology that does not meet prerequisite conditions of reliability, necessity, and proportionality.
- The federal government should convene a judicial inquiry to conduct a comprehensive review regarding law enforcement agencies’ potential repurposing of historic police data sets for use in algorithmic policing technologies.
- Governments must make reliability, necessity, and proportionality prerequisite conditions for the use of algorithmic policing technologies, and moratoriums should be placed on every algorithmic policing technology that does not meet these established prerequisites.
- Law enforcement agencies must be fully transparent with the public and with privacy commissioners, immediately disclosing whether and what algorithmic policing technologies are currently being used, developed, or procured, to enable democratic dialogue and meaningful accountability and oversight.
- Provincial governments should enact directives regarding the use and procurement of algorithmic policing technologies, including requirements that law enforcement authorities must conduct algorithmic impact assessments prior to the development or use of any algorithmic policing technology; publish annual public reports that disclose details about how algorithmic policing technologies are being used, including information about any associated data, such as sources of training data, potential data biases, and input and output data where applicable; and facilitate and publish independent peer reviews and scientific validation of any such technology prior to use.
- Law enforcement authorities must not have unchecked use of algorithmic policing technologies in public spaces: police services should prohibit reliance on algorithmic predictions to justify interference with individual liberty, and must obtain prior judicial authorization before deploying algorithmic surveillance tools at public gatherings and in online environments.
- Governments and law enforcement authorities must engage external expertise, including from historically marginalized communities that are disproportionately impacted by the criminal justice system: before and when considering, developing, or adopting algorithmic policing technologies; when developing related regulation and oversight mechanisms; as part of completing algorithmic impact assessments; and in monitoring the effects of any algorithmic policing technologies that have been put into use.
B. Ancillary recommendations for law enforcement authorities:
- Law enforcement authorities should enhance police database integrity and management practices, including strengthening the ability of individuals to verify and correct the accuracy of personal information stored in police databases.
- Law enforcement authorities must exercise extreme caution to prevent unconstitutional data-sharing practices with the private sector and other non-police government actors.
- Law enforcement authorities should undertake the following best practices whenever an algorithmic policing technology has been or will be adopted or put into use, with respect to that technology:
  - Implement ongoing tracking mechanisms to monitor the potential for bias in the use of any algorithmic policing technology;
  - Engage external expertise on an ongoing basis, including consulting with communities and individuals who are systemically marginalized by the criminal justice system, about the potential or demonstrated impacts of the algorithmic policing technology on them;
  - Formally document written policies surrounding the use of the algorithmic policing technology; and
  - Adopt audit mechanisms within police services to reinforce best practices and identify areas for improvement over time.
C. Ancillary recommendations for law reform and related measures by federal, provincial, territorial, and municipal governments:
- The federal government should reform the judicial warrant provisions of the Criminal Code to specifically address the use of algorithmic policing technology by law enforcement authorities.
- Federal and provincial legislatures should review and modernize privacy legislation with particular attention to reevaluating current safeguards to account for the advanced capabilities of algorithmic policing technologies and to the retention and destruction of biometric data by law enforcement authorities.
- The federal government should expand the Parliamentary reporting provisions of the Criminal Code, which currently apply only to certain electronic surveillance methods, to specifically address law enforcement authorities’ use of algorithmic policing technology.
- Governments must ensure that privacy and human rights commissioners are empowered, and have sufficient resources, to initiate and conduct investigations into law enforcement agencies’ use of algorithmic policing technology.
D. Ancillary recommendations for government to enable access to justice in relation to the human rights impacts of algorithmic policing technology:
- Governments should make funding available for research to expand the availability of independent expertise.
- Governments must ensure adequate assistance is available for low-income and unrepresented defendants so that they can retain expert witnesses in criminal proceedings.
- Governments and law enforcement agencies must make the source code of algorithmic policing technologies publicly available or, where appropriate, confidentially available to public bodies and independent experts for the purposes of algorithmic impact assessments, pre-procurement review, security testing, auditing, investigations, and judicial proceedings.
Acknowledgements
The Citizen Lab would like to thank the following funders for supporting this research: the John D. and Catherine T. MacArthur Foundation, the Sigrid Rausing Trust, the Ford Foundation, and the Oak Foundation. This research was also supported in part by a grant from the Open Society Foundation.
The International Human Rights Program (IHRP) would like to thank The Law Foundation of Ontario for its generous financial support.