In early October 2022, the House of Commons Standing Committee on Access to Information, Privacy and Ethics (“ETHI”) released the final report from its study on the “Use and Impact of Facial Recognition Technology”: Facial Recognition Technology and the Growing Power of Artificial Intelligence. The report concluded what prior Citizen Lab research has indicated, which is that “Canada’s current legislative framework does not adequately regulate FRT [facial recognition technology] and AI [artificial intelligence]. Without an appropriate framework, FRT and other AI tools could cause irreparable harm to some individuals.” The report includes nineteen recommendations to the federal government to address this issue.

Many of ETHI’s key findings and recommendations align with research and recommendations provided in previous Citizen Lab reports and submissions concerning algorithmic policing technologies and similar government systems. These include, for example, To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada and Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System—both published in collaboration with the International Human Rights Program (IHRP) at the University of Toronto Faculty of Law—and a joint submission with the Women’s Legal Education and Action Fund (LEAF) to a public consultation by the Toronto Police Services Board regarding its proposed Use of Artificial Intelligence Technology Policy.

Citizen Lab fellow Cynthia Khoo was invited to appear before the Standing Committee to discuss the Citizen Lab’s research and legal reform recommendations regarding algorithmic policing and facial recognition technologies, and her testimony is also reflected in the final report (full statement available here).

This blog post will highlight some of the key findings and recommendations that we were encouraged to see emerge from the ETHI report, along with some concerns:

Moratorium on (Federal) Police Use of Facial Recognition Technologies

Recommendation 18 of the report advises the Government of Canada to “impose a federal moratorium on the use of facial recognition technology by (Federal) policing services and Canadian industries unless implemented in confirmed consultation with the Office of the Privacy Commissioner or through judicial authorization”. This echoes the highest-priority recommendation from the Citizen Lab’s To Surveil and Predict report, which calls for a moratorium on law enforcement use of algorithmic policing technologies, until and unless they meet preconditions of necessity, proportionality, and reliability; and a further recommendation that police use of algorithmic policing technologies be subject to prior judicial authorization.

ETHI’s recommended moratorium is unfortunately narrow, being limited to only facial recognition technologies (in fairness, the main focus of the study), and especially being limited to only federal policing services. Our research and that of others, such as investigative journalists, have demonstrated that use of and interest in facial recognition and other algorithmic policing technologies is widespread among provincial, regional, and municipal police agencies. Any national moratorium that does not include these levels of law enforcement is likely to be of limited effect in practice, even before considering whether collaboration between the RCMP and local police forces would render it trivial to circumvent. While it is understandable that ETHI recommendations to the federal government would primarily target federal policing, we hope that provincial and municipal governments will see fit to adopt this recommendation themselves and enact similar moratoria as well.

Acknowledging the Problem of Public-Private Surveillance Partnerships

What may be even more significant about the recommended moratorium is the inclusion of “Canadian industries”—one of the few examples of drawing a line on the use of facial recognition technologies by private sector entities, in addition to state agencies. This inclusion, along with Recommendation 1 (requiring any government institution under the Privacy Act “to ensure that the practices of any third party from which it obtains personal information are lawful”), recognizes concerns regarding “public-private surveillance partnerships”: arrangements where private businesses such as technology vendors, social media companies, or data brokers systematically and voluntarily share or sell personal data to law enforcement agencies. Such arrangements can invisibly erode constitutional privacy rights by enabling law enforcement to circumvent section 8 protections under the Canadian Charter of Rights and Freedoms, effectively outsourcing warrantless digital surveillance to private companies, which are not subject to the same constitutional obligations when collecting personal data.

The Citizen Lab’s and IHRP’s To Surveil and Predict report discusses the problems with such data-sharing arrangements, which were raised with ETHI, alongside similar concerns presented by many other technology, human rights, and civil liberties experts and advocates who appeared before the Committee. While the ETHI report details these concerns in a dedicated subsection, “Procurement and Private-Public Partnerships,” its recommendations focus more on improving transparency rather than mitigating or preventing their repercussions. We hope this signals only the start of more robust regulation on this front.

Increasing Public Transparency and Accountability of Algorithmic Policing Technologies

The ETHI report made several recommendations to increase public transparency around police use of facial recognition and other algorithmic policing technologies. While this is encouraging, as the Citizen Lab has strongly recommended increased transparency in the use of these technologies, we are troubled that many of the transparency recommendations are limited by being “subject to national security concerns”, including:

  • that the Canadian government “create a public AI registry in which all algorithmic tools used by any entity operating in Canada are listed” (Recommendation 5);
  • requiring “government institutions that acquire facial recognition technology or other algorithmic tools, including free trials, to make that acquisition public” (Recommendation 6); and
  • that the government “ensure the full and transparent disclosure of racial, age or other unconscious biases that may exist in facial recognition technology used by the government” as soon as any such bias is found (Recommendation 9).

Setting aside the national security caveat, these transparency requirements are promising in their breadth of coverage — including not only facial recognition technology but “all algorithmic tools used by any entity” in Canada and “other algorithmic tools” procured by the government. This too is in line with the To Surveil and Predict report, which calls for a moratorium on, and regulation of, all algorithmic policing technologies, not just facial recognition. However, law enforcement and security agencies have been exceptionally opaque in the national security context, to the extent of being repeatedly reprimanded by federal courts for violating their duty of candour. Each national security exception thus appears to be a disappointing loophole that cuts the legs out from under what could otherwise be significant public transparency reforms.

Recognition of Human Rights Implications and Impacts on Marginalized Communities

Last, but going to the heart of the matter, the Committee acknowledged throughout its report the grave human rights implications of facial recognition technology and its harmful impacts on historically marginalized communities in particular. ETHI’s recommendations include several addressing the right to privacy, including critical prohibitions on activities such as using facial recognition and other algorithmic technologies for mass surveillance (Recommendation 11) and capturing people’s images from public spaces to populate “facial recognition technology databases or artificial intelligence algorithms” (Recommendation 17).

Capturing people’s images from public spaces, in particular, would dismantle the right to privacy through anonymity in public. Such a privacy violation is disproportionately dangerous for those who are historically and systemically marginalized, such as women and gender minorities; Black, Indigenous, and other racialized individuals; members of the LGBTQ+ community; people with disabilities; and people who live in poverty. This, along with the additional far-reaching impacts of facial recognition and algorithmic policing technologies on the right to equality and societal equity, makes it even more important that the Government of Canada follow ETHI’s suggestion that public sector use of facial recognition include consultation with marginalized groups alongside “immediate and advance public notice and public comment” (Recommendation 10).

Given ETHI’s findings and recommendations, we hope this report represents only the beginning of meaningful efforts by the Canadian government to stop and prevent unconstitutional and harmful uses of facial recognition and other algorithmic policing technologies across the country.

Read the ETHI report here:

Read the Citizen Lab and IHRP reports here: