On March 21, 2022, Citizen Lab fellow Cynthia Khoo appeared before the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI) as a witness in the Committee’s study on the use and impact of facial recognition technology. She was invited to provide testimony on the potential harms and human rights implications of facial recognition, including recommendations for how the Government of Canada should regulate such technology. Below is a written copy of her formal remarks.

A recording of her full testimony is available here, including comments responding to the Committee’s questions, and a transcription is available here.

Update (October 4, 2022): The Standing Committee subsequently released its report from this study, Facial Recognition Technology and the Growing Power of Artificial Intelligence. The report’s findings and recommendations appear to be informed by this testimony and Citizen Lab research conducted in collaboration with the International Human Rights Program at the University of Toronto. Read more about the report and its significance in this blog post.

Appearance before the House of Commons Standing Committee on Access to Information, Privacy and Ethics: Use and impact of facial recognition technology

Advance Draft Copy of Oral Testimony

Good morning. My name is Cynthia Khoo. I’m an Associate at the Center on Privacy & Technology at Georgetown Law in Washington, DC, as well as a Research Fellow with the Citizen Lab at the University of Toronto.

I appear today in a professional capacity, though I am providing my own views as an individual. These views are based on my work at the Citizen Lab and are further informed by the work of my colleagues at both the Citizen Lab and the Privacy Center.

My remarks will cover four key concerns, with corresponding recommendations.

1. Facial Recognition Technology Can Accelerate and Entrench Systemic Inequality

To begin, I’d like to introduce you to three people: Robert Williams was singing in his car when a crime he had nothing to do with occurred elsewhere; Nijeer Parks was transferring funds at a Western Union; and Michael Oliver was simply at work.

All three are Black men who were wrongfully arrested by police relying on facial recognition technology. They have endured lost jobs, traumatized children, and broken relationships, not to mention the blow to personal dignity.

These are the human costs of false confidence in, and unconstitutional uses of, facial recognition technology.

This is the same technology that researchers have found is up to 100 times more likely to misidentify Black and Asian individuals, and that misidentifies more than 1 in 3 darker-skinned women, but works 99% of the time for white men.

Although I used examples from the United States, the same could easily happen here, if it hasn’t already. Racial discrimination against Black and Indigenous people imbues every stage of the Canadian criminal justice system, from carding, arrests, and bail, to pleas, sentencing, and parole.

Embedding facial recognition algorithms into this foundation of systemic bias may digitally alchemize past injustices into an even more, and perhaps permanently, inequitable future.

Therefore, recommendation number one is to launch a judicial inquiry into law enforcement use of pre-existing mass police data sets, such as mugshots. This is to assess the appropriateness of repurposing previously collected personal data for use with facial recognition and other algorithmic policing technologies.

2. Facial Recognition Technology Threatens Constitutional and Fundamental Human Rights and Freedoms

Turning to my second point: even if all bias were removed from facial recognition algorithms, the technology would still pose an equal or even greater threat to our constitutional and human rights.

Facial recognition used to identify people in public violates the privacy preserved through anonymity in daily life, and relies on collecting particularly sensitive biometric data. This would likely chill freedom of expression, such as publicly protesting injustice.

Such capability also promises to exacerbate gender-based violence and abuse, by facilitating the stalking of women who are just going about their lives and must be able to do so free of fear.

Facial recognition has not been shown to be sufficiently necessary, proportionate, or reliable to outweigh these far-reaching repercussions.

Thus, recommendation number two is to place a national moratorium on the use of facial recognition technology by law enforcement, until and unless it is shown to be not only reliable, but also necessary and proportionate to legitimate aims.

This may well mean a complete ban, as several U.S. cities have already done. Canada should not shy away from following suit. Software cannot bear the legal and moral responsibility that humans might otherwise abdicate to it over vulnerable people’s lives and freedom.

3. Law Enforcement Use of Facial Recognition Technologies Must Be Transparent and Accountable to the Public

The third problem is lack of transparency and accountability. This is evident in how the public only knows the full extent of police facial recognition use through media scoops, leaked documents, and freedom of information requests.

Policies governing police use of facial recognition can be even more of a black box than the algorithms themselves are said to be. This gives rise to severe due process deficits in criminal cases.

Recommendation number three is to establish robust transparency and accountability measures in the event such technology is adopted. This includes immediate and advance public notice and public comment; algorithmic impact assessments; consultation with historically marginalized communities; and independent oversight mechanisms, such as judicial authorization.

4. Commercial Technology Vendors Must Meet Public Interest Legal Standards If Collecting and Sharing Data for Law Enforcement Purposes

Fourth and lastly, we need strict legal safeguards to ensure that police reliance on private sector companies does not create an end-run around our constitutional rights to liberty and to protection from unreasonable search and seizure.

Software from companies such as Clearview AI, Amazon (Rekognition), and NEC Corporation is typically proprietary, concealed by trade secret laws, and procured on the basis of behind-the-scenes lobbying.

This results in secretive public-private surveillance partnerships that strip criminal defendants of their due process rights and subject all of us to inscrutable layers of mass surveillance.

I thus conclude with recommendation number four: commercial technology vendors that collect personal data for and share it with law enforcement must be contractually bound or otherwise held to public interest standards of privacy protection and disclosure. Otherwise, the law would be permitting police agencies to do indirectly what the constitution bars them from doing directly.

Thank you, and I welcome your questions.