Artificial Intelligence
Examining social bias, transparency, and accountability issues associated with artificial intelligence (AI) systems.
With the rapid development and deployment of AI and machine-learning (ML) systems across a broad spectrum of applications and industries, there is an urgent need to understand the risks associated with this technology.
The use of algorithmic decision-making systems to assist or replace human judgement raises concerns around fairness, transparency, and accountability, among other ethical considerations. In our work, we develop methods to identify how generative AI systems are censored, and we examine the legal and policy implications of automated decision-making systems.

LATEST RESEARCH
-
To Surveil and Predict
A Human Rights Analysis of Algorithmic Policing in Canada
This report examines algorithmic technologies designed for use in criminal law enforcement and provides a human rights and constitutional law analysis of the potential use of algorithmic policing technologies.
September 1, 2020
-
Bots at the Gate
A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System
The report finds that the use of automated decision-making technologies to augment or replace human judgment threatens to violate domestic and international human rights law, with alarming implications for the fundamental human rights of those subjected to these technologies.
September 26, 2018