Citizen Lab director Ron Deibert joins Faculty of Law researcher Petra Molnar to warn of the human rights risks in Canada’s use of artificial intelligence in immigration decision-making. 

Canadian immigration and refugee authorities plan to expand their use of artificial intelligence, but offer the public few details on how this relatively untested technology is used. The lack of transparency and oversight heightens the risk of human rights violations, warn Deibert and Molnar in an op-ed for The Globe and Mail.

“Without proper oversight mechanisms and accountability measures, the use of AI threatens to create a laboratory for high-risk experiments,” they write.

The op-ed presents key findings from an extensive report on Canada’s use of AI in the immigration system, published by the Citizen Lab and the Faculty of Law’s International Human Rights Program. One concern is that AI algorithms can perpetuate, and even worsen, bias in visa processing and in risk assessments of people coming to Canada as immigrants or refugees.

“Automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion, or travel patterns, and thus entrench bias in the technology,” write Deibert and Molnar.