A new report from the Citizen Lab and the International Human Rights Program (IHRP) at the University of Toronto’s Faculty of Law investigates the use of artificial intelligence and automated decision-making in Canada’s immigration and refugee systems. The report finds that the use of automated decision-making technologies to augment or replace human judgment threatens to violate domestic and international human rights law, with alarming implications for the fundamental human rights of those subjected to these technologies.

The ramifications of using automated decision-making in the sphere of immigration and refugee law and policy are far-reaching. Marginalized and under-resourced communities, such as residents without citizenship status, often have access to less robust human rights protections and less legal expertise with which to defend those rights. The report notes that adopting these automated decision-making systems without first ensuring responsible best practices and building in human rights principles at the outset risks exacerbating pre-existing disparities and can lead to rights violations, including unjust deportation.

Since at least 2014, Canada has been introducing automated decision-making experiments into its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of these technologies into a variety of immigration decisions normally made by a human immigration official. These decisions span a spectrum of complexity, from whether an application is complete to whether a marriage is “genuine” or whether someone should be designated a “risk.”

The report provides a critical interdisciplinary analysis of public statements, records, policies, and drafts by relevant departments within the Government of Canada, including Immigration, Refugees and Citizenship Canada and the Treasury Board of Canada Secretariat. It also offers a comparative analysis of similar initiatives in other jurisdictions, such as Australia and the United Kingdom. In February, the IHRP and the Citizen Lab submitted 27 separate Access to Information Requests and continue to await responses from Canada’s government.

The report concludes with a series of specific recommendations for the federal government, the complete and detailed list of which is available at the end of this publication. In summary, they include recommendations that the federal government:

1. Publish a complete and detailed report, to be maintained on an ongoing basis, of all automated decision systems currently in use within Canada’s immigration and refugee system, including detailed and specific information about each system.

2. Freeze all efforts to procure, develop, or adopt any new automated decision system technology until existing systems fully comply with a government-wide Standard or Directive governing the responsible use of these technologies.

3. Adopt a binding, government-wide Standard or Directive for the use of automated decision systems, which should apply to all new automated decision systems as well as those currently in use by the federal government.

4. Establish an independent, arm’s-length body with the power to engage in all aspects of oversight and review of all use of automated decision systems by the federal government.

5. Create a rational, transparent, and public methodology for determining the types of administrative processes and systems which are appropriate for the experimental use of automated decision system technologies, and which are not.

6. Commit to making complete source code for all federal government automated decision systems—regardless of whether they are developed internally or by the private sector—public and open source by default, subject only to limited exceptions for reasons of privacy and national security.

7. Launch a federal Task Force that brings key government stakeholders alongside academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.

Read the full report here.