QUESTION PERIOD — Ministry of Public Safety

Impacts of Artificial Intelligence

September 19, 2024


Welcome, minister. I would like to ask you about artificial intelligence and public safety.

As you know, artificial intelligence relies on data to inform its algorithms, and artificial intelligence systems are open to bias, especially if they are using open-source data. According to the Canadian Tracking Automated Governance register, there are approximately 303 automated tools being used by our government. Several of them are in the Canada Border Services Agency, or CBSA, and some are in the Royal Canadian Mounted Police, or RCMP.

The concerns about bias are real. Bias in these public safety agency tools could be incredibly detrimental to individuals, informing decisions that have lifelong impacts.

Is the government using open-source data to inform the algorithms of the automated AI systems you use, and what training or additional safeguards are in place to combat these potential biases?

Hon. Dominic LeBlanc, P.C., M.P., Minister of Public Safety, Democratic Institutions and Intergovernmental Affairs

Senator, your question is a very good one. I certainly share your concern about recourse to artificial intelligence or these algorithms that could, in fact, present circumstances of bias. We all work hard to remove systemic bias in government and public institutions. We certainly wouldn’t want to use technology which, in a very ironic way, would propagate or propel these biases.

Your question is a technical one. I do not know which particular algorithms are used or whether the CBSA or the RCMP would be using these particular tools, but I would be happy to take that question under advisement and ensure that Public Safety Canada, the RCMP, CBSA and other law enforcement agencies provide you with that information. It’s a very good question, and I wouldn’t dare to make up an answer to a question as important as that.
