Key takeaways
The National Security and Intelligence Review Agency has formally notified key federal ministers and organization heads about the study, which will examine how the security and intelligence community defines, uses, and oversees AI technologies.
The review comes as Canadian security agencies increasingly rely on artificial intelligence for tasks ranging from translating documents to detecting malware threats.
Scope and objectives of the review
In a letter to ministers and heads of organizations with national security roles, review agency chair Marie Deschamps outlined the study's objectives.
The findings will provide insights into the use of new and emerging tools, help guide future reviews, and highlight "potential gaps or risks" that might require attention, according to Deschamps.
"This review may also include independent inspections of some technical systems," Deschamps added in the letter, which was posted on the review agency's website.
The review agency has statutory authority to access all information held by departments and agencies under examination, including classified and privileged material, with the exception of cabinet confidences.
Requests for information may involve documents, written explanations, briefings, interviews, surveys, and system access.
Wide range of agencies under scrutiny
The letter was distributed to multiple cabinet members, including Prime Minister Mark Carney, Artificial Intelligence and Digital Innovation Minister Evan Solomon, Public Safety Minister Gary Anandasangaree, Defence Minister David McGuinty, Foreign Affairs Minister Anita Anand, and Industry Minister Mélanie Joly.
Recipients also included the heads of agencies with major security roles, such as the Canadian Security Intelligence Service, the RCMP, and the Communications Security Establishment, Canada's cyberspy service.
The letter was additionally sent to the heads of agencies that may not immediately come to mind in the security context, including the Canadian Food Inspection Agency, the Canadian Nuclear Safety Commission, and the Public Health Agency of Canada.
The RCMP responded to questions about the review by expressing support for independent oversight.
"The RCMP believes that establishing transparent and accountable external review processes is critical to maintaining public confidence and trust," the RCMP said in a media statement.
Balancing innovation with accountability
The Communications Security Establishment has publicly committed to responsible AI adoption in its artificial intelligence strategy.
The agency says it aims to develop new capabilities that solve critical problems through innovative use of AI and machine learning, while championing responsible and secure AI and countering threats from AI-enabled adversaries.
CSE chief Caroline Xavier emphasized the organization's cautious approach in a message included in the strategy.
"We will always be thoughtful and rule-bound in our adoption of AI, keeping responsibility and accountability at the core of how we will achieve our goals," Xavier stated.
"Recognizing that these technologies are fallible, we will experiment and scale incrementally, with a focus on rigorous testing and evaluation, keeping our highly trained and expert humans in the loop."
The CSE strategy notes that when deployed safely, securely, and effectively, AI capabilities will improve the agency's ability to analyze larger amounts of data faster and with more precision, enhancing the quality and speed of decision-making.
In 2024, the National Security Transparency Advisory Group, a federal advisory body, called on Canada's security agencies to publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications.
The group predicted increasing reliance on the technology to analyze large volumes of text and images, recognize patterns, and interpret trends and behaviour.
At that time, CSIS and the CSE acknowledged the importance of transparency about AI but noted there were limitations on what could be disclosed publicly, given their security mandates.