Navigating the AI frontier: A look at regulation in Canada

A look at the current status of AI regulation in Canada, with comment from Pat Poitevin on compliance challenges and what to expect next.

Canada stands at a pivotal moment in the evolution of AI. While the nation has long recognized AI’s transformative potential, the path towards comprehensive regulation is still under construction. So what are the key developments, what legislation has been proposed, and what proactive steps are financial institutions taking?

A significant step towards federal AI regulation was the introduction of the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27. This proposed legislation aimed to establish a national framework for the responsible development and deployment of AI systems, particularly those deemed “high-impact” due to their potential to affect individuals’ rights and safety. AIDA sought to impose obligations on businesses to identify and mitigate risks, ensure transparency, and establish an AI and Data Commissioner for oversight.

However, progress was halted by the prorogation of the Canadian Parliament on January 6, 2025. As a result, Bill C-27, including the crucial AIDA, did not pass into law, leaving Canada without a comprehensive federal AI statute in force in 2025.

Interim measures and guidance

In the absence of formal legislation, the Canadian government has taken interim steps to guide responsible AI practices. In September 2023, a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems was introduced. This code encourages Canadian companies to adopt common standards for ethical and responsible AI development while the regulatory landscape evolves.

Furthermore, federal and provincial privacy commissioners have collaborated to issue principles for trustworthy and privacy-protective generative AI technologies, emphasizing the importance of adhering to existing privacy laws.

Ontario takes a step forward

While federal legislation remains in flux, the province of Ontario took concrete action with the enactment of the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. This legislation, which received royal assent on November 25, 2024, will regulate the use of AI within Ontario’s public sector, marking a significant sub-national development.

Evan Solomon.
Photo: Liberal Party of Canada

A clear signal of the federal government’s continued focus on AI governance was the recent appointment of Evan Solomon as Canada’s first Minister of Artificial Intelligence and Digital Innovation on May 13, 2025. While the specific responsibilities of this new role are still unfolding, it underscores the government’s commitment to navigating the complexities and opportunities presented by AI.

AI in the financial sector

Canadian financial services firms, including major banks, are increasingly leveraging AI tools to enhance their operations, particularly in areas like compliance and surveillance.

  • Transaction monitoring and fraud detection: Firms such as HSBC and JPMorganChase have reported significant improvements in detecting financial crime and reducing fraud through AI-powered systems.
  • Regulatory reporting and risk management: Companies such as Travelex have implemented AI to automate regulatory reporting.
  • KYC and CDD: AI is being used to streamline customer verification processes and enhance risk profiling.
  • Communications surveillance: Firms such as Global Relay note the increasing adoption of AI for monitoring electronic communications to detect potential misconduct.
  • Canadian banks lead the way: CIBC became the first major Canadian bank to sign the government’s voluntary AI code, highlighting its commitment to responsible AI development. RBC is also actively investing in AI to combat financial crime.
  • Regulators: Canadian financial regulators, including the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC), are closely monitoring the use of AI in the financial sector. They are emphasizing the need for robust risk management frameworks, data governance, and transparency in AI applications. The Canadian Securities Administrators (CSA) believes existing securities laws can generally accommodate AI, provided firms maintain regulatory accountability.

The path forward

While Canada’s journey towards comprehensive AI regulation has faced recent setbacks, the ongoing discussions, the voluntary code of conduct, and the appointment of a dedicated AI minister signal a continued commitment to establishing a robust framework. The experiences of the financial sector in adopting AI for compliance and surveillance will likely inform future regulatory approaches. As AI technology continues to evolve rapidly, Canada faces the critical task of balancing innovation with the need for clear, ethical, and effective regulation to ensure the technology benefits all Canadians while mitigating potential risks.

With Mark Carney’s Liberal Party securing a victory in the 2025 federal election, the direction of AI regulation in Canada is now more clearly defined. The newly elected government, under Prime Minister Carney’s leadership, will play a crucial role in shaping the nation’s approach to this rapidly evolving technology. Its priorities, as well as the composition of the newly formed parliament, will heavily influence the legislative agenda and the speed of regulatory development.

As Canada moves forward, the balancing act between fostering AI innovation and ensuring robust, ethical oversight will be paramount. The experience gained from the financial sector’s AI adoption, and from other industries, will be invaluable in informing effective and forward-thinking policy. A clear, ethical, and adaptable regulatory framework is therefore more important than ever to ensure the technology benefits all Canadians while mitigating potential risks.

Current landscape

Pat Poitevin.
Photo: Private

We asked compliance and ethics consultant Pat Poitevin, CACM, TASA, co-founder and executive director of the Canadian Centre of Excellence for Anti-Corruption (CCEAC) and CEO of Active Compliance and Ethics Group Inc., about AI in Canada, focusing specifically on financial services and anti-financial crime surveillance:

Key takeaways

  • Regulatory gaps increase risks: Banks currently rely on stitching together multiple existing rules such as PIPEDA, PCMLTFA, and OSFI’s new model-risk guideline (E-23). Gaps appear around explainability, appropriate data use, and vendor transparency, raising compliance concerns.
  • Clearview’s lasting impact: Regulators have branded broad biometric data collection as “illegal surveillance,” significantly influencing how financial institutions use open-source intelligence (OSINT) and facial recognition in anti-money laundering (AML).
  • Costly consequences of black-box models: TD Bank’s billion-dollar settlement highlights the risks of opaque AI systems that either fail to detect illegal activity or overwhelm teams with irrelevant alerts.
  • Agentic AI – a powerful but risky advancement: Autonomous AI systems (agentic AI) offer tremendous potential for real-time monitoring and risk mitigation, but only when deployed within strong governance frameworks and explicit boundaries.
  • Future alignment with EU standards: Canada’s emerging AI rules, including any revived federal successor to Bill C-27’s AIDA and Ontario’s Bill 194 (now enacted), are expected to mirror the EU AI Act by requiring mandatory assessments and external audits for high-risk AI surveillance models.

GRIP: Given Canada’s current lack of comprehensive AI legislation, what are the biggest compliance and ethical challenges when using AI for anti-financial crime surveillance?

Pat Poitevin: From my perspective, three primary challenges stand out:

  • Explainability v accuracy: Sophisticated models effectively detect complex fraud networks but often lack the clear, understandable explanations that regulators such as OSFI require (see the reason-code sketch after this list).
  • Purpose limitations: Using customer data beyond its original intent (such as leveraging regular banking data to uncover social relationships) risks violating PIPEDA, especially following the Clearview decision.
  • Third-party transparency: Many AI solution providers are unwilling to share full model details, making it challenging to audit for bias or robustness as demanded by regulators.
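To make the explainability challenge concrete, here is a minimal sketch of one common mitigation: deriving per-alert “reason codes” from a linear model’s feature contributions. Everything below is hypothetical; the data is synthetic and the feature names are invented. It illustrates the technique, not any institution’s actual system.

```python
# Per-alert "reason codes" from a linear model: a toy sketch of explainable
# alerting, using synthetic data and hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["txn_amount_zscore", "new_beneficiary", "cross_border", "night_activity"]

# Synthetic transactions: four engineered risk features, binary "suspicious" label.
X = rng.normal(size=(5000, 4))
y = (X @ np.array([1.5, 1.0, 0.8, 0.3]) + rng.normal(size=5000) > 2).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(x, top_k=2):
    """Rank features by their contribution to this alert's log-odds score."""
    contrib = model.coef_[0] * (x - X.mean(axis=0))   # deviation-weighted contributions
    order = np.argsort(-contrib)
    return [(features[i], round(float(contrib[i]), 2)) for i in order[:top_k]]

alert = X[y == 1][0]                                  # first flagged transaction
print("alert score:", round(float(model.predict_proba([alert])[0, 1]), 3))
print("top reasons:", reason_codes(alert))
```

The trade-off Poitevin describes is visible even here: a linear model yields clean reason codes, but it will miss the complex network patterns that deeper or graph-based models catch, and those models do not explain themselves so easily.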

GRIP: How are Canadian financial institutions interpreting and applying existing regulations (like PIPEDA or AML/KYC rules) to the use of AI in surveillance, and where are the grey areas causing concern?

Pat Poitevin: I’d set it out like this:

| Regulation | How banks interpret it | Grey areas causing concern |
| --- | --- | --- |
| PIPEDA | Data use for fraud detection is acceptable. | Unclear if large-scale OSINT scraping needs additional consent. |
| PCMLTFA / FINTRAC | Surveillance systems must demonstrate effectiveness. | No clear benchmarks on acceptable false positives or explanations. |
| OSFI Guideline E-23 | Comprehensive model-risk management rules apply. | Difficulty scaling governance for rapidly adapting autonomous AI. |

GRIP: What specific ethical dilemmas emerge as AI becomes more sophisticated in identifying potentially illicit activities, particularly around bias, fairness, and false positives?

Pat Poitevin: There are six particular dilemmas I’d highlight.

  • Bias and financial exclusion: AI-driven models may unfairly exclude certain groups, such as small businesses or recent immigrants, by flagging typical behaviours as suspicious.
  • False-positive overload: In response to regulatory pressure (for example, TD’s settlement), banks may issue excessive alerts, reducing efficiency and overwhelming compliance teams (see the threshold sketch after this list).
  • Due-process risks: Autonomous AI systems could independently launch secondary investigations without clear human oversight guidelines, risking privacy breaches and procedural fairness.
  • Lack of clarity in scoring: Without mandated transparency like the EU’s standards, banks struggle to clearly justify why specific customer actions trigger alerts.
  • Accountability uncertainty: Mixing internal, vendor, and autonomous sub-models creates confusion over who is accountable when issues arise.
  • Practical explainability: Current AI explanation methods can’t efficiently handle large alert volumes, making regulatory compliance difficult without clearly defined standards.
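The false-positive dilemma can be made concrete with a short threshold sweep. The sketch below uses entirely synthetic scores and an invented review-capacity figure; it simply shows how alert volume, precision, and recall move together as a bank tunes its alerting threshold.

```python
# Choosing an alert threshold against a fixed review capacity: a toy sketch of
# the false-positive trade-off, using synthetic model scores (no real data).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
labels = rng.random(n) < 0.002                 # ~0.2% of transactions truly suspicious
# Hypothetical model scores: suspicious cases tend to score higher.
scores = np.where(labels, rng.beta(5, 2, n), rng.beta(2, 8, n))

daily_review_capacity = 300                    # alerts the team can actually work

# Sweep thresholds; report alert volume, precision, and recall at each.
for t in (0.5, 0.6, 0.7, 0.8):
    alerts = scores >= t
    tp = int((alerts & labels).sum())
    print(f"threshold={t:.1f}  alerts={int(alerts.sum()):>6}  "
          f"precision={tp / max(int(alerts.sum()), 1):.3f}  "
          f"recall={tp / int(labels.sum()):.3f}  "
          f"within_capacity={bool(alerts.sum() <= daily_review_capacity)}")
```

Lowering the threshold to satisfy regulators raises recall but can blow far past what a human team can review, which is exactly the overload dynamic described above.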

GRIP: How are financial institutions balancing AI benefits with privacy and data security?

Pat Poitevin: The main ways are through:

  • Synthetic-data environments: Banks commonly use simulated data for model training within Canada, protecting real customer information from cross-border exposure.
  • Federated learning approaches: Institutions collaboratively share analytical insights without exchanging raw data, aligning closely with privacy regulations (sketched in miniature after this list).
  • Controlled autonomy for AI agents: AI can dynamically prioritize alerts but requires explicit human approval for accessing sensitive data like voice recordings.
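The federated idea can be sketched in a few lines. In this hypothetical example, three institutions each train a model on their own synthetic data and share only the learned parameters, which a coordinator averages into a common model; no raw customer records are ever exchanged.

```python
# Federated averaging in miniature: each "bank" fits a local model on its own
# synthetic data, and only model parameters are shared, never raw records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
true_w = np.array([1.2, -0.7, 0.5])            # shared underlying risk signal

def local_data(n):
    """One institution's private, synthetic transaction features and labels."""
    X = rng.normal(size=(n, 3))
    y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Three banks train locally; only (coefficients, intercept) leave the premises.
local_params = []
for _ in range(3):
    X, y = local_data(2000)
    m = LogisticRegression().fit(X, y)
    local_params.append((m.coef_[0], m.intercept_[0]))

# A coordinator averages the shared parameters into one federated model.
avg_coef = np.mean([c for c, _ in local_params], axis=0)
avg_intercept = float(np.mean([b for _, b in local_params]))
print("federated coefficients:", np.round(avg_coef, 2))
print("federated intercept:", round(avg_intercept, 2))
```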

GRIP: With Canada’s recent appointment of an AI Minister, what regulatory shifts do you anticipate?

Pat Poitevin: Again, I think this is usefully set out in a table, so:

| Anticipated regulatory focus | Expected compliance implications |
| --- | --- |
| Acceleration of AIDA legislation | Mandatory assessments and public registries for high-impact AI models. |
| OSFI–OPC collaboration | Joint regulatory sandboxes for safely testing advanced AI. |
| Sector-specific guidance | Adoption of global standards (for example, ISO/IEC 42001), regular external audits. |
| Enhanced real-time oversight | Real-time reporting of AI model performance to regulators. |

GRIP: What role does human oversight play in AI-driven surveillance? What are critical human intervention points from compliance and ethics perspectives?

Pat Poitevin: That’s a really good question. I’d say:

  • Alert verification: Human analysts must confirm and escalate suspicious transaction alerts.
  • Threshold management: Compliance teams must approve significant automated changes to risk thresholds (see the maker-checker sketch after this list).
  • Incident analysis: Detailed human reviews are required when AI systems fail or produce problematic alerts.
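The threshold-management intervention point maps naturally onto a maker-checker control: the system may propose a change, but only a named human approver can make it live. The following is a minimal, hypothetical sketch of that pattern, not any bank’s actual workflow.

```python
# A maker-checker gate for automated threshold changes: the model proposes,
# but nothing takes effect without named human sign-off. Hypothetical sketch.
from dataclasses import dataclass, field

@dataclass
class ThresholdControl:
    """Live alerting threshold; the model proposes, a human disposes."""
    active: float = 0.70
    pending: list = field(default_factory=list)

    def propose(self, new_value: float, rationale: str) -> int:
        # Model-initiated proposals are queued, never applied directly.
        self.pending.append({"value": new_value, "rationale": rationale})
        return len(self.pending) - 1

    def approve(self, proposal_id: int, approver: str) -> None:
        # Human sign-off is the only path to changing the live threshold.
        p = self.pending[proposal_id]
        print(f"{approver} approved {self.active} -> {p['value']}: {p['rationale']}")
        self.active = p["value"]

ctl = ThresholdControl()
pid = ctl.propose(0.65, "alert volume 40% under review capacity")
ctl.approve(pid, approver="compliance.officer@example.bank")
```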

GRIP: Considering global regulatory trends, what direction will Canadian AI regulations take in the next three to five years?

Pat Poitevin: There are a number of avenues here.

  • EU-style risk tiers: Regulatory alignment with the EU AI Act, including required risk assessments and external audits for high-risk AI applications.
  • Cross-regulatory harmonization: Increased cooperation between OSFI, FINTRAC, CSA, and OPC.
  • Routine independent audits: External reviews of AI models will become standard practice, inspired by cases such as TD Bank.
  • Explicit AI autonomy guidelines: Clear limitations on autonomous AI, including defined oversight and robust audit logging (illustrated below).
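Robust audit logging for autonomous AI is often implemented as a tamper-evident, hash-chained log, where altering any past entry breaks verification of everything after it. Here is a minimal sketch of that idea; the actor and action names are invented.

```python
# Tamper-evident audit logging for autonomous AI actions: each entry is
# hash-chained to the previous one, so retroactive edits are detectable.
import hashlib, json, time

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's hash."""
    def __init__(self):
        self.entries = []
        self._last_hash = "GENESIS"

    def record(self, actor: str, action: str, detail: dict) -> None:
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "detail": detail, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        # Recompute the chain; any retroactive edit breaks every later hash.
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("agent-7", "escalate_alert", {"alert_id": 123, "score": 0.91})
log.record("agent-7", "freeze_account", {"account": "A-456"})
print("chain intact:", log.verify())
```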

GRIP: Any final thoughts?

Pat Poitevin: Treat AI-driven surveillance as critical infrastructure. Ensure cross-functional oversight, mandate clear explainability, and regularly review AI model performance and outcomes. Proper governance of advanced AI capabilities translates directly into tangible compliance benefits – avoiding regulatory penalties and safeguarding organizational integrity.