Since publishing our first comprehensive guide to artificial intelligence regulation in Canada in April 2023, the rapid rise of generative AI (Gen AI) has prompted countries to revise their approaches to safeguarding the deployment of AI systems. In this article, we examine differing approaches to AI regulation across Canada, the United States and Europe, and ask: as industries strive to keep pace with constantly evolving AI technology, how are jurisdictions in Canada and abroad shifting their regulatory policies?
AI regulation in Canada: what is the current and future (and in-between) state?
The AI and Data Act is dead
When Parliament was prorogued in January 2025, the AI and Data Act, which proposed to comprehensively regulate AI in Canada’s private sector, officially “died.” It is not yet clear whether the newly constituted federal government will reintroduce legislation aimed at regulating AI, but AI innovation and the development of domestic computing power are expected to be key priorities for the government. This focus on AI is reflected in the appointment of Canada’s first Minister of Artificial Intelligence and Digital Innovation. For more on the state of AI regulation in the legislative gap posed by the death of the Bill, read Looking ahead: the Canadian privacy and AI landscape without Bill C-27.
New sector-specific laws and guidance
Beyond new laws aimed specifically at regulating AI, many existing areas of law are fast becoming a key feature of the AI regulatory landscape in Canada, notably including employment, privacy, and human rights.
In March 2024, Ontario became the first province to pass a law requiring employers to disclose the use of AI in the hiring process—specifically, in the screening, assessment, or selection process for applications to a position (while passed, a date has not yet been set for the requirement to come into force). Though it is the first Canadian jurisdiction to impose such a requirement, Ontario follows in the footsteps of New York City’s Local Law 144, which requires similar transparency measures in the hiring process and which became enforceable in July 2023. Similar regulations have been proposed in California’s SB-7, and laws generally regulating the use of algorithms to make decisions have been passed in Colorado and proposed in many other US states (for more on the intersection of AI and employment law, read Can HR use AI to recruit, manage and evaluate employees?). In November 2024, Ontario’s Enhancing Digital Security and Trust Act, which regulates the use of AI in Ontario’s public sector, came into force.
On the privacy side, federal and provincial regulators in Canada came together in December 2023 to issue joint guidance for public and private sector entities on how to ensure compliance with existing privacy laws (including PIPEDA) while developing, providing, or using generative AI in their operations.
This guidance emphasizes the protection of vulnerable groups, confirms that privacy law principles of consent and transparency remain paramount in the Gen AI context, and outlines concrete practices for businesses to document compliance with privacy laws when making use of Gen AI. Certain uses of Gen AI were deemed “no-go zones” under existing privacy laws, like creating AI-generated content for malicious or defamatory purposes.
While this guidance is non-binding, it is an interpretation of existing, binding privacy laws and provides critical insight into how privacy regulators are going to adjudicate privacy-related Gen AI complaints.
Alberta’s newly proposed Protection of Privacy Act, which would regulate Alberta’s public sector, includes a provision on the treatment of personal information by automated systems.
Other lawmakers have also recently highlighted the malicious use of Gen AI as a key concern, particularly the creation of “deepfake” intimate images without consent[1]. For more on developing responsible AI practices, read our article, What should be included in my organization’s AI policy?: A data governance checklist.
Looking ahead in Québec
Québec’s recent privacy law reforms have resulted in the regulation of some technologies using artificial intelligence. In particular, fully automated decision-making processes must be disclosed to the individuals concerned. Businesses must also disclose the collection of information through technological tools offered to the public that have functions allowing individuals to be identified, located or profiled and, in some circumstances, those functions must be activated by the user.
Despite these advances, there have also been calls for the implementation of a regulatory framework to specifically govern the use of artificial intelligence. In January 2024, the Québec Innovation Council (the Council) submitted a comprehensive report to the Ministry of the Economy and Energy, outlining steps and recommendations for such a regulatory framework. Based on consultations with hundreds of experts, the Council recommends a review of existing laws, particularly in the field of employment, to accommodate the rapid changes in artificial intelligence, as well as the creation of an interim AI governance steering committee to work on regulating AI and integrating it into Québec society.
Recommendation for businesses
The overall posture in Canada remains forward-looking. While a patchwork of AI-specific requirements is taking shape, there is much that remains to be passed, confirmed, worked out, and implemented.
Canadian businesses should be mindful of existing legal risks that can arise from the use of AI (including those related to privacy, intellectual property, consumer protection, and employment) and how to manage them. Many businesses have developed, or are developing, an AI governance framework to document internal AI policies, requirements, and accountabilities. An AI governance framework that aligns with best practices will help manage existing legal risks and ease the compliance burden as new requirements come into force. Businesses that are incorporating AI systems and tools into their operations should stay on top of these developments even before they become legally binding; otherwise, they risk having to adopt costly and disruptive compliance measures once enforcement begins and their AI practices are already entrenched.
How is AI being regulated beyond Canada?
Around the world, a patchwork of regulations, treaties, and guidelines is emerging in response to the increasingly widespread adoption of AI. Below, we outline key measures in Europe, the United States, and international law designed to promote the responsible development, distribution, and use of AI.
Europe: AIA and related directives
EU’s Artificial Intelligence Act
The main legislative response to AI in the European Union is the Artificial Intelligence Act (the AIA), which came into force in August 2024 and is set to take effect incrementally over the next two years. The AIA imposes obligations pertaining to risk management, data governance, documentation and record-keeping, human oversight, cybersecurity, transparency, and quality control, among others.
The AIA applies to providers, deployers, importers, distributors, and other actors responsible for AI systems within the EU, and to providers and deployers located outside the EU whose AI outputs are used within the EU. As with the EU’s data privacy regulations, this means that the AIA can apply to Canadian businesses with operations or customers in the EU.
Notably, the AIA prohibits the use of AI systems that pose an “unacceptable risk”, including—for example—social scoring, real-time biometric identification and categorization (with some exceptions), and systems that cause harm by manipulating behaviour and exploiting vulnerable groups. As of February 2, 2025, the prohibitions on unacceptable-risk AI are in force, along with the AIA’s provisions on AI literacy. The AIA also regulates systems that pose a “high risk”, as well as certain aspects of general-purpose AI systems. High-risk systems include those used in the management and operation of critical infrastructure, educational and vocational training, granting access to essential services and benefits, law enforcement, border control, and the administration of justice.
The AIA also regulates general-purpose AI models. Those provisions will come into effect as of August 2, 2025. A General-Purpose AI Code of Practice, expected to be published soon, will help providers of general-purpose AI models meet their obligations under the AIA.
Other EU directives and frameworks
In May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence (the Convention), the first international treaty regulating the use of AI. The Convention sets out a framework that would apply to public authorities and private actors whose AI systems have the “potential to interfere with human rights, democracy and the rule of law.” It also contains provisions requiring AI providers to disclose the use of AI, conduct risk and impact assessments, and provide complaint mechanisms. The Convention has been open for signature by member states since September 5, 2024. Canada signed the Convention in February 2025.
In Spring 2024, the Council of the EU and the European Parliament reached a provisional agreement on the Platform Work Directive (the Directive), which introduces rules regulating the working conditions of platform—or “gig economy”—workers. The Directive ensures that platform workers cannot be dismissed solely based on automated decision-making processes, and that some degree of human oversight is required for decisions directly affecting persons performing platform work.
Some European nations, such as Spain and the United Kingdom, have also proposed their own national legislation regulating AI.
United States: a pro-innovation approach and state legislation
Federal level regulation
There is currently no comprehensive federal legislation in the US aimed at regulating AI like the AIA in the EU. The Trump administration largely overhauled the previous administration’s approach to AI regulation, with a renewed focus on promoting American AI infrastructure and innovation and on curtailing further regulation and enforcement, including through executive orders and support for legislation that would impose a 10-year moratorium on state-level AI regulation[2].
The TAKE IT DOWN Act, which was first introduced in 2024 and reintroduced in early 2025, would prohibit the non-consensual disclosure of intimate images generated using AI.
The National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, published a voluntary set of guidelines in 2023 titled the AI Risk Management Framework, with the goal of managing AI-related risk and increasing trustworthiness in the design, development, and use of AI systems. NIST has recently updated its existing Privacy Framework to address AI risk.
State level regulation
Several US states have proposed or enacted legislation to regulate the development, provision, or use of AI in the private sector. These include Arkansas, Montana, Colorado, Utah, Illinois, Massachusetts, Ohio, Washington, Hawaii and California. These state-level laws most often address specific concerns, including higher-impact AI systems, consumer protection, publicity law and Gen AI systems.
While California’s long-watched AI safety bill SB 1047 was vetoed by California Governor Newsom, the state passed a series of other laws governing particular areas affected by AI, such as health and insurance, generative AI, publicity and labour.
International instruments
Several public international legal bodies have issued instruments or guidance in this area. Though generally non-binding, they are nevertheless instructive regarding policy priorities at the highest levels of international law, both for private and public entities.
- United Nations: In March 2024, the UN General Assembly adopted a non-binding resolution, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, which endorses an approach to AI regulation that prioritizes safety, respect for human rights and fundamental freedoms, and global inclusivity.
- Organisation for Economic Co-operation and Development (OECD): Adopted in 2019 and amended in May 2024, the OECD’s Recommendation of the Council on Artificial Intelligence provides non-binding guidance for member states and AI stakeholders with respect to the cross-sectoral regulation of AI, including principles for responsible stewardship of AI and recommendations for AI governance.
- The G7: The G7’s non-binding Hiroshima AI Process Comprehensive Policy Framework, launched in 2023, adopts an AI policy framework based on the following principles: promoting safe, secure, and trustworthy AI; providing actions for organizations to follow when designing, deploying, and using AI; analyzing priority risks, challenges, and opportunities of generative AI; and promoting project-based cooperation for the development of AI tools and practices.
Recommendations for businesses
Even where organizations are not directly subject to the international requirements and instruments above, they are likely to be impacted indirectly as those measures influence pending and new legislation in Canada, contractual requirements of international customers, vendors, or partners, and industry best practices. We recommend that businesses operating in Europe in particular take note of these developments and work towards aligning their internal AI governance frameworks with anticipated laws and regulations.
[1] For further information on this matter, read Tackling the Problem of AI “Revenge Porn” in Canada: Existing Law and Upcoming Legislative Reform (June 13, 2024).
[2] See the Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure, and the Executive Order on Removing Barriers to American Leadership in Artificial Intelligence. See also the proposed One Big Beautiful Bill Act.
Authors: Julie Himo (partner), Anne Merminod (partner), Nic Wall (senior associate), Rosalie Jette (associate), Mavra Choudhry (associate), Lauren Nickerson (associate), and Gabrielle da Silva.
Torys LLP is a respected international business law firm with a reputation for quality, innovation and teamwork.
