On June 6, 2025, the European Commission launched a targeted stakeholder consultation on high-risk artificial intelligence systems under the EU Artificial Intelligence Act (AI Act). The aim is to help the Commission develop forthcoming guidelines on classifying and regulating high-risk AI systems.
This consultation is of particular interest for life sciences and healthcare companies, many of which develop or deploy AI-based medical devices and digital health solutions that may fall within the high-risk category. The consultation seeks input from a broad range of stakeholders on the practical implementation of the AI Act’s high-risk classification rules and related obligations.
What the act proposes
The AI Act aims to establish a harmonized legal framework for trustworthy and human-centric AI in the EU, using a risk-based approach. High-risk AI systems, as defined in Chapter III, are subject to stringent requirements, with compliance becoming mandatory from August 2, 2026.
For life sciences and healthcare companies, this is especially pertinent: AI systems used as safety components in medical devices or in healthcare settings are likely to be classified as high-risk under Article 6(1) and Annex I of the Act. The Act also covers AI systems that may impact health, safety, or fundamental rights in specific use cases (Article 6(2) and Annex III).
The European Commission, under the AI Act, is required to issue guidelines on the practical implementation of high-risk classification by February 2, 2026. This includes detailed examples of high-risk and non-high-risk use cases. Additional guidance shall clarify the application of requirements and obligations for high-risk AI systems and the allocation of responsibilities along the AI value chain.
Scope and consultation structure
The consultation is structured into five sections, each with direct relevance for life sciences and healthcare companies:
Classification rules for medical and health AI
Stakeholders are invited to comment on the definition and scope of “safety components” in AI systems, including those integrated into medical devices. Input is sought on practical examples and the application of third-party conformity assessments under existing EU medical device regulations. This is an area that has been, and remains, controversial, mainly due to the additional burden a separate certification procedure would impose.
High-risk use cases in healthcare
Section 2 addresses AI systems intended for use in high-risk areas, including health. Stakeholders can provide feedback on the application of exemptions, the distinction between high-risk and prohibited AI practices, and the interaction with other EU legislation such as the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR).
Horizontal aspects and overlaps
Section 3 focuses on the definition of “intended purpose,” potential overlaps between Annexes I and III, and the need for further clarification or examples, issues that are particularly relevant for companies developing multi-purpose or adaptive AI systems in healthcare.
Requirements and obligations along the value chain
Section 4 of the consultation seeks views on the interpretation and practical application of mandatory requirements (such as risk management, data governance, transparency, human oversight, and robustness), as well as the allocation of responsibilities between providers and deployers. The interplay with existing medical device obligations and the conduct of fundamental rights impact assessments are also in scope.
Amendments to high-risk use cases and prohibited practices
Under Section 5, stakeholders can propose changes to the list of high-risk use cases and prohibited practices, including those relevant to healthcare and life sciences, to address regulatory gaps or overlaps.
Key opportunities
The consultation provides an important opportunity for life sciences and healthcare companies to highlight sector-specific challenges of the AI Act, such as the classification of AI medical devices, the burden of overlapping conformity assessments and the need for clear guidance on the allocation of responsibilities in complex value chains. The consultation offers the chance to bring concrete first-hand input to the attention of the European Commission.
The consultation explicitly encourages companies and stakeholders to provide concrete examples of issues encountered in the development and deployment of AI in healthcare, and to advocate for practical, workable solutions in the forthcoming guidelines. This is a direct channel for getting concrete pain points addressed.
The outcome of this consultation is likely to impact the practical regulatory landscape for AI in life sciences and healthcare. Early engagement can contribute to future guidance being clear, proportionate, and aligned with existing sectoral regulations.
Way forward
The consultation is open until July 18, 2025. For life sciences and healthcare companies that are in the midst of preparing for regulatory compliance with the AI Act, this consultation offers the opportunity to influence the operationalization of the EU’s risk-based approach to AI regulation.
Engaging proactively can help ensure that the forthcoming guidelines address the specific challenges faced by the sector, clarify ambiguous provisions and facilitate compliance with both the AI Act and existing medical device regulations.
More broadly, the outcome of the consultation could have significant implications for the development, market entry, and deployment of AI-driven medical technologies in the EU.
Amélie Chollet is a partner in CMS’s Life Sciences & Healthcare group, specialising in legal regulatory issues affecting life sciences companies operating in the UK and in the EU. Dr. Roland Wiring is Co-Head of the CMS Life Sciences & Healthcare Group. In this position, he jointly leads an international team of more than 480 lawyers, patent attorneys, scientists and academics across CMS.
