The autonomy of agentic AI systems and their potential to operate with minimal human intervention raise unique regulatory challenges, particularly under the EU AI Act.
The functioning of agentic AI is rooted in Large Language Models (LLMs) and other advanced machine learning algorithms. These models are trained on vast datasets to understand human language and respond intelligently to various inputs.
What sets agentic AI apart from monolithic (“stand-alone”) LLMs is its ability to interact with external systems. While traditional LLMs typically require human instructions for each task, agentic AI can retrieve real-time information, access databases, interact with other software tools, and take initiative based on such context. This dynamic interaction allows agentic AI to adapt to changing circumstances and perform tasks that go beyond the limitations of traditional LLMs.
Agentic AI exhibits several core features, illustrated in the sketch that follows this list:
- Planning: It assesses user requests and breaks a goal down into the tasks and subtasks needed to achieve it.
- Data collection: It can gather data from various sources, including tools, databases, the internet, and sensors.
- Tool use: Unlike traditional LLMs, which rely solely on their training data, agentic AI can interact with external tools and Application Programming Interfaces (APIs) to gather real-time data, enhancing its functionality.
- Autonomous action: Once the steps are defined, agentic AI can execute tasks independently, reassess the original tasks and adjust the plan.
- Memory and reflection: Agentic AI can recall past interactions, learn from them, and adjust its behavior over time. It can update its knowledge base with new information, improving performance and decision-making.
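To make these features concrete, the following minimal sketch shows how such an agent loop might be structured in Python. It is purely illustrative: every name in it (Memory, plan_steps, call_tool, run_agent) is hypothetical, and a real agent would delegate planning to an LLM and tool calls to actual integrations rather than the stubs used here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent loop; not a real framework.
# It mirrors the features listed above: planning, data collection,
# tool use, autonomous action, and memory/reflection.

@dataclass
class Memory:
    """Memory and reflection: recall past interactions to adjust over time."""
    history: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.history.append(event)

def plan_steps(goal: str) -> list[str]:
    """Planning: break the user's goal into tasks and subtasks.
    In a real agent, an LLM would generate this plan."""
    return [f"gather data for: {goal}", f"act on: {goal}"]

def call_tool(step: str) -> str:
    """Tool use and data collection: query an external API, database,
    or sensor. Stubbed here for illustration."""
    return f"result of '{step}'"

def run_agent(goal: str, memory: Memory) -> None:
    """Autonomous action: execute the planned steps without further
    human instructions, recording each outcome."""
    for step in plan_steps(goal):
        result = call_tool(step)
        memory.remember(f"{step} -> {result}")
        # A real agent would reassess the plan here and adjust it
        # if the result shows the remaining steps are inadequate.

memory = Memory()
run_agent("summarise today's sales figures", memory)
print(memory.history)
```

The loop makes the division of labor visible: planning produces steps, tools supply data, actions execute autonomously, and memory accumulates a record the agent can later reflect on.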
Agentic AI is used in diverse sectors and applications, such as in healthcare for personalized diagnostics and treatment recommendations, and in smart assistants to manage schedules and control smart home devices.
In customer service, automated chatbots manage complex queries and offer personalized suggestions. In finance, agentic AI systems are crucial for dynamic pricing, credit assessments, and fraud detection, enabling institutions to make informed decisions and improve operational efficiency.
Agentic AI in the context of the EU AI Act
While the EU AI Act does not explicitly mention agentic AI, the nature of these systems means they fall squarely within the Act’s broad definition of AI systems, which encompasses the autonomy and adaptiveness inherent in agentic AI. The corresponding obligations for agentic AI under the EU AI Act largely depend on its risk categorization.
The EU AI Act aims to regulate artificial intelligence within the European Union by categorizing AI systems into different risk levels, ranging from unacceptable and high risk to minimal or no risk. Article 5 of the Act, read alongside the guidelines published by the EU Commission on February 4, 2025, prohibits AI practices including, but not limited to, harmful manipulation and deception, harmful exploitation of vulnerabilities, social scoring, and biometric categorization.
According to Article 6 of the EU AI Act, high-risk AI systems are those that negatively affect safety or fundamental rights and that either are products falling under the EU’s product safety legislation or are used in specific areas, such as education or law enforcement. Therefore, the classification of agentic AI into a specific risk category under the EU AI Act is contingent upon its contextual application and potential impact on fundamental rights or safety.
Risk profile of agentic AI
Risk is defined in the EU AI Act as “the combination of the probability of an occurrence of harm and the severity of that harm” (Article 3(2)). Due to their highly autonomous, goal-driven nature, agentic AI systems inherently carry an increased risk profile. Several factors contribute to this:
- Autonomous decision-making: Agentic AI systems can dynamically adapt to new data and situations, making decisions that can significantly impact individuals and society.
- Dynamic decision-making: The autonomous nature of agentic AI means it makes decisions in real-time based on changing data and environments. This dynamic decision-making process can lead to unpredictable outcomes, making it harder to ensure consistent reliability and safety.
- High-risk areas: Agentic AI operating in sectors such as healthcare, transportation, or education, where decisions can significantly affect people’s lives, would likely be subject to additional scrutiny and regulatory requirements.
Transparency and accountability
The EU AI Act imposes transparency and accountability obligations, especially for high-risk AI systems. For agentic AI, these obligations could be challenging to meet because of their autonomy and complexity. Some key requirements include:
- Record-keeping (Article 12): High-risk agentic AI systems must log their actions to ensure accountability and traceability. This can be particularly difficult with autonomous systems that make decisions without human input, as it may not always be clear how a decision was reached; a sketch of one possible logging pattern follows this list.
- Transparency (Article 13): High-risk agentic AI systems must provide clear and comprehensible information to users and regulators regarding how they function and make decisions. Given the complex, dynamic nature of agentic AI, ensuring transparency in these systems will require robust frameworks for explanation. Additionally, decisions made by agentic AI, such as dynamic pricing or credit assessments, must be explainable in a way that is easily understood by users.
- Human oversight (Article 14): Due to their independent decision-making capabilities, agentic AI systems often fall under stricter governance. However, the very nature of agentic AI (acting autonomously and without constant human instructions) may make it challenging to effectively monitor their actions, especially as the system becomes more complex over time.
- Technical robustness and safety (Article 15): Technical robustness and accuracy are fundamental to the successful deployment and operation of high-risk agentic AI systems, particularly given the potential consequences of their application. Ensuring these systems are reliable is crucial for their effective integration into various industries. Agentic AI systems often struggle with unusual or rare situations that fall outside the norm (“edge cases”). These systems need to be robust enough to handle such scenarios without failing or making incorrect decisions.
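To illustrate how some of these obligations might translate into engineering practice, the sketch below shows one possible, entirely hypothetical pattern: every agent action passes through an append-only audit log before execution, and a simple policy routes consequential actions to a human reviewer. The file name, action names, and helper functions are assumptions for illustration, not mechanisms prescribed by the Act.

```python
import json
import time

AUDIT_LOG = "agent_audit.jsonl"                        # hypothetical log file
HIGH_IMPACT_ACTIONS = {"adjust_price", "deny_credit"}  # assumed examples

def log_action(action: str, inputs: dict, rationale: str) -> None:
    """Article 12-style record-keeping: an append-only trace of what
    the agent did, on what inputs, and why."""
    record = {
        "timestamp": time.time(),
        "action": action,
        "inputs": inputs,
        "rationale": rationale,  # also supports Article 13-style explanations
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def requires_human_approval(action: str) -> bool:
    """Article 14-style oversight gate: route consequential actions
    to a human reviewer instead of executing them autonomously."""
    return action in HIGH_IMPACT_ACTIONS

def execute(action: str, inputs: dict, rationale: str) -> str:
    log_action(action, inputs, rationale)
    if requires_human_approval(action):
        # A real deployment would notify a reviewer and block until
        # the proposal is approved, modified, or rejected.
        return "pending human review"
    return "executed autonomously"

print(execute("adjust_price", {"product": "X", "new_price": 9.99},
              "demand spike detected"))
```

The structural point is that traceability and oversight are enforced outside the agent itself: the log is written before any action executes, and the approval policy, not the agent, decides when a human must intervene.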
Conclusion
Agentic AI represents a significant leap forward in the capabilities of artificial intelligence, but with this power comes the responsibility to develop AI systems that are transparent, accountable, and aligned with ethical values.
As the regulatory landscape evolves, ensuring that agentic AI complies with the EU AI Act’s provisions will be crucial to balancing innovation with safety and fairness.
Katalin Horváth is a partner in the commercial team at CMS Budapest, where she specialises in software, IT and IP law, BankTech/FinTech law, outsourcing, data protection and cybersecurity matters, as well as the legal regulation of artificial intelligence. Anna Zsófia Horváth is an associate in the Commercial and TMT departments at CMS Budapest, mainly focusing on data protection, IP/IT and technology matters.
Co-authored by Helena Siebenrock.
