GenAI and Agentic AI embraced with caution and optimism by financial services

City Week panel highlighted the “massive disruption” Agentic AI will bring, suggesting organizations prepare for “everybody to be a manager,” not of people, but of agents.

A recent City Week panel, “From LLMs and GenAI to Agentic AI: real-world applications for financial services,” brought together experts to discuss the rapid evolution of AI and its profound implications for the financial sector.

Moderated by Jan Putnis, Head of Financial Regulation Group and Chair of Financial Institutions Group at Slaughter and May, the discussion featured insightful contributions from Jamie Ovenden, CTO at Schroders; Gary Collier, CTO at Man Group; and Aidan Gomez, CEO of Cohere.

Putnis kicked off the session by highlighting the critical importance of revisiting the topic of LLMs, GenAI, and Agentic AI, noting that the rapid pace of change necessitates more frequent discussions. He identified three key reasons for the topic’s importance: the opportunity for significant efficiency gains, the ability to identify opportunities and risks more effectively, and the crucial need to manage inherent risks, particularly in customer-facing applications.

Building trust

Putnis then delved into the crucial aspect of gaining buy-in from various stakeholders, including risk, compliance, and business leadership. Ovenden highlighted the benefit of having a “progressive risk group” at Schroders that is commercially minded. Their initial conversations in late 2022 focused on enabling the firm to leverage the significant opportunities of GenAI while staying within regulatory, legal, and compliance boundaries. This led to early discussions around access policies, usage policies, and ethical AI approaches. Ovenden noted that, unlike some larger US banks that initially blocked GenAI technology, Schroders gained a “good head start” due to their principled approach.

He further explained that Schroders applies the same principles to GenAI as to other AI/LLM deployments: “You are responsible for it. You have to understand exactly what it’s doing, understand the inputs, you understand the outputs.” Staff at Schroders favor a “glass box rather than black box” approach, ensuring transparency. For example, in regulated scenarios, they adopt a “breach by design” mentality, anticipating potential issues and implementing controls for information security, decision-making, and ethical security. Crucially, they break up tasks for agents, ensuring human oversight for sensitive actions like sending emails.
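
The human-oversight gate Ovenden describes can be pictured as a thin approval layer sitting between an agent and its sensitive tools. The sketch below is purely illustrative, not Schroders’ implementation; the action names, the AgentAction structure, and the console reviewer are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical action types an agent might request; names are illustrative.
SENSITIVE_ACTIONS = {"send_email", "execute_trade", "share_client_data"}

@dataclass
class AgentAction:
    name: str          # e.g. "send_email"
    payload: dict      # arguments the agent wants to act with
    rationale: str     # the agent's stated reason, kept for audit

def run_action(action: AgentAction, approve: Callable[[AgentAction], bool]) -> str:
    """Execute an agent action, routing sensitive ones through a human reviewer."""
    if action.name in SENSITIVE_ACTIONS:
        if not approve(action):                 # human-in-the-loop gate
            return f"BLOCKED: {action.name} requires human approval"
    # Non-sensitive (or approved) actions proceed automatically.
    return f"EXECUTED: {action.name} with {action.payload}"

def console_reviewer(action: AgentAction) -> bool:
    """Toy reviewer; in practice this might surface the request in a ticket queue."""
    print(f"Agent requests '{action.name}': {action.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    draft = AgentAction("send_email",
                        {"to": "client@example.com", "subject": "Portfolio update"},
                        "Quarterly summary ready for distribution")
    print(run_action(draft, console_reviewer))
```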

When asked about upskilling legal, risk, and compliance colleagues, Ovenden revealed that these groups have become “heavy users of GenAI themselves,” with risk teams being among the first to adopt Copilot and ChatGPT licenses for tasks like horizon scanning regulations. This hands-on experience has significantly aided their understanding of the technology’s capabilities and limitations.

Collier echoed the importance of generating excitement and demonstrating tangible benefits to business leaders. He stressed the critical role of data security, both internal and external, and the need to implement controls early on to prevent inadvertent data leakage.

He also highlighted that the complexity of financial businesses and the abundance of data mean it’s no longer possible to pre-validate every decision. While AI amplifies this, it doesn’t fundamentally change it. Man Group focuses on logging diagnostics from AI systems to ensure accurate operation and emphasizes that many outputs, such as code, can be audited. The ultimate goal is to avoid situations where decisions cannot be explained, striving against the “AI told us to do it” scenario.
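
The diagnostic logging Collier points to is, at its simplest, a structured audit trail written alongside every AI-assisted decision so that “why did we do this?” always has an answer. The following sketch assumes a flat JSON-lines log and hypothetical field names; it is not a description of Man Group’s systems.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"  # illustrative path and format

def log_decision(model_id: str, prompt: str, output: str, metadata: dict) -> None:
    """Append a structured, tamper-evident record of an AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt": prompt,
        "output": output,
        "metadata": metadata,        # e.g. desk, user, reviewing analyst
    }
    # Hash the record so later tampering with the log line is detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Usage: every model call that feeds a decision writes one line, so no outcome
# has to be defended with "the AI told us to do it".
log_decision(
    model_id="example-llm-v1",
    prompt="Summarise counterparty exposure report Q2",
    output="Exposure concentrated in ...",
    metadata={"desk": "credit", "reviewer": "analyst_42"},
)
```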

Agentic AI’s impact on employee and manager roles

Gomez shed light on the future direction of enterprise AI platforms, noting that model providers such as Cohere have moved up the stack to offer more integrated solutions, accelerating time to value for customers. He sees agents as a “killer application domain” for financial services, given the industry’s reliance on research. Agents can read and process information “a hundred thousand times faster than a human,” generating compelling reports, analyses, and recommendations, leading to a “hugely transformative effect.” Gomez emphasized the importance of safeguards and pointed to breakthroughs in “reasoning models,” which can provide justifications for their decisions, enhancing trust and auditability.
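
One way the justifications Gomez mentions can be made auditable is to require the model to return its recommendation and its reasons in a structured form, and to reject any response that omits them. The sketch below stubs out the model call (call_model is a hypothetical stand-in for any provider SDK) and is intended only to show the pattern.

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for a call to a reasoning-capable model; returns a canned
    response here so the sketch runs without any external service."""
    return json.dumps({
        "recommendation": "Hold position; liquidity risk elevated",
        "justification": [
            "Bid-ask spreads widened materially week-on-week",
            "Upcoming rate decision increases short-term volatility",
        ],
    })

PROMPT_TEMPLATE = (
    "Analyse the following research notes and respond ONLY as JSON with keys "
    "'recommendation' and 'justification' (a list of reasons):\n\n{notes}"
)

def recommend_with_rationale(notes: str) -> dict:
    raw = call_model(PROMPT_TEMPLATE.format(notes=notes))
    parsed = json.loads(raw)
    # Refuse outputs that arrive without an auditable justification.
    if not parsed.get("justification"):
        raise ValueError("Model response lacked a justification; rejecting")
    return parsed

result = recommend_with_rationale("...analyst notes...")
print(result["recommendation"])
for reason in result["justification"]:
    print(" -", reason)
```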

The discussion then turned to the highly anticipated concept of Agentic AI. Ovenden envisions Agentic AI reinventing “many, if not most, if not all of the processes, certainly in the middle and back office.” He sees human roles shifting to oversight, with agents handling the vast majority of value streams and workflows. In the front office, analysts and investors will use agents to gather information more broadly and quickly for faster, more informed decision-making.

Gomez offered a bold vision: “It’s becoming clear that AI agents are going to be employees inside of Enterprises more and more.” He predicts they will gain access to the same systems, data, and tasks as humans, with the added capability of learning from experience within the next 18 months. This means agents will evolve from an “intern” level, with general knowledge, to becoming increasingly competent, remembering past interactions and feedback. Ultimately, they will grow from entry-level to senior roles, even becoming “as capable as anyone in the organization,” serving as an “extremely scalable talent base.”

Collier reinforced this, suggesting organizations prepare for “everybody to be a manager,” not of people, but of agents. He highlighted the “massive disruption” agentic AI will bring, particularly to the highest value-add parts of organizations, like the core investment process.

Real-world applications and ethical dilemmas

An audience member raised a question about agentic AI in financial advice. Ovenden stated that while they are “not keen to put agents in front of clients,” they see significant potential in supporting advisers to provide more effective and timely advice, manage client trusts, and focus on high-value activities. Gomez agreed, seeing agents as an “augmentation to the existing employee base as opposed to a replacement.” He pointed to customer service as a lower-risk area for full automation in large banks, while in wealth management, agents can dramatically improve the “velocity and quality of recommendations and decisions” by processing vast amounts of data quickly during fast-moving market events.

Gomez acknowledged the challenges but maintained that customers often prefer interacting with an agent over waiting for a human. He stressed that “humans are the market,” and the technology’s adoption depends on it being ethical and not leading to large-scale job displacement. He dismissed fears of “Terminator Skynet scenarios” as less immediate, advocating for a focus on current practical concerns and regulatory issues.

Measuring ROI and future trends

The perennial challenge of measuring ROI for AI initiatives was also discussed. Ovenden admitted it’s “very hard,” as early investments in GenAI were often exploratory, and direct benefits were difficult to quantify. The initial focus on “how many people are you going to be able to get rid of” proved unworkable. However, the narrative has shifted as GenAI has become recognized as a game-changing technology. The acceptance that it drives “competition and productivity” has made it easier to justify investments, viewing it as a “baseline capability” that offers significant productivity gains.

Collier echoed the difficulty of measuring ROI, especially for broad deployments of ChatGPT-style tools. He believes future efforts, focused on “reimagining workflows” within specific business units, will make ROI more evident.

Looking ahead, Gomez highlighted “learning from experience” as a crucial future capability for models, moving beyond their current fixed state after initial training. He anticipates composable AI, with agentic architectures allowing specialist agents with memory to work together to achieve complex tasks. Collier shared this excitement, predicting “massive disruption” and “fantastic results” from the ability of AI to reason and generate hypotheses.
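
The composable, memory-carrying agents Gomez describes can be sketched as specialist workers that each keep their own running context and are chained by a simple orchestrator. Everything below (the SpecialistAgent class, the toy skills, the naive transcript-style memory) is hypothetical and serves only to illustrate the idea.

```python
from typing import Callable

class SpecialistAgent:
    """A toy specialist agent with its own running memory of past interactions."""
    def __init__(self, name: str, skill: Callable[[str], str]):
        self.name = name
        self.skill = skill
        self.memory: list[str] = []              # naive memory: a transcript of work done

    def handle(self, task: str) -> str:
        context = " | ".join(self.memory[-3:])   # recall the last few items
        result = self.skill(f"[context: {context}] {task}")
        self.memory.append(f"{task} -> {result}")
        return result

# Illustrative skills; in practice each would wrap a model or a data pipeline.
research = SpecialistAgent("research", lambda t: f"summary of sources for: {t}")
risk     = SpecialistAgent("risk",     lambda t: f"risk flags for: {t}")

def run_workflow(question: str) -> str:
    """Compose specialists: research feeds risk, and both retain what they learned."""
    findings = research.handle(question)
    assessment = risk.handle(findings)
    return f"{findings}\n{assessment}"

print(run_workflow("exposure to European banking sector"))
print(run_workflow("follow-up: concentration in covered bonds"))  # benefits from memory
```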

The panel concluded with a resounding agreement on the transformative potential of AI in financial services, tempered with a realistic understanding of the ongoing challenges in governance, ethics, and practical implementation. The clear takeaway was that while the technology is advancing at an unprecedented pace, human oversight, strategic deployment, and a commitment to responsible innovation remain paramount.