The emergence, understanding, and application of AI-driven techniques – and their effectiveness and feasibility in dealing with market abuse and market manipulation – was the topic of a panel discussion at the latest Market Abuse Summit in London.
As in previous years, the event was organized by City & Financial Global. The discussion was moderated by Munib Ali of AlixPartners, which was also among the event's sponsors. Speakers included Paul Clulow-Phillips, Managing Director, Global Head of Market Abuse Surveillance, BNP Paribas; Jerome Lambert, EMEA Solution Director, Financial Markets Compliance, NICE Actimize; and Conor O’Donoghue, Head of Surveillance, MarketAxess.
The discussion opened with a general question about the significance of firms and regulators now exploring the use of AI for surveillance purposes.
Clulow-Phillips kicked things off by saying: “AI is the way forward in banking. In the long term the technology will be a key tool across all surveillance-related operations.” He noted, however, that the technology was still making its way into trade surveillance, where traditional techniques remained more commonly used. In communications surveillance, by contrast, AI had already arrived and its use was on the rise.
The downside is that AI is an expensive capability to maintain. It requires significant resources and training, and that’s where things can get tricky when firms need to prioritise certain costs. “For trade surveillance we need to ask what’s the right time and the right use of the technology,” he said.
O’Donoghue agreed that communications was ahead of trade when it came to the use of AI for surveillance purposes. “AI is better at detecting abuse that you cannot detect with traditional surveillance. It tells you the gray areas. It tells you the severity of risk, because you don’t just get an alert. You also get a level or category of the risk posed,” he added.
Traditional limitations and AI
The discussion then moved on to the question of whether AI and other advanced technologies could address the limitations of traditional surveillance techniques.
O’Donoghue replied positively, explaining that AI was especially useful in markets dealing with large amounts of data. “If there is no data, there is no use for AI. You cannot learn from it without data.”
Lambert also highlighted the importance, volume and availability of data in enabling AI to do the job firms and regulators want it to do. “The way we see AI is dealing with a lot of data. We believe AI can be used in two areas, the efficiency and effectiveness of your actions,” he added. “We are implementing AI on a use-case basis. For example to detect new market manipulation, spot abnormal behavior, detect soft signals within trading activity, for example in insider dealing.”
“Do you have data scientists and experts in your team? Those experts usually go to work for Google or Amazon. That’s a barrier for banks.”
Paul Clulow-Phillips, Managing Director, Global Head of Market Abuse Surveillance, BNP Paribas
Lambert went on to explain that, with thousands of alerts coming through on a daily basis, firms have to start looking at and deciding what makes sense and what doesn’t.
“AI can help you there because it remembers previous cases and gives you context when you use the technology for surveillance operations,” he concluded.
In terms of use, panelists agreed that, at present, AI was predominantly used for voice surveillance, profile information, spotting anomalies in the different features of a profile, and so on.
At operational level, the experts said AI was used for dispositioning, dealing with false positives, predictions, triage activity, and so on. All of this, they agreed, allowed firms more time to carry out a proper investigation into market abuse and manipulation.
AI for comms surveillance
The next topic was the importance and use of AI for communications surveillance specifically, especially for analyzing voice communications.
Clulow-Phillips started by saying that, although there was more to do, AI and large language model (LLM) technology were already being applied to large volumes of voice data and to translations. He added that AI gave firms and regulators context by making so much information accessible to firms, industries and the public. “It has really moved fast in comms,” he said.
The panel again agreed that, when it comes to trade surveillance, banks and regulators still relied on traditional techniques, but were starting to look at AI as a concept. They were exploring its use across data feeds, as data is fundamental to AI. Banks have identified gaps in pre-trade data and are doing a lot of work to centralize and normalize their data.
Roadblocks
The panel members were then each asked to mention what they thought were the top three barriers to greater use of AI to detect market abuse and market manipulation.
O’Donoghue mentioned the black box effect, or not knowing what’s inside your AI technology. “It’s a problem when you don’t understand when it’s not working. For example when you can’t tell the FCA how your AI solutions work,” he said.
He also mentioned people’s resistance to the adoption of AI (internal resistance), and the lack of skills firms face when adopting AI and other advanced technologies.
“Using more AI means firms will need more people who understand the technology better. You have to change with data or there will be gaps,” he said.
Clulow-Phillips highlighted AI-related costs as a key barrier, and agreed that a lack of expertise was another challenge faced by firms and regulators. He also noted that AI meant dealing with huge amounts of data across jurisdictions, which raised challenges around data privacy and security.
“AI is better at detecting abuse that you cannot detect with traditional surveillance. It tells you the gray areas.”
Conor O’Donoghue, Head of Surveillance, MarketAxess
“Do you have data scientists and experts in your team? Those experts usually go to work for Google or Amazon. That’s a barrier for banks,” he said. He also referred to time constraints, insisting that building a model internally meant training it internally, which takes time.
Lambert explained that AI adoption relied on evidencing good outcomes, showing value, and justifying the use of technology internally.
“We have to prove internally that AI will, for example, reduce false positives, that searches can be faster, results will come sooner, positive hits will be higher and so on. Each and every feature of the technology needs evidence to prove improvement.”
Expectations
The final part of the discussion focused on what firms and regulators can expect from AI in the near future with regard to their surveillance operations and the technology’s ability to detect and prevent misconduct.
Clulow-Phillips insisted that AI was now key for comms surveillance, and said he would like to see the technology eventually replace rule-based models entirely. But he accepted that we were still some way off that goal.
“In the short term we can run AI over the top of rule-based models until we can get to a position where we can explain technology better and use it better,” he added.
O’Donoghue said firms would move towards greater use of AI as vendors could offer a wide range of solutions for firms to choose from. He insisted that, in an ideal world, a vendor would give firms all those solutions in one shiny technology.
Panel members also mentioned the FCA and the Singaporean regulator as two of the world’s most forward-looking regulators when it came to AI adoption and use.
Part of that is down to their understanding of the technology, which is improving constantly, the panelists said.