This session at the premium surveillance event for finance featured expert practitioner comment from: Simon Friend, Head of Surveillance Europe & APAC, RBC; John Lydon, Head of Surveillance, TP ICAP; Miki Mellegard, MD Compliance Surveillance, Deutsche Bank; Hammad Hanif, Director, Trade Surveillance, Lloyds.
The audience was asked what their organization's top priority in trade surveillance was.
- Ensure compliance in the wake of recent regulatory enforcement – 37%.
- Adapt our trade surveillance to cope with new demands from the business – 0%.
- Increase our effectiveness in detecting market abuse – 42%.
- Increase the efficiency of our trade surveillance process (reduce FPs, length of investigations etc) – 21%.
One of the panellists started by stating that there was still a hangover from MarketWatch 68, when the major OCC enforcement against JPM surfaced. This has accelerated governance around trade surveillance and created a new evidence requirement covering the many venues most global firms use to trade.
Risk governance frameworks are now needed around venues to ensure data completeness. Pre-trade data is the weak link here. A new sub-stream has been created and this is not a small ask on top of the team’s BAU.
Another panellist was less depressed about the collateral damage from the significant venue data enforcement, and felt that it had been a welcome instigator of required action in the front office, raising awareness and prompting a global review to which everyone was now paying attention.
As in almost every session at this event, the moderator turned to consideration of whether AI models were being used effectively now, and how real they were. One of the panellists said that his firm was not yet using AI in trade surveillance but was using it in comms surveillance to resolve false positives. Voice was also something being explored to identify internal speakers across a trading desk to cover market blasts and ‘group’ speech.
The use of voice and eComms has been driven by the ubiquity of the smartphone. The tech change has not left trade surveillance behind, and advances have been made to allow complex detection across products with new, sophisticated models. Arguably, eComms and voice have progressed faster away from a rules-based approach than trade.
This led to a review of modern models, where rules-based detection is still prevalent. In order to understand AI better, the panel agreed, we need to focus on what constitutes the model and what it does. Its output can be examined by subject matter experts to determine whether market abuse has potentially occurred. Smart people still need to be involved in that process.
Market abuse is very hard to identify definitively, especially in newer venues where matching and booking is different. It calls for a different lens, and this issue was covered in MarketWatch 68. A good example was gilt manipulation around QE2. There is a need to understand, under any market abuse risk assessment, the types of abuse called out and how close these detected instances come to those.
The debate moved on to pre-trade and cross-product data, which has perennially been a struggle from order-capture and OTC-opaqueness perspectives. One panellist said that his bank had invested in solving this issue a few years ago and had made progress. As always, the challenge is getting the right data and governing it properly. The data has to be accurate, complete and up-to-date. It is pointless deploying AI if that data is wrong.
Another panellist said that OTC data has this mythical impenetrability, but questioned if perhaps this was just laziness, especially for voice data. He asserted that it can be very expensive to obtain and time consuming – it has to be cleaned and structured before it is put through any AI models or engine.
There was agreement that there is an abundance of voice data as well as free-form text on the channels. This calls for the right culture in the front office to escalate appropriately into the second line, so that the right people are in the picture. The potential manipulation now identified is often so marginal ('a basis point here or there') that one panellist said he often sees traders crying foul themselves in voice calls. These instances need escalation, and everyone should learn from them.
The audience was asked another survey question: What will the most promising technology to affect trade surveillance in the next 18 months be?
- Generalized behavioral analysis and signal-driven anomaly detection – 47%.
- Specific trader-based behavioral analytics and profiling – 37%.
- Improved workflow solutions around investigations – 25%.
- Integrated solutions for voice eComms and trade – 48%.
- Regulatory data solutions – 6%.
Trader profiling was discussed. It seems that in the US there is a very divided view on this approach – half love it and the other half scream in opposition to using it.
The panel said that trader profiling and risk ranking the traders can add context to alerts. It also helps to start to normalise their behavior and the way they trade so that future anomalies are easier to identify and then investigate. Classic areas to check regularly were: use of dormant (parking) accounts; high volume cancel/corrects; card swipes for unusual entries out of usual office hours; changes in product mix; big P&L swings. All of this can help to initiate leads that should be followed.
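The risk-ranking idea the panel described can be illustrated with a minimal sketch. All field names, weights and thresholds below are hypothetical assumptions for illustration only, not a description of any firm's actual profiling model; the signals scored are the ones the panel listed.

```python
# Hypothetical trader risk-ranking sketch. Field names, weights and
# thresholds are illustrative assumptions, not a real model.

def risk_score(trader):
    """Sum weighted flags for the classic signals the panel named:
    dormant (parking) account use, high cancel/correct volume,
    out-of-hours card swipes, product-mix changes and big P&L swings."""
    score = 0
    if trader.get("dormant_account_use"):            # parking accounts
        score += 3
    if trader.get("cancel_correct_rate", 0) > 0.25:  # high amend/cancel volume
        score += 2
    if trader.get("out_of_hours_swipes", 0) > 2:     # unusual office entries
        score += 1
    if trader.get("product_mix_shift"):              # trading unfamiliar products
        score += 2
    if abs(trader.get("pnl_zscore", 0)) > 3:         # P&L swing beyond 3 sigma
        score += 3
    return score

def rank_traders(traders):
    """Sort traders by descending score, to add context when triaging alerts."""
    return sorted(traders, key=risk_score, reverse=True)

# Usage: two hypothetical traders; the flagged one ranks first.
trader_a = {"id": "A", "dormant_account_use": True, "pnl_zscore": 4.1}
trader_b = {"id": "B", "cancel_correct_rate": 0.30}
ranked = rank_traders([trader_b, trader_a])
```

Ranking rather than hard-blocking keeps the output as context for investigators, in line with the panel's point that profiling helps normalise behaviour so future anomalies are easier to spot.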
The panel's final wish was to be able to perform all of this work in an integrated platform, rather than creating a virtual patchwork of data and signals that needs manual entry into case management. The conclusion was that, finally, the technology might be there.
Due to Chatham House restrictions, this summary does not attribute any comments to the individuals. It is also not a full transcription of the session, but contains the sense of it as interpreted and reported by the GRIP subject matter expert who attended.