How the new age of AI can guard global markets

We need to understand the creative potential of AI if we are to feel the benefits.

The exponential development of AI capabilities and their increasing availability have profoundly altered the ways in which global markets can be used and misused. No longer limited to analytics, AI now creates, whether through generative algorithms, autonomous agents or deep-learning strategies, and it has unleashed immense power with minimal latency. As machine learning transitions to deep learning and reinforcement learning, computers can independently explore, exploit and manipulate both efficient and inefficient financial markets.

This is no longer theoretical. AI-powered trading tools are already being deployed in both mainstream and fringe financial environments. They may make markets more efficient, but also more open to abuse.

As an example, an IOSCO report published in March 2025, Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges, notes that: “Preliminary research has shown that, even when unintended, multiple black box models will eventually learn to engage in collusive behavior to maximize their profits.”

This is just one of several risk types on the increase across global markets because of the speed and power of AI. And global markets, in this context, encompass not only equities, fixed income, foreign exchange and commodities, but also globally traded funds, crypto assets and alternative traded products.


We all benefit from clean, trusted markets. Systemic abuse, as seen in the 2008 global financial crisis and the LIBOR manipulation scandal, damages societies, economies and trust. Market operators have an intrinsic interest in preserving integrity: without trust, there is no market.

However, bad actors persist, whether firms or individuals, seeking short-term gain at collective cost. To counter this, society has leaned on regulators. In the past two decades, global authorities have layered extensive regulatory frameworks atop financial systems: MAR, MiFID II and EMIR in Europe, Dodd-Frank in the US and MAS regulations in Singapore all demand costly surveillance investments and intensive reporting. In parallel, regulators have sought to improve cross-regional harmonization of rules and supervisory coordination.

Yet that ambition has been disrupted. As the geopolitical environment grows more fragmented, national governments are prioritizing local sovereignty and economic competitiveness over harmonization.

Brexit

This shift is already visible in practice. Take Brexit, for example: following the UK’s separation from the EU, European authorities lost automatic access to transaction reports from UK-regulated firms. EU regulators now rely on the voluntary sharing of suspicious activity reports from the UK’s FCA and UK-based firms, a significant degradation from pre-Brexit conditions.

The rise of AI, however, necessitates a reassessment of this trend. AI enables faster and more complex forms of cross-border market abuse, with a high likelihood of outpacing and outwitting conventional detection systems. Addressing these risks will require greater global coordination than we have previously achieved.

Two futures are plausible. The first is re-harmonization: a renewed drive for global regulatory convergence, driven by a recognition that fragmented oversight cannot contain AI-driven markets. The second is a move toward greater self-regulation: market players, recognizing the threat to their own viability, step in to fill the regulatory void, collaborating to build common standards and cooperative monitoring mechanisms.

In fact, a hybrid scenario is most likely. But AI introduces a new dimension of risk, one that is fast-evolving, opaque and global, and that demands a globally connected supervisory framework. As Ashley Alder, chair of the FCA, noted in a 2024 speech at the UK Mission to the EU: “Co-operation around international standards and cross-border collaboration is closely tied to efficient capital formation.”

Future of surveillance

Whether under harmonized or fragmented regulation, surveillance must evolve. Two pivotal trends are reshaping surveillance systems: AI-powered surveillance; and predictive surveillance and behavioral risk scoring.

AI-powered surveillance

Firms must use AI to counteract AI. Rules-based monitoring systems, built for the markets of past decades, will become increasingly inadequate. While the data analytics tools of the recent past can still provide value if deployed effectively, they lack the adaptive and anticipatory power of the AI now being developed.

Global regulators are not blind to this shift. In the US, the SEC is investing in its own AI tools for detecting insider trading and market manipulation. In the UK, the FCA has enabled exploration of AI solutions by regulated firms, alongside the drive for appropriate governance and explainability.

Moreover, AI is not just a better detector; it promises to improve efficiency. Though these efforts are still nascent, firms are starting to explore how to use AI to automate alert disposition, triage suspicious activity and manage case records. Deployed effectively, AI-enabled compliance systems can substantially reduce false positives and cut surveillance costs.
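To make the triage idea concrete, here is a deliberately minimal sketch of how a model trained on historical alert dispositions might rank new alerts for analyst review. Everything in it is hypothetical: the features, the synthetic data and the scikit-learn model are illustrative assumptions, not a description of any firm’s deployed system.

```python
# Illustrative sketch only: a toy alert-triage model on synthetic data.
# Feature names, distributions and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical alert features: order-to-trade ratio, price impact (bps),
# minutes to the nearest news event, and the trader's prior alert count.
X = np.column_stack([
    rng.lognormal(1.0, 0.8, n),   # order_to_trade_ratio
    rng.normal(0.0, 5.0, n),      # price_impact_bps
    rng.exponential(120.0, n),    # minutes_to_news_event
    rng.poisson(0.3, n),          # prior_alert_count
])

# Synthetic ground truth: most alerts are false positives.
risk = 0.3 * X[:, 0] + 0.2 * np.abs(X[:, 1]) - 0.01 * X[:, 2] + X[:, 3]
y = (risk + rng.normal(0.0, 1.5, n) > np.quantile(risk, 0.95)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank the day's alerts so analysts review the riskiest first; low-scoring
# alerts would be sampled for quality assurance rather than ignored.
scores = model.predict_proba(X_test)[:, 1]
review_order = scores.argsort()[::-1]
print(f"Highest-priority alert score: {scores[review_order[0]]:.2f}")
```

The point is not the particular model but the workflow: dispositions made by human reviewers become training data, and the model’s output reorders the review queue rather than closing alerts on its own, keeping a human in the loop.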

Predictive surveillance and behavioral risk scoring

Beyond detection, surveillance may now shift toward prevention. Behavioral analytics, long discussed but rarely implemented by most institutions, may now become viable. With advances in natural language processing and real-time sentiment analysis, it is conceivable that controls will flag misconduct risks before they manifest. For example, combining financial data with communication patterns could highlight potential rogue traders or high-risk counterparties before they act.
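As an illustration only, the sketch below shows one way such a composite score might be assembled, blending a trading-anomaly signal with a communications-sentiment signal. The signal names, weights and escalation threshold are hypothetical assumptions for the example, not any institution’s actual methodology.

```python
# Illustrative sketch only: a toy composite behavioral risk score.
# Signal names, weights and the threshold are invented for the example.
from dataclasses import dataclass

@dataclass
class TraderSignals:
    trade_anomaly: float     # 0..1, e.g. from an anomaly model on order flow
    comms_negativity: float  # 0..1, e.g. from NLP sentiment on chat and email
    policy_breaches: int     # count of recent conduct-policy flags

def behavioral_risk_score(s: TraderSignals) -> float:
    """Blend the signals into a 0..1 score; the weights are illustrative."""
    breach_component = min(s.policy_breaches / 5.0, 1.0)
    return 0.5 * s.trade_anomaly + 0.3 * s.comms_negativity + 0.2 * breach_component

trader = TraderSignals(trade_anomaly=0.9, comms_negativity=0.6, policy_breaches=2)
score = behavioral_risk_score(trader)
if score > 0.7:  # threshold would be calibrated and governed in practice
    print(f"Escalate for human review: score={score:.2f}")
else:
    print(f"Routine monitoring: score={score:.2f}")
```

Even in this toy form, the design choice matters: the score triggers a human review, not an automatic sanction, which is where the ethical questions raised below begin.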

Large language models (LLMs) can transcribe and translate communications in real time, potentially surfacing misconduct cues before wrongdoing occurs. As Neta Meidav, co-founder and CEO of Vault Platform, notes: “Proactive is the name of the game with this technology. It helps companies identify repeated patterns of abuse and intervene before they develop into bigger problems.”

However, this raises serious ethical and legal questions. Where is the line between prevention and surveillance overreach? Do firms have the right to profile employees or clients based on AI predictions?

Despite all this promise, major hurdles remain. For a start, AI is only as good as the data it ingests. Many institutions operate legacy systems with poor integration and inconsistent, low-quality data. Smaller tech-native firms (for example, in crypto) are better placed to build clean data models.

Society must also decide how far to go in profiling behaviors and predicting misconduct. Striking the right balance between clean markets and privacy is a cultural and legal challenge. And, for the time being, AI still hallucinates, fails unpredictably and is difficult to audit. In critical systems like surveillance, such flaws are unacceptable and, for now, human oversight remains essential.

Many AI models are currently black box solutions, yet stakeholders such as regulators require explainability to hold firms accountable. Without interpretability, adoption will be limited. There are also obstacles around skills and resistance to change: talent, budget, time and inertia all slow progress. Firms must invest in new skill sets, retool operating models and overcome cultural resistance.

Where next? A call to action

We stand at a crossroads. Society-changing AI is not coming; it is here, reshaping the risks that global markets face. Meanwhile, the regulatory frameworks that protect those markets are strained by nationalism, fragmentation and lagging technology.

To respond appropriately, market actors and regulators alike must act now.

  • Implementation of dynamic risk assessments: firms will need to move away from static, annual reviews to ongoing assessments that better capture fast-evolving risks to market integrity.
  • AI integration across other oversight controls: surveillance is just one of many control mechanisms to prevent, detect and deter misconduct. Firms should extend AI capability into management and supervisory dashboards, board reporting and enterprise risk systems.
  • Construction of governance and operating models fit for AI-enabled controls: institutions should proactively address key roadblocks to AI adoption in control functions, including model explainability, ethical oversight, data quality and skills development. The design of future-ready operating models will help firms to fully leverage the increased effectiveness and efficiency that AI can bring to surveillance and other controls.

Finally, while regulators are making strong efforts to coordinate supervision, they must recommit to a strengthened framework that enables harmonized regulation and cross-border collaboration. In a world of borderless technology, only international cooperation can preserve market integrity. The tools may be new, but the imperatives are timeless: trust, transparency and fairness in the markets that underpin our economies.

• This article first appeared in the Summer 2025 issue of GRIP magazine.

Munib Ali is a partner at AlixPartners, where he leads the regulatory compliance practice, helping firms navigate complex regulatory and governance challenges.