A guest editorial by Donald McElligott, vice president, compliance supervision at Global Relay

In recent years, artificial intelligence has advanced rapidly, reshaping the way employees work across a wide range of industries. Yet the financial services sector remains cautious on AI – to the point where reluctance may come at the cost of compliance.

Our State of AI in Surveillance Report 2025 found that just 31% of financial institutions are either using AI or planning to implement it in the next 12 months. This is striking, given the intense regulatory scrutiny the sector faces – and its heavy reliance on robust surveillance systems, an area where AI-powered tools provide clear advantages.

Delay could prove risky – and costly

In the United States, regulatory bodies have ramped up enforcement related to data management and off-channel communications. This January alone, the SEC issued more than $63 million in combined fines to 12 firms for recordkeeping failures. These breaches might have been mitigated – or even prevented – with the help of AI-powered surveillance tools.

Meanwhile, the UK’s FCA has taken a more cautious stance on off-channel communications compliance, seeking to balance risk mitigation with economic growth. Still, its recently published Five Year Strategy clearly signals the direction of travel. It highlights AI’s potential to “transform financial services” by delivering “greater efficiency” and accelerating response times – while acknowledging the technology’s corresponding potential to increase market abuse. The FCA is backing its words with action, investing in its own AI Lab to ensure safe and responsible usage across the sector.

With firms hesitant to fully embrace AI until they’ve seen it prove its value, one way to overcome that hesitancy is to demonstrate how forward-looking surveillance teams are already integrating these tools – and the tangible benefits they are seeing.

How AI is already delivering real compliance gains

Among those already using AI, our report found that the most cited reason for adoption is to reduce false positives (23%). Generative AI, working alongside other AI models, is helping compliance teams distinguish between irrelevant chatter and genuine risks in communications data by understanding the context behind communications.

Voice transcription was cited as another key use case, with 14% of firms identifying it as a top reason for implementing AI in their surveillance operations. This likely comes as little surprise to those following the regulatory signals.

In January, the FCA stressed in its ‘Dear CEO’ letter that firms are expected to maintain “effective and comprehensive” oversight to prevent harm – explicitly noting trade and communication surveillance. Meanwhile, in the U.S., the CFTC issued a $650,000 fine in late 2024 for “recordkeeping deficiencies and failure to obtain customer authorisations.”

To reduce the risk of future fines, firms must rigorously follow internal procedures across all operations. AI is a clear enabler here. Leveraging tools that accurately capture all communications, and store them securely for review, will be a game changer for firms, especially as AI becomes more sophisticated.

The explainability dilemma: are firms prioritising the wrong thing?

Concerns about AI persist, particularly surrounding data security, cost and explainability – the ability of AI systems to clearly explain the rationale behind their decisions in a way that users (and regulators) can understand.

Firms need to be assured that AI and generative models are not covertly storing or processing information when analysing data. Yet with recent leaps forward in capability producing significantly better outcomes and surpassing traditional methods of surveillance, it may be time for firms to reconsider whether prioritising explainability at the expense of accuracy is still the right approach.

Trust starts with understanding

To use AI models responsibly, firms need confidence that their tools are carefully assessed and authenticated. This means firms must have a clear understanding of how their data is processed and stored, know who owns it, and be able to demonstrate that they have the skills and knowledge to manage AI tools effectively.

This places added responsibility on AI vendors to provide more than just software. They also need to offer robust training and documentation that allow firms to use their solutions in line with regulatory standards and expectations.

A global signal: governments are embracing AI – will finance follow?

Regulatory momentum is not the only signal for businesses in the financial sector. In early 2025, the U.S. government issued an executive order rolling back specific AI policies, designed to keep the country competitive globally. In the UK, prime minister Keir Starmer recently laid out a vision for AI that includes the creation of regional AI growth zones designed to revitalise former industrial areas.

These developments underscore a growing political consensus: AI is central to national competitiveness and economic transformation. For communications surveillance, AI has proven its credentials by boosting efficiency, reliably reducing false positives and successfully identifying risks by understanding the context behind written communications.

As firms continue to weigh the benefits and drawbacks of AI for surveillance, one thing remains clear – the technology is here to stay. To stay compliant, competitive, and prepared for what’s next, firms must act now. Selecting trustworthy partners and equipping the workforce with the knowledge and skills to use AI safely will ensure that it can be used as a force for good across the financial services sector.


Guest Editorial
This article was produced specially for Fintech Intel by an expert guest contributor.