NCSC assesses impact of AI on cyber threat to UK until 2027

The emergence of AI means malicious actors can identify and exploit vulnerabilities far faster, experts have warned UK organizations.

The UK’s critical systems are at increased risk from a “digital divide” created by AI threats, the National Cyber Security Centre (NCSC) has warned.

The agency’s latest report on the correlation between AI and cyber threats also warns that organisations unable to defend against AI-enabled threats are exposed to greater cyber risk. The report was published on the sidelines of the CyberUK conference in Manchester, where experts and officials discussed cyber resilience and countering cyber threats.

“Artificial intelligence (AI) will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats,” the report says.

The warning comes as a number of UK retailers, including high street giants Marks & Spencer and Co-op, have fallen victim to a series of cyberattacks.

The NCSC said it is working with the affected organizations to understand the nature of the attacks, but is unable, at this stage, to conclude whether they are linked or whether a single actor was behind them.

There is no suggestion from official agencies that AI was involved in the latest attacks on UK retailers, but there is no outright rejection of its potential involvement either.


The NCSC’s advice to organizations is simple: keep pace with frontier AI capabilities to remain cyber-resilient, or risk exposure to greater threats over the next two years and beyond.

Double-edged sword

AI capabilities will continue to develop, as the technology becomes a key component of both national and private infrastructures across the globe. The benefits are obvious, which is why governments and organizations continue to make huge investments to develop or acquire the technology. AI adoption across finance, healthcare, research and academia, and other sectors is on the rise.

But the pace at which AI is spreading also makes it a potent weapon for malicious actors, whether individuals, organized crime groups (OCGs) or state-backed actors.

Within that context, the report warns: “Proliferation of AI-enabled cyber tools will highly likely expand access to AI-enabled intrusion capability to an expanded range of state and non-state actors.”

Experts also believe adversaries will have a greater attack surface as AI makes its way into models and systems across the UK’s technology base, and that even critical national infrastructure (CNI) is not entirely immune to AI-enabled cyber risk.

A day before the publication of the NCSC report, the US Cybersecurity and Infrastructure Security Agency (CISA) issued an alert warning that malicious actors were attempting to exploit the country’s operational technology (OT) and industrial control system (ICS) devices.

The agency warned that “exposed and vulnerable OT/ICS systems may allow cyber threat adversaries to use default credentials, conduct brute force attacks, or use other unsophisticated methods to access these devices and cause harm.”

CISA has previously warned that AI is software like any other: it has its own vulnerabilities and should therefore be secure by design.