New report questions credibility of UK approach to AI regulation

Ada Lovelace Institute raises significant questions about gaps in coverage and resourcing.

The UK government’s ambitions to make the country an “AI superpower” are at risk of being undermined by its approach to regulation. What’s more, some of the policy it is currently pushing through parliament will actively erode confidence in the safety of AI.

Those are among the conclusions of a new report from the Ada Lovelace Institute, a research body within the Nuffield Foundation charitable trust that is dedicated to ensuring data and AI work for people.

Michael Birtwhistle, the Institute’s associate director, said: “The UK’s credibility on AI regulation rests on the Government’s ability to deliver a world-leading regulatory regime at home”, and the report says UK ambitions “will only materialise with effective domestic regulation, which will provide the platform for the UK’s future AI economy”.

Regulating AI in the UK

The report, Regulating AI in the UK, offers a measured assessment of the UK government’s current approach before setting out 18 recommendations for improvement. It describes the government’s approach as building “a ‘contextual, sector-based regulatory framework’, anchored in its existing, diffuse network of regulators and laws”.

This contrasts with the EU’s rules-based approach and means, in effect, that the development of UK policy will be left to existing regulators, who are expected to apply broad principles with no new resources and no new enforcement powers.

The Data Protection and Digital Information Bill (No 2), currently going through parliament, also needs to be reconsidered: the Institute’s legal analysis shows that the safeguards it proposes for AI-assisted decision-making will not properly protect people.

Sectors currently unregulated

The Ada Lovelace Institute says: “Large swathes of the UK economy are currently unregulated or only partially regulated. It is unclear who would be responsible for implementing AI principles in these contexts”. Those contexts include:

  • recruitment and employment;
  • education;
  • policing;
  • benefits administration;
  • tax fraud detection; and
  • retail.

“AI is being deployed and used in every sector but the UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” the Institute says.

The Data Protection and Digital Information Bill (No 2) has a deregulatory thrust, significantly amending current GDPR protections: for example, it removes the prohibition on many types of automated decision, replacing it with a requirement for data controllers to have “safeguards in place, such as measures to enable an individual to contest the decision”. The Institute argues that, in practice, this is a low level of protection.

18 ways to improve AI regulation

The report’s recommendations are:

  • Rethink elements of the Data Protection and Digital Information Bill (No 2) that are likely to undermine the safe deployment and use of AI.
  • Review existing rights under the GDPR and the Equality Act and, if necessary, introduce specific new rights for people and groups affected by AI.
  • Publish a clear statement of rights and protections that people can expect when interacting with AI.
  • Consider establishing an AI ombudsman.
  • Specify how the government’s five AI principles will apply in areas where there is no specific regulator.
  • Introduce a statutory duty for regulators to have regard to the five principles.
  • Consider introducing a common set of powers for regulators.
  • Clarify the law around legal and financial liability for AI risk and ensure it is evenly distributed.
  • Significantly increase funding for regulators to deal with AI-related harms.
  • Create formal channels to allow civil society groups to feed into regulatory processes.
  • Fund and support those groups to enable them to hold those who deploy and use AI to account.
  • Support the development of non-regulatory tools.
  • Allocate time and resources to enable a robust, legislative approach to foundation model governance.
  • Review opportunities for and barriers to the enforcement of existing laws.
  • Invest in pilot projects to improve the understanding of trends in AI.
  • Introduce mandatory reporting requirements for developers of foundation models operating in or selling to the UK.
  • Ensure the presence of diverse voices and an expansive definition of AI safety at the forthcoming AI Safety Summit.
  • Consider public investment in AI to steer applications towards generating public benefit.