Europe pushes on with AI regulation, US takes piecemeal approach

New research identifies risks and regulatory gaps, but business remains largely unprepared.

Stanford University researchers have found that none of the large language models (LLMs) used in AI tools comply with the EU’s Artificial Intelligence Act, which was approved in a vote in the European Parliament on June 14. The Stanford research findings are important because the Act is seen as the blueprint for global AI regulation.

The researchers measured 10 major model providers, including OpenAI, Google, Meta and Big Science, against the 12 requirements of the AI Act in areas including data sources, copyright, compute, risks and mitigations, and downstream documentation. Compliance with each requirement was scored on a 0 to 4 scale. Only one provider, Big Science, scored over 75%, and most scored less than 25%. Out of a maximum of 48 points, three providers scored fewer than 10.
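For context, those percentages follow from the scoring scheme (a back-of-envelope reconstruction, assuming the 12 requirements are each scored 0 to 4 and simply summed): the maximum is 12 × 4 = 48 points, so a 75% score corresponds to roughly 36 points and 25% to 12 points.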

Areas of non-compliance identified as giving greatest cause for concern were:

  • lack of transparency in disclosing the copyright status of training data;
  • non-disclosure of energy use and emissions; and
  • lack of detail on the methodology used to mitigate potential risks.

Open or closed models

The team also found that open model releases led to more robust disclosure of resources than closed models did, but that open releases pose greater challenges for monitoring or controlling development.

The researchers’ recommendations for improving AI regulation include holding larger foundation-model providers to account for transparency and accountability, and committing greater technical resources and talent to enforcement of the Act.

But while the study recognises that adapting their businesses to meet regulatory requirements will be a challenge for model providers, it also says many could achieve scores at least in the high 30s (out of the 48 available) without strong regulatory pressure.

EU and US

The EU is looking to take the leading role as an AI regulator, and the AI Act is a significant piece of legislation. Progress has been slower in the US, where there is a patchwork of current and proposed frameworks but no comprehensive federal legislation. However, international law firm Alston & Bird predicts that more general initiatives will emerge this year through “state data privacy law, FTC rulemaking, and new NIST AI standards”.

The firm references a number of developments.

  • The states of New York, Illinois and Maryland moving to regulate automated employment decision tools (AEDTs) to minimise or eliminate bias. In New York, AEDTs must undergo an annual bias audit, with the results made publicly available.
  • The US Equal Employment Opportunity Commission (EEOC) issuing guidance on the use of AI tools in recruitment.
  • General privacy legislation passed in California, Connecticut, Colorado and Virginia governing automated decisions that have an impact on consumers.
  • Signals from the Federal Trade Commission that it will increase its focus in this area, including an advance notice of proposed rulemaking (ANPR) aimed at automated decision-making systems, issued in August 2022.

That ANPR in particular, in the view of Alston & Bird, “marks an intentional shift toward a more holistic federal regulatory framework that addresses AI at all its phases: development, deployment, and use”. The firm also references the AI Risk Management Framework (AI RMF) issued by the National Institute of Standards and Technology (NIST).

Pressure for change is also coming through less formal, but no less important, channels. The FT is reporting that big institutional investors are increasing the pressure on tech companies to take responsibility for potential misuse of AI as concern over human rights issues mounts. A group representing 32 financial institutions with a combined $6.9trn of assets under management is pushing for a commitment to ethical AI.

But Deloitte’s 2023 CFO Signals survey found that only one in four CFOs in North America is planning for regulation of the use of AI. The 122 CFOs who responded said ESG issues and cyber risk were their most pressing concerns, with AI management and ethics at the bottom of a list of eight factors they were asked to rank. More than half said their CEOs wanted them to focus on cost reduction.