AI risks are prompting calls for new regulatory agency oversight

Senators introduce a bill to create an agency to regulate AI, and the chief of ChatGPT maker OpenAI proposes a licensing framework.

US Senator Michael Bennet (D-Colo) and Senator Peter Welch (D-Vt) introduced the Digital Platform Commission Act (DPCA) last week. It’s the first-ever legislation in Congress to create a federal agency to provide comprehensive regulation of digital platforms to protect consumers and promote competition, with the specific task of creating a code of conduct to govern such platforms.

Amid calls for regulation of artificial intelligence (AI) and social media, the bill updates one Bennet introduced last year to cover AI products more specifically, amending the definition of a digital platform to include companies that offer “content primarily generated by algorithmic processes”.

“We are way behind the curve, but that’s often where we reside,” said Bennet. “Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest.”

The DPCA

Under the bill, the commission would have broad oversight authority over social media sites, search engines, and other online platforms. To make clear that it would have jurisdiction over generative AI, the technology behind popular tools such as OpenAI’s viral chatbot, ChatGPT, the bill notes the commission would oversee the use of personal data to generate content or make decisions.

As outlined by the senators, the Digital Platform Commission Act would:

  • establish a five-member federal commission empowered to hold hearings, pursue investigations, conduct research, assess fines, and engage in public rulemaking to establish rules of the road for digital platforms to promote competition and protect consumers, for example, from addicting design features or harmful algorithmic processes;
  • empower the Commission to designate “systemically important digital platforms” subject to extra oversight, reporting, and regulation, including requirements for algorithmic accountability, audits, and explainability;
  • create a Code Council of technical experts and representatives from industry and civil society to offer specific technical standards, behavioral codes, and other policies to the Commission for consideration, such as transparency standards for algorithmic processes;
  • direct the Commission to support and coordinate with existing antitrust and consumer protection federal bodies to ensure efficient and effective use of federal resources.

Sam Altman likes it

OpenAI CEO Sam Altman last Tuesday appeared before a Senate panel on Capitol Hill, urging lawmakers to regulate artificial intelligence and describing the technology’s current boom as a potential “printing press moment” but one that requires safeguards. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his opening remarks before a Senate Judiciary subcommittee.

His company’s chatbot tool, ChatGPT, has prompted concerns from some lawmakers about the risks posed by the technology.

In his remarks, Altman said the potential for AI to be used to manipulate voters and to target disinformation is among “my areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better”.

One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday. This “combination of licensing and testing requirements,” Altman said, could be applied to the “development and release of AI models above a threshold of capabilities”.

Licensing process

The DPCA does not include a licensing process or requirement, although the proposed commission would design rules to oversee the industry, so licensing could be one mechanism it adopts.

Altman said that AI systems “above a certain scale of capabilities” – such as the ability to manipulate a person’s behavior – should be licensed, with safety testing standards established before systems are released and licenses subject to withdrawal for noncompliance.

Altman also urged the government to advocate that other countries take similar steps, citing the example of the International Atomic Energy Agency, a global body that sets standards to promote nuclear safety.

Other federal initiatives

On Tuesday, the White House Office of Science and Technology Policy (OSTP) took another step toward regulating new artificial intelligence tools, asking for public input as it seeks to develop a national AI strategy to guard against misinformation and other potential risks associated with the technology. 

The Request for Information: National Priorities for Artificial Intelligence asks for input on issues such as “standards, regulations, investments, and improved trust and safety practices” that will be used by OSTP to create a strategy that will help guide federal agencies as adoption of the technology spreads, the White House said. 

This follows the Biden Administration’s Blueprint for an AI Bill of Rights, issued last October, which sets out five principles the OSTP has identified to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.

The principles are meant to serve as guidance for any organizations and policymakers seeking to incorporate the protections into policy and practice, the Blueprint document says.

The principles are:

  • creating automated systems in consultation with diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impact;
  • taking proactive steps and continuous measures to protect individuals and communities from algorithmic discrimination and to ensure systems are designed and used in equitable ways;
  • building in practices to protect consumers from abusive data practices and to empower people to have agency over how data about them is used;
  • giving people information about how an automated system is being used and why, in a timely way, so they know how and why an outcome affecting them was determined by an automated system;
  • giving people the right to opt out and access to a human alternative to help remedy problems, where appropriate.

The US approach so far

The federal government’s approach to regulating AI and AI risk specifically has been quite distributed across its patchwork of agencies. The Federal Trade Commission (FTC) can use its authority to protect against “unfair and deceptive” practices to enforce truth in advertising and some data privacy guarantees in AI systems.

The Consumer Financial Protection Bureau (CFPB) requires explanations for credit denials from AI systems.

And the Equal Employment Opportunity Commission (EEOC) has the authority to require a non-AI alternative for people with disabilities and enforce non-discrimination in AI hiring.

In late April, the FTC, CFPB, EEOC and the Department of Justice released a joint statement, outlining a commitment to enforce their respective laws and regulations to promote responsible innovation in automated systems.

In January, the National Institute of Standards and Technology (NIST) released the final version of its AI Risk Management Framework, which offers comprehensive guidance on when and how risk can be managed throughout the AI lifecycle.

Lawmakers in several states, such as California and Connecticut, have introduced legislation to address potential AI harms, although state-level laws could be preempted by federal legislation tackling data privacy and the use of AI, if such legislation passes.