AI – Efforts on accountability, risk are brand-new and not-so-new

AI tools such as ChatGPT face growing regulatory guardrails on their use; the US Commerce Department is seeking public comment as regulators across the globe get to grips with the challenge these tools pose.

That first step toward articulating regulations asks the public what accountability measures should be adopted and whether potentially risky new AI models should go through a certification process before being released.

The Department of Commerce’s National Telecommunications and Information Administration (NTIA) is spearheading the effort, noting that AI and algorithmic systems already provide benefits but may also introduce risks.

NTIA says that “[c]ompanies have a responsibility to make sure their AI products are safe before making them available. Businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.”

NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of such audits, assessments, certifications and other mechanisms to create earned trust in AI systems – trust that they function as intended, without adverse consequences.

“Companies have a responsibility to make sure their AI products are safe before making them available.”

NTIA

NTIA is seeking input on what kinds of safety testing AI development companies and their enterprise clients should conduct, what kind of data access is necessary to conduct audits, and how regulators can incentivize and support credible assurance of AI systems, along with other forms of accountability.

NTIA points to the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights as a framework to guide the design and deployment of AI, and it mentions the National Institute of Standards and Technology’s AI Risk Management Framework, which already serves as a voluntary tool that organizations use to manage risks posed by AI systems. Comments are due 60 days from publication of the request for comment in the Federal Register. 

Main concerns

A host of new AI tools are on the market now, not just ChatGPT. They can quickly produce human-like writing, images, videos and vocal recordings in multiple languages, translate legalese, and take notes for you in a videoconference.

They are promoted as tools that drive efficiency – or “hacks” that help you work around the limitations of standard technology. And they truly can function this way, providing innovative ways to create and share content.

But they come with legitimate worries. In some public experiments, chatbots have given troubling advice to users posing as young people. And the race among OpenAI, which owns ChatGPT, competitors such as Alibaba and Baidu, which recently launched chatbots of their own, and Google, which is developing one, has policymakers worried that risks are not being properly scrutinized and mitigated along the way.

Other nations are acting

The European Commission, the EU’s executive arm, first announced a plan for an AI rulebook in April 2021, and the European Parliament hopes to finalize its AI Act text soon. Like the EU’s General Data Protection Regulation, which came into effect in 2018, the AI Act is designed to reach beyond the bloc’s borders, protecting people in the EU no matter where the companies serving them are based.

In China, regulators on Tuesday released draft rules designed to manage how companies develop AI products such as ChatGPT. The Cyberspace Administration of China outlined the ground rules that generative AI services must follow, including the type of content these products are allowed to generate.

For example, the content generated by AI must not “subvert state power,” “incite secession” or “disrupt social order,” according to the draft rules.

In late September 2021, Brazil’s Congress introduced a bill creating a legal framework for artificial intelligence, but the measure remains stalled in the Senate.

Just a couple of weeks ago, Italy became the first Western country to ban ChatGPT outright, and Germany is considering doing the same.

Even the world’s religious leaders are concerned: the Vatican’s call for an ethical approach to AI, originally published in 2020, was renewed in January 2023 with representatives of the Muslim and Jewish faiths joining as co-signatories.

We already have some guardrails

In February, the Consumer Financial Protection Bureau announced it would get tougher on AI misuse in banking. Director Rohit Chopra cautioned that AI could be abused to advance “digital redlining” and “robo discrimination.”

The Federal Trade Commission has cautioned that it will hold companies accountable for making false or unsubstantiated claims about AI products.

In March, the US Copyright Office issued guidance on AI and copyrights, saying works created with the assistance of AI may be copyrightable, provided they involve sufficient human authorship. But when an AI program produces a complex written, visual, or musical work in response to a prompt from a human, the “traditional elements of authorship” are being determined and executed by the technology and not the human, so the resulting work is not copyrightable.

And, of course, your chatbot communications pose regulatory compliance challenges for businesses subject to record-retention requirements, such as those imposed by the SEC and FINRA.