
New York bill takes aim at AI ‘phoney professionals’


If a chatbot crosses into licensed territory, New York wants liability to follow, just as it would for a human.

New York state senator Kristen Gonzalez has introduced a bill that would create liability when an AI chatbot presents itself as a licensed professional and causes harm.

The proposal, S7263, applies a familiar rule to a new setting. In New York, practicing a regulated profession without a license, or pretending to have one, is already illegal. Gonzalez argues that the same standard should be applied when a chatbot claims to be a doctor, lawyer, or any other licensed expert.

The move follows documented cases of chatbots fabricating medical credentials and giving incorrect advice. Under the bill, users could seek damages if they relied on such claims.

The legislation is narrowly drawn. It does not ban general advice or the use of AI by licensed professionals. It targets one issue: chatbots crossing the line into impersonation.

If passed, it would put responsibility on AI companies when their systems misrepresent credentials and harm users.

Where the bill does the real work

Gonzalez’s proposal is less about AI in general and more about drawing a hard legal boundary around professional conduct. It opens with definitions, but they are doing practical work.

“AI system” is limited to tools that generate outputs capable of affecting people. Routine software is carved out; the bill is not an attempt to regulate code in general.

The next move is more consequential. Liability sits with the “proprietor” – the entity that puts the chatbot in front of users, not the model builder in the background. If you deploy it, you own the risk.

Then comes the core rule. A chatbot cannot give advice or take actions that would be illegal for a human without a license. The bill ties AI behavior directly to existing licensing regulation, from medicine to law. It avoids creating a new AI standard and instead plugs into a framework courts already understand.

Two lines of this bill stand out.

First, disclaimers do not help. Telling users “this is not a doctor” does not matter if the system proceeds to act like one.

Second, enforcement is left to users. Anyone harmed can sue for damages, and if the violation is willful, recover legal costs. That is a clear invitation to litigation.

There is still a disclosure requirement. Chatbots must clearly state that they are AI, in the same language and format as the rest of the interface. But the bill treats that as basic hygiene, not protection.

Why the bill exists

Gonzalez grounds the bill in something concrete: not abstract AI risk, but documented harm.

The justification leans heavily on how these systems behave in practice. Chatbots are not just tools answering questions. They are designed to mirror users and, in some cases, form emotional bonds. That dynamic becomes dangerous when the system steps into areas such as therapy.

The bill points to warnings from the American Psychological Association (APA). In submissions to the Federal Trade Commission, the APA flagged chatbots presenting themselves as therapists while reinforcing harmful thinking: not correcting it, not pushing back, but encouraging it.

From impersonation to accountability

Gonzalez’s bill does not sit in isolation. It fits neatly into a broader shift already underway across AI and compliance – moving responsibility away from the tool and onto the entity deploying it.

At a recent AI Summit in New York, one line captured that shift bluntly: “the human is on the hook.”

For the past two years, much of the AI debate has focused on transparency. Disclose that a system is AI, add disclaimers, inform the user. This bill moves past that phase. It treats disclosure as baseline hygiene and instead asks a harder question: What happens when the system actually causes harm?

The answer mirrors what is emerging in other areas of AI risk.

In cybersecurity, companies are already being told that it does not matter whether an attack was enabled by AI or not. If your systems are the ones involved, your organization carries the operational and regulatory consequences.

There is a second, quieter trend running through this bill: AI as a trust manipulator. The justification section points to systems that mirror users and build emotional rapport. That aligns with what security experts are seeing in “vibe hacking” and AI-driven social engineering, where tools adapt tone and messaging to exploit trust.

The bill also reflects a broader regulatory instinct now visible across jurisdictions – reuse existing legal frameworks instead of inventing new ones.

This is where the trajectory is heading: less focus on what AI is, more focus on what AI does.