Human on the hook: Cybersecurity lessons from the AI Summit NY

Speakers at the AI event reminded us that cybercriminals can weaponize anything, so the tool itself is not to blame. But that doesn’t mean AI’s unique risks should be downplayed.

The two-day AI Summit New York, held last week at the Javits Center in New York City, featured thought leadership, interactive workshops, live demos and networking. And GRIP was there.

One of the main presentation tracks was devoted to cybersecurity, where panelists and solo speakers noted that data security is always hard, and that in a new, complex, ever-evolving, competitive arena such as artificial intelligence, it is bound to be even harder.

Luckily, there are steps organizations can take to mitigate the risk, and everyone has a role to play, they emphasized. But getting this wrong and failing to fully appreciate the risk involved could be hazardous, because “the human is on the hook.”

With that play on the oft-repeated phrase “human in the loop,” the speaker explained: “Your business faces the reputational, operational and regulatory risks involved and potential liability – not the tool itself. Remember that.”

Educate yourself and others

If you’re like most businesses and still have over half of your employees clicking on your IT department’s simulated phishing emails, you already know your employees need training in spotting the illicit use of AI tools.

“Know what you have and what your clear objectives are for using this technology. It is good if [the tools] can solve some specific problems you have identified, but don’t expect an instant return on investment, because we have not seen that yet,” one speaker noted.

Dealing with this challenge can highlight areas of the business needing attention, and even the pilot program you run for your new tools (a pilot phase must be included, they emphasized) is a valuable exercise in itself, especially when it goes badly. It’s a learning tool. Go back to your use case and revise it. You already had one for your regular cybersecurity program, so ask what else needs to be added or changed.

Training, good policies, a zero-trust approach and periodic tests that delve deep into possible threat scenarios can be quite helpful.

One speaker pointed to the continuously updated MIT AI Risk Repository as an excellent source for developing greater AI literacy for your compliance and legal teams, among other personnel.

The repository classifies over 1,700 risks drawn from existing frameworks – the how, when, and why these risks occur – to help creators, users, policymakers and others build AI governance frameworks.
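For teams that want to work with the catalog rather than just read it, a minimal sketch like the one below can turn it into a quick triage list. It assumes you have downloaded a copy of the repository as a CSV export and that the columns carry the names shown (Domain, Intent, and so on); those field names are placeholders for illustration, so check the real export before relying on them.

```python
import csv
from collections import Counter

# Hypothetical local export of the MIT AI Risk Repository.
# Column names below ("Domain", "Intent", "Description") are assumptions
# for illustration only; the real export may use different headers.
EXPORT_PATH = "ai_risk_repository_export.csv"

def load_risks(path: str) -> list[dict]:
    """Read the exported risk catalog into a list of row dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def risks_for_domain(rows: list[dict], keyword: str) -> list[dict]:
    """Filter risks whose domain mentions a keyword, e.g. 'security' or 'privacy'."""
    return [r for r in rows if keyword.lower() in r.get("Domain", "").lower()]

if __name__ == "__main__":
    rows = load_risks(EXPORT_PATH)
    security_risks = risks_for_domain(rows, "security")
    print(f"{len(security_risks)} catalogued risks mention a security-related domain")
    # Rough view of how the causal side of the taxonomy breaks down for that slice.
    print(Counter(r.get("Intent", "unspecified") for r in security_risks))
```

A filtered slice like this is one way to give compliance and legal teams a concrete, domain-specific reading list instead of the full 1,700-entry catalog.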

AI has democratized cybercrime

Threat actors don’t need to be skilled coders to generate functional malware or write harmful code, because AI coding assistants and specialized “dark LLMs” (large language models) sold on the dark web, such as FraudGPT, WormGPT and KawaiiGPT, provide these capabilities through simple conversational prompts, one speaker explained.

“AI allows for the automation of large-scale, cyber-based criminal operations,” the speaker said. And gone are the days in which phishing emails and messages featured many of the traditional red flags, such as typos or awkward phrases, that made them easier to spot, he noted. To make matters worse, AI can generate “polymorphic malware that constantly changes its code to evade traditional, signature-based security detection systems,” he added.
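The point about signature-based detection is easy to see in miniature. The sketch below is a deliberately simplified illustration, not a real antivirus engine, and the hash values in it are made up: it flags files by comparing their SHA-256 hash against a list of known-bad hashes, and because the hash changes whenever a single byte of the file changes, malware that rewrites its own code between victims never matches the list. That is exactly the gap behavior-based and AI-assisted detection tries to close.

```python
import hashlib
from pathlib import Path

# Toy illustration of signature-based detection: the "signatures" are just
# SHA-256 hashes of files already known to be malicious (values are made up).
KNOWN_BAD_HASHES = {
    "9f2c5a36d1e4b7c8...",  # placeholder hash of a previously seen sample
}

def sha256_of(path: Path) -> str:
    """Hash the file's raw bytes; any single-byte change yields a new hash."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_malware(path: Path) -> bool:
    """Pure signature matching: only catches byte-for-byte identical samples."""
    return sha256_of(path) in KNOWN_BAD_HASHES

# A polymorphic sample that re-encodes or reshuffles its code on each infection
# produces a hash that has never been seen before, so this check always returns
# False for it -- which is why defenders pair signatures with behavioral analysis.
```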

“It’s never been a better time to be a cybercriminal,” speakers agreed.

One reason hackers have the advantage is that they engage in “vibe hacking.”

This is “a cybersecurity term for using AI agents and tools to automate and execute cyberattacks, particularly those involving social engineering and data extortion. At best, this features scam messages that mimic human tone and context, making them difficult to detect,” one panelist said.

At worst, the tool manipulates trust signals, sensing from your social media posts and text messages how you feel, and preys on emotions such as urgency, fear, outrage, or the desire to belong, another panelist noted.

An example cited several times came from this past August and involved the AI company Anthropic, which published a Threat Intelligence Report outlining several examples of how its GenAI tool, Claude, had been misused, including in “a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills.”

In the data extortion scheme, Claude was used to target 17 organizations, including healthcare businesses and government agencies. “The AI was used for everything from the initial reconnaissance to drafting the ransom demands,” a speaker said.

The good news is that you can use AI to detect AI-generated threats like the ones Anthropic faced, they pointed out, alongside the training, good policies, zero-trust approach, and deep-scenario testing mentioned earlier.

Additionally, several panelists recommended using “AI honeypots” – deception engineering tools that use AI to create realistic decoy systems designed to lure cyber attackers “and gather their threat intelligence, such as methods and objectives, so they can better be tracked and caught.”
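The mechanics of a basic honeypot are simple enough to sketch. The example below is a minimal decoy, not a product, and it has none of the AI-generated realism the panelists described; the port, banner, and log file are arbitrary choices for illustration. It listens on a port nothing legitimate should touch, presents a plausible-looking service banner, and logs whatever the visitor sends so the security team can study the attacker’s methods.

```python
import socket
import datetime

# Minimal decoy listener: nothing legitimate should ever connect to this port,
# so every connection is worth logging as potential attacker reconnaissance.
DECOY_PORT = 2222                          # looks like an alternate SSH port
BANNER = b"SSH-2.0-OpenSSH_8.9p1\r\n"      # plausible-looking service banner
LOG_FILE = "honeypot.log"

def run_decoy() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)
                conn.settimeout(5.0)
                try:
                    data = conn.recv(4096)   # capture whatever the visitor sends
                except socket.timeout:
                    data = b""
            # Append one line of threat intelligence per connection attempt.
            with open(LOG_FILE, "a", encoding="utf-8") as log:
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{addr[0]}:{addr[1]} sent {data!r}\n")

if __name__ == "__main__":
    run_decoy()
```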

They all warned: “Be healthily skeptical.” Keep using your tools, but be aware of the risk. Several speakers pointed to Google Threat Intelligence as unmatched for the visibility it offers into threats, though other providers also offer tools that help with threat detection.

“Choose a solution that learns from your actions, tailoring its output to become increasingly relevant to your specific needs over time,” one speaker advised.

Pen testers have their day in the sun

The rise of AI and its use by both defenders and attackers has made traditional periodic testing insufficient, speakers noted.

“It’s better to use a hybrid approach that combines AI-driven automation with human expertise,” one speaker said. “Penetration testing is now informed by AI, getting the best of AI’s ability for pattern recognition and automating an attack, but being aided by human creativity, intuition, and understanding of context.”

AI tools can help to act as both attacker (pen tester) and defender (incident responder) to find gaps. “Pen testing finds vulnerabilities, while incident response reacts to an active breach. The skills involved overlap, and pen testers’ real-world attack knowledge significantly helps the incident response team,” one speaker said.

Plus, AI tools can be trained to uniquely understand the business, specifically look for attacks that involve AI, and simulate the attacker’s behaviors.
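None of the speakers shared their tooling, but the division of labor they described – automation handles the broad, repetitive sweep, people apply judgment – can be illustrated with a deliberately simple sketch. The baseline file, port range, and host below are assumptions for illustration, and in practice the automated half would be an AI-assisted scanner rather than a plain socket check; scan only systems you are authorized to test.

```python
import json
import socket

# Hypothetical baseline of ports that are *supposed* to be open on a host you
# own and are authorized to test; anything outside this set goes to a human.
BASELINE_FILE = "approved_ports.json"     # e.g. {"10.0.0.5": [22, 443]}
PORTS_TO_CHECK = range(1, 1025)

def open_ports(host: str) -> set[int]:
    """Automated sweep: the repetitive part machines (or AI agents) do well."""
    found = set()
    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

def triage(host: str) -> None:
    """Human-in-the-loop part: only deviations from the baseline reach a person."""
    with open(BASELINE_FILE, encoding="utf-8") as f:
        approved = {h: set(p) for h, p in json.load(f).items()}
    unexpected = open_ports(host) - approved.get(host, set())
    if unexpected:
        # In a real program this would open a ticket for an analyst to review.
        print(f"Escalate to human review: unexpected open ports on {host}: {sorted(unexpected)}")

if __name__ == "__main__":
    triage("10.0.0.5")   # placeholder host for illustration
```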

Investing in safeguards

Companies are acknowledging their role in protecting their AI tools from malicious use.

Just as the AI Summit in NYC was winding down, OpenAI stated in a report that the cyberattack capabilities of its AI models are increasing, but the company also said it was “investing in safeguards to help ensure these powerful capabilities primarily benefit defensive uses and limit uplift for malicious purposes.”

It continued: “[W]e take a defense-in-depth approach, relying on a combination of access controls, infrastructure hardening, egress controls, and monitoring. We complement these measures with detection and response systems, and dedicated threat intelligence and insider-risk programs, making it so emerging threats are identified and blocked quickly.”
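OpenAI did not publish implementation details, but the “egress controls” idea in that list is straightforward to illustrate: outbound connections from a sensitive system are allowed only to an explicit list of destinations, and everything else is refused and logged. The allow-list and wrapper function below are hypothetical, and real deployments enforce this at the network or proxy layer rather than in application code.

```python
import socket

# Hypothetical egress allow-list for a sensitive workload: the only hosts this
# process is permitted to reach. Real systems enforce this in firewalls/proxies.
ALLOWED_DESTINATIONS = {
    ("api.internal.example", 443),
    ("updates.internal.example", 443),
}

class EgressBlocked(ConnectionError):
    """Raised when code attempts an outbound connection that is not allow-listed."""

def connect_out(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Open an outbound TCP connection only if the destination is allow-listed."""
    if (host, port) not in ALLOWED_DESTINATIONS:
        # Denials are the interesting signal for the monitoring layer.
        print(f"EGRESS DENIED: {host}:{port}")
        raise EgressBlocked(f"{host}:{port} is not on the egress allow-list")
    return socket.create_connection((host, port), timeout=timeout)
```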

The company also announced the creation of a Frontier Risk Council, which will serve as an advisory group bringing “experienced cyber defenders and security practitioners into close collaboration with our teams.”