Mitigating risk and Mythos madness: a cyber chat with Melissa Ventrone

GRIP Montage: Clark Hill/Peter Macdiarmid/Getty Images

Ventrone explains that cybersecurity is a contest of relentless monitoring, testing, training, and patching, and that some of the most basic good practices are often overlooked.

The number one mistake companies make in information security: most organizations don’t have a full asset map – a data map – of the data within their organization.

Melissa Ventrone, leader of the Cybersecurity, Data Protection, and Privacy Practice at Clark Hill in Chicago, said this holds true for the thousands of clients she has worked with over the years.

Ventrone spoke to GRIP also about cybersecurity incident trends, risk monitoring, vulnerability management, AI-related cyber risk, vendor risk, cyber hack drills, and more.

Knowing your data and monitoring

Ventrone emphasized that starting with an audit to understand exactly what data and systems you have puts you on a better footing right from the beginning. “It starts there,” she said. “I can talk about all of the other mistakes, such as not properly deploying your tools, not monitoring them correctly and not having the right escalation procedures, but it starts with creating a clear data and asset map of what you have in a structured and organized way.”

Ventrone said that after this is done, you can then build privacy and other governance parameters, perform your risk assessments that identify the right issues, and delineate the possible impact of a data breach pertaining to that data. “You can’t protect data you don’t know exists or understand,” she said.

What impediments keep businesses from performing this critical data-mapping exercise?

“It’s a big project, and that’s even more the case for mid-size and larger organizations with a lot of data. It takes a lot of time, a lot of collaborative activity and conversations to search out and identify all of this data, and it can be costly if you’re using third-party tools. And that cost has not come down,” she said.

Can external experts help with this process?

“Yes, if you have a vendor that truly knows what it is doing and understands the requirements in your specific industry. However, many vendors say they have experience in these types of projects and the law, and they really don’t.”

Ventrone provided an example. “I had a company that fell under the auspices of the New York Department of Financial Services. They were required to comply with their New York cybersecurity regulations [the Part 500 regs, as they are called] and wound up with a lot of gaps because the vendor they relied on to help them with compliance wasn’t familiar with the law,” she said.

“You own the compliance obligation here. So if you consult external resources – and some can really do it well – you need to go beyond their glossy ‘yes, we’ve done this before with these regulations’ and get the specific details behind that claim: how they did it, why it pertains to your business, and how they make sure they have done it right,” she said.

Reliance on vendors and risk with vendors

Speaking of vendors, isn’t there an appreciable amount of cyber risk in using them, sharing data with them, and relying on a handful of the ones deemed hyperscalers?

“Companies have security questionnaires [that] they have vendors complete, and that is helpful and a great first step. But organizations often don’t ask all the right questions. You have backups, but are they segmented? Are they immutable? Do you test them? Have you restored lost data before and how?”

Ventrone said vendors need to show proof of what they can do by listing the technology, skillset, and experience they offer. She also cautioned organizations to be wary when a vendor’s legal department handles the contract but its security and technology teams – and its compliance department – never see it.

“We need to have IT, compliance and legal in the same room on these conversations, and we are seeing more of it, but it could still be better,” she added. “There are still disconnects.”

Ventrone emphasized the need for a realistic, not-just-on-paper business continuity plan. “So, if Microsoft goes down, what are our secondary and tertiary options and our communication protocol? Are our other options equally reliant on Microsoft or AWS or whomever, and have we asked them?”

“It is good to be able to say that you use the products of these hyperscalers, but you have internal servers and applications that will help. You might not be able to access items on their cloud, but you should be able to operate on the other side of it to at least create documents, send email, and store files,” she noted.

Fire drills

I asked Ventrone how important very realistic cybersecurity drills are in assessing readiness in an organization.

“I have a military background,” Ventrone said. “And I served in both active duty and in the reserves for 21 years. The saying is ‘the more you practice in peacetime, the less you bleed in war.’ And it’s very true. So drills are incredibly important. But each drill does not need to be a full-blown, comprehensive one,” she said.

“Grab a pizza, involve the communications team, sit down for two hours and walk through what has to happen in a cybersecurity incident. Continually touch it, test it and revise it. Have your full-blown drills, but also do the component, smaller ones in which two departments work together, and the next time, two different ones do. You can get creative about your approach and testing that way.”

Ventrone thinks external experts can be really valuable for these intermittent, component drills. “I have handled about 6,000 security incidents, and I know to test clients on their knowledge of what’s in their various contracts, particularly what those contracts say about notice requirements.

“They will say they can check those documents, but that is not true if they’re part of the data that becomes inaccessible during a breach. I remind them to summarize the contracts, print them out, and keep them accessible. The little reminders add up,” she said.

Ventrone believes that if you have not performed a drill, “you’re probably missing about 40% of what could be easily improved with your systems and procedures, had you done the routine drill work.”

A new kind of risk called AI

There are two ways of talking about AI and cybersecurity risk. One is looking at it from the point of view of adversaries using AI against the business. The other is adversaries using the organization’s own AI against it.

“You have to remember the possibility of adversaries using your own AI tools against you. From a technology perspective, we are all a little behind in addressing this risk. We need to keep building the controls and tech tools that will help us get in front of this risk and train our staff on how to recognize both types of attacks,” she said.

“You have law firm employees giving data to someone they think comes from their IT department, all because a scammer is using AI with a social engineering component to it. The reality of these events happening means you need the education [of your employees] piece layered on top of your information technology stack and know how you’re protecting it,” Ventrone said.

“You can’t just have a firewall in place, as that won’t protect you across the board. You have to have an enterprise technology solution and additional access controls for highly confidential and sensitive information. All of it. But the importance of the training piece cannot be overstated,” she cautioned.

Mythos has been hacked

Anthropic’s cybersecurity tool Mythos is an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool. That assessment comes from the company itself, to say nothing of the US government and the technology and large financial services firms currently evaluating it. (The White House currently opposes access to the AI model, citing ongoing national security risks.)

Mythos has the keen ability to find and exploit the smallest and oldest software vulnerabilities out there, which could be a boon to businesses trying to track them, but it poses immense risks if it falls into the wrong hands.

Unfortunately, unauthorized users have reportedly gained access to the model, and according to TechCrunch the company says it is “investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.” The company said it has not yet found evidence that the supposedly unauthorized activity has affected its systems at all, though.

“The cyber risk is already presenting itself. But that’s not to say Anthropic is not doing a great job in marketing it and keeping everyone updated on it. And they went out and gathered the government and large banks and some other large organizations to deploy this and identify vulnerabilities and patch it. This is all great. But think about the middle and smaller markets that are not involved in this initiative and won’t be able to use the final tool,” Ventrone said.

Since Mythos is designed to find every single low-level vulnerability – which can then be combined with other vulnerabilities to create higher risk – firms will no longer have a grace period before deploying fixes.

“You used to have some time, especially as a smaller firm, to get to those lower-risk vulnerabilities. But now you’re going to find out about all of the new ones and the ones that have been around for 27 years. This introduces an approach to vulnerability management that is novel. It is hard to be prepared for that, and it involves a mind shift and process shift,” she said.

Regulation and lack thereof

Ventrone recently asked ChatGPT which laws on AI had been proposed over the last year. It came up with 1,500 – a figure she did not verify, and one that may include items she would not count – but the laws are out there.

“The ones specifically about the risk that AI poses are in the employment and medical fields, where they require the human factor,” she said. “California says you can use AI to review medical records only if you have a human review the results. Other states have similar laws in these high-risk areas. But we still don’t have an overarching federal framework, and I don’t think we will.”

Ventrone explained: “The current administration is really focused on deregulation, and I think we will need to look to other organizations, such as NIST or the EU’s frameworks, to get a sense as to the importance of keeping human judgment in the process and keeping the process transparent. I mean, we’re not likely to have federal AI governance legislation if we don’t have federal cybersecurity legislation.”

We have best practices, expectations, and guidance from regulators, as well as certifications, such as ISO 27001 and SOC 2, that organizations can obtain.

“Certifications are pieces of paper issued to an organization that it can hand to an auditor on request. They are good for incentivizing companies to create security policies and compliance processes. But certification is not an end-all-be-all solution and doesn’t show [that] you practice good hygiene on a daily basis. It can be a check-the-box type of thing, but it is absolutely a good starting point,” she said.

“And you should ask your vendors about having obtained such certifications,” she added.

Talking to the board of directors

Compliance teams obviously need to help their boards of directors appreciate the risks their organizations face, and they and chief information security officers will be weighing in on the risks posed by their data and interconnected networks.

“You can tell the board that you have 87% of all vulnerabilities patched, but that the remaining 13% is associated with our payment system, and this is a $1.5 billion risk because that is how much we transact through it. You are putting the risk into context, which is essential,” Ventrone said. “Everything must be conveyed in terms of risk to the organization, so they can appreciate the possible blow to profitability and competitiveness.”

Ventrone feels that these conversations are increasingly happening, especially given changes in education, such as law schools offering combined degrees with a technology component to fill skills gaps. “We are finally starting to train technologists and lawyers how to better have these conversations about technology risk.”