Trust a key issue as AI's impact on security and compliance grows

Two global surveys reveal the detail of the challenges and opportunities posed by the rise of AI.

Increasing use of generative AI is adding to the pressure on businesses to deliver on trust and security objectives, according to a survey of 2,500 business leaders across the globe. Over half of those surveyed said regulation would make them more comfortable about investing in AI. But just four in 10 organizations rate their risk visibility as strong, and time pressures are leading many to deprioritize compliance.

Better security and compliance policies, as well as strategy, are identified as key to building trust and confidence among customers, investors and suppliers. Two thirds of respondents said many customer groups are increasingly looking for proof of better security and compliance practices, with around 70% saying that recognition that such policies have been adopted improves customer trust.

But despite this recognition of the positive business impact of visible security and compliance policies, many businesses are not providing evidence of what they are doing, IT budgets are being cut, and staffing is being reduced. The report found:

  • just 41% of those responding provided internal audit reports;
  • 37% provided third-party audits;
  • 36% completed security questionnaires;
  • 12% don’t provide evidence when asked;
  • 60% have reduced or are planning to reduce IT budgets;
  • 33% report lack of staffing as a barrier;
  • 32% report lack of automation as a barrier; and
  • on average, only 9% of IT budgets are dedicated to security.

Respondents in the US returned the lowest figures for providing evidence of security and compliance practices when asked, with just 10% saying they did so. The highest figure, 16%, came from Australia. UK respondents reported the highest level of keeping up to date with evolving regulation, with 37% saying they did so. Germany returned the lowest level – 26%.

Compliance deprioritized

Time pressure is leading more and more companies to deprioritize compliance. The survey found respondents spent an average of 7.5 hours a week, or more than nine working weeks a year, on compliance. More than half (55%) said remaining compliant with different national regulations was becoming “increasingly difficult”. US companies spend the most time on compliance, averaging nine hours a week.

Larger organizations (those with more than 250 employees) are more likely to deprioritize compliance (45% said they had done so) than smaller ones (38%). But smaller businesses are less confident they have strong visibility of their risk surface, with just 35% believing they do, compared with 56% of larger organizations.

Automation is recognized as having great potential to achieve better rates of compliance and prove security, and 77% of respondents said they were using or planning to use AI or machine learning methods for tasks such as detecting:

  • high-risk actions;
  • unsecured cloud storage;
  • unassigned compliance responsibilities;
  • unrevoked access privileges.

But 54% are concerned that using AI will make secure data management more challenging, and 51% worry AI could erode customer trust.

Despite the doubts, the survey results show a clear move to automation. Asked whether they agreed with the statement “My company could save time and money on complying with regulations and frameworks through automation”, 63% said they agreed. In the US, 67% of respondents agreed, the highest figure in any country surveyed.

Role of automation

Overall, 68% of IT decision-makers agreed automation would save time and money, compared to 58% of business decision-makers. Eight out of 10 businesses plan to increase their use of automation, mostly for reducing manual work (42%) and streamlining vendor reviews and onboarding (37%). Respondents believed they could save at least two hours a week, or over two-and-a-half working weeks a year, through automation.

Asked “Where do you think AI can be most transformative for security teams?”, the answers were:

  • improving the accuracy of security questionnaires (44%);
  • eliminating manual work (42%);
  • streamlining vendor risk reviews (37%);
  • reducing the need for large teams (34%).

The State of Trust Report, commissioned by internet security company Vanta, surveyed the opinions and behaviors of 2,500 business leaders in the US, UK, Australia, France and Germany.

The results were issued in the same week as the Google Cloud Cybersecurity Forecast 2024. The Google forecast warned that generative AI would be used to run increasingly sophisticated malicious activity in the next 12 months.

What that means in practice is that it will be harder, for example, for the average user to spot the misspellings, grammatical errors or telltale cultural mismatches that often indicate an email is a scam. This, in turn, threatens to generate increasing skepticism and a lack of trust. The report also forecasts an increase in generative AI and large language models being offered as a paid service to those wishing to mount malicious attacks.

However, the report counters the slew of bad news with a reminder that the same tools are available to those wishing to defend networks and individual users against attack. It points to organizations drawing data together to contextualize it and to develop threat intelligence.