Warning over ‘AI timebomb’ as business struggles to get a grip

Executives are failing to take a lead despite growing awareness of the risk AI poses to security and data.

“C-level executives are sitting on an AI timebomb, aware of the risks, but too complacent to act.” That’s the conclusion reached by cybersecurity firm Kaspersky after surveying nearly 2,000 senior business executives across eight European countries, including the UK.

And the findings are backed by research from ISACA, a global membership organization for professionals in information systems (IS) and IT, which found just 10% of companies globally have a formal, comprehensive AI policy in place. ISACA surveyed 2,300 professionals working in audit, risk, security, data privacy and IT governance.

Almost 95% of the 1,863 executives who responded to the Kaspersky survey said they believed generative AI was used regularly by employees, and 53% expressed the view that it was “driving” certain departments. Deep concern about the extent of generative AI use was voiced by 59%, but just 22% said they had discussed establishing a system of rules and regulations to monitor its use.

Lack of training

A staggering 91% of executives said they needed a better understanding of how staff were using internal data in order to protect against data leaks and other critical security risks.

The ISACA survey found that over 40% of employees are using generative AI, and 41% of respondents did not think enough attention was being paid to enforcing ethical standards. Yet fewer than one-third of the organizations polled saw the management of AI risk as an immediate priority.

ISACA’s research reveals that generative AI is being used in the following ways:

  • creating written content (65%);
  • increasing productivity (44%);
  • automating repetitive tasks (32%);
  • providing customer service (29%);
  • improving decision making (27%).

Despite this, only 6% of the organizations surveyed provide AI training to all staff, and 54% provide no AI training at all.

“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organizations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO at Crypto.com.

Five top security risks

The top risks arising from use of the technology were listed as:

  • misinformation/disinformation (77%);
  • privacy violations (68%);
  • social engineering (63%);
  • loss of intellectual property (IP) (58%);
  • job displacement and widening of the skills gap (tied at 35%).

More than half (57%) of those who responded said they were very or extremely worried by the prospect of bad actors exploiting generative AI, with 69% saying that adversaries are using the tech at least as successfully as security professionals.

Asked which job functions are involved in the safe deployment of AI, respondents pointed to security (47%), IT operations (42%), and risk and compliance (both at 35%). Digital trust professionals felt job losses were inevitable, with 45% saying AI will eliminate a “significant number” of jobs, but 70% believing it would have a positive impact on their own roles.

Despite the challenges, 62% said they thought AI would have a positive or neutral impact on society.