AI, geopolitical, and climate-based emerging risks highlighted in WEF Report and RIMS Riskworld event

Global emerging-risk barometers showcase risk managers’ concerns about new technology, geopolitics and climate-based forces.

The World Economic Forum’s Global Risks Report 2024 clearly documents the significant emerging risks facing corporations globally. And attendees at the annual Riskworld conference, put on by trade association RIMS last week in San Diego, heard about them more substantively from experts in a number of business, government, nonprofit and academic fields.

For its part, the WEF Report lists the following risks as being top of mind for business leaders in 2024:

  • Rapidly accelerating technological change. No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites. To combat growing risks, governments are beginning to roll out new and evolving regulations to target both hosts and creators of online disinformation and illegal content. Generally, however, the speed and effectiveness of regulation are unlikely to match the pace of development, the authors state.
  • The climate crisis is affecting companies’ access to supplies and driving up the costs of goods and services, with significant downstream effects on consumers and smaller businesses. So far, climate-change adaptation efforts and resources are falling short of the type, scale and intensity of climate-related events taking place, putting pressure on stakeholders with somewhat competing interests to find workable solutions.
  • Underlying geopolitical tensions, combined with the eruption of active hostilities in multiple regions, are contributing to an unstable global order characterized by polarizing narratives, eroding trust and insecurity.
  • Persistently elevated inflation and interest rates, along with continued economic uncertainty in much of the world, are leaving many people without avenues for buying a home or even making ends meet.

The impact of AI developments

New AI tools can deliver significant productivity benefits and breakthroughs in fields as diverse as healthcare, education, finance and climate change. But the authors stress that market concentration and national security incentives could constrain the scope of the important guardrails needed in AI development to rein in the novel risks that will arise from self-improving generative AI models.

Private sector-led development of a powerful dual-use (both civilian and military) technology makes regulatory guardrails even more essential, they note.

Technological advances will open new markets and allow crime networks to spread as well, they say. More specifically, as advances in technology break down barriers to entry – borders, languages, skill sets – they open alternate revenue streams to bad actors, particularly in the cyber domain, and allow transnational criminal networks to spread.

Given the inability to rein in abuses of new technology, open access to increasingly sophisticated versions of these tools, people’s insecurities about their economic situations and decreasing trust in public institutions, the report concludes that misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years. The authors also say new classes of crimes will proliferate, such as non-consensual deepfake imagery and video or stock-market manipulation.

The authors note that countering the spread of misinformation and disinformation requires a balancing act that national governments could get wrong. There is a risk that some governments will act too slowly, facing a trade-off between preventing misinformation and protecting free speech, while repressive governments could use enhanced regulatory control to erode human rights.

About 10,000 risk managers and experts met last week in San Diego for the annual RIMS Riskworld conference to discuss the industry’s most pressing issues, from generative AI and geopolitical tension to punishing climate events, essentially the items and issues enumerated above. As speakers noted in their remarks, businesses’ risk managers are in a perpetual state of growth.

Some great quotes the WSJ team assembled from the event:

Reid Sawyer, head of US cyber risk at Marsh:

“The questions that are really coming up with our clients right now that we weren’t hearing a year ago on this are on truly appreciating the liability associated with AI.

“Whether that’s IP infringement because the training data is from a third party, or whether the training data might be bad and you’re making bad decisions off of that. I don’t think there’s enough discussion about those sets of risks.” 


Lauren Finnis, head of commercial lines insurance consulting and technology for North America at WTW:

“A lot of organizations are in various states of transitioning to cloud computing and various sources of how they structure their data.

“To really maximize the impact of gen AI, you need to have quality data in large quantities that’s easily accessible. For many organizations, that is a risk because they are trying to do all these applications while trying to navigate a data journey.”


Adrian Cox, CEO at Beazley:

“We saw how vulnerable supply chains can be to a shock. Insurers are being more careful about the size of political risk they’ll take in and around China and Taiwan. Political risk insurance can be written for up to 10 years.

“Writing an insurance policy for 10 years in Taiwan is a slightly different matter than it might have been a decade ago when the geopolitical situation didn’t seem quite as precarious.”