Transcript: Rupert Evill podcast

Our senior reporter Carmen Cracknell spoke to risk expert Rupert Evill about geopolitical risk, ethics, and training.

This is a transcript of the podcast episode Rupert Evill on geopolitical risk and corruption between GRIP senior reporter Carmen Cracknell and Ethics Insight director Rupert Evill.

[INTRO]

Carmen Cracknell: Welcome back to The GRIP Podcast. Risk management is never easy. Dealing with corruption, delivering human-centric training and deciding whether to take a preemptive or reactive approach to compliance are just some of the challenges. Today, I’ll be talking with Rupert Evill, director of Ethics Insight and the author of Bootstrapping Ethics. With 20 years in risk management, much of that spent working in Asia, he has plenty to say about tackling cultural differences when handling compliance and the flaws in current approaches to risk methodology. Rupert, it’s great to have you here. Could you start by talking about your background and what you do in risk management?

Rupert Evill: Sure. Thank you for having me. My background is a varied one. I started out in political risk, counterterrorism and crisis work, then moved into investigative roles, which included proactive work like due diligence, business intelligence and asset tracing, but also reactive investigations into potential wrongdoing. Over the years, that morphed into organizations asking how best to prevent the issues we were investigating, so a more proactive approach of risk assessment, training, programming and implementation. And that’s my background in the simplest terms.

Carmen Cracknell: And we spoke a bit before about 9/11 and how that influenced organizations and how they approach risk. Can you talk a bit about that and the influence that had?

Rupert Evill: Yeah. At the very sharp security end, people were obviously very concerned initially, and there was a flurry of work, much of it led by the insurance sector trying to reduce its risk by commissioning consultative or other work to better calibrate what would happen in these disastrous scenarios. Trotting behind that came the financial legislation, the Patriot Act and the greater focus on counterterrorism financing. Over time, that grew into a greater focus on anti-corruption. I don’t know whether I can make the case that 9/11 led to that, but I think there is a recognition now, more than ever, that corruption is the unifying factor behind whatever ill it is you’re focused on.

So whether you’re focused on counterterrorism, environmental impact or human trafficking, how are all of those horrible things happening? They’re happening because there’s corruption and palms are being greased to enable and facilitate this stuff. Now we’re seeing more of a focus on sustainability, which is variously defined, particularly in supply chains, in your value chain and across the project and product lifecycle. A lot of that uses similar methodologies to those already employed to identify who it is you’re dealing with: their background, their reputation, how they do business, their track record and so on. But my concern is that it gets too siloed.

And my hope is that there’s actually much more similarity than difference. I was having this conversation just before this call with an organization, a fund, that has requirements from various LPs around money laundering, anti-bribery, ESG and so on. At the moment, those sit in separate work streams. My view is that if you take a risk-based approach, there’s much more overlap than dissonance. That, I guess, is where I hope things continue to evolve. We’ve gone through a massive expansion in the risks people are concerned about in the integrity space; now, hopefully, there will be some form of consolidation.

Carmen Cracknell: Yeah, obviously the buzz term of the moment is ESG. Do you think regulations around that are helping combat corruption or getting in the way? What’s your view on that?

Rupert Evill: No, I think in its current iteration the G bit is often focused on what you would classically cover as governance, not governance, risk and compliance: board composition, reporting structures, that kind of thing. So there isn’t enough recognition that the E and the S are heavily impacted by corruption and other forms of integrity risk. Part of the problem on the advisory side is that there are quite a lot of people in ESG saying they’re all things to all people, and that’s not possible. I work on projects with impact investors who are doing ESG analysis, and having buddied up with E and S specialists, they’re very different to me and the sorts of folks who’ve worked in the governance, risk and compliance area. That’s not to say we can’t learn from each other, but at the moment ESG seems to me to be mainly a reporting exercise rather than actual risk recognition and mitigation. More of a box-ticking compliance exercise.

Carmen Cracknell: Yeah, you’re mentioning corruption a lot, and obviously that’s a huge problem all across the world. I went to Cambodia earlier this year and I know you’ve done quite a lot of work there. I was shocked to learn that the prime minister used to be a Khmer Rouge guy and that none of those guys were ever really brought to justice. They kind of died old and happy in their beds. That’s obviously a very corrupt country and an extreme example, but working in a country like that, how do you deal with their approach to things and the concern that you’re imposing Western norms on them? You have to deal with cultural relativity. How does that work?

Rupert Evill: Yeah, that’s a very good question. Hun Sen, the Cambodian prime minister, whose net worth is estimated at somewhere between three and six billion US dollars, was once asked how he achieved that on a salary of three to four thousand US dollars a month, and he said good investment advice. So firstly, I’d love his investment advisor. But yes, when you’re dealing with states that are kleptocratic and corrupt, it can seem very intractable. I would break it down into a few different areas: avoid, resist, mitigate, transfer or escalate.

Avoidance requires a lot of local knowledge, because not everyone within institutions operates the same way. To give some practical examples, I’ve seen organizations that were having frustrations with certain ports or certain ministries, and then worked out how to avoid them. Sometimes that’s very tactical, like a physical rerouting. Sometimes it’s more strategic: for example, I was working with a company involved in infrastructure developments and they would do analysis of the different ministries. If you think about a port versus a telecoms project, the Ministry of Telecoms may operate differently to the Ministry of Transport. So if you can identify within the country where there are better pockets of governance, and normally there are, even in very high-risk markets, then you can potentially avoid the problem.

Resistance is sometimes unavoidable. Let’s say there’s a business license you need every year; that can become a process of attrition. You make yourself a hard target: you submit your paperwork well on time, you have people sitting in the offices, and you basically become too difficult to ignore. It’s quite attritional, but sometimes that’s what you have to do. Mitigation can be very creative, like appealing to other interests. A lot of these authorities, even in single-party states, still have stakeholders and people to appease. So if you can legitimately make them look good, say by greening up your factory and then letting them take credit for the Go Green initiative, that’s a way to potentially get them on side.

Transfer, and I’m not talking about insurance here. I’ve seen, for example, a telecoms company that was dealing with a lot of extortive payment requests to access its infrastructure. When it made access the responsibility of the national telecoms company it was working for, the requests started to dissipate, because it’s basically the same side, which can’t extort itself. So you can transfer some of that corruption risk.

And then finally, it’s about your leverage and escalation. That’s often a privilege only larger organizations have. But sometimes you can also use your diplomatic channels. In the countries you mentioned, Cambodia and the Mekong region, the Nordic countries have maintained very strong development aid linkages and a diplomatic presence, including through the wars with America. Companies from those countries can sometimes escalate through their diplomatic channels to make problematic requests go away. So, in summation, it’s about having different strategies that you deploy depending on the motivation of the person making the corrupt demand, and integrating local knowledge into your resistance strategy. The more strategic you make it, the easier it is. The best way to avoid corruption is to try to avoid the riskiest areas of the market, whether that’s sectoral or geographic, and plan accordingly.

Carmen Cracknell: So are the compliance challenges in countries where corruption is rife completely different to the challenges here in the West and in countries that are less corrupt? And how does the approach differ, building on what you’ve already said?

Rupert Evill: That’s a really good question. Honestly, I think there are some overlaps. I worked with a few Japanese organizations a while back and they looked at reporting data, exit interview data, employee complaints, all kinds of things. And we see a big correlation between certain risks. If, for example, you have a poor workplace culture, where discrimination and harassment are common, you’re likely to have other, more serious issues, because you have people at the top acting with impunity and people underneath suffering at their hands. So it might be that you won’t have the same corruption risk, but you could have fraud, employee sabotage, leaks of confidential information, or just a lack of employees caring about your cybersecurity, data privacy and other vigilance issues. I guess my point is that the culture around risk needs to be consistent wherever you are operating. The types of risks that will manifest depend on the operating environment. But I wouldn’t ever want to make the case that people in certain countries are better or worse; we’re all human. It’s just that how the misbehavior manifests differs depending on the operating context.

Carmen Cracknell: And what sort of industries do you work with most, and which are most in need of compliance training and help with their approach to risk management?

Rupert Evill: Well, I run a small organization, so my view may be somewhat skewed by the people who choose to come and work with me when there are obviously lots of big and glossy advisors out there. The ones I tend to work with are what I call skin-in-the-game organizations, where if it goes wrong, it really impacts them. So organizations that are either purpose-driven, or have staked their reputation on doing things the right way, or maybe they’re growing very fast, starting to get successful and needing to put in place right-sized risk frameworks. At the moment, a significant flow of work is coming from impact investors, development finance, those sorts of areas, where they are often taking funds from risk-averse stakeholders, like Nordic pension funds investing in, say, their DFIs. They’re looking at investing in emerging markets, but they’re trying to find emerging market partners who have the same attitudes as them and want to change the status quo, particularly around things like corruption, human rights and money laundering. So skin in the game and wanting to create impact would represent the sort of organizations I work with. It has spanned many sectors. It’s more about who they are and how they want to do business than what they do.

Carmen Cracknell: And I know you’ve gone it alone. So presumably you work more with smaller companies that don’t have an internal compliance function. But for all companies in general, what are the benefits of outsourcing risk management?

Rupert Evill: Well, I do work with some larger ones as well, but often I’m working with solo contributors or small teams in the in-house risk function. My personal view is that outsourcing is best for ad hoc or periodic needs that don’t make sense to have in house the whole time. For example, I do a fair amount of what I call quarterbacking investigations. You’ve had speak-up complaints, an issue or a possible near miss, something you’re concerned about, and you don’t really know how to bottom out what’s going on. It makes sense to have somebody external who’s done a lot of investigations, who can look at all the available investigative strategies and tools out there and choose the one that’s actually going to fit your objectives best. I’m working at the moment, for example, with a couple of organizations around the 3,000 to 5,000 employee mark, spread across seven to nine countries. It doesn’t make sense for them to have a full-time investigations function. So that would be a case for outsourcing. The same might apply to training, auditing and other areas where you don’t have a constant need. One note on the broader question of risk assessment and program implementation.

I would suggest that if people are looking at external providers, they choose ones who transfer knowledge. The call I was having this morning was with a fund that operates in places like Cambodia. What they want is a way to better assess risk in their investments that works for them, where they can change the ratings depending on the nature of the investment, and then make sure they’re putting in place the appropriate frameworks for each of those investments. They could come to me each time they have an investment and pay for me to do that, but that doesn’t make much sense. What makes a lot more sense, because they’re doing repeated investments, is for me to help build a framework that’s right-sized to them and work with them to hand it over.

Carmen Cracknell: Is a lot of the work you do reactive rather than preemptive? I know here in the UK, the FCA gets a lot of flak for coming up with regulation that lags too far behind. They’re not looking in advance of problems, they’re just dealing with them when they arise. What do you think about that?

Rupert Evill: That is another good question. The reactive stuff often generates a flurry of questions, though not always work. To give examples, the FCA might be one, but also the Modern Slavery Act; there have been a number of bits of legislation where people say, “oh, we’ve got to address this” and it prompts lots of conversation. But if the regulation is unclear, if the intentions of the regulator are unclear, or if it’s ill thought out or not properly spelled out (the EU’s Corporate Sustainability Due Diligence Directive is another one that prompts quite a lot of questions), people struggle to move to implementation. So if regulators aren’t really going to the effort of setting out what implementation looks like and what they’re going to be looking for with enforcement, and it’s all a bit too reactive, then it is a problem for everyone.

For example, under the Modern Slavery Act, one of the things the guidance says is that you should lessen the burden on your suppliers, and I think it gives the example of mutually recognized audits. But nobody has really thought that one through, because how does that work with anti-competition rules? And how do you actually go about sharing audits from the auditors? Are the auditors going to be happy about one person paying and their audit being shared with multiple others? There are often quite ill-thought-out elements within regulation that create a kind of block: people just talk about it and not a huge amount happens. So yes, I agree that regulation can actually be a constraint on progress, which is why it’s sometimes easier working with organizations that are private or smaller, where they can think about what makes sense for them in terms of their values, their risk appetite and how they want to behave in the market.

Carmen Cracknell: Yeah. And I guess related to that, what flaws do you see in current risk methodology?

Rupert Evill: Oh, lots of them. I think there are two sides to risk. The analogy would be an app. We all have favorite apps that we like because they’re intuitive and easy to use; we don’t know how the code is written. The problem with a lot of risk work is that we try to tell people how the code is written. So you get training with big long lectures about laws, this requirement or that statute, and we lose the audience, instead of focusing on the user experience, the interface and what we want people to do with it. I’ve been guilty of this in the past. The other problem, with the very construct of risk assessment, is the words we use. We’ll often say things like likelihood, probability, impact, consequences, and then we give these word scales. But if I said to you, “risk event A, we consider that to be probable,” what does that mean to you? For some people, that’s a 20% likelihood. For others, it’s 90%.

So the way we gather risk data could be simplified by using a universal language like numbers, and the back-end crunching and analysis should be done by people with deep expertise in it. It’s a bit unfair to foist that on frontline folks; we should just be gathering their insights and inputs. The other bit that nobody really talks about is what risk we are measuring. We talked about corruption at the beginning. What is the corruption risk? Is it the risk of paying bribes and that harming your bottom line? Is it the risk of paying bribes and getting caught? Is it the risk of not paying bribes and suffering retaliation? I’ve seen heavy consequences for people in emerging markets for that, like illegal detention and other things. So what risk exactly are we looking at? And then most ERM (enterprise risk management) frameworks will have risk categorized by management time, financial cost, litigation, reputation, all of this. How do you quantify that? There have been organizations that have gone through huge scandals and brushed them off and just kept rolling, and others that have been severely impacted. So I think we get far too caught up in ratings and dogma and all the rest of it, rather than focusing on what’s important, which is helping people make better decisions so we don’t find ourselves in those situations in the first place.
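
To make the “universal language like numbers” point concrete, here is a minimal sketch of what gathering numeric likelihood estimates, rather than word scales, could look like; the respondents, figures and threshold are purely illustrative.

```python
from statistics import median

# Illustrative survey responses: each frontline respondent gives a numeric
# likelihood (0-100%) for the same risk event, rather than choosing a word
# like "probable" that different people interpret very differently.
responses = {
    "procurement_lead": 20,
    "country_manager": 55,
    "finance_controller": 90,
}

likelihoods = list(responses.values())
spread = max(likelihoods) - min(likelihoods)

print(f"Median likelihood: {median(likelihoods)}%")
print(f"Spread across respondents: {spread} percentage points")

# A wide spread is itself useful information: a shared word scale would have
# hidden the fact that colleagues disagree by 70 percentage points.
if spread > 40:
    print("Large disagreement - worth a follow-up before rating this risk.")
```

The back-end aggregation and any weighting then sit with the risk specialists, while frontline staff only ever supply simple numbers.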

Carmen Cracknell: Yeah. A lot of people I’ve spoken to in other podcasts say training is too abstract, and that the people conducting the training need to bring in human examples and analogies. Do you have any examples of this from your own work, of more successful ways of implementing training?

Rupert Evill: Yeah. There’s that quote that I can’t remember exactly and may mangle, but “involve me and I will learn” is the end bit. Training that involves the person has to be relevant to them, which one-size-fits-all training for all employees isn’t. I understand why it sometimes exists, because it’s a kind of regulatory tick box, but the risks that someone in procurement faces versus someone in business development, let’s say, could be widely different. So the first thing is that it needs to be relevant to the target audience. The second thing is that it should be about actual knowledge transfer around what to do in those decision-making cycles. The training I’ve seen work best is crisis simulations, where the inputs you’re using are either things that have happened within the organization, obviously anonymized and sanitized, or near misses. You put people through their paces, split them into different groups or whatever it is, and see how they actually respond in that semi-live environment to the risk areas. That gets people engaged, it becomes more relevant to them, and it has some practical value when they leave.

Contrast that with: let me talk you through the UK Bribery Act for half an hour and then, at the end, see if you can tell me three things you’re not supposed to do. That training, I think, doesn’t help anyone at all. There’s a fair amount of data on retention depending on instructional methods. Lecture and reading are down around 5% to 10%; this is data from a US training study. When we’re involved in practicing, retention goes up to about 75%. But where we get to about 90% is when we then have to teach other people. So another model I’ve seen used successfully is train the trainer, where you train risk or integrity or ethics champions, whatever you call them, to go and train other people. They then become the first point of contact when those other people have potential questions or concerns. So I think there needs to be a mixed strategy of function-specific training, immersive crisis simulation and train the trainer for it to be effective within organizations.

Carmen Cracknell: Yeah, I’ve heard other people say crisis simulations are a good training tool. And I guess it’s about changing human behavioral patterns. This sounds kind of paradoxical, because although it’s programmed by humans, it’s not human, but do you think AI will help going forward with risk mitigation and training? Is it going to help humans behave better and make fewer mistakes?

Rupert Evill: I think it could. It just depends on the questions you use. I’ve played around with various AI tools to see what scenarios and things they can spin up. When you’re the person developing training, say that crisis simulation, you’re not going to take a carbon copy of something that’s happened within the organization, because you’re either potentially embarrassing people or breaching confidentiality, or people are going to know what happened and how to respond. So you could, for example, take just the bones of that situation and ask AI to come up with multiple potential scenarios. Some of those are going to be useful, some not, but you ultimately still need the person driving the AI to have had real experience of what happens. The way I look at AI is as a multiplier of ideas and concepts that you can then selectively use. If I said to AI, “give me 10 examples of how corruption might occur in Cambodia,” I don’t know what it would say, but let’s say it came up with 10 of them; I would imagine there’d be some in there that would be useful, which you could then take and ask the AI to expand into particular scenarios. So I don’t see why it couldn’t be used. Ethics is a big part of compliance, so how can corporations incorporate ethical behavior? Well, most people don’t want to do harm. So I think the first thing is to treat people like adults; often I see a slightly patronizing approach to ethics and compliance.

I also think we need to bring in a bit more of the behavioral stuff. For example, a lot of organizations have, as the first step in their ethical decision-making framework, “is it in compliance with the law?” I have multiple problems with that. Firstly, you’re asking people to play gonzo lawyer, which is quite dangerous in areas where the answer isn’t immediately obvious. And secondly, the law is not there to set ethical frameworks; it’s there to catch behaviors we don’t want. So I think we need to flip it around: simple values that are actually upheld within the organization and put into demonstrable terms, in terms of how those values are lived. I would liken it to parenting. If I tried to legislate, to come up with laws, compliance in other words, for every single thing I do or don’t want my children to do, it’d be like the Magna Carta. Whereas if we have simple values around expected behaviors, be courteous to people, whatever it is, and it’s going to be different at an organizational level than for kids, but you get the idea. If you have simple values, they become the hooks on which you can hang your ethical behaviors. So I think the ethics bit is important, but it’s very badly done at the moment, in many cases for two reasons: partly because the values were chosen because they look good on brochures and they’re not lived within organizations, and partly because ethical decision-making frameworks at the moment are just a kind of legal checklist.
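
As a minimal sketch of the “AI as a multiplier of ideas” step described a couple of answers back: the build_scenario_prompt helper, the ask_llm placeholder and the prompt wording below are hypothetical stand-ins for whichever model or tool a training designer actually uses.

```python
def build_scenario_prompt(bones: str, country: str, n: int = 10) -> str:
    """Turn the anonymized 'bones' of a real incident into a request for
    multiple scenario variants to use in a crisis simulation."""
    return (
        f"Here is an anonymized summary of a near miss: {bones}\n"
        f"Suggest {n} distinct ways a similar corruption issue might arise in "
        f"{country}, each as a one-paragraph training scenario. Do not reuse "
        "the original facts."
    )


def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to whichever AI tool you use.
    raise NotImplementedError


if __name__ == "__main__":
    bones = "A licensing renewal stalled until a 'facilitation fee' was hinted at."
    prompt = build_scenario_prompt(bones, "Cambodia")
    # In practice you would pass this to ask_llm(prompt) and have someone with
    # real experience select and expand the useful drafts.
    print(prompt)
```

The point is the division of labor: the tool multiplies candidate scenarios, while a person with real investigative experience chooses which ones are worth building into training.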

Carmen Cracknell: It’s been a really interesting discussion, Rupert. Thank you so much for joining me. If people want to find more info about you and your company, where should they look?

Rupert Evill: I guess LinkedIn would be the best place. I’ll try and share a linktree link that has other sources of information, newsletters, et cetera, if that would help people as well.

Carmen Cracknell: Brilliant. And you’ve also got some pieces up on the GRIP website. So if anybody wants to read some of your content, that’s another place to look. Lovely. Thank you again.

Rupert Evill: Thank you very much.

Carmen Cracknell: Take care.

Listen to the audio.