Transcript: Christian Hunt podcast

Human Risk founder Christian Hunt believes a behavioural science-led approach can mitigate risk and lead to desirable outcomes in business and compliance.

This is a transcript of the podcast Christian Hunt on compliance “in the wild” and the risks of data bias in AI, a conversation between GRIP Senior Reporter Carmen Cracknell and Human Risk founder Christian Hunt.

[INTRO]

Carmen Cracknell: Welcome back to The GRIP Podcast. I’m joined today by Christian Hunt, an expert in human behaviour and risk. He is the author of Humanizing Rules and the host of the Human Risk Podcast. He also has many years of experience in compliance.

Could you just also for our audience introduce yourself in your own words, Christian?

Christian Hunt: Sure. So I am the founder of a company called Human Risk, which specializes in managing and mitigating a particular risk: the risk of human decision making. I work with people in areas like ethics and compliance, but also information security, cybersecurity, HR, and a broad swathe of other parts of organizations, looking at how we can influence people to do the things that we want them to do and not do the things that we don’t. And I do that using behavioral science. My background, as you pointed out, has been in risk and compliance; I’ve been both a regulator and a compliance officer. And I guess all of this is pulled together by the fact that I’m fascinated by human behaviour and what drives us to make certain decisions. So what I do is try to help people think about that in a context in which we don’t necessarily always think about the human at the end of the things that we want them to do.

Carmen Cracknell: And I was interested in how you got into this, because we don’t often hear about compliance from the psychology perspective. It’s usually from the legal field. So is that how you came into it, or how did you arrive at compliance?

Christian Hunt: I’d always been very curious about people and what made them do things. I was one of those annoying children who would point at people and ask my parents why they were doing certain things. I studied literature at university, and literature is all about people. We tend not to write books about things; we write plays and novels about people and the decisions that they make. So there was always that latent interest there. And through a series of career decisions, I ended up working at the regulator, and I always have to stress this was post the 2008 financial crisis. I joined the financial services regulator having spent a lot of my career in various roles within financial services. I then moved from the regulator to UBS, which was the bank that I had spent most of my time as a regulator looking at, and anyone who knows anything about regulation will know that if you’re looking at a firm, it’s not necessarily because everything’s running swimmingly. And so I had a very unusual set of circumstances where I ended up eating my own regulatory cooking. In other words, I had imposed things on the firm that I was then having to enact myself and take responsibility for. And as I did that, I started to realize that things weren’t landing in the way the regulator had intended; not a critique of UBS, but a critique generally of the way we went about compliance.

As I looked at what compliance was, ultimately what I realized we were trying to do was influence human decision making. You couldn’t say to the bank, “be compliant” and expect it to respond. It’s the people within the organization that would determine whether or not that happened. So I looked with two hats on: a compliance hat and a risk hat. With both of those, human decision making was at the forefront, because if we wanted the organization to be compliant, then we needed to persuade the people within it to do the right things and not do others. But equally, when things went wrong, there was always a human component involved, either causing the problem in the first place or making things worse. And as I realized that what I was doing was trying to influence people, I saw that if I could get the people to behave in the right way, then we’d have fewer of these problems. So I started to realize that the way one did that was to look at other contexts in which we’re trying to influence human beings and ask: what can we learn from those? Advertising is the most obvious example. And as I started to look at that, what I realized was that the way many organizations, and society as a whole, often go about looking at these risks and challenges doesn’t think about people. It thinks theoretically. It says: because we employ people, we can tell them what to do. And that, of course, is legally correct. But if you really want to get people on board to do things, particularly where there’s a qualitative component, in other words where you need people to do things to a particular standard, or you need them to do it when you’re not able to monitor what they’re doing, or you need them somehow to be engaged and to think for themselves, then we have to start thinking about what this feels like for those individuals, and how we can influence them. And it was this realization that I was dealing with people that made me think I really ought to try and understand what it is that drives that.
So behavioral science, which is the study of human decision making, was something that had always interested me. I’d been fascinated; I’d read lots of books. And suddenly it dawned on me that this wasn’t just slightly interesting for my work; it was entirely what my work was about. So I set about testing that out, running various experiments. And needless to say, if you think about the humans that you’re trying to influence, you will get better outcomes, because you’re reflecting the realities of the world they’re in rather than theories of what people ought to do. And so began this journey of bringing behavioral science to ethics and compliance. I set up a behavioral science function within UBS, and then, a few years ago, stepped outside, and I now work with a range of different organizations solving that same problem.

Carmen Cracknell: Interesting. I would love to hear about some of the experiments that you ran, if you can talk about them, and the outcomes that you got.

Christian Hunt: Yeah. There are lots and lots of simple things that you can do. An experiment sounds complicated, but just the words that you use in an email can have a major impact. So if I give you a very simple example: if I send an email out to a bunch of people within an organization and I put something like “compliance training” or “compliance update” in the subject line, they’re not likely to get very, very excited. I always say that compliance is one of the world’s worst pieces of branding. If you were trying to make something sound bureaucratic and officious, you’d use a word like compliance, and you’d add a suffix like officer to make it sound doubly jumped-up, the sort of clipboard-and-little-hat bureaucrat. So there’s one really, really simple experiment: change the subject line of an email and see if more people read it, because one of the problems you might have is people not reading your emails. Or if you need them to engage with it, then at least you want them to open it in a spirit of cooperation, at least interested in what the subject is. So there’s one simple example where looking at the subject line of emails can make a huge difference. If people aren’t opening your emails, then maybe there’s a reason: it’s not appealing to them in the first place. But one can get more sophisticated and look, for example, at the effectiveness of training. One of the things we often do when we think about training is make a presumption that because somebody has attended a training course, a bum is on a seat or somebody has logged on, that that is somehow a proxy for people understanding the rules, being interested in them, caring about them. Of course, we all know that’s not true.

We’ve all sat through classes at school. We’ve all sat through situations where, theoretically, we have been paying attention, but in practice we’re not. So what do we then do? The first thing is we’ll measure: has someone gone to the training? The second thing we’ll do is test them immediately on the thing we have just taught them. You’ll see lots of bits of mandatory training within organizations that will teach you some fact and then, two minutes later, test you on that fact. Not a particularly effective form of recall, but again, it ticks the box. Not only did people attend, but we tested them. And my argument is that’s not a good proxy for “have you done this properly”. So what you discover is that if there are rules people aren’t complying with, and you thought you had trained them on those rules, maybe there’s a problem. It’s not necessarily a problem with the people; it may be a problem with the training.

Equally, we can think about policies. It’s pretty obvious that the longer and more tedious something is to go through, the less likely we are to read it. A good example: most of us, if we buy a new appliance that comes with a fat instruction manual, won’t read it unless the appliance looks particularly complicated or dangerous. We’ll just crack on and have a go. So reducing the length of policies, looking at the reading age of policies: all of these are basic, simple initiatives that you can launch as experiments. Of course, I talk about experimentation because in most cases the things I’ve referred to are obvious. You would think there’d be a direct correlation between making those changes or improvements and the results that you get. And very often there is. But it’s worth having an open mind, because humans are often unpredictable and we don’t always do logical things. And so the reason you have to experiment is that things we might logically think would work can often have perverse outcomes.
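[Editor’s note: the “reading age of policies” Hunt mentions can be estimated mechanically. The sketch below, which is illustrative rather than anything discussed in the conversation, computes a rough Flesch-Kincaid grade level in Python; the syllable counter is a crude heuristic and the sample policy sentence is invented.]

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Rough Flesch-Kincaid grade level: higher means harder to read."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Naive syllable count: each run of vowels counts as one syllable
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Invented policy-speak for illustration
policy = ("Employees shall ensure adherence to all applicable regulatory "
          "obligations pertaining to the utilisation of corporate assets.")
print(f"Grade level: {flesch_kincaid_grade(policy):.1f}")
```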

So here’s an example of something that can often backfire and that we very often don’t think about. If we have a situation where lots of people are not complying with a particular rule, one of the things organizations often do is send out emails to everybody making that point: look, here’s a rule that we need people to comply with, but lots of you are not complying with it. On the face of it, that’s very logical. We’re communicating to the people that we want to behave in a particular way. We’re saying: this is not good, this is what we want you to do, so we expect you to follow it. But the hidden message behind an email that says lots of people are not complying with a particular rule is that you are highlighting the fact that there is non-compliance on a large scale. You wouldn’t be sending the email out if there wasn’t a reasonable amount of non-compliance. And so you do one of two things. To the people that are not compliant, you send a hidden signal that actually they’re not the only ones. There wouldn’t be an email, and you wouldn’t state that lots of people aren’t doing it, if there wasn’t a big problem. So they look at it and go: I’m OK, I’m not the only one here. And the people that are compliant will look at that and think: well, why am I bothering if all these other people aren’t doing it? What we’re doing there is shining a light on something that we don’t want to have happen and sending a signal that it is quite prevalent. And so there’s lots of things that we can do.

So when we’ve got a situation like that, there are probably other techniques one might wish to try. What I do is work with people to think about alternative solutions to that particular problem. And if people aren’t doing what we want, I have a very simple mantra that I quote to people: if one or two people aren’t complying with a particular rule, aren’t doing the thing that we want them to do, then you’ve probably got a people problem, because everybody else manages to be compliant. There may be a very good explanation, but generally speaking there’ll be something specific about those people. But if lots of people aren’t complying with the rule, then you’ve probably got a rule problem, because I start from the simple premise that if lots and lots of people can’t do something, they’re unlikely all to be intentionally setting out to break the rule. There will be something problematic with the rule: the way it’s been explained, what it does, the fact that maybe they were taught about it six months ago and have forgotten. Something around it is driving that behavior, so we need to think about what within the rule is driving it. Of course we need to hold people accountable for their behaviors. But if we’re getting feedback, from a behavioral perspective, that lots of people are doing a particular thing or not doing something we need them to do, then there are going to be environmental factors at work. In other words, the organization is unintentionally contributing to that challenge somehow. And that’s where you start to find some interesting things you can do, once you recognize what the reasons for that might be.
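[Editor’s note: to make the experimental mindset concrete, here is a minimal sketch, with entirely hypothetical numbers, of how the email subject-line experiment Hunt describes earlier might be evaluated, using a standard two-proportion z-test on open rates.]

```python
from math import sqrt
from statistics import NormalDist

def subject_line_test(opens_a: int, sent_a: int, opens_b: int, sent_b: int):
    """Two-proportion z-test: did subject line B move the open rate vs. line A?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    # Pooled open rate under the null hypothesis of no difference
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, p_value

# Hypothetical: "Compliance update" (A) vs. a plain, task-focused subject line (B)
p_a, p_b, p = subject_line_test(opens_a=220, sent_a=1000, opens_b=310, sent_b=1000)
print(f"Open rate A: {p_a:.0%}, B: {p_b:.0%}, p-value: {p:.4f}")
```

[A small p-value suggests the wording change had a real effect; a large one suggests that, as Hunt warns, even an “obviously better” subject line may not have worked.]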

Carmen Cracknell: Yeah, companies talk so much about culture. And I guess that ties into what you just said. What’s at the root of this culture of people not complying and how can companies make themselves more compliant?

Christian Hunt: So it’s really about looking at what the drivers are. The drivers that we think exist are things like: well, we’ve employed you, so we can tell you what to do. So by default, you’re going to do all the things that we’ve told you. You are going to read all the policies that we’ve asked you to read. We might even have asked you to attest that you’ve read, memorized, and complied with all the policies that are there. Now, if you take that as an example, we can draw parallels with what I call compliance in the wild, the real world. There are lots of situations in the real world where we are asked to sign things to confirm that we’ve read them and agreed to them, where we know in reality that we don’t do it. Take the example of downloading software: we’ve all clicked those things that say “I agree to all these terms and conditions” without bothering to read them. Ditto with car rental. If I want to rent a car, I need to sign probably about 10 or 11 times. I don’t know what I’m signing. And it makes perfect sense for us not to read those things in the real world, because actually we can’t negotiate.

I can’t turn around to Apple or Microsoft or whoever the software provider is and say: I don’t like that, can you strike through clause 57? It’s take-it-or-leave-it territory. Ditto with car rental: I just want the car, and we kind of assume it’s going to be OK. What’s interesting is that if you look within companies, we deploy the same techniques, effectively saying to people: please sign here to keep your job. Now, if we’re upping the stakes, giving people a scenario they’re used to in other circumstances, where they just tick the box or sign their signature, and making doing so contingent on keeping their job, of course they’re just going to sign it. So my contention would be that that’s not a desperately helpful exercise. If we’re relying on something that seems logical, the fact that somebody has signed this or clicked on this thing to attest that they’ve done something, does that actually mean they’ve done it?

Now, I’m not saying that exercise is valueless, but what I am saying is: let’s think about what that exercise means, because if we recognize a box-ticking exercise, we may comply with it in a manner that also suggests we just think it’s there to tick a box. We’re not taking it seriously. So when we look at the drivers of human decision making within organizations, we often pin it on the fact that we employ people and therefore we can tell them what to do; the fact that we’ve incentivized people through bonuses and processes; the fact that we only hire honest and ethical people, so if they’ve done something wrong, it’s automatically all their fault. Actually, the drivers of human decision making are a combination of factors. It’s the environment that you find yourself in. It’s the behavior that you see in other people. It’s your own experience and the things that you’ve done. It’s your own blinkered, naturally biased view of the world. So there are lots of factors driving human decision making, and they aren’t all about logic. The reason organizations focus on things like culture is that they recognize these hidden factors are there. If, for example, we’re told in induction training when we join a company that this is a very moral or ethical company, our first question might be, and we won’t necessarily ask this formally, but we might subliminally: why are they making a big song and dance about ethics? Is there a problem here? Have they had issues? So we may already have some doubts in our minds when we leave the induction training. And if we discover that what we’ve been told in the training bears no resemblance to the realities on the ground, then we’ll go with the realities that we see. Culture is one of those hidden factors where we take our cues from everything we see happening around us. We can easily be persuaded, and we know this as kids, to go along with things if lots of other people are doing them. And so the challenge, again, with behavior is that it’s not a simple case of programming a machine to do exactly what it’s told all the time. Humans are sentient; we see what’s going on around us, and we take our cues from what makes sense to us given the range of factors that seem relevant at the time.

Carmen Cracknell: Yeah, so it sounds like you’re almost saying kind of coercion doesn’t work, but psychological tactics do in getting employees to be compliant. Is that fair to say, do you think?

Christian Hunt: So one of the things I say is that we often work on the basis of: look, our aim is to get 100% compliance, so what we need is for people to comply 100% with the rules, and then life will be OK. Now, I make the point that I think that’s an impossible challenge, because I don’t think we can expect all of the people to do all of the right things all of the time. People are fallible: they’re tired, they make mistakes. Some people will set out to break rules. So what I think we need to do is focus our efforts and ask what really matters. There are going to be certain industries, and certain tasks within them, where you say: this is absolutely critical, this is irreversible, if this goes wrong it causes us a major problem.

Whether that’s a matter of life and death in a safety-critical industry, or something significantly detrimental from a reputational risk perspective, customer credit card details being exposed, personal data being exposed, the company making a decision that’s incredibly unethical, or breaking the law in other ways, there are going to be certain things where you say: we absolutely don’t want that to happen. And you can always get people to do what you want if you throw enough effort at it: if you monitor what they’re doing all of the time, if you put tightly controlled parameters around them and limit their ability to do things. The trouble is, if we do that in every single aspect of people’s lives, they get irritated, because they find it oppressive. And we all know what it’s like to be in situations where we feel somebody is being overbearing.

We’re being told what to do all of the time. We’re being observed. We’re being monitored. The things you might do to achieve 100% compliance could be quite draconian. So we can make it happen; we just can’t make it happen everywhere. And so I say to organizations: let’s think carefully about the tools we’re using to achieve the outcome that we want. If we have something really important that we don’t want to have happen, and that might be something significant from a financial perspective or a reputational perspective, or we just don’t like the outcome, bullying, sexism, racism, those sorts of things, you might say: these are absolutely mission-critical, and we will throw the proverbial kitchen sink at preventing them. But we can’t do that across the piece. And if we constantly use techniques that are oppressive, that treat people like they are small children, at some point they rebel against that.

So there are going to be times when we want to do that. But there are also going to be times where we want to do something a little more subtle, where we may want people to think for themselves, not just slavishly follow rules, but actually, depending on the nature of the circumstances, respond in a particular way and work with us. And if we want them to work with us, then we shouldn’t be shouting at them, screaming at them, treating them like we don’t trust them, because that won’t deliver the outcomes we’re looking for. So what I suggest to people is not to treat compliance like it’s one type of thing, but to ask: what is the specific behavioral outcome we’re looking for with this particular rule, this particular training, this particular control? What are we trying to do? And then let’s look at what tools we can deploy to get that particular outcome. Very often, we don’t think about the techniques we’re using, and therefore about what technique might be suitable; we do what we’ve always done. And when things go wrong, we often double down on doing more of the same. If somebody isn’t doing what we want, maybe we just assume the problem is that there wasn’t enough training. What if the problem was that there was too much training, or the training was rubbish, and we’re just doing more and more of it? What I’m saying here is that, just as with anything else, we should think about having the right tools for the job. We need to work out what that job is, what behavioral outcome we’re looking for, and therefore what tool we can use. I’m not saying don’t ever use very tight controls, the “slavishly follow these rules or you’ll be fired” approach; that’s absolutely fine in the right context. But we often get it wrong, and we don’t think about what we’re doing. If we start to think behaviorally, we can open the toolkit and find a whole load more things in there that might be more effective.

Carmen Cracknell: Yeah, that definitely resonates, the point about too much training. I think a lot of people can probably relate to that. Have you seen organizations working to improve in this way since you’ve been in compliance? Or have you come up with these solutions now to bring about change?

Christian Hunt: So all of the things I’m talking about are road-tested in one of three ways. First, there are things that I have seen in other contexts. I love talking, as I mentioned before, about compliance in the wild: situations we might not badge as compliance, but where somebody is trying to influence a human to behave in a particular way. And that’s why the formulation of compliance as the business of influencing human decision making is really important.

Because if you start to think like that, you can suddenly include lots of activities within society which you could see as a form of compliance exercise. So if we look at COVID, for example, one of the things that governments were doing there was trying to persuade human beings to behave in a completely different way to the way that they behave normally, whether it’s mask wearing, vaccines, staying at home, social distancing, all these things. And we can see how we felt about those different scenarios. And so what I say to people is we can find solutions to compliance problems in completely different contexts if we make that read-across.

And so lots of the techniques I talk to people about may not come from a “you’re being told what to do, follow some rules” context, but from techniques that we use to sell coffee or washing powder; there can be elements of those that work in a compliance context, because we’re playing with the same human operating system. So trick one is to look at other contexts, not to slavishly copy them, but to ask: what can we learn from that particular thing? The second thing is things I have done myself. While I was at UBS, I was doing a lot of these things, so I field-tested them myself. And the third component is things I’ve worked on with my clients, looking at their particular situations. So when I talk about these things, it’s a combination of those three factors.

Now, the other thing that informs the work that I do is academic research. There are lots of academics doing research on human decision making and what we can learn from it. I treat that the same way I do a compliance-in-the-wild example. It’s informative, it’s useful, it gives us an idea; we mustn’t assume that just because an academic somewhere has managed to persuade a bunch of students to behave in a particular way, we can automatically deploy that in an employment situation, but it informs us. All of this is about being creative and thinking intelligently about how I can influence people.

And we should treat compliance as an experimental discipline, which scares some people, but it’s genuinely one of the ways I think we need to look at it, because there’s no guarantee with this stuff. Each time we try to influence behavior through training, policies, procedures, or communications, what we’re doing is an attempt to influence those people. So what I say is: recognize it’s an experiment. See where it’s working and where it’s not working, and have an idea about why the technique you’re deploying might make sense: have a hypothesis. You can then test that out. Now, in some cases, the way things are done traditionally is so appalling that you can identify the problem immediately. If, for example, I’m expecting someone to read a 5,000-page policy, I can probably determine that that’s not the smartest way to go about it, and that it poses a risk: it’s entirely likely that no one’s going to read it. So if I need them to know what’s in it, what are the other ways I can communicate it? Some other things are a little more subtle, because they may be context-dependent; there may be things that work in some situations but not others, so the way we monitor and look at them is slightly different. But it’s this experimental mindset that I’m bringing: starting with something that might inform my hypothesis, then trying things out. Now, that feels risky, because people say: well, I don’t want to experiment with compliance. You already are; it’s just that you don’t think about it in those terms. All of the traditional techniques we use, lots of training, the assumption that if we tell you, you will automatically do it, those are experiments; we’re just not acknowledging it. So when I suggest these things, people sometimes get scared. But you’re already running experiments; wouldn’t it be better to run well-informed experiments and recognize them for what they are? Then you can start to think about what you’re doing, as opposed to just slavishly rolling something out that isn’t actually effective.

Carmen Cracknell: Where does AI fit into all this? And can it help mitigate human risk?

Christian Hunt: So AI is fascinating, because if we think about what is driving AI, well, human risk fits into AI in the sense that it’s people who have programmed the AI. We’ve unleashed this thing on the world. And let’s think about the datasets we’re putting into that AI. The datasets are, of course, incredibly important, because while we can program the AI to think about certain things, we’re unleashing it on the sum total of human experience that we’re able to feed it. Now, if there’s one thing we know about data, it’s that it comes from one place: the past. And the particular challenge we’ve got with data is that data is heavily biased. If we think simply about male-female data, there is a lot more male data out there, pick whichever domain you like, because of the patriarchal society that we have: we’ve been recording male data for a lot longer than we’ve been recording female data. And that’s just one example; we can think about other forms of bias, racial bias among them, where we don’t have a dataset that is neutral. Let’s also consider that when we ask AI to suck in lots of data, there’s no quality control there. We all know the example here: if you look at Wikipedia, well, that’s curated, it’s a genuine attempt to provide useful information, but there’s also nonsense on there.

Social media is rammed with things that are not true. So we’re feeding a load of data into these AI models that is by default not perfect, and that’s going to produce some particularly poor outcomes. And I think the challenge we’ve got is that because AI speaks to us in very human terms, generative AI in particular produces responses that look and feel human, what do we do? We look at the output and treat it as if a human had produced it. That’s the genius of a lot of these things. And we might place blind faith in the black box to produce incredible results. So what I see as the challenge of AI is that our brains aren’t really equipped to handle what it’s doing. We don’t really understand the processes involved. It is taking biased data and delivering us a result in a way that we think is powerful. Of course, there are tremendous benefits to AI. It can spot things that the human brain can’t; it’s much smarter than we are at seeing patterns we don’t see. That’s great from a medical perspective, it can be great for identifying risks; there are lots of really good things it can do. But it’s also incredibly dangerous, because it may produce incredibly biased outcomes. If, for example, you said: we want to use AI to deal with the climate crisis, what would AI do?

Well, the number one way you could deal with the climate crisis is to get rid of the human population. That would have a major impact, but it’s probably not the desirable outcome we’re looking for when we program it. So we need to keep it in check. And I think one of the problems is that it’s very hard for us to keep in check something that’s way smarter than we are. So I’m simultaneously really excited about AI and utterly terrified by it. The other thing to note about technology in general, but AI particularly, is that as we give technology tasks to perform and it becomes more competent, it starts to take those roles over. We’ve seen this in the physical world, and it’s now moving into the cognitive world. So what are human beings going to do? The answer is we’ll do the things that the AI can’t, things that require skills like nuance, judgment, emotional intelligence, all the personal attributes, which is when we as human beings are at our absolute best.

I say the AI can’t; of course, one day it may well be able to, but at the moment it can’t. So humans will spend more time doing things that involve human judgment, nuance, and human attributes, things that artificial intelligence isn’t good at but natural intelligence, the thing we’re born with, is. The risk is that that is also when we are at our riskiest. Because if you think about it, when people are making mistakes or getting things wrong, it’s often because some of those human attributes are being amplified: maybe we’re overly emotional about something, maybe we’re misreading something. So that’s when people are at their best, but also at their riskiest. Technology will simultaneously mitigate human risk by cutting out human error, once we’ve got it working in a perfect way, and AI, of course, does make mistakes. But we’re then going to push people into a zone where they’re doing things that make them huge assets, but also huge liabilities.
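[Editor’s note: Hunt’s point about biased training data can be illustrated with a deliberately crude toy model. The records below are invented, but the sketch shows how a model that simply learns from a skewed past reproduces that skew.]

```python
from collections import Counter

# Invented "historical" records: the past over-represents one group's outcomes
training_data = (
    [("male", "hired")] * 700 + [("male", "rejected")] * 100
    + [("female", "hired")] * 50 + [("female", "rejected")] * 150
)

def naive_model(gender: str) -> str:
    """Predict the most common past outcome for this group: pure pattern-matching."""
    outcomes = Counter(label for g, label in training_data if g == gender)
    return outcomes.most_common(1)[0][0]

print(naive_model("male"))    # "hired": the model has absorbed the historical skew
print(naive_model("female"))  # "rejected": past bias becomes future policy
```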

Carmen Cracknell: Well, we’ve covered a lot there. It’s really interesting. I think we’ve covered most of my questions. Have I missed anything out that you think is important?

Christian Hunt: No, I mean, you could ask me, you know, what advice I’ve got for people if they want to think about how to do this.

Carmen Cracknell: Yeah, well, please go ahead.

Christian Hunt: So my top tip, when we are thinking about human risk, whether that’s in compliance or ethics or any other context where we’re trying to influence people, is that the most important thing we can do is think about things not from our perspective, but from theirs. And the challenge is that’s really hard for us to do. We understand our own view of the world, because we come at it with our own experience and our own perspectives. That’s how we’re able to get through life: we use the past experience we’ve gathered and the information we’ve taken on board to form views, and we’re capable of coming to views very, very quickly. A good example would be crossing the road: I don’t process the velocity of vehicles and do calculations; I roughly know what I’m doing, so in that situation I can respond very quickly. Now, we do all of those things based on our own past experience. The challenge is that we don’t know what past experience and views other people are bringing to things. That can make other human beings very powerful from a cognitive diversity perspective: if we’re sat in a boardroom, we want lots of people who think differently and will come to different viewpoints; powerful stuff. But if we’re trying to influence their behavior, then we need to have some sense not of how we would like people to behave, but of how they are likely to behave, the realities of what’s going on in their minds.

Putting ourselves in other people’s shoes is difficult enough anyway, but it’s particularly difficult where we are a subject matter expert trying to influence people who might not be. If you’re a compliance officer in an organization, you will understand why particular rules are in place. Why do we have a policy on this particular thing? Why does this matter? That isn’t necessarily going to be obvious to the people who don’t spend their whole lives looking at it. I always say, jokingly, that the average employee isn’t interested in compliance. Now, I don’t mean they’re not interested in behaving compliantly; most people don’t show up to work trying to do the wrong thing. But they’re not passionate about the subject, and I know that because if they were, they’d be working in compliance. What they want to do is their sales role, or whatever it is they happen to be there to do. All these other factors that come into play are things they care about on some loose level, but they’re not experts in them. And if we, as experts in something, are trying to communicate with people who aren’t naturally interested in it, it can be very difficult to make that translation. So we need to put things into language that makes sense to them. The classic error people often make is to presume that what is useful information for me as an expert is automatically useful information for people on the front line. The fallacy in that is that we are not trying to turn every single employee in an organization into a mini compliance officer. That’s not their job. Their job is to do whatever it is they do in a compliant manner. So we have to take the level of knowledge that we have and work out what is necessary for them to understand so that they can do their job.
We might need to understand the derivation of a regulation, the minutiae of it, where it comes from, all that stuff. They may just need to know that in certain circumstances, this is what you need to do, or: if you spot this, come and get help. So we have to work out what behavioral outcome we are looking to achieve, but we also need to understand where they’re coming from. How might this land with them? If we speak to them in technical jargon, if we talk to them the way we would talk to each other, we risk not getting through to them. Rather than compliance, or any of the other functions, being a transcription service, where I take a regulation and say to people, here is the regulation, read this regulation, and that’s effectively what we do in many cases, what we actually need to do is ask: OK, how do I make this relevant to the people I’m trying to communicate with? What will their perspective be? Is this something they will logically understand? And I mean logically in the sense of not what I’d like to happen but, in reality, will they get this? Will this make sense to them? What do I need to think about in how I communicate why this is relevant to them and why it matters? They don’t need to hear that a regulation was created in 1864, and blah, blah, blah. What they need to know is: what do they need to do, and what does it mean for them in their day-to-day existence? That’s hard for us to do, because it’s difficult to put ourselves in their shoes. If you really want to influence human decision making, it’s a little bit like an advertiser needing to understand their customer: who am I trying to advertise to, and what needs am I meeting with the product or service that I’m advertising? Advertisers put themselves in their customers’ shoes, and they do focus groups and various things to make sure they understand, and therefore they have a greater chance of being successful. Compliance often doesn’t do that. We just presume that we’ll tell them, and because it’s fascinating to us, it will automatically be fascinating to them. Logically, possibly; in reality, not. And if we don’t think like that, we will constantly do things that run the risk of not landing in the way we intend.

Carmen Cracknell: Some really useful tips there to help compliance professionals stay grounded. Thank you so much for joining me, Christian. Anybody who wants to know more about you, where should they look?

Christian Hunt: So there are lots of places you can find me on social media. If you are on LinkedIn, just search for Human Risk and you’ll find me. Human-risk.com is my website; you’ll find me there too. And there’s also the Human Risk Podcast, in which I interview people about human decision making and what we can learn from it. That’s at humanriskpodcast.com.

Carmen Cracknell: Awesome. Thank you very much for joining me.

Christian Hunt: Thank you.

Carmen Cracknell: Thanks. Take care.
