The following is a transcript of the podcast Andrea Bonime-Blanc and Ryan McManus on building responsible AI governance that lasts, a conversation between AI strategy and governance experts Andrea Bonime-Blanc and Ryan McManus and GRIP’s Alexander Barzacanos.
[INTRO]
Alexander Barzacanos: Welcome to The GRIP Podcast. I’m Alexander Barzacanos. Today, we’re exploring how boards and executives can turn AI into business outcomes while building governance that lasts, covering strategy, risk, oversight, ethics, and the future of the technology with our guests, Andrea Bonime-Blanc and Ryan McManus.
Dr Andrea Bonime-Blanc is the founder and CEO of GC Risk Advisory. She advises boards, companies, and governments on governance, risks, ethics, cyber resilience, and exponential technology. Her forthcoming book, Governing Pandora: Leading in the Age of Generative AI and Exponential Technology, maps practical, responsible governance for AI and adjacent tech.
Ryan McManus is a technology entrepreneur, investor, and board director and president of the National Association of Corporate Directors New York Chapter. He also writes and teaches on enterprise AI strategy and board oversight.
Ryan and Andrea, welcome. It’s good to have you here.
OK, Ryan McManus, let’s dive in. You frame “total information mastery” as an enterprise AI goal. Can you explain that concept and how it’s changing businesses?
Ryan McManus: Yeah, absolutely. Thanks very much again for having us.
The idea of total information mastery comes from an article I co-authored with my colleague Igor Jablokov, the CEO of Pryon, an enterprise AI company. We were having a conversation about all of the noise that is out there in the system. We wrote that article last year, and of course, things have changed exponentially in the intervening months.
What we were observing is that whether you’re a board director, a CEO, a member of the C-suite – frankly, anybody – there is just a ton being proposed, talked about, theorized, etc, etc. It was all sort of being talked about as if AI is a single thing. And of course, it is thousands of things when you get into the various use cases and technologies.
So what we wanted to do was provide a framework for leaders to understand a few different principal areas, a few different speeds of how to consider AI implementation and benefit capture across the enterprise.
Just very briefly, we looked at four areas principally. Number one is just a basic general administrative productivity jump, which is largely available through the big LLMs. We call that the enablement capability – a very low threshold for adoption. Generally, we can train people very quickly on how to get some benefit out of that, and we’ve seen some pretty spectacular productivity gains through some of that initial implementation.
The second level is really an explicit experimentation capability. Because of how fast AI is moving, if we only anchor our strategies and our use cases in what is actually happening today, given the speed of the development and innovation cycles, we run a very strong risk of falling behind very, very quickly without even realizing that something was happening. Having, again, explicit instrumentation for exploration and experimentation is a best practice.
Thirdly, of course, every SaaS platform, every SaaS opportunity, every SaaS capability is rapidly deploying its own AI and GenAI capabilities. So the entire enterprise will be instrumented insofar as SaaS tools are already in place. What’s interesting about those three areas is that, in our view, they are already commoditized. You can bring those in, deploy them, and subscribe to those kinds of capabilities with relatively little work on the back end, if you will.
Then there’s a fourth level, which is what we call enterprise AI in the article. This is really looking at: what are the crown jewels of data and information and information flow on an ongoing basis that we have as an enterprise? In other words, things that we have that perhaps nobody else has.
How can we create strategies – and they’ll be a little bit longer term because there are different technologies and different kinds of enterprise data strategies that have to back them up – to actively explore that in combination with the expertise and the knowledge that only our people have? Those two areas are very critical. That is where we see a real opportunity for competitive differentiation over time, again with everything else being largely commoditized functionality that’s going to be coming through a variety of different areas.
Of course, and we’ll talk about this today, there are all kinds of different risk topics, things that we need to be paying attention to across all of those dimensions. But that was the original intent of this idea of total information mastery. As the title suggests, how do we have a cohesive strategy at an enterprise level across all of those domains?
Alexander Barzacanos: Then you mentioned risk. What are some of those emerging AI strategic risks?
Ryan McManus: We explore this in the article as well. And of course, there’s a large number of ongoing conversations about both existing and emergent risks.
I like to put them into four or five major categories. There’s obviously technology risk. There are entirely new vectors of cybersecurity risk, both today and as GenAI generates entirely new vectors on an ongoing basis. That can just continue to spool forward. The growth of GenAI-created cybersecurity risks is running in the hundreds of percent on an annual basis. It’s absolutely shocking how fast that is moving, but at the same time predictable.
There are technology risks around AI models. In particular, do we have the right kind of review and audit process before we implement a model that perhaps we didn’t build ourselves, or that we’re taking from one of the model libraries, of which there are thousands online? There are entirely different kinds of technology risks where bad actors are embedding code into those models – Trojan horses, that sort of thing. Lots of things are happening in the pure technology risk area.
There is regulation and ethics. How do we know that we are using these tools in the best way? How are we approaching privacy? How are we approaching some of the ongoing evolution of ethical considerations? We also know that there were something like 1,500 laws proposed across just the United States last year, principally at a state level. That’s an enormous thing to be keeping up with. So there’s the risk and regulatory piece, of course. How are we controlling for bias? What are the policies we’re putting in place?
Then there is everything about our people – the people who are our customers, the people who are our employees, the people who are our stakeholders. How do we give them confidence in terms of the strategy of the organization and in terms of really looking to have a shared benefit of the use? There’s a lot of fear out there about job displacement and career progression, which is honestly a legitimate thing for people to be concerned about.
That leads us into the broader strategic risk. This is not the first time we’ve seen technological platforms emerge that change the nature of the economy. AI is the latest, but it’s not the only one. We know very clearly from previous cycles what wins and what doesn’t do so well. So having that attention at a C-suite level, at a board level, and having, again, explicit value-creation and business-model conversations – in addition to risk and operating model efficiency and the rest – is a critical part of the recipe.
We’ve seen this before where, when there’s new tech, leadership only focuses on automation or productivity. It’s a big miss, and that’s effectively where disruption comes from.
Alexander Barzacanos: Excellent. Thank you, Ryan.
Moving on to Andrea, in your upcoming book, Governing Pandora, you describe the importance of an exponential governance mindset. Can you explain that term and how it applies to governance responsibilities?
Andrea Bonime-Blanc: Sure thing, and thanks for having both of us talk about this important subject.
When I started writing this book two years ago, it was actually on the basis of another article that I wrote for Directorship magazine. So both Ryan and I are authors for the same magazine from time to time, which is for the governance community mostly, but also executives and others.
I wrote about what we need to do to govern exponential technologies. I was talking not just about AI and GenAI, but about several others that are either intertwined with AI or being developed somewhat independently of it. There’s the biotech and synthetic tech area. There are advanced materials being discovered. Of course, there’s robotics and automation. There’s computing of many different kinds, including quantum.
So we have an array of exponential technologies out there that will require all of us – whether we’re board members, executives, frontline workers – to really adjust our mindset.
The book grew out of that particular article because my publisher approached me and said: “Do you want to write a book about this?” And I said: “No,” and then I said: “Yes.” So that’s why the book exists. But really, it’s an exploration of where we are right now, having the situational awareness that we are at a time I dare to coin as the beginning of the fifth industrial revolution or post-industrial revolution.
There is so much going on at such speed, with such incredible consequence – and a lot of which we don’t know. There’s a lot of uncertainty. There’s a lot of opportunity. And there are all of these exponential technologies egging each other on and making advances that we couldn’t have imagined even 20 years ago.
So the book comes out of that idea that we have to prepare ourselves for these exponential times, in which GenAI is a major player, obviously, and AI is a major player.
The exponential governance mindset itself has five elements that I think are very intuitive, very common sense, and should help – whether you’re a board member, a CEO, or a frontline manager – to think about how to tackle all of this change.
It starts with leadership. It’s really about taking technology – whether you’re creating it yourself, buying it and integrating it into your products and services, or otherwise deploying it – and making sure you have 360-degree technology governance across your whole organization. It has to start with frontline people and managers working together, but it also has to come from the top down. It has to be integrated, 360-degree tech governance. That’s number one.
Number two is ethos – the culture of the organization. Are you embedding responsible tech culture into your organization? That has a lot of elements we already know about: ethics and compliance programs, regulatory compliance, etc.
The third one is what I call impact, which is really integrating stakeholders that are important to you into your tech loop. It’s about humans in the loop, but really thinking about the stakeholders that are most important and making sure you understand their expectations and cater to them, because reputational risk and opportunity will come out of that.
The fourth one is what I call resilience. It packages together the idea of risk management, crisis management, business continuity, and other things that you do to build resilience within your organization. That includes new things that we perhaps haven’t done before with these new technologies, like red-teaming approaches to making sure that you’re protected.
And then the last one is foresight, and that’s really unleashing a future-forward tech strategy – very much what Ryan was talking about in terms of being prepared to deal with both the risk and the opportunity that these technologies are presenting to us and being able to take advantage of them.
So that’s, in a nutshell, what I mean by the exponential governance mindset.
Alexander Barzacanos: Great, thank you. And to return to the topic of risks, where do you think organizations most often misread AI risk, and how should boards in particular correct course?
Andrea Bonime-Blanc: Yeah, you know, as Ryan said, there’s a lot of noise out there – a ton of noise. We need to separate the noise from the signal of what is real risk and what is real opportunity.
I think there’s a ton of fear out there: fearmongering and misplaced fear, but also well-placed fear. So there’s a lot of that going on. And then there’s a lot of opportunity and, dare I say, greed in the system right now – driving certainly the very big technology companies and their leaders to outdo each other with trillions of dollars of investment, and so on.
So there’s a lot of fear, and there’s a lot of greed, and there’s a lot of noise. We need to find the signal.
From a risk standpoint, Ryan already gave you some really good descriptions. I think we have to pan out to the very big picture of risk. Do you have a good enterprise risk management program that has adapted to the new technologies, that understands which ones are important to your business and your footprint, and that is properly integrated in terms of risk data collection, risk-mitigation policies, programs, reporting up to your executives and your board, and also to the authorities if you’re publicly traded and otherwise regulated?
We need to think of risk management with these new technologies not as something bright, new, and different. It’s an extension of what we already do, but we have to really understand it and integrate it into how we talk about it, how we think about it, and the data we collect.
I think one of the most important things – it was true before, but it’s even more true now – is to have really smart cross-disciplinary teams working with each other to identify those risks and understand them completely: rapid-deployment teams that go and understand these things better and then come back with better ways to mitigate or better ways to take advantage.
We need to have more of a life-cycle approach now than we ever had, because technologies are coming in different ways. Sometimes we’re acquiring data or algorithms or other kinds of products, and we really need to understand what we’re acquiring at the very inception and then throughout the whole life cycle of the program, product, or service that we’re creating and selling to our customers.
Alexander Barzacanos: Great. A survey compiled by White & Case shows compliance teams mainly using AI for document summarization (88%) and investigations review (85%), while top concerns are data protection (64%) and inaccuracy and hallucination (57%).
What would an effective AI training and controls program for compliance officers look like?
Andrea Bonime-Blanc: Yeah, I think the best thing you can do with your compliance officers and your regulatory folks – lawyers, regulatory experts, and compliance officers – is immerse them in the technology.
Send them to one of Ryan’s courses so that they really understand what it is, hands-on, to engage with the various kinds of chatbots, to create agents, and anything else that’s relevant to their particular business as well. Immerse these people in the technology and what it means and how you use it or how others are using it. Without that, you’re not going to have the empathy and the ability to understand what’s going on.
This, to me, has been one of the biggest downsides of the compliance function – and I used to be an ethics and compliance officer for several companies, so I know what I’m talking about. They often don’t understand the business properly. They just come with a cookie-cutter approach to deploying a program.
You really have to not only understand your business now; you have to understand the tools that your business is deploying and using internally and for customers. To me, that’s the most important thing: train, train, train, and also have them work with each other on case studies that are relevant to the business.
Alexander Barzacanos: So to move back to Ryan, I was interested in hearing your perspective on how we future-proof AI governance.
Ryan McManus: Sure. The key to the answer is actually in the question – this idea of not only looking at what we’re doing currently, but making sure that we have explicit instrumentation, explicit attention, regular focused attention on what’s next.
I had the opportunity to serve as one of the commissioners on the National Association of Corporate Directors 2024 Technology and Governance Blue Ribbon Commission Report. We addressed not only artificial intelligence, but, as Andrea mentioned, a whole compendium of both individual technologies and the intersections of those technologies.
We really looked at the drivers that are creating more and more requirement for attention at a board level, but also what boards should do about it. It’s a terrific report. There are a dozen tools in there for executives and board directors to refer to – things that some of us have actually used on our boards – tried and tested material.
At the highest level, we discussed three levels and broke it down into oversight, insight, and foresight. There are different behaviors, different questions to ask, different areas of responsibility, a different diffusion of responsibility across the various committees of the board, as well as the full board.
There’s no single answer for how to structure boards on these things; it really depends on the culture and the industry, of course. But we need to make sure that we’re not only looking at technology governance as it was, say, a decade ago, and instead appreciating the incredible speed of development – understanding that the actual nature of the generative AI wave is the speed of development. It’s inherent in the technology itself, and that’s only getting faster.
The point is that it’s no longer enough for boards to only look at what’s already happened and to look backwards. We need to do that, of course – make sure that policies are being adhered to, that they’re in place, and that we’re doing all of that necessary work. But we also need to ask: in more real time, what are we learning? That’s the insight piece.
What kinds of ongoing conversations are we having with the management team about how a particular area of technology – be it AI or other tech – or the industry has shifted in the last quarter? Those really need to be regular, ongoing conversations.
Then, finally, this idea of a board being more engaged in terms of what’s next. Part of what’s different with generative AI, in particular, is that it is moving so incredibly quickly that there’s not really any single cohort of people outside of the actual industry who know exactly what’s going on day to day, week to week, month to month.
So boards can be more open to having exploratory conversations and asking future-guidance questions. Where does management think this technology is going to go? What are the new value propositions? Who are the leaders? What are they doing differently? What are we learning from our experiments?
We need to make sure that the conversation between the board and the management team is not only about what happened previously, whatever that time period is for that conversation, but also about what we are looking to learn and where we see things going. That helps us address one of the key strategic risks in major economic shifts like this, which is: we didn’t see what was coming.
Historically, a lot of groups – these are the case studies you read and hear about – were leading in their sectors and then were suddenly displaced by much more nimble, aggressive competitors. Nine times out of 10, they simply weren’t asking the questions about how their markets could change. Making sure that we keep a bright light on that on an ongoing basis is a critical thing that boards and management teams can agree to do together.
Alexander Barzacanos: Great. And I wanted to ask, what’s the simplest way to integrate AI oversight into existing governance without adding bureaucracy?
Ryan McManus: There’s not necessarily one way to do this. Personally, I’m an advocate of considering the creation of another committee. We call these Science, Technology, and Innovation Committees.
It’s not an absolute requirement, but it does help with the critical problem that boards consistently have, which is: do we have enough time to talk about everything that we need to talk about? By having an additional committee of the board focused on these kinds of topics, you guarantee that you’re going to be able to address all of the different levels that we just talked about in my previous response.
I’ve done a lot of research on this, including a couple of publications that, again, we did for the NACD in 2021 and 2023. In that 18-month period between the first and the revised article, we saw a 20% jump in the number of Fortune 500 companies that have one of these kinds of committees. That’s a pretty spectacular jump in terms of a committee that is not required for any regulatory purpose to be there. It basically illustrates that more and more boards understand how critical it is to have some kind of ongoing board-level focus on the competition, risk, regulatory dynamics, value creation, and market changes that technology is driving.
So that’s a very straightforward way to do it. There are all kinds of available charters that board directors can look at and mold to their own purposes.
Additionally, whether or not you have one of those committees, there’s also room for distribution of some of the various technology themes we’re talking about across all of the main committees – certainly audit, certainly nominating and governance (NAMGOV), in particular on board education and ongoing board skills and next-generation board directors, and certainly compensation and talent.
How are we bringing this into the culture? How are we making sure that people feel comfortable and engaged in the AI strategy that the enterprise is focusing on?
All things investment and capital-S Strategy belong in the full-board conversation on a regular basis. So it’s a combination. In many cases, there could be an opportunity to stand up a new committee, and we’ve seen terrific results for groups that do that. There’s certainly a way to engage the broader committee structure of boards, but there has to be some ongoing conversation at the full board level to make sure that everybody on the board is engaged and following what’s happening.
Andrea Bonime-Blanc: I’d like to add another nuance to what Ryan just really nicely laid out in terms of changes that are needed at the board level.
Having been a NAMGOV chair for several organizations over the years and really focusing on the skills matrix and the recruiting process for board members, I think it’s absolutely necessary – critical at this point – for all boards, no matter who they are, to have proactive recruiting that looks at the skills needed that are informed by all of these technological changes.
Boards really need to bring in people that they didn’t think they should bring in maybe five years ago. There has to be real reform, if not a strong revision, of how we recruit board members and who we bring onto the board. I know Ryan feels the same way, but I think it’s an important point to underscore for purposes of future board productivity and effectiveness.
Alexander Barzacanos: Great. To jump off of the concept of culture, I wanted to ask Andrea: what do you think a responsible culture of AI governance looks like?
Andrea Bonime-Blanc: In terms of culture, having been an ethics and compliance officer for several years in different companies, I’ve seen how the way the leader – meaning mostly the CEO, but also the board, because the board holds the CEO accountable – models or doesn’t model a good culture really trickles down into the whole organization.
If you have a leader who actually models ethical behavior and responsible behavior – not just by appearing to through speeches and talks, but also by putting money where their mouth is in terms of resources and budgets for the proper kinds of guardrails and other programs that are needed, whether it’s enterprise risk management, ethics and compliance, or other things – that’s where the rubber meets the road in terms of whether you’re going to have a good culture for the technology piece as well.
I’m not a doomer. I’m not a decelerationist, as some people call them. But I am very much in favor of making sure we have guardrails in place. That means really understanding what your risks are and then having those programs – ethics and compliance, enterprise risk management – in place.
The leader, the CEO, needs to model that through accountability: speaking up when something’s gone wrong, being accountable for it to the public and to stakeholders. I can think of a couple of leaders that I think have done a really good job of that, and a couple that haven’t.
In the big-company picture, I think Microsoft has been quite good at developing effective ethics and compliance programs and guardrail programs for technology. They were one of the early adopters of responsible technology and responsible AI. Whether they’re perfect all the time or not is up to us to determine, but they do put their money where their mouth is, and Satya Nadella has been a good role model from that standpoint.
I can think of another CEO who doesn’t really live up to those expectations, in my opinion, and that would be Zuckerberg, who is always minimizing some of the important ethical and responsible tech issues in his own company. We see similar contrasts with some of the nascent, really powerful players like Anthropic and OpenAI. I would say Anthropic’s Dario Amodei does a very good job of putting his money where his mouth is in terms of tech guardrails, setting up the company as a public benefit corporation to begin with, and continuing to be very vocal about some of these issues.
Then you have someone like Sam Altman at OpenAI, who has had a rocky, up-and-down path on governance issues. Again, I think it always goes back to the leader, but it also goes back to whether the board is holding that person accountable or not.
Alexander Barzacanos: To put it broadly, either of you – Andrea or Ryan – can feel free to answer this question. What do you think the future holds for AI and AI governance?
Ryan McManus: One thing that’s really useful for people to remember is that we’ve seen these kinds of patterns before.
What’s happening with artificial intelligence today is very different in a lot of ways because we have some entirely new technological capabilities – in particular, but not exclusive to, generative AI. But there will be other things that come afterwards. Andrea mentioned quantum; that’s one of many things we see coming down the road.
The point is this: we know exactly how these 10-year cycles roll out. The first couple of years are largely about massive investments in infrastructure and compute and hardware. That’s exactly what we’re seeing in markets today. The vast amount of money that’s going into the sector is going into building new data centers, new chips, new systems, new infrastructure. That is going to give us very different capabilities moving forward.
There are a couple of ways to think about this. Number one, largely we haven’t seen anything yet, despite the really terrific and extraordinary things that we already have seen at scale. A lot of what people are used to and working with is actually on existing or even older tech. The new stuff – the new capabilities – is barely coming online yet.
We need to anticipate, from a forward-looking perspective, that there is going to be much, much more capability coming in the future. We need to make sure that we are not allowing ourselves to anchor on what the technology can do today compared to what it will be able to do. That’s a really important piece and maps back to my earlier comments about oversight, insight, and foresight – making sure that from a governance perspective, we have mechanisms for actively looking ahead.
Here are a couple of ways to look at where AI is going to go. First, we have seen utterly extraordinary benefits in the parts of the economy that are effectively text, image, and code. We’ve seen thousands of percentage points of gains in terms of productivity, efficiency, and quality.
What that should enable people to think about is: if you can get that kind of productivity out of basically a singular asset class in the economy, we need to expect that it’s going to spread across the rest of the asset classes in the economy. And of course, that’s what’s happening very rapidly.
It doesn’t get all of the headlines, but that’s where we get into much more mathematically astute models and much more robotics in the domains of physics, chemistry, biology, etc. It’s already happening. There are already plenty of leading-edge use cases. I call them the AI superpowers – this idea that we can already solve problems we’ve never been able to solve before; we can design, build, and manufacture things we’ve never been able to design, build, and manufacture before; and we can find patterns we’ve never been able to find before.
That’s the way to think about where this is going to go. One of the challenges I pose to leadership teams is not a technology challenge on the face of it, but a strategic challenge, and it is the following: what are the assumptions inherent to your business model and your industry – in other words, the problems that have never been solvable, the things that, if you could do them, would change the nature of the competitive landscape and represent the kind of new business model that would bring enormous benefit to your markets?
Those are the questions that a lot of people don’t ask on a regular basis because there aren’t new answers to those questions on a regular basis. But it’s exactly the kind of question to continue to pay attention to because this is where leaders really fall behind if they’re not looking at what is coming next.
Tons of things are coming down the pike. We have barely scratched the surface in terms of business use cases and cross-industry use cases. Largely, the things that people are reading about are generic administrative, almost personal kinds of use cases. Again, they’re fantastic, but the deployment of this across all elements of the enterprise, all functional domains, all industries is really nascent – but it’s coming at a rapidly developing clip. We have to keep our eyes open.
In terms of governance, this is not only about a single AI acceptable-use policy. There are upwards of a dozen kinds of policies that compliance groups and risk groups and teams and chief legal officers and others need to be paying attention to. They need to keep updating those as part of this ongoing development of where AI is bringing us.
In a nutshell, the concept here is to not mistake where we are today for where we’re going to be in 12, 18, or 24 months, because it’s going to look dramatically different as we continue to accelerate.
Andrea Bonime-Blanc: I second everything that Ryan just said, but I’ll put a little focus on some of the big-picture things that I think we need to keep our eyes on from a governance standpoint – at the government level, at the corporate level, and in civil society.
It goes back to some of the impacts and some of the key stakeholders that are going to be affected: society overall, specific groups, the environment, and others. We need to really keep an eye on societal impacts like jobs.
We just saw 600 people fired from the AI division of Meta. That’s happening at Meta, but it’s also happening at non-technology companies, if you want to call them that. There’s a lot of job turnover, and we as a society, together with government agencies, need to think about what the impacts are going to be and how we’re going to ameliorate some of the negativity that will happen when people lose their jobs.
So the question is: do we upskill them? Do we send them to new educational programs where people can acquire the skills needed for tomorrow and for the world of technology? Or are we going to have to go so far as to provide universal income for some people who can’t be reskilled? That whole area of jobs and job insecurity is really, really important.
There’s also the inequality piece that’s happening. We’re seeing it most here in the United States because we have the most powerful AI technology companies and leaders and a lot of people being left in the dust. Then there’s the geographical worldwide aspect of that.
I have two more areas that I think are really important. One is our own national security. I don’t know how we’re handling that at this point. There used to be at least some kind of guidance – maybe not federal laws, but some guidance. I’m not sure exactly how that’s being handled now in terms of AI competition, national security implications, and geopolitical implications.
Finally, one other concern I have is the continued weaponization of disinformation and fake everything, which is being turbocharged by AI and all the capabilities we now have. These are areas that we, as human beings, professionals, and citizens of the world, need to worry about. Some of the most proactive and productive parties don’t really care about this stuff, so there has to be a counterbalance.
I’ll leave you with one thought for the corporate setting in terms of how we survive all this change. I think it’s really important to adopt an internal voluntary code of conduct and policies – as Ryan mentioned – that keep pace with the change, are adaptable to the change, and have the right people looking at how these changes impact regulations or the internal protocols of the company.
It’s up to the CEO and the business to nurture and provide resources to a group that’s going to do that. That’s akin to what we’ve seen in the past, but it is so much more important now. Regulations are the foundation of what you need to do, but on top of that foundation, you need to have your culture of tech responsibility built around these kinds of voluntary exercises.
Ryan McManus: I’d like to comment a bit more on one thing that Andrea said about geopolitical considerations, national security, and economic security.
I’ve worked in tech for a long time. For the first time in my career, I’m seeing a much more aggressive pull from a lot of different domains – a recognition that we have to understand this, we have to make sense of it, and we have to assert a leadership position, be it at a national level, a regional level, or an enterprise level.
Typically, we’ve seen regulation, governance, and strategy on the back foot with a lot of technological developments. This could be generational – that so many leaders today have lived through the mobile revolution, the cloud revolution, Internet V1, and other areas, and understand that there’s something different happening here.
I would give confidence to listeners of the podcast that there are very active conversations happening in the United States as well as in other areas of the world. I think this is appreciated as a capital-S “Big Game” strategy dynamic, and it’s about security from a military and defense perspective as well as economic security. There is lots and lots of attention being paid to this.
That underscores how important it is – in the case of the United States – for American enterprises to continue looking to stay ahead of the developments that are continuing to progress.
Alexander Barzacanos: Great. And with that, I’d like to thank Andrea and Ryan for joining us to discuss these important topics.
Once again, Andrea’s forthcoming book is Governing Pandora: Leading in the Age of Generative AI and Exponential Technology, which will be out in February.
Thank you for listening, and we’ll see you next time.