This is a transcript of a podcast discussion between Stephen Burt, Chief Data Officer for the Government of Canada, and US Content Manager Julie DiMauro about how organizations can deliver on strategies that feature concrete action plans while retaining the flexibility to iterate and adjust to changing risk.
Julie DiMauro: Greetings everyone, and welcome to a Global Relay Intelligence and Practice, or GRIP, podcast.
I am Julie DiMauro, North American Content Manager for GRIP, talking to you from New York City. I’m so pleased to announce that today’s podcast session features Stephen Burt, the Chief Data Officer for the Government of Canada.
I’m going to ask Stephen to please introduce himself and describe his background before we kick off the program. If you would, please, Stephen.
Stephen Burt: Thanks, Julie. I’m very pleased to be here today. As Julie mentioned, my name is Stephen Burt. I’m Chief Data Officer for the Government of Canada. It’s a job I’ve been doing for about four years now, having been the senior guy for data at the Department of National Defence for about four years before that.
And prior to that, most of my career has been defense policy, international issues, and intelligence. So, I came into this field from the defense intelligence world, where we’ve always been fairly digital-centric and data-heavy in terms of the work that we do.
Julie DiMauro: Terrific, thank you so much.
Stephen, in September, the federal government said it planned to launch a public registry to keep Canadians in the loop on its growing use of artificial intelligence.
Since AI platforms were being used in many ways across the government, did you feel the full list would be helpful to the public and industry both? And did the registry reveal certain things to you in terms of the breadth of AI use?
Stephen Burt: Yeah, so the registry, when we first conceived of it, was intended as a transparency tool. We had done a bunch of engagements and we were consulting on the artificial intelligence strategy for the federal public service through 2024.
We heard from stakeholders that there was really a lot of interest in knowing, more broadly than we had published up to that point, what the government was using automation and AI for, as well as an appetite for opportunities to engage with the federal government on that kind of work.
As we developed it over the course of 2025, it became clear that it’s also going to be very useful for us, from looking internally at adoption, surfacing successful systems and use cases, looking for areas of duplication where we can consolidate our efforts across departments.
Now that it has been published, other departments across town, and we ourselves, are using it for exactly those purposes. But we’re discovering they’re also using it for things like vendor analysis: how many vendors are involved across these different types of efforts.
And I think it’s just another validation of open data, which is something that I’m also responsible for: when you start making data open, there are many uses for a data set that you didn’t anticipate, and needs that you can meet that you hadn’t really known you were going to be able to meet.
So that’s been a great experiment in this space and one that we’re going to keep on pushing. The registry that’s available now on the open government platform is a minimum viable product. We’ve been very clear about that. It’s basically a slightly enhanced Excel spreadsheet. We’re going to make it much more sophisticated and easy to use as time goes on.
What it’s got in it now is a list of 409 systems from 42 organizations. We know that’s an underestimate of what’s happening across town, but that’s what we got in this first cull. Almost 90% of those systems are internal-facing, meaning public servants are the primary users; they’re not for the public to use directly. About 39% of them are in production, live now, and another 44% are in development.
What’s interesting from where I’m sitting is that about 43% of them were developed in house. The earliest entry has been in use since 1994, which just gets to the fact that generative AI, which is all the rage right now, as important and transformative as it is, is not where all this started. We’ve been in this business for a little while.
So, lots of work still to do to analyze the registry and see what else we can derive from it, as well as to enhance it going forward.
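Since the registry is published as open data, readers can reproduce breakdowns like the ones Stephen cites. Below is a minimal Python sketch of that kind of tally; the file name and the column names (status, user_base, developed_in_house) are assumptions for illustration, not the registry’s actual schema.

```python
# Minimal sketch: tally headline figures from a hypothetical CSV export
# of the AI registry. Column names here are assumptions, not the schema.
import csv
from collections import Counter

def summarize_registry(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    total = len(rows)
    status = Counter(r["status"] for r in rows)  # e.g. production / development
    internal = sum(r["user_base"] == "internal" for r in rows)
    in_house = sum(r["developed_in_house"] == "yes" for r in rows)

    print(f"{total} systems")
    print(f"internal-facing: {internal / total:.0%}")
    print(f"in production:   {status['production'] / total:.0%}")
    print(f"in development:  {status['development'] / total:.0%}")
    print(f"built in house:  {in_house / total:.0%}")

summarize_registry("ai_registry.csv")
```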
Julie DiMauro: Now, in July 2024, you wrote an article entitled “Driving Forward: Navigating the AI Landscape in the Government of Canada,” in which you talked about how strong policies can serve as a safety mechanism for AI in government.
Your article mentioned your government’s Directive on Automated Decision-Making (2019), its guide on the use of generative AI (2023), and the 12 guiding principles for the use of AI. Can you tell us a little bit about these policies, their importance, and the message you’re sending industry participants by adopting and using them within government?
Stephen Burt: Sure. I can certainly talk a little bit about each of those three documents. But before I do that, I would just say that as we look at policy in the digital space, our approach has been really focused on responsible use, flexible principles that we can adopt and adapt to meet the needs of a particular situation.
Compared to my time in defense and foreign policy, where the height of your skill as a policy officer was to write a policy you never needed to change, in the digital space that just doesn’t work. So, one of my learnings of the last eight to ten years in this work has been that you need to adopt, up front, an approach that sets an expiry date or a review requirement to make sure your policies keep pace.
I think that’s as true in the private sector as it is in public. So having some kind of framework that holds your nose to the grindstone on keeping these things up to date is really, really important.
So, the goal of all of this is to both enable and accelerate the government’s drive to adopt artificial intelligence, but also to address the risks that we know Canadians are concerned about. And the Directive on Automated Decision-Making, which was the first policy document we officially put in place here at Treasury Board Secretariat in 2019, as you mentioned, is the cornerstone of that effort.
And it predates me, so I can’t take any credit for it, but it was a hugely important piece of work when it came through. What the directive does is set out rules for the federal government on the use of automated decision systems. We picked that language deliberately. It includes AI, but it goes beyond AI. It can be any automation inside a system that affects administrative decisions: decisions that are going to affect people like you and me in our daily lives.
So, we focus in that directive on impact assessment, transparency, quality assurance, recourse, and reporting: making sure that people have looked at data bias and algorithmic transparency and, fundamental to anything from a public sector standpoint, that there is some form of recourse if people aren’t happy with the decision that is made.
We want clients to trust that we’re doing these things in a fair and explainable way, with due process behind them and levers they can pull. We’ve reviewed that directive. The original version said we would review it every six months. That was bananas.
We tried, but it was way too fast and it didn’t give people time to actually implement the directive before we were reviewing it again. And two reviews ago, we moved that to every two years back around the time that I was starting in this job.
In the fourth review, which we just completed this past summer, we extended requirements beyond administrative law, fairness, and transparency to include things like human rights, looking a little beyond the remit of the legal framework itself into some of the, for lack of a better word, softer areas of government policy.
The key tool in the directive is the algorithmic impact assessment tool, which helps departments, as they’re working through their automation, identify what risk level their automation is going to be ranked at, from one to four, and understand what kind of requirements there are at each of those risk levels for mitigation and transparency.
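To make that mechanic concrete, here is an illustrative sketch of the score-to-level-to-requirements shape he describes. The real Algorithmic Impact Assessment is a detailed questionnaire; the cutoffs and requirement lists below are invented for illustration only.

```python
# Illustrative only: maps a hypothetical impact score to four risk levels.
# The real AIA is a questionnaire; these cutoffs and lists are invented.
REQUIREMENTS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "peer review"],
    3: ["plain-language notice", "peer review", "human review of decisions"],
    4: ["plain-language notice", "peer review", "human review of decisions",
        "senior sign-off"],
}

def risk_level(impact_score: float) -> int:
    """Bucket a normalized 0-1 impact score into levels 1-4 (hypothetical cutoffs)."""
    for level, cutoff in enumerate((0.25, 0.5, 0.75), start=1):
        if impact_score <= cutoff:
            return level
    return 4

level = risk_level(0.62)
print(f"Level {level}: {', '.join(REQUIREMENTS[level])}")
```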
Every review we do involves collaboration with civil society, academia, industry, other levels of government, and international partners. And we’re going to keep that up; I think the two-year schedule is just about right, in terms of having at least a year of operation and then up to a year to review as you’re updating.
I suspect the fifth review is going to be a little more fundamental than anything else we’ve done because we are finding there’s a much greater breadth of application in the AI space now with the generative tools that we have.
Very briefly on the other two pieces: the guide on generative AI was something we put out in 2023, as the free online generative AI platforms began to proliferate across the landscape, to give public servants guidance on how to use these tools responsibly.
We put in a set of principles we call FASTER, fair, accountable, secure, transparent, educated (in terms of yourself), and relevant, to help departments understand the kinds of decisions they were going to have to make in this generative space. It goes beyond what the directive does, in that it applies more broadly than just to decision-making tools.
It applies anywhere you’re using generative AI, and it offers some counsel on things it’s a really good idea to use it for, and things where maybe you want to pause and think first, or not do at all in some cases. We’ve updated that once since we put it out in 2023.
That update was in 2024, and I expect sometime this year we’ll want to take another look and make sure it still applies. Generative AI tools and their use cases are becoming a bit clearer now, so we’ll keep these things fresh, but we may not need to update it as frequently as we have the last couple of years.
The last document I would mention is the guiding principles that you alluded to.
We’re a member of a group of 10 countries called the Digital Nations, whose goal is to use digital technology to improve citizens’ lives. Again, in 2023, we helped craft and subsequently endorsed the Digital Nations shared approach to AI, which sets out 12 principles for AI use, including openness, transparency, explainability, taking client and user perspectives into account, mitigating risks, lawful data use, oversight, and training; very similar to the things you find in our other documents.
Really, it just offers a common set of guardrails across those 10 countries to make sure the public services are respecting the rights of their clients, citizens or otherwise, and maintaining their trust as we go along here. So that work internationally is an important aspect of what we do here as well.
Julie DiMauro: Absolutely, and I’m going to ask you a little bit more about your collaboration on an international level later on.
I want to just follow up, though, on the fact that you said you collaborated and had stakeholders weighing in, on that first document in particular, in 2019. Was there anything they recommended that surprised you, or that substantively added to the document, perhaps over time?
Stephen Burt: Yeah, I think the most surprising thing for me was what we didn’t do in the last review.
We had gone out explicitly to stakeholders asking about banned uses. Are there some uses of AI in the current frame that we should actually just ban and say government is not allowed to do these things? There was a real appetite for that, and we’d heard it from stakeholders leading up to that review.
So, we thought we would just go and press them on what specific use cases they thought those should be. Lots of appetite, lots of interest, lots of people who agreed that we should ban some use cases; absolutely zero consensus on which use cases should be banned. Just a huge variety of views, some of which were not implementable in the public sector context.
In the end, we reserved our decision on that point. And if you look at the “What We Heard” document and other pieces that are available on Canada.ca, you’ll see some of that feedback and why we landed where we did.
We’re going to address that as we work through the implementation plan for the AI strategy for the public service. We’re about halfway through that now. It’s a two-year plan. We’re going to take another look at bans in that context.
It’s a live issue, but we couldn’t address it in the directive given the very diverse set of views.
I was surprised at that, but it was interesting. Given the aspirations there, we think there are probably some things we should do in that space, but we need to go back and think about it a bit harder.
Julie DiMauro: Absolutely. That makes sense. You’ve included some of this in your prior remarks, but looking back on 2025, beyond some of the initiatives you just mentioned, can you tell us what you and your team look back on and are most proud of having accomplished?
Stephen Burt: We’ve delivered a lot in the last four years that I’ve been here. I’m very proud of the team and all the work that we’ve done in this space. Some of it goes beyond what we’re talking about today, but let me speak specifically to the data and AI side of the house.
I mean, I came in in 2022 as Canada’s first chief data officer, with the goal of helping the CIO of Canada get her arms around the data community and making sure that we were working in lockstep, to the extent it’s appropriate, across many different departments and agencies with different mandates, to develop the government’s capabilities as a whole.
The Government of Canada data strategy that we put out a year after I arrived, in 2023, which was a refresh of a 2018 document, really helped gel my views, and I think the government’s perspective, on how you have to iterate through policy, and on making sure that when you deliver on strategies you have concrete actions with timelines attached.
We’re now in year three of that three-year GC strategy. It’s been very successful and underpins a lot of the things that we’re delivering now in the AI space.
The piece that I have been most thrilled about writ large, though, my team has been part of it, but beyond that, across government, is that the data community itself has really stepped into this space and is, in my opinion, one of the most activist, action-oriented, delivery-focused communities we have in the federal government. And there are many, many communities in the federal government.
We have a Chief Data Officers’ Council that I was part of standing up when I was at Defence and that I oversee now, but we made a deliberate decision that the co-chairs of that organization would not include me. There are two: one from a line department, a seat that rotates, and one from the Privy Council Office.
And there are a little north of 50 chief data officers on that group now, I think. They’ve set up working groups, and the vast majority of the work that we’re delivering out of Treasury Board Secretariat has been put together, thought through, and vetted by that council; it then gets pushed up for endorsement here by a committee of assistant deputy ministers that I chair, and is then issued under the CIO of Canada’s authorities.
I don’t think there’s another example in government of a group that has self-organized like this and driven work across multiple departments with multiple different mandates and capabilities, big, small, and otherwise.
And that’s all underpinned by two grassroots groups: the community of practice for data and information, which is very much a working-level, self-identifying community, and the Government of Canada data community run by the Canada School of Public Service, really on the training and events side of the house, which has been very important in the last few years as well.
So, I feel like I could disappear in a puff of smoke and the community would continue to run itself in a very responsible and responsive way, which is fantastic.
Julie DiMauro: Stephen, as you note in your writing, the success of AI initiatives hinges not just on the technologies themselves, but also on the quality of the data that fuels them. How do you ensure that you have the right data available and that inappropriate biases are identified and removed, among other quality concerns?
Stephen Burt: Well, I think that actually dovetails very nicely with what I was just saying. Resting on that data community and the work that we are delivering under the Government of Canada’s data strategy for the federal public service has been really important.
And in that strategy, if you look it up on Canada.ca, you’ll see there are four missions that help us make sure that we get this right.
One is being proactive about data needs at the time of planning to make sure that you’re resourcing them and building them into your project design and all that kind of thing.
Another is setting up a data stewardship model that sets roles, expectations, and common practices in data management across departments and agencies that help foster interoperability.
Third is ensuring that data can flow to where it needs to be to improve services.
So making sure that you’ve got that citizen-service and client-service lens on the front end, and that you are planning through how you’re going to get data from where it is to where it needs to be to support those efforts, while still protecting personal data, which is very important in the government context, as well as any other data that is sensitive from a security standpoint.
And then last of all, and this is common to the data strategy and the AI strategy, talent, talent and training, making sure that you are skilling the workforce to be able to do these things effectively and responsibly and that they understand what they’re doing when they get into this work.
I think that that has been a success story for us. Data bias is always a challenge in any organization and any industry. In the public sector, the federal public sector in particular, we have some very strict rules, starting with the Privacy Act, which sets out in law how we are to collect and manage personal information and gives citizens and others a right to correct information and to know how their information is being used.
That’s something that translates all the way through to our policies and directives like the directive on automated decision making that we’ve been talking about, where we have to build in those safeguards to make sure that departments are using, protecting and accessing data according to law as we move through that space.
I mean, certainly in the private sector as well, legal requirements are a thing in a number of sectors, but when you’re this close to the legislative frameworks, holding yourself to a very high standard is really important.
It takes a long time to build trust in these areas and even stupid oversights that you didn’t think would have that big an impact can really erode how people feel about your ability to deliver in this space.
I think monitoring outcomes to safeguard against unintentionally unfair results, documenting client feedback and unexpected impacts, and applying the processes we have around gender-based analysis plus, particularly for higher-impact systems, help ensure that different population groups are equitably treated and don’t find themselves facing unintended inequities.
One of the things we’re working through now is more structure. We’ve had peer review for higher-impact systems in the policy for a while, but we’re working through a more structured approach and more consistency around things like peer review, to make sure that data bias and similar issues are caught as early as possible by people who have the right kind of experience to know what to look for.
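To make the outcome-monitoring idea concrete, the sketch below flags disparities across population groups from a hypothetical decision log. The metric (spread in approval rates) and the 10% threshold are invented for illustration, not a Government of Canada standard, and the processes Stephen describes, such as GBA Plus and peer review, go well beyond a single number.

```python
# Minimal sketch of outcome monitoring across groups, assuming a decision
# log of (group, approved) pairs. Metric and threshold are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """Return per-group approval rates from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.10):
    rates = approval_rates(decisions)
    spread = max(rates.values()) - min(rates.values())
    if spread > threshold:
        print(f"review needed: approval rates vary by {spread:.0%} across groups")
    return rates

print(flag_disparity(
    [("group_a", True), ("group_a", True), ("group_b", True), ("group_b", False)]
))
```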
Julie DiMauro: Absolutely. One thing I want to pick up on is that you have mentioned training, and there are a lot of people out there who remain skeptical about using AI; they’re afraid of some of the risks that you have cited, and others.
Given this, how important is training, developing education for a workforce or for users of AI in general, and building the AI literacy to overcome some of that skepticism?
Stephen Burt: That’s hugely important. And we know from surveys that Canadians are among the most skeptical, which is really interesting for a country that has led the development of so many of these technologies. I don’t know if that’s a cause-and-effect thing or not, but the population as a whole is quite skeptical about what these technologies are good for and how they’re being used.
I think when we’re looking at encouraging adoption and building literacy in this space, we focus on knowledge, making sure that people have what they need; clarity, making sure that we’re being explicit about how things are being used; and in some cases, particularly inside the public service, culture change: how do we make sure that people are actually adapting to new tech, new processes, new ways of doing things where those are warranted.
Just to go through them quickly: knowledge. Not everyone needs to be an AI expert, but your average public servant does need some foundational understanding, some basic skills in using AI, and some sense of what’s a good use case and what’s a bad one.
Some of those are technical, but an awful lot of them are about conventional literacy skills; good prompt engineering, for example, is about being able to write things clearly. I don’t know if you saw it, but there was an article suggesting that the ruder you are to AI, the better it works. I think that’s because when we’re being rude, we’re being direct, and that helps a lot with AI systems.
The joke around here is that I’m always polite to my AI system because I’m afraid of what happens when the robots come for us. But when we put in too many polite phrases, a lot of them are indirect and come at things from different directions, and from a prompt engineering standpoint that’s not always the best approach.
So, stuff like that. As you’re working through the basic skills, these aren’t really technology skills, just an understanding of how the technology actually interacts with you. But also, as I mentioned earlier, we’ve got the GC data community from the Canada School of Public Service.
The Canada School has been a tremendous partner to us and has put together a ton of foundational core learning products available for the average public servant. And they’ve grouped them into what they call learning pathways. So, if you’re interested in AI, here’s a bunch of courses you can take.
And here’s the order we would suggest you take them in; they do this for other areas as well. Really helpful and self-directed, and you can log on and do these courses online.
There are opportunities for face-to-face in real life stuff as well. But if you’re just interested, you can start to dip your toes in with things like that. In the public sector context, the other piece on the knowledge side is collaboration with public servants through their bargaining agents.
We are a unionized workforce. Bargaining agents play a tremendously important role in representing the interests of their members, and working with them in the digital space in particular, to prepare the workforce where it is more technical, our technology workforce, with upskilling and an understanding of the changes resulting from AI, makes a big difference as well.
So, the difference between skills that everybody needs versus practitioner skills is something that we do pay a lot of attention to. And then, as we’ve alluded to a couple of times, that cross-government collaboration to make sure that you’re getting good ideas and sharing your own across the different jurisdictions.
Clarity, I won’t spend a ton of time on this, I’ve alluded to it a few times: clear and up-to-date legislation, policy, and guidance, with up-to-date being where I would put my emphasis. Clarity, of course, is always important.
We’ve done a good job on that for policy and guidance. I’m a bit concerned about legislation. I think we have a number of pieces of legislation where it’s important for parliamentarians to do their piece of this as well and make sure that laws are fit for purpose. I can’t control that from where I sit.
Governments have to work that into their agendas moving forward, based on the advice they’re getting from public servants. But our legislative system was built a long time ago and is not always the most efficient and speedy at updating laws. So, some way of building that feedback loop into our legal frameworks is, I think, going to be important, and something I’d like to see people turn their attention to.
And then down at the other end of the spectrum, making sure we have clear structures and processes for technical assessments, assessing system risk, looking at impact on clients, all that kind of stuff. The clearer the checklist, the surer you can be that you’ve done your due diligence on the things you’re putting into operation.
The last thing I would mention is culture, and all the “culture eats strategy for breakfast” stuff that gets kicked around. I think that’s true. But I also think that most organizations, and certainly the public service, and again, my view on this is biased a bit by my experience in defense and national security, do well on culture change in a crisis.
And that’s not just the Defense and Armed Forces team. It’s true across the public service. We saw it during COVID. We’ve seen it in other areas. When things hit the fan, the public service can move very fast to adapt to that.
The challenge, when we don’t have a crisis, is how you actually continue that attitude and push things along, because we have a tendency to retrench, quite appropriately in some ways; I’m not against this.
You need to have safeguards and checks in a big, complicated system. But we also have a tendency, when we’ve moved fast in a crisis, to go back, review those decisions, and in some cases blame people for having moved fast. So there needs to be a bit of a balance if we want to change the culture toward moving fast, preferably without breaking things.
But in some cases, breaking things and changing the culture as a result is the way we make progress. I would still say that we are probably five years further ahead on technology tools inside government than we would have been if COVID hadn’t happened.
Because we rolled out a bunch of tools that I think we would have really dragged our feet on if it hadn’t been for the need to suddenly deal with a distributed workforce. I think that set us up really well for the rest of the 21st century.
Julie DiMauro: Very interesting, thank you.
Now, I want to talk a little bit about Canadian sovereignty. When I was at the All In AI conference in Montreal, you and several other Canadian government officials referenced that term, Canadian sovereignty, with regard to technology and AI specifically. What do you mean by that term?
I heard all of you say that it doesn’t mean going it alone. That you still need to collaborate with our partners internationally. And you mentioned leveraging the talent and expertise of Canadian people and institutions. So please let us know what you mean by Canada’s tech sovereignty and please tell us how you are fostering it within your role.
Stephen Burt: Yeah, the sovereignty discussion is live right now; as you say, we were in it up to our eyeballs in Montreal in September.
And the discussion continues. It is a discussion that is relevant to AI, but frankly, it’s relevant to the entire digital technology stack as well. There are all kinds of aspects to this; we’ve had rules for a long time around data residency, for example, going back to the early 2010s for sensitive data.
But it’s a real expansion of that to look at all kinds of other things, not just hosting services and data residency, but the movement of data across systems and who you’re procuring from as well.
Mr. Evan Solomon, who was in Montreal as well and was very vocal and clear on this, and his team over at the Innovation, Science and Economic Development (ISED) department, are in the process now of renewing the pan-Canadian AI strategy, which was originally released in 2017.
They’re refreshing that work right now, as we speak, and it’s going to look at Canada-wide needs and things like sovereign compute capacity, support for Canadian suppliers, all those good economic development things that my colleagues care deeply about, as does the rest of government; but really, they’re the ones at the tip of the spear on this one. From where I’m sitting at Treasury Board Secretariat, we’re really focused on what’s happening inside government.
We are down and in on departments and agencies across the federal government, and on positioning the Canadian public service as a global leader. Sovereignty is a big portion of this, but the emphasis I would put is less on sovereignty and more on responsibility; that is, responsible AI, of which sovereign AI is a component.
We’ve talked a lot already about some of the things we’ve done to position Canada as a global leader in responsible AI, with the directives and policy work we’ve done up to this point.
We’ve seen provinces in Canada, as well as a number of other countries around the world, adopt the approach that we put out there in 2019 in the Directive on Automated Decision-Making and the use of algorithmic impact assessments. Some just pick it up and shift it right over into their own context, which is great to see.
We’ve also done things like working with our Global Affairs colleagues to sign the Council of Europe’s Framework Convention on AI, Human Rights, Democracy and the Rule of Law. We’re making our own directive more explicit about the requirements around things like that, including, as I mentioned, human rights.
On sovereignty writ large, which I know is really what you want to get at with this, I would point to a policy paper we released early in 2025, I think it was in April, that outlines our current thinking, again from the Government of Canada, functioning-of-the-public-service perspective, on digital sovereignty, AI and beyond. That paper is available on Canada.ca.
We’ve gotten a ton of reaction from it, which is the point, to see what people think about where we’re thinking on this.
We’re going to be refining it and updating it over time as our understanding evolves based on stakeholder reactions as well as the realities of the marketplace and what vendors are doing in this space.
I think it’s very clear from where I’m sitting that vendors have gotten the signal on the importance of sovereignty, both Canadian and international vendors, which are inevitably going to continue to be part of the landscape for Canadian buyers like the government.
In terms of engagement, and how we developed that paper and our policy more generally: as we work up policy for the Government of Canada, we look at really broad stakeholder engagement. We are really pushing to get our early thinking out, get reactions to it, and then refine it as we go.
Those stakeholders can include public servants at the grassroots; bargaining agents, whom I’ve mentioned already; and relevant policy centers where people are working on specific issues around human resources or finance or other aspects of specific clusters of government work.
In the Canadian context, Indigenous partners and rights holders are very important partners in this work and very important to hear from as we develop all our policy work; as are, as I mentioned already, civil society, academia, industry users, and vendors. And then there is just an approach of open public engagement as well, to hear from Canadians who have an interest in this area.
Sovereignty continues to be a live discussion. It will evolve as we go, but lots of work still to be done in this space here and with partners, I would say.
Julie DiMauro: Sounds great. Thank you. We’ve talked a lot about risk already, and I just want to mention one other aspect that is concerning to some governments and people, and it’s the incredible amount of energy that training and running large language models require.
It’s a significant environmental concern involving power grids, water use, e-waste. What is the Canadian government doing in terms of exploring maybe more efficient and greener solutions?
Stephen Burt: I would mainly point to the work that ISED is doing on data centers and Canadian compute.
I do agree completely, and it’s been a principle embedded in our work for a while, that the use of AI has to, I don’t know if balance is the right word, but has to take into consideration global greenhouse gas and other environmental footprints, with regard to mitigating any damage it might cause.
AI compute is a big industrial process that is out of sight to a lot of folks, but people are hearing the stories now about the implications of these things, both from a carbon standpoint and for water resources, for example.
There’s no question that AI systems and data centers can be a significant source of emissions as well as water usage. At the same time, AI tools do, I think, offer the potential to combat some of these climate change factors, by building improved climate modeling and supporting environmental monitoring at large.
We’re seeing that already in the oceans space and in some of the use cases we’ve got for AI-driven tools in the overhead imagery world, as well as in maximizing the efficiency of energy systems and distribution and things like that. There’s going to be some push and pull as we go forward on this. Personal view here: I think Canada has a lot to offer in this space. If you’re going to build a big data center, it might make a lot more sense to do it in a cold climate with abundant water resources, and not to pick on any particular state here.
But just as an example, the desert in Nevada does not necessarily seem like the best option for some of these things. I think there are opportunities here for Canadian industry as well. Broadly speaking, though, my team here works closely with the folks working on Canada’s Greening Government Strategy, looking at federal efforts to reduce environmental impacts across our operations, which obviously includes our digital infrastructure.
Anything we can do to foster more efficient, lower-impact AI solutions, to encourage departments to think about energy use when they’re procuring and deploying capabilities, and to support research into greener computing approaches is a big factor in addressing this.
Julie DiMauro: Thank you.
A lot of our listeners use structured data or data that is organized into specific formats and is easily searchable for analysis and other purposes.
Now, unstructured data such as text, audio, and video was ripe for AI to come along and make it more actionable, manageable, and useful, it seems. Can you tell us if and how this has happened?
Stephen Burt: Yeah, so like many large or long-standing private sector entities, the government has been around for a while, and boy, do we have lots of unstructured information. It’s fundamental to what we do.
I know it’s the same in banking. I know it’s the same in big industrial companies that have been around for a long time.
Text documents, emails, PDF reports, images (imagery is actually how I got into this business), as well as scanned records: all of these things have historically been difficult to search, to analyze, to scale for reuse. Natural language processing has offered some options in this space for a few years now, and generative AI approaches to natural language processing in particular, I think, are changing the game with unstructured data.
So, all kinds of new opportunities to extract value from this information. We’re looking inside government right now at things like how we do information search and retrieval so that we can find relevant records and documents more quickly.
Similar to what I was saying earlier about the Privacy Act, we have a set of legal obligations as a government under our access to information legislation: we are required by law to be able to identify relevant records and produce them when the public asks.
There’s lots of work that needs to be done there, and too much of it is still manual, slowing things down and causing lots of dissatisfaction over how long it can take to produce especially big requests. There are definitely roles for AI in that space, with human oversight to make sure that the law is respected and that the decisions you are making are explainable and reproducible, because we are subject to complaints and oversight in that space.
To get a little more technical, this is one area where generative AI doesn’t always serve us well: reproducibility and explainability. So we are finding ways to enhance the natural language capabilities of generative AI with something that is much more repeatable and explainable in a court, or to the Information Commissioner, who provides oversight in this space and adjudicates complaints.
We need to balance the automation with things that will allow us to show our homework in this space. We have whole institutions in government, like Library and Archives Canada, that are experimenting now with AI to enhance classification, metadata tagging, and summarization of large documents and entire collections in order to improve records management. And classic information management techniques still matter in terms of how you bring this stuff together.
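As a toy illustration of the “show our homework” point: deterministic keyword scoring keeps a per-term audit trail that can be walked through for an oversight body, in contrast to an opaque generative ranking. The records, query, and TF-IDF weighting below are placeholders, not a description of any actual GC system.

```python
# Sketch of explainable retrieval: per-term score contributions double as
# an audit trail, unlike an opaque generative ranking. Data is placeholder.
import math
from collections import Counter

docs = {
    "rec-001": "briefing note on water licence renewal",
    "rec-002": "email thread about licence fees and renewal dates",
    "rec-003": "annual report on fleet maintenance",
}

def explainable_search(query: str):
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    df = Counter(w for toks in tokenized.values() for w in set(toks))
    n = len(docs)
    for doc_id, toks in tokenized.items():
        tf = Counter(toks)
        # Keep term-by-term contributions so the ranking can be explained
        # line by line to a reviewer or commissioner.
        contrib = {w: tf[w] * math.log(n / df[w])
                   for w in query.lower().split() if w in tf}
        yield doc_id, sum(contrib.values()), contrib

for doc_id, total, contrib in sorted(explainable_search("licence renewal"),
                                     key=lambda r: -r[1]):
    print(doc_id, round(total, 3), contrib)
```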
They know a lot about how this stuff works, and they are at the forefront, frankly, of making sure they have the latest technology tools available to them and to their client base.
The challenge, as in banking, as in any other regulated industry, and we are certainly that, is that you’ve got to make sure all of this is done under the appropriate legal and regulatory rule set: privacy, security, and any specific legislation or policy that drives a particular process, taking those things into account as you work these things through.
So, it’s a work in progress, but I think this is one area, as I said, where the generative AI revolution is going to pay real dividends, especially if we can supplement it, through things like neuro-symbolic AI, with more structured techniques that will drive reliability.
Julie DiMauro: Terrific. Thank you so much. Stephen, can you please share with our listeners your vision for Canada’s AI strategy and future leadership in 2026 and beyond?
What are you most excited about and how can people get more involved?
Stephen Burt: Yeah, that’s a great question.
I think as we look at 2026 and how it’s rolling out, we are going to complete the first year of the AI strategy and move into the second year. It’s only a two-year strategy because things are moving so fast. So, as we hit those milestones on our implementation, we’re working up a year one report on how that implementation is going.
My vision is for a federal public service that is confident in using AI responsibly and transparently and focused on outcomes from that use for Canadians and other clients, but principally for Canadians.
In the current context, the team here, my team that has been working hard on this for a few years now, continues to drive for a strong finish on our work as G7 president. Officially, I think that hangs on the calendar year.
So, officially, if I’m not mistaken, Canada has handed over to France for the next G7 presidency. But when we were president last year, during the meetings in Canada, we made a commitment to an AI initiative under the G7, and we ran what we call rapid solution labs, basically a hackathon for public sector problems, across the G7 and the EU, to see what kind of ideas we could get from people across that partnership.
We’re going to drive to a strong finish on that, declare the winners of those rapid solution labs, I think by the end of March, and then hand off that work on an ongoing basis to the French presidency, which we’re very excited is pleased to continue it. So that will be a good partnership going forward.
We also need, in the context of our internal work, to really focus in on measuring the impact of AI. This is the constant refrain from our political leadership, and we see it in the private sector as well: lots of talk about efficiency, but very few metrics on what efficiency looks like, what performance indicators you can use, what the impact is, what productivity increase you have actually gotten from it.
There are really disparate views across different areas of application and different use cases. We need to gel some of that this year: how we actually know the impact of AI, whether it made us better, faster, higher, whatever that Olympic motto is, in terms of what we’re doing, and what costs we incurred to achieve it. So return on investment is going to be key to our work this year.
Getting feedback on the AI registry is going to be really important. I am very keen to get a new version of that out which is a little bit more interactive, maybe with some visualizations of where there are concentrations of work going on inside the federal government.
And then I’m looking forward to establishing an AI center of expertise within the Government of Canada. To quote an ISED colleague, Deputy Minister Mark Schaan, who said this first about a year ago: we need a “Call Before You Dig” program for AI, for departments who haven’t been in this space previously, to help them understand how to do this responsibly and, hopefully, adopt more quickly than some of the pathfinders we’ve had in government, who’ve had to progress a little more by trial and error.
It’s a bit of a workbook and a path for people to follow in their implementations, with all these things like ROI rolled into it. So I think we’re on the right path. I think we’ve got the pieces in place that we need. It’s just a question of making sure we drive that implementation and measure the impact going forward.
Julie DiMauro: Fantastic. Stephen, in keeping with some exciting news: at the start, when we were not taping, you told me about your news career-wise, and I just wanted you to be able to share some of that exciting news with our listeners.
What’s next for you and your journey?
Stephen Burt: Yeah, thanks, Julie. For those listening, what I told Julie at the start of this is that I’m very pleased that this is my opportunity for what is probably my end-of-federal-public-service-career interview.
I’m very happy to be doing that with partners at Global Relay. I will be leaving this position in about 10 days, on the 23rd of January, and starting on the 26th with the government of Ontario as their new associate deputy minister and chief strategy, artificial intelligence and data officer in the Ministry of, let me make sure I get this right, Public and Business Service Delivery and Procurement.
I’m very excited. I’m very pleased and proud of what we’ve accomplished here within the federal government. The new job with Ontario is going to be much more operational and implementation focused, which is very much where I want to be right now. So very, very excited to be moving into that and look forward to continuing our discussions maybe from the other side.
Julie DiMauro: Absolutely, we’d love to hear from you. Stephen Burt, Chief Data Officer for the Government of Canada, thank you so much for sharing these incredibly useful insights with us and for being on our GRIP podcast program.
And a big thank you to your incredibly helpful staff for helping to make this happen. Thanks as well to our wonderful listeners for tuning in as ever. Please tell your colleagues about us and we’ll see you back here for another podcast session soon.