Transcript: Tim Dasey podcast

Our senior reporter Carmen Cracknell spoke to AI specialist and educational consultant Tim Dasey.

This is a transcript of the podcast episode "Tim Dasey on education and the workplace in the AI era," a conversation between GRIP Senior Reporter Carmen Cracknell and AI expert Tim Dasey.

[INTRO]

Carmen Cracknell: It’s great to have you here. Firstly, can you just tell us a bit about you and your background?

Tim Dasey: Sure, yeah. So my background is, I'm a bit of a mutt in terms of expertise. I actually started working on AI back in the late 80s and early 90s as part of my PhD program, and went from there to MIT, where I spent 30 years on a whole range of topics. I was not on the academic side; I was on the research side, doing a whole bunch of national security work, so kind of hidden from view for a long time. But national security in my career was broadly defined, so I ended up working on air traffic control and a bunch of defense and homeland security issues, but also public health and medicine and disaster response. That gave me a view of a whole bunch of different professions, their workplaces and cultures, and what kinds of technologies helped them and what didn't. And AI was a big part of that.

And then last year, in the aftermath of COVID, but also based on some concerns we'll talk about, about how I think the world isn't set up well to deal with AI and the jobs that remain, I decided to jump from the AI development side to helping the users. Education was really the most logical place to start, because I think that's where the greatest need is.

Carmen Cracknell: So was that the aim with your book, Wisdom Factories, to kind of expand knowledge about how important it is to integrate ways to deal with AI through education?

Tim Dasey: Yeah, my biggest concern is that we have what I'll call a paradigm misalignment: the way we're educating and the purposes of our education systems don't fit the kinds of skills that AI-era work needs. My view is that even if educators absorb the implications of AI well, there's enough change resistance in the community that their paradigms for teaching these new skills wouldn't necessarily change. And I think there's a huge problem with that. If I consider AI to be a technology that helps people make judgments about various things, a decision-making assistant, then the problem is that people don't make judgments very well. That's partly an innate human frailty that all of us have, but it's also, I think, a product of the fact that we don't give students enough practice in making complex, multi-factor judgments. Instead we give them problems they can work through either to come up with an answer the teacher has predefined, or even sometimes to come up with a point of view the teacher wishes them to have. So it's not allowing the free-form thinking and the experiential learning that really drive a lot of the intangible skills the AI era requires.

Carmen Cracknell: And your book, which is so interesting and which I've read through, talks about wisdom skills versus more narrow expertise. You say that wisdom skills are increasingly important in the AI era. Can you just talk a bit more about what you mean by that?

Tim Dasey: Sure. I mean the traditional path of schooling is to build more and more expertise in particular specialties. So that’s what the college credentials and the high school courses are set up to do. We generally try to build specialists. That is done through imparting a whole bunch of knowledge in specific domains and teaching people the reasoning and the judgments that people need to make within that domain.

But increasingly the problems we encounter in the workplace are not single-domain, single-disciplinary problems. If you're a manager, you need a whole bunch of skills in psychology and people management, in how the work is performed, and in how to manage projects. If you're a typical worker, with more and more of the detailed work getting automated, then you're left with more and more problems that are novel, that you have to learn on the fly. I used to always get problems that my team and I had very little experience with. We had to dive in and figure it out. We had to learn enough to do the task well. Those kinds of novel tasks need a different approach. Rather than the underlying specialty knowledge driving good solutions, it's our ability to go and get knowledge quickly that will help us solve problems. It's our ability to use what I'd call process knowledge and meta-knowledge, to have high skills in how to approach problems. A great example of this is a field that has emerged in the last year called prompt engineering.

Prompts are simply the requests that are given to some of this new generation of AIs. If I want to do something, I have to ask the AI the right way in order to get an output that fits my needs. I can't just ask it, "Tell me about the competition in a certain domain," when that's something I'm trying to do background research on. I need to tell it: why do I need that information? How long an output do I want? Should it treat this as an explanation for my grandmother or for somebody who's well known in the field? You have to give it all of the context, and sometimes even the process for looking up the information. That art, called prompt engineering, doesn't rely on knowing the individual details behind whatever answer is going to come back. What it relies upon is being able to take the big-picture view: how does whatever the AI gives me back fit into the broader problem I'm trying to solve, and how do I couch that problem in a way that will get a reasonable answer back from the technology? The skill of prompt engineering has very little knowledge associated with it.

I can teach people that there are certain patterns of asking for things in chatbots, for example, that make it more likely I'll get a good answer back. The real skill in that field is knowing how to decompose problems, knowing how to ask questions the right way with precision and clarity, knowing about a process to go through step by step to try to solve a problem, and then understanding how to fold the information I get from the AI back into that broader big-picture view. I call those wisdom skills, and they're very, very different from the expertise skills, the detailed knowledge skills, that, I would say, 80% or 90% of our schooling is devoted to, even in continuing and adult education.
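To make that concrete, here is a minimal sketch of the pattern Dasey is describing: the same background-research request sent to a chatbot twice, once as a bare question and once with purpose, audience, length, and process spelled out. The sketch uses the OpenAI Python SDK; the model name and the prompt wording are illustrative assumptions, not anything from the conversation.

```python
# A minimal sketch of the prompting pattern described above.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY
# set in the environment; the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

# A generic question tends to get a generic answer back.
vague_prompt = "Tell me about the competition in podcast hosting."

# Spelling out purpose, audience, length, and process gives the model
# the context it needs to produce a useful, specific answer.
structured_prompt = """\
I run a weekly compliance podcast and am doing background research
before choosing a new hosting platform.
Task: summarize the main podcast hosting platforms.
Audience: explain it as you would to a non-technical colleague.
Length: five bullet points, one sentence each.
Process: for each platform, give its name, then the single feature
that most distinguishes it from the others.
"""

for prompt in (vague_prompt, structured_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```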

Carmen Cracknell: Yeah, ChatGPT, is that an example of a piece of software, I guess you'd call it, that relies on prompt engineering?

Tim Dasey: Absolutely.

Carmen Cracknell: Although it's been lauded by everyone, I've used it and, to me, it seems quite basic; there are so many things it can't do and can't answer.

Tim Dasey: Right, and that has been my view too. It's one of these situations where if you ask it a generic question, you're going to get a generic answer back, but there are ways to ask it questions that get very detailed answers back. So yesterday I decided to try writing some code. I should point out that I used to do software development way back when, 20 or 25 years ago, so I knew a little bit about coding, but not the modern tools. I wanted to play around with how easily I could get ChatGPT to write some code for me: if I could ask it the right way, could I produce a program I could use for something I wanted? It took me a couple of hours to figure out the right way to ask, but I absolutely got working code back, and it did what I wanted it to do. It otherwise would have taken me two weeks of digging through manuals to figure out how to do it.

Carmen Cracknell: Right. It just takes a long time to get it to give you the answer. I've heard about people using it even as a psychotherapist or a financial advisor, and I don't know what kind of questions they're asking, but that in itself is something you obviously need to be trained in: how to input the right questions and phrase them correctly.

Tim Dasey: That's right. And that's a lot of the art. It's a very iterative process to figure out how to use these tools and learn what will work and what will not. There are tricks you can pull from other folks and from the internet, but this is a very rapidly evolving field. In a year or two, is there going to be a giant demand for prompt engineering? There's still going to be demand for knowing how to get AI to do what you want, but it might not take the form of having to word things precisely, because maybe the AI will have figured out how to ask you the questions, so that you don't have to understand how to put together just the perfect phrasing. So the general skill of how to use AI is going to stick around; prompt engineering may or may not. And that's the way the work world is going: increasingly, everybody's job is going to be changing, and changing differently. If you're in journalism, let's say, and you run a podcast, not that I'm pointing at anyone in particular, you may want to increase the reach of that podcast.

So the question is, can you find a tool to do that? Well, you can go grab tools now that will take a bunch of video in, translate it into umpteen languages, and actually change the video so that it looks like your mouth is speaking those languages. That's an example of how, even in a very creative field like journalism, there are ways AI can help improve productivity. And that means nearly all of the jobs out there are going to be affected; the few exceptions I would place into categories of certain service jobs and manual labor jobs that are going to be harder to automate. But in any work where our brain is the chief asset, these jobs are going to be changing radically because of AI.

So how do you learn quickly? Well, the main difference in the paradigm I'm talking about, focusing on wisdom skills and these big-picture qualities, is that you don't learn those by listening to lectures or being given a whole bunch of details. You learn those by having a range of experiences and trying to solve problems of that form. So it's experiential learning. It's not learning of knowledge and details. And that paradigm shift is pretty radical. We do have experiential schooling paradigms, project-based learning and even things like Montessori schools and the like, but they're still all focused on the end goal of teaching people detailed knowledge. Right? And there are a lot of reasons for that, which have to do with, okay, that's what college is asking for, that's what standardized tests ask for, et cetera.

But fundamentally, if teachers and schools just focus on taking AI and making it teach the existing model more effectively or efficiently, then they're missing the boat, because the real impact is that the curriculum has to change and the paradigms for teaching have to change. That's one of the reasons games feature in the book: I spent 10 or 15 years at MIT building various games for various workplaces. And a lot of the time, the reason we were doing it was to put people in specific job situations particular to their jobs.

If we were working with, let's say, law enforcement detectives, we wanted to give them a problem where they had to piece together a whole bunch of evidence, nominate who they wanted to go question, and so on. We wanted to put them in job situations because we wanted them to tell us, through their experiences in the games, how technology could help improve that process. And we found that if we just asked people up front what their job needed technology-wise, we got very unimaginative answers. But when we put them in these game situations and had them play through how to solve a problem, they would come back with very rich answers. We found they were learning so much more about their own profession from playing these games. Now, there is a market in the education world for game-based learning, but often those games are still focused on trying to teach people details. Think about what games are: they're environments for play. Kids on a playground don't start playing a game in order to learn the letters of the alphabet or the state capitals. They play a game to put themselves in a different experiential form. They spend their time arguing about the rules of the game and how to interpret those rules.

They're learning the big picture of human interactions in playing those games. So the kinds of games I talk about in the book are really ones where people are put, in an age-appropriate way, into complex situations and then asked to solve problems. When we were working with emergency managers for FEMA, the main federal emergency management agency in the US, we put people in a post-disaster situation and said, "Here are your resources: health, fire, security, and so on. Decide how to allocate these to the different functions of saving lives or clearing streets or whatever it is you have." We did give them feedback on how well things were going given the resources they allocated, but there was no single right answer to that problem. It requires trying something, failing, and trying again. That kind of practice takes a really, really long time to get in the real world. If I'm going to wait for all of these different kinds of situations so that I get practice with enough variation to build that judgment skill, I'm waiting a long time, or maybe forever in the case of disaster management, if the disasters don't come toward my jurisdiction. So the games are a way of putting people in situations and making them try to solve a problem. It's very much a try, fail, iterate paradigm. And the failure, just like in real-life experience, is as important as the successes. That's the kind of paradigm that's better suited to teaching these wisdom skills.

Carmen Cracknell: Do these games take the form of a crisis simulation? Because when I think of gamification and gaming, I think of computer games or real life scenarios. What does it look like with this game training?

Tim Dasey: I think you need to think a little differently than entertainment games. There can certainly be situations where I put people in immersive environments and require them to solve problems. If I'm trying to teach someone to perform surgery, I may have them wear a heads-up display that shows the body and whatever they're manipulating, because the detailed information matters to the judgments and even the movements made during a surgery. But the disaster management game I mentioned earlier was almost like a board game. It was computerized, but the only reason it was computerized was so that a lot of people had access to a game that otherwise would have required being in person, and so we could generate the feedback: we actually had some software running in the background that estimated, okay, if you decided to deploy your resources in the following way, here's how that would affect life and health.

So we had that kind of number crunching going on to provide feedback to the learner. But these don't have to be fully immersive situations. It really depends on the skill you're trying to impart. It can be simple. It can be whiteboard games. It can be board games. It can be computerized. It's very dependent on what you're trying to teach.
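As a toy illustration of the number crunching Dasey describes, here is a sketch of the try-fail-iterate allocation loop. The functions, weights, and diminishing-returns scoring are hypothetical stand-ins for the much richer simulation a real game would run behind the scenes.

```python
# A toy sketch of the allocation-and-feedback loop described above.
# The functions, weights, and scoring are hypothetical; a real game
# would run a far richer simulation in the background.
from typing import Dict

def benefit(units: int) -> float:
    # Diminishing returns: doubling resources does not double the benefit.
    return units ** 0.5

# How much each emergency function contributes to the overall outcome
# (hypothetical weights, summing to 1.0).
WEIGHTS = {"health": 0.4, "fire": 0.3, "security": 0.2, "clearing streets": 0.1}

def score(allocation: Dict[str, int]) -> float:
    """Estimate how well lives and property fare under this allocation."""
    return sum(WEIGHTS[f] * benefit(allocation.get(f, 0)) for f in WEIGHTS)

# The player splits 20 units across functions, sees the feedback,
# then tries again: the try, fail, iterate paradigm.
attempt_1 = {"health": 17, "fire": 1, "security": 1, "clearing streets": 1}
attempt_2 = {"health": 8, "fire": 6, "security": 4, "clearing streets": 2}

for i, allocation in enumerate((attempt_1, attempt_2), start=1):
    print(f"Attempt {i}: score = {score(allocation):.2f}")
# The more balanced allocation scores higher here, but there is no single
# right answer: change the weights and the best split changes too.
```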

Carmen Cracknell: Yeah, in terms of just going back to education techniques, you talked in your book about the importance of abstract thinking, skim reading, extracting information without necessarily understanding the fundamentals and focusing on things like statistics rather than calculus in maths. Especially when it comes to sort of younger children, how can the education system evolve to teach these things rather than what it’s doing right now? And also how can these things be measured? Because I guess standardized testing, you’re probably not a fan of that, right?

Tim Dasey: Well, okay. Standardized testing is fine as a calibration of how well people are learning. I tend to have a problem when standardized testing is used either to punish schools or to rate students on innate ability. Standardized test scores correlate well with future school performance, but they don't correlate that well with work performance. It turns out that the things correlated with work performance are general cognitive skills. So an IQ test result is a better indication of how well someone is going to do in the work world than, in the US, an SAT or some other form of standardized test would be. The other great indicator of job performance is how well you do in a job tryout: internships, apprenticeships, and things of that sort.

So if you look at how people really succeed in the workplace, it's not well correlated with standardized tests. Take a step back and think about how we evaluate students compared to how we evaluate adults. If I get a performance review in my workplace, it's not because I sat down and took a test the employer gave me, right? Because everybody knows that nobody's job is going to be well characterized by a test. But we do that with students. There are probably a number of reasons why. One is that we want to remove subjectivity from the process. Well, that's kind of impossible once you get to these squishy, big-picture kinds of tasks and thinking; those necessarily require certain subjective judgments. So how do you evaluate these skills? Say I'm trying to evaluate critical thinking or creativity or collaboration ability. In the workplace, we do it by subjective assessments, based on what you've been able to do and what you've demonstrated you haven't been able to do. That's roughly what the learning people would call competency-based learning: I can do the following things, therefore I've demonstrated such-and-such skills. And the use of games is an example of a competency-based measure. Say I'm trying to teach the skill of recognizing that lots of problems I might try to address, and perturb with some kind of solution I'm engineering, may have unintended consequences. I may fix problem X, but problem Y pops up as a result. That's a common issue in interacting with any kind of complex system.

And people learn those things experientially during their normal work life. They learn what works, they learn what doesn't. The game just provides an artificial and accelerated paradigm for doing that. And you could argue that that can stand in for a standardized test. If I've shown, in the course of being put in a bunch of situations, even virtual game-based situations, that I can come up with a reasonable answer in that realm, well, that's a much closer match to the way my job performance would normally be evaluated. It is definitely hard. If someone asks, how do I evaluate creativity in a very general sense? That's a hard problem. But take a specific case: my daughter is a musician and composer. How do I evaluate the novelty and beauty of the songs she creates? I'll never develop a great test for that. But I can put her in a bunch of situations, maybe with and without certain tools that help her compose, and I can get a bunch of people to say whether they like what comes out, or whether it's novel. That's not as objective a measure as what schooling's standardized tests provide.

But it is a measure. She can build a song with this tool or she can't, or she can do it in this amount of time with this amount of novelty or not. Imperfect measures that at least try to measure the right things, these big, abstract, squishy cognitive skills, are much better for our future than precisely measuring something that isn't relevant. Knowing all of the factual recall about music theory that I might ask my daughter for doesn't help me understand whether she composes songs well. So imperfect is what we're going to get when we start looking at these other kinds of skills.

Carmen Cracknell: Yeah, you talk in your book about the left and right brain quite a bit, which I found really interesting. I've only really heard about this in the context of female-male differences, and I think that's somewhat disputed at the moment. But it seems like the left brain has been kind of overused and there needs to be more focus on using the right brain and the soft skills that come with that. Is that right? I've got them the right way around, hopefully.

Tim Dasey: You do. And I almost hate to bring up left and right brain because there's a lot of misinformation out there. But the best neuroscience understanding right now starts from a very general principle of how we interact with the world. For any particular thing we're thinking about, there are really only two directions to go. We can either dive down into more detail, or we can step back and think about the bigger picture. We can't do both of those at the same time very well, but we do bounce back and forth between them all of the time. The left brain specializes in that digging in, diving into a detailed part. The right brain specializes in putting information into context, in storytelling, in trying to understand the bigger picture behind any insight.

One book that I really do love, The Master and His Emissary, describes it this way: the master is really the right brain, which decides what problems we need to solve and how we might approach them. The left brain is the emissary who goes off and solves detailed problems, then reports back, "here's what I got," so the right brain can fit it into context. If our education system is focused on a whole bunch of details, on regurgitating information, remembering information, taking the details in, then we become really good at that. We strengthen that left-brain skill. We don't strengthen the right-brain skill nearly as much. And unfortunately, there are correlated effects of feeding the left brain too much that aren't good. If you tell people there are lots of details they have to understand, but they can't put those details into a broader story that explains what the world is doing and how it behaves, then left brains can tend to get very, very deluded. They can tend to believe that what they know is all there is to know, because that's not the part of the brain responsible for stepping back and taking a look at what else there is. So the more we feed people detail, the better they get at detail and the worse they get at understanding how to use that detail productively.

It's an art. If we want people to be good at solving complex problems, then we need to put them in situations where they practice solving complex problems. For a first grader, a complex problem might be: I talked to a friend about something, and they came back with a response that wasn't what I expected. Okay, well, that's a complex interaction, right? I do something, I get feedback, and some of that feedback is unintentional. That may be a big-picture concept for a first grader. But I can teach it by putting them not just in situations they may experience every day in interacting with a friend; I can have them interact with 15 different kinds of friends via an avatar in a computer game, to practice the skill of how to interact with this personality versus that personality. The same thing might be done with an adult, but on a much more sophisticated kind of problem or question that the two of you are trying to work through, a negotiation on something, or trying to persuade somebody of something.

So the whole spirit of the schooling model change, and the reason for the left and right brain discussion in the book, is really my way of saying I didn't just make up this new paradigm. It turns out that there is actual neuroscience and psychology on how people make judgments, how complex decisions are made, and in what ways we're really bad at that. And there are a lot of ways we're really bad at that, with biases and other kinds of factors getting in our way.

And then there’s an understood way for going about changing that, but it’s completely different from the way schooling is done now.

Carmen Cracknell: Yeah, I have to ask this question. What are your opinions on AI regulation and what do you think is needed and where do you think it’s going next?

Tim Dasey: Well, I can't predict what governments are going to do, so that's the big unknown here. Look, I think we're dealing with something that's not akin to anything we've experienced before. When we've tried to regulate technology in the past, to the extent that we have, it's been a much easier process.

I think one of the problems with AI is that even if you can come up with a regulatory rule set, it's very difficult to figure out, for a particular product or a particular use case, whether it is in or out of the rules. So let me give you an example. Let's say we had a rule in the regulation which said, "If you interact with AI, you should know that you're interacting with an AI." Okay, I consider that to be a fundamental right. Am I talking to a person or am I talking to an AI? That at least helps me understand how to interpret the information that comes back. Right now, we don't get that information unless it's explicit. If I go into ChatGPT, I know I'm interacting with an AI, but there are a whole bunch of products coming out where parts of what's communicated to you will be AI and parts will not.

Or maybe the original insight was AI-generated, but then a human wrote text or drew images on their own based on recommendations from the AI. So where in that spectrum do I say, "Okay, this is AI-generated"? Or let's say I have some kind of principle that whatever the AI generates should not be negative toward a particular person and their personal characteristics; I don't want the AI preferring certain identities over other identities in some way. How exactly we measure that in a product is very, very unclear. So I think what's likely to happen is there will be a whole bunch of principles, almost a sort of bill of rights for AI use. And those principles, even if manufacturers strive toward them, are going to be very difficult to measure precisely for quite a while. Maybe, honestly, forever, if the AI "brain" remains difficult to understand.

Right now, putting out a complicated AI into the world means there are a bunch of things that AI is probably capable of doing that I can’t or don’t even know to test for, some of which are negative. Unfortunately, there isn’t a way to engineer the values into AI directly. That’s something that people are technically working on. But if I said, “I’d like to put in that principle upfront that AI should not be prejudicial toward identity,” I honestly don’t even know how that would be done. It would only be done by being very selective about what information the model was trained on. And that’s honestly too difficult a task for humans to curate manually. So instead, the manufacturers just throw all the information in as examples and let it figure it out. And when you let the AI figure out what it needs to know and what it doesn’t, it’s going to figure out things in ways and for situations that you just can’t anticipate. So regulation is good in the sense that we need to have some values that society puts forward around how AI is used. But I think the hard part of that is going to be that those values are going to be difficult to measure.

Carmen Cracknell: Yeah. Should we be afraid of AI? I know that's such a broad question, but do you think we can take the steps now, or is it just too exponential and running way ahead of us? Or can we, like you say, take these steps in education and different paradigms and frameworks to be able to harness AI so that it is a force for good?

Tim Dasey: AI is a powerful technology. It will be used for bad and it will be used for good. There's almost no technology I can think of for which that hasn't been the case. I do think there are many directions this can go, but I think the key is not so much the technology and how it evolves as it is humanity and how we evolve.

We are the partner of AI. If we're incapable of doing the work that AI leaves behind, then we're in trouble, because there will then be too many people doing, or trying to do, low-wage work and not enough jobs to support them. I'm not an economist, but that's my personal opinion. And right now, across a range of professions, people using just the current AI, ChatGPT and the like, are being shown to be about 20% to 60% more productive in their jobs than those who don't use it. That's a gigantic productivity enhancement.

If people can't learn to use AI, which requires these big-picture skills, then I think we're in trouble. Learning those skills is not as simple as giving people a two-day course in something like prompt engineering. It requires a whole bunch of underlying cognitive abilities that schooling has to build more of. I think whether we go toward a dystopian future or a utopian one depends more on whether we can learn the skills we need to be valuable in the AI era than it does on the technical capabilities of the AI.

I’m scared about that because I think that schooling and education is a very slowly changing world and a change-resistant world, and the technology is moving incredibly quickly. So the mismatch between what people can do and what AI demands that they do may get quite big very quickly, and that is my concern.

Carmen Cracknell: Well, those are all my questions. Is there anything you wanted to add to that?

Tim Dasey: I saw a survey recently that asked CEOs across a range of different industries whether their company was working on AI, and about half of them said, "Yeah, we're keeping an eye on it, but we're not quite sure if it's ready for us yet." Some of that's change resistance, but some of it is a typical wariness based on how products have come out in the past. With past products, I either bought a database or I didn't, I bought an information management tool or I didn't. But AI technology is much more about trying it out. Whether you're an individual worker or a company, the way to figure out if this realm of technology is going to be useful to you has to be trial and error. It's very much an iterative process. It's not a case of "Well, I'm going to wait till the next version of AI comes." If you're in the legal profession, there may not yet be a chatbot you can trust for legal advice; ChatGPT will put out some stuff that's pure fiction at times. But that's coming, probably on the one-to-two-year timescale: a chatbot particular to lawyers, or to teachers, or to journalists. Then the question is, "Can I use it well?" That's an art you need to practice, and you can start practicing now. Just take one aspect of something and see if you can figure out how to make AI useful to you, because it's not a question of AI good or AI bad, or deciding to use it now or later. It's an incremental, evolutionary process of learning how to use these tools.

Carmen Cracknell: I did actually want to ask you one more thing. You mentioned having worked in defense and national security. Which sectors going forward will or could benefit most from good training methods in AI, and which sectors are going to be most radically transformed by AI, do you think? Or is it too difficult to say?

Tim Dasey: This is such a difficult question. In the near term, if you draw pictures for a living or you write for a living, that's going to be the most affected realm. But in the longer term, look, there are even researchers starting to claim that we could replace big parts of a CEO's job with an AI. I don't think we understand the full range of possibilities and where this is going to go. We may in fact have certain jobs that see tremendous volatility and go from human-dominated to AI-dominated. But I think it's too soon to know that. Most of the best applications of AI have it augmenting what people do and how well they do it. As long as people are capable of the other side of that equation, being productive and collaborating with AI, then we can keep it at that level without massive layoffs. But the moment people can't do a job very well, AI is going to do it instead. If our judgments are coin flips, then I can get a machine to do that too. That's my fear: if humanity shows a lack of skill in interacting with AI, then AI will be asked to do more.

Carmen Cracknell: Well, thank you so much for joining me, Tim. That was such an interesting chat.

Tim Dasey: My pleasure, Carmen.
