Thriving on Overload


Leading futurist and entrepreneur Ross Dawson speaks to the world’s best at creating value from infinite information. His guests share the insights and practices that will help you to thrive in a rapidly accelerating world.

Ross Dawson


    • Latest episode: Apr 1, 2026
    • New episodes: every other week
    • Average duration: 35m
    • Episodes: 192



    Latest episodes from Thriving on Overload

    Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension (AC Ep38)

    Apr 1, 2026 · 34:46


“Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

About Nina Begus
Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is the author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and of First Encounters with AI.
Website: ninabegus.com
LinkedIn Profile: Nina Begus
Book: Artificial Humanities

What you will learn
• How ancient myths and archetypes influence our understanding and design of AI
• Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
• The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
• How metaphors shape our interactions with AI products and the user experiences companies choose to enable
• The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
• Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
• What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
• Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

Episode Resources

Transcript

Ross Dawson: Nina, it is wonderful to have you on the show.

Nina Begus: Thank you for having me.

Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?

Nina Begus: Well, this was really a new framework that I developed while I was working on the relationship between AI and fiction. I started working on this about 15 years ago, when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental, in the way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.

Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created.
So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.

Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives, that I’ve seen at the same time both in technological products that were coming to the market and in the way technologists were thinking about them, and also in fictional products and films and novels, in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, in North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some very deeply embedded behaviors in us, and one of them is that we anthropomorphize everything, including machines. So I think this was a really important takeaway that we got already from the early days of AI with the first chatbot, Eliza. We’ve learned that that will be a feature of us relating to machines.

Ross Dawson: So Joseph Campbell called the hero’s journey the monomyth, as in, there is a single myth. And I guess what you are doing here is—well, if you agree with that, which I’d be interested in—is that there are facets. The classic hero’s journey is quite simple, but there are facets of that monomyth, or something intrinsic to who we are, that is around this creation. And in this case, as you say, this relation we have with what we have created. Would you relate that at all to Joseph Campbell’s work?

Nina Begus: I haven’t thought about it in this way, because I thought about myths less as a storytelling issue, which here is definitely happening—the hero goes on a task, returns changed, and maybe changes something in the community. The myths that I was looking into and the metaphors that I was exploring, primarily this huge metaphor of AI as a human mind, as an artificial reason—I think it works differently. It’s less of a narrative; it’s more of an imaginary of how, or towards what, we are building. I think this is a big problem, actually, because the imaginary around AI is very poor. What you get is mostly imagining machine intelligence on human terms, and a lot of people are bothered by that in the AI discourse—right, when you say the machine thinks, or the machine learns, or it has a mind, and some people go as far as to say it has consciousness. I think this kind of debate is actually not that productive. I think it’s more important to see how all these different AI products that we’ve created—and mostly when we talk about AI, people think of language models now—are very much designed as a sort of character, almost as an artificial human that, in literature, authors have been creating for a long time. So I think in that case, we can get back to a hero’s journey.
But I think what I was looking at was actually more on the surface level of what kind of shortcuts we are using with these metaphors that we’re employing when building and using AI. I think the book makes a really good case showing that, yes, this is actually a very cultural technology. It’s very much informed by our imaginaries. One surprising part of it was really how hard it was to break out of this human mold. It was pretty much impossible to find examples of machines that are not exclusively human-like. I think Stanislaw Lem is one of the rare writers who can consistently deliver this kind of imaginary. Even looking at more recent works, like popular Hollywood films such as Ex Machina or Her, you can see how the technologists themselves would say, “Oh, we were influenced by this film,” in a way that affirmed their product development trajectory. You can see it now, at this moment, with OpenAI launching companionship. So in many ways, not a lot has changed.

Ross Dawson: Yeah, there’s a lot to dig into there. I just want to go back—in a sense, Pygmalion is a metaphor, but it’s also a myth. It is a story: a man creates a woman, and then falls in love with her, and then whatever happens from there. There is this, something happens, and then something else happens. That’s what a story is. I think that can impact the implicit metaphor, but coming back to the metaphor—George Lakoff wrote the beautiful book Metaphors We Live By. I think the way the brain works is in metaphors and analogies to a very large degree. Some of those are enabling metaphors, and some of those are not very useful metaphors. I think part of your point is that some of the metaphors that we have for thinking about AI and machines are not useful. There may be, or we could create, some metaphors that are more useful. So, what are some of the most disabling metaphors, and what are some of the ones which could be more constructive?

Nina Begus: Yes, so I think this main metaphor that I’ve mentioned—of AI as a human mind—is very limiting. I think it really limits the machinic potential to actually do something good with it. The fact that we’re still using the criteria that were made for humans, like different criteria developed on human language—the Turing test was one of them, right, a while ago. Now we have stricter ones. I think this tells you a lot about how we actually evaluate AI, and how even these benchmarks that are supposed to be quantitative are actually often qualitative, often stories, like mini-narratives. But yeah, when we look at different metaphors in this space, there are other ones that also emerge from fiction. I mentioned the Big Brother, the AI as an Oracle, and we need to be aware that these ideas inform the very interaction we have with AI. If we think of it as a mirror, we’re going to use it differently—it’s almost as a bouncing board. If we think of it as a teacher, or as a coach, or as an assistant, it would again create a different use. So I think there are a lot of these metaphors, and the companies themselves are trying to decide which one they will go with, because it completely changes the user and the interaction. I think they’re also very cultural, even though you might say, “Oh, it’s a categorical mistake to treat a machine as a human.” I think you can see this kind of treatment across cultures, at least in part, and it doesn’t mean that we consider it human. It just means that we’re engaging with it on our own terms, as if it was human. Now, what could be productive?
I do think metaphors, even if they’re not accurate, can be productive. My goal, really, with the book was to break out of this projection of what the machine could be, to find in this exploratory way other directions, other landscapes where we couldn’t go because we’re being limited by our imaginary, by our ideas. So in this way, I think humanistic approaches can be very helpful to designers, to technology builders, to artists, to explore the novelty that so many of these sectors are after.

Ross Dawson: Yeah, and I guess people latch on to what they know. I think that’s part of the thing with AI: “Oh, it’s like a human. Let’s treat it like a human, and let’s make it like a human.” It is, amongst other things, a lack of imagination. That’s what the humanities, the arts, can offer us—those who have the imagination to be able to envisage different possibilities or relationships. But I guess part of it is also that humans relate, and so we have learned to relate to other humans, and also to other animals, and hopefully to nature as well. But these are all established patterns of relating. So do we need to discover in ourselves new ways of relating to new categories—things which are not humans, not animals, and not nature?

Nina Begus: Exactly, this is the exact problem we’re dealing with: we’re dealing with a yet unexplored, yet undefined relation, and we’re using old, outdated terms for that relation. This is why we don’t really have a good way of describing it and establishing it. It will take a while for this to develop, which is fine, but we need to realize that there are some concepts we’re using that we had better leave behind, and go ahead by building new ones. This is why I think it’s really important to work in a more interdisciplinary collaboration, so that you can see what you can actually build from the technical perspective, so that you can see what these machines are actually capable of. Because you usually don’t know when you create them, right? Machine learning is sort of exploratory by design.

Ross Dawson: So, just to call it out more explicitly, what are the metaphors you think are the most destructive or most inappropriate, and what are some of the ones which you think are the most promising?

Nina Begus: Well, I’m just now writing on the Midas myth, which is sort of the opposite of the Pygmalion myth. With Pygmalion, you lean into that human imitation, but with Midas, you lean into the liminality that Midas presents as this sort of hybrid creature. I think leaning into the boundaries that we draw for ourselves—and now AI is not cooperating with them—this is where the productive part will be, in actually creating something that has philosophical dignity, but also a kind of productive trajectory for the machines to go. I feel like we’re still in this first phase of developing AI, because when you look at it historically, we haven’t really moved from the conceptual and philosophical premises that were established in the 1940s, 50s, and 60s for this technology. We have now gotten the technology that caught up to the ideas from the 60s, but we’re still stuck in the same conceptual space.

Ross Dawson: Yeah, very much so. And, you know, of course, what is AGI, which everyone talks about, is basically—the only way in which people seem to be able to frame it is as relative to humans, which is the only reference point we have. I mean, there’s, of course, animal intelligence, but that’s because of that.
It is, again, that lack of imagination—saying, “Well, intelligence, oh, intelligence is what humans do, so let’s do something which is the same as that,” whereas there’s so much white space in what intelligence could be. I think this almost comes back to definition. When people use the word intelligence, they are referring to what humans do. It’s not a general term, and so it all becomes a language problem as well, because we are so rooted in relating our language to human capabilities, as opposed to a more general potential.

Nina Begus: Yes, I think you’re really on to something here, because I can see it also—because I work with animal communication researchers, and we’re finding things there that we didn’t find before, because we limited ourselves to thinking language is just a human production, that it needs a human subject. Now, as soon as we got rid of this presumption, we’re finding new things, things that are basically parallel to what we do in our language. So language is in a space of tension because it’s being attacked both from the animal side and from the machinic side, which is why I really focused on language in this book. It’s not a coincidence that we centered artificial intelligence on language as the interface, because this is how we relate to the world—this is our interface to talk to each other, to understand each other. I think the fact that language is coming under such pressure as an interface brings with it a lot of other concepts that are being challenged. Are only humans creative? Is there a natural creativity, a machinic creativity? Is there a different kind of intelligence that’s maybe solely biological, embodied? How do we think about cognition? How do we think about culture? In AI and in the natural world, there’s so much that comes with it: agency, autonomy, freedom, community, which I think we will be grappling with for the next few decades, at least.

Ross Dawson: I think you alluded before to the potential for AI to have its own languages.

Nina Begus: It’s happening already. The reason why I like Stanislaw Lem so much is because he can actually think about a machine—back in the 1970s, he’s doing that—about a machine that’s not human-like, that’s not limited to human language. It is trained on human language, but then it goes its own way, where the human linguistic ceiling just cannot go anymore. We’re already seeing that in the models, in Berkeley’s Biological Artificial Intelligence Lab, in models that are not large language models but generative adversarial networks based on speech. We see that as they are learning the words, they are encoding some information into silences, and we don’t know what it is. I think what’s really exciting to me are two things about language in machines. The first one is, what is this non-human production of language? We did not think that non-humans can produce language, even though we had parrots who had to crawl their way to us to speak in “humanese,” to show that they have some kind of intelligence—even if it’s just parroting, even if it’s just what we call imitation, which some people consider not to be intelligence. We’ve had these examples before, but now it’s gone nuclear—on the scale that LLMs are performing at, it’s really challenged a lot of our solely human attributes: creativity, storytelling. A lot of journalists come to me because there’s this existential fear of machines taking over their work and so on.
So we’ve been thinking about those things, and now it’s actually happening.

Ross Dawson: One of the other key points here, I think, is that humanity—the arts—there’s so much, as you mentioned, in terms of fiction, in terms of films, in terms of visual arts, and many other artistic domains. We have reference points that we use, and the amount people have referred to the movie Her in the last few years is pretty extraordinary, partly because it’s obviously coming very much true. I think the Ex Machina story is very interesting as well, as are many others in the past. But there is also this act of imagination. There are people who have written these books, who have crafted these films, who have created these things, and they are the ones who have been not just manifesting our human psyche, but also pushing that out and coming up with ideas which others haven’t had, to give us something. So one thing we can certainly do is mine and dig into what has been created. But is there a way to interface through this to this act of imagining, which can give us new artifacts and ways of thinking and ways of relating?

Nina Begus: Yes, I think imagination and the humanities in general are going to become more and more important, because AI will do a lot of technical work, but imaginaries—this is what we really excel at. It’s actually interesting to see how you think fiction is this unbounded landscape where you can imagine anything, and yet it’s really hard to find examples of machines that are beyond the human. Even these writers, like the screenwriters for Her and Ex Machina, create these completely Pygmalion-esque films, where you have an artificial woman leading a relationship with a human man, and so on. For the whole film, you have her act as a human-like entity. But then at the end of each of those films—well, particularly in Her—Spike Jonze really tried to break out of this and show her AI side. Basically, there was no language to describe it, so he resorted to a metaphor—the metaphor of a book, where Samantha, the operating system, explains that her world is falling apart, like the way words are floating further and further apart in a book. That’s how she’s able to describe it; that’s the closest she gets. And then in Ex Machina, Alex Garland really wanted to portray the world from the social robot Ava’s perspective in a visual way. He wrote down a scene, but he said, “I failed to execute it visually. I just couldn’t do it well.” So instead, he gave us a different scene that’s shot from afar, where Ava boards a helicopter and has to undergo her Turing test—the helicopter pilot cannot recognize her as a robot; he needs to think she’s a human woman. There have been attempts; I think even in Garland’s next film, Annihilation, they’re trying to set the grounds for something that’s entirely new and hard to imagine. I think a big takeaway for us is that this is very hard to do.

Ross Dawson: Yes, well, given that context, I do want to—as in the human plus AI framing—given all of this, what is it that we can do or should be doing in order to amplify our humanity, our capabilities, the positive aspects of what it is to be human? How can we relate to or use AI in order to amplify the best of us?

Nina Begus: Yeah, while I was writing the book Artificial Humanities, I actually had this other dream project to work with writers—professional writers, creatives, people who live in a world of words—to see what they make of AI.
I waited a little bit for the public’s polarized reactions to calm down, and gathered 16 writers, some of whom had already made a space for themselves in the field, like Sheila Heti and Ken Liu and Ted Chiang, and then some more junior writers who I knew were thinking about that—a Netflix screenwriter, and so on. I gathered them to see—I think the creative people are really the answer here—I gathered them to see how they approach this very human part of the new human and AI collaboration zone. What was common across a lot of the essays, which are coming out in October under the title First Encounters with AI, is this argument that, well, AI doesn’t have subjectivity, it doesn’t have emotions, it doesn’t have a body, it doesn’t have experience, it doesn’t have meaning—all of these things that really make us human, all of these parts that actually make art compelling and literature compelling. So Ken Liu’s argument, for example, was, let’s leave machines what they’re good at—they’re good at imitating and copying—and we’re good at interpreting, we’re good at creating and imagining. I think this is really the way to go with this. This catastrophizing that’s very present in the public discourse, I think, is a bit misleading. I wish we had a more nuanced approach to what’s actually happening, particularly in the space of writing. Obviously, AI is a groundbreaking technology that affects pretty much every one of us and all the sectors, but when it comes to writing, we just don’t think it’s killable. We think that there’s this perennial impulse that humans have to play with language, and that is not going to go away with AI. We’re just going to amplify it through AI, through this new possibility that has now opened in many ways. I like to think about AI as—you know, we’ve figured out how to fly. As soon as we figured out the physics of flight, we had planes and helicopters and drones and kites, and these are the new possibilities for human activities. In the same way, we figured out the machine learning principles, and now we have large language models and diffusion models, and we have GANs and so on, and there will be more. These are the new spaces of possibility that have opened for our activities, for our spirit to work on, but they do not replace the human in a meaningful way. It’s more about extension than it is about automation.

Ross Dawson: Yeah, that’s a wonderful way of framing it. So where can people go to find out more about your work?

Nina Begus: I have a pretty populated website under my name, ninabegus.com, where I write about my books and my public work. I have videos on there, podcasts, links, and so on. I also have a pretty lively lab with a lot of collaborators and students, where a lot of what I imagined when writing Artificial Humanities happens—a lot of collaborative projects. We have artists, we have engineers, we have philosophers who work on the same question, but come at it from very different backgrounds and with very different skills. I think this is becoming more and more important in the world of AI.

Ross Dawson: Yes, yes, bringing all of those disciplines and frames and thinking together. That’s wonderful. I love what you’re doing—very important. I hope the messages ripple through, and obviously wonderful to be able to share this with the Humans Plus AI audience. Thank you so much.

Nina Begus: Thank you, Ross, and thank you all for listening.

    Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution (AC Ep37)

    Mar 25, 2026 · 47:06


“The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel

About Henrik von Scheel
Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of the Regulatory Intelligence Committee, and Professor of Strategy at the Arthur Lok Jack School of Business, among other roles. He is best known as the originator of Industry 4.0, with many awards and extensive global recognition of his work.
Website: von-scheel.com
LinkedIn Profile: Henrik von Scheel

What you will learn
• Why human-centered AI is crucial for widespread societal prosperity
• The impact of AI hype cycles, media narratives, and the realities of technology adoption
• How equitable wealth distribution and capital allocation in AI can shape economic outcomes
• Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
• Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
• The importance of trust calibration and intentional human-AI collaboration in practical applications
• How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
• Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills

Episode Resources

Transcript

Ross Dawson: Henrik, it is wonderful to have you on the show.

Henrik von Scheel: Thank you very much for having me, Ross.

Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?

Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us. But the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy of how we adapt with it that makes the difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people.

Ross Dawson: There’s a phrase which I’ve heard you say more than once, that AI should make us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey?

Henrik von Scheel: So I think what people experience today in AI is a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds, because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future.
The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, so that they become more rich, or the middle class. Everybody in society should get smarter from AI. That means part of the things that they need to learn, or how human evolution works, should be better, and it should make us healthier people and wealthier people. So it should not be that we sacrifice our freedom, our privacy, our environment, or any other things that we put on the table, to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right?

Ross Dawson: Yeah, I love that. And it’s quite simple: you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also, from the way in which AI is developed, there will be creation of wealth. There is the real potential for productivity improvement. But then it’s about finding the mechanisms for allocating the wealth or capital that is created. Let’s call it equitable allocation.

Henrik von Scheel: I’m a firm believer that this year, 35 to 45% of the money invested in AI will evaporate. Companies that have invested—they’re the early adopters—they have this format, so they’re rushing to it. From a company perspective, you always adopt the best practices. Then it goes beyond the hype, and the performance curve and adoption curve are low. For example, for AI, the simple version is there. You heard Deloitte and McKinsey talk 10 years ago about robotic process automation as if it were God’s gift to mankind in AI. Today, you don’t hear them talking about it, because you can download it for free—for HR, for forecasting, planning, budgeting, and so on, you can save 20 or 30%, and as an organization, you can do it yourself. You download two, three models, you test them, and you run them. Good, okay, so that’s when you apply best practices. Then you have industry practices, like AI agents. When you have AI agents for manufacturing, for industrial sectors, for energy sectors, they are nothing else than workflow optimization. You use robotic process optimization, you do a visualization on it, so it’s far more practical at that level, because you use the data organizations already have, along a simple line on the process flow, on the safety, on security—it’s very much down at the level where they can apply it and use it. So this version of large language models, where you have this magic powder you spread over the organization and it’s totally working—it’s not really there. And then there’s the third leg that companies are quite aware of. It’s called Shadow AI, right? Shadow AI, because AI is the biggest infringement on intellectual capital within organizations. The reason why normal people are not allowed to look at pornography at their work is because of cybersecurity. It’s not that your boss doesn’t like you to look at pornography; it’s because of cybersecurity.
It’s the same reason with AI—you should not be allowed to use the latest version of Copilot or large language models as a CFO or as a worker, because you’re exporting your own information outside. Copilot takes a screenshot every five seconds for the large language models’ learning. So from a corporate point of view, that’s the first thing—you should actually protect your own data so you can monetize your data in the future. From an economic point of view, if you go two, three steps behind this, you ask, okay, what is it that makes sense in this? There’s something really, really strange in this. Australia was built by building railways—they take 100 years to build, and they also last 100 years. That’s infrastructure that lasts. So there’s a return on investment. You build streets, you build education systems—everything we build as humans, as a society, has a lasting element to it. Now, we build data centers that last three months, or six months, until the chips need to be replaced. So there’s no sense in that we are building data centers around the world where we capture all data. It has a volume of hundreds of trillions of dollars, and we need to exchange the chips at a rate of between three to six months to maintain the data. And then you say, wow. And you do that via license models of large language models—the data can never, in its entire life cycle, be worth that much. So there’s a very strange element, because most of the entrepreneurs that go to large language models and use their solutions on Gemini and ChatGPT and so on—you say, okay, you are building your solution on large language models, but you don’t own the model. You don’t own the data. You don’t own your own data. So what are you doing?

Ross Dawson: You have architectural choices, to a point, as to—

Henrik von Scheel: Those are architectural choices, but you are limiting yourself. So the first element, you always say: if my value is customizing a solution, my value is actually the data. So you must have a way to keep and maintain the data yourself. We can take another call to discuss how you apply AI and what the future of AI looks like, because AI today is very much focused on language models, and language models are the most limited version of AI science of all. They have the least data, but they’re the ones we’re most excited about, because they resemble something we do—our wording, our formation of words. It’s a recognition. Recognitions are what we do. I wanted to come back to this about the economy, right? The US economy puts all its chips on this. It’s highly energy sensitive, and it’s working all railroads. However, the US dollar is on a really, really bad track record. Three and a half years ago, there was a president in the US—he was sleeping—and while he was sleeping, Saudi Arabia’s MBS went in and did a divorce, which is called the divorce of the petrodollar. So the gold linked with the US dollar linked with oil—that was the solution. The US had it so that they could print as much money as they wanted, and the rest of the world was paying the dividend for it. It was the only country that could just print money. That brought the US into a mode, and when the new president came into his office—it’s very rare that in the US you are writing an accord. An accord is only written when the Federal Reserve goes into the president’s office saying, guys, we’re hitting the wall. We need to do something. And they wrote five plans for what they wanted to do. And here’s the funny thing—when I mention them, you will recognize them very much.
Number one, bring back manufacturing. Number two, implement tariffs so they can pull back US dollars. Number three, they wanted to implement stablecoins to pull back US dollars. I forgot number four, actually. And number five was that they want to go to war. Now they go to war, right? So they are going to war, not because of any reason besides that their economy is based on a war machine, and the economy is becoming unstable. So that’s one of the main reasons. The US has put all cards on AI—all their economy cards are on AI. And that’s, from a country perspective, a very dangerous thing to do, because you need energy and you need data, and AI from the US perspective has become a defense mechanism. When you look at the regulatory aspect of AI, Europe very much puts the human at the center: the human owns the data, teenagers up to 16 years old are protected, and you can work as an entrepreneur with data, but you have to coordinate how you protect and manage the data. You have to be transparent about how you use the data and how much data you use. The US is very different—red tape off, no regulations at all, full-blown power to the market, and you are seen as a consumer, Ross, so all power to the guys who earn money to make more money. So no protections of anything, of your data—that’s the US version. Literally no regulations, no red tape.

Ross Dawson: In a moment, I want to move on to the human-AI collaboration. But just to round this out, you said before your prediction that 35 to 40% of the investment in AI is gone, which I think is very, very fair. So back when we both were speakers at the Future of Sex Summit in Dubai last year, I was on a panel where I was asked, is it boom or bust? And basically both, in the sense of 35–40%—that’s bust. But at the same time, there are other parts of the market which can prosper. Of course, consolidation of the market means that there are massive investments and in some cases massive losses, but there still are sectors where high value can be created. But this goes back to your point that still a lot of the center is in the US. We are starting to see sovereign AI initiatives and other initiatives around the world, but those are often open source foundation models. And obviously the regulation, particularly around the EU, provides a still very differentiated AI landscape, with the US, China, the EU, and then some other players as well, where if we see boom and bust, that could be very much focused on the US, with potential for other parts of the world to see more growth in AI.

Henrik von Scheel: So Ross, you’re using large language models, right?

Ross Dawson: Yes.

Henrik von Scheel: Do you have the feeling that, since last year, they are getting stronger or weaker?

Ross Dawson: They’re getting better.

Henrik von Scheel: My feeling is the opposite. My feeling is that they’re getting weaker and weaker, and that’s because part of the data—

Ross Dawson: In which content?

Henrik von Scheel: They’re using old, old content. They’ve already used the old content. So now you need to go specialized, you need to go to public sources, to go for research data, you know. But from a content perspective, it becomes extremely weak. I mean, last year, I was extremely disappointed by large language models—very, very disappointed in terms of what they can deliver and what they do. Ask it whatever—ask it about futurism predictions, or ask about Industry 5.0, 5.6, whatever—you will get an answer.
You know, 110%—like CPAM, there are 19 regulations on CPAM, and you ask, how many regulations are there? They will give you sometimes 19, sometimes 17, sometimes 23—they just make up stuff. It just gets worse and worse. So if the valid data is not strong enough, it becomes actually a very, very weak tool after all, right?

Ross Dawson: So are you using the top models from the frontier labs? Because they are very good.

Henrik von Scheel: Yeah, but then you have to have the paid model. But it’s not like I’m really, really impressed by it. It’s not kicking my bum where it says, holy smokes. In the beginning, the first two years, you were surprised, right? So I have a little bit of the feeling that AI today is a little bit where emails were in the beginning, and then digitalization came. With emails, we were all excited, but emails just created not less workload but more workload for us—they decreased our productivity. There are really good signs of this. Then you look at digitalization, right? We were all excited because we can connect, we can talk to our friends, all of this. But what ended up with WhatsApp Business? WhatsApp Business is no business, right? We are using it, but it decreases our productivity level far more. So today, with digitalization, we are becoming generalists—quick information, we know something, but we don’t know anything, right? It’s not that you would put the finger on it and say, well, it has really increased our innovation level. No. Has it really increased our research level? No. Has it really made us better human beings? No. So I’m not negative about it. I’m just saying we have to be careful, because we have a knife or a hammer—we shouldn’t use the hammer for everything. And you mentioned that really well, right? AI’s hype cycle is like any technology’s: there’s a hype, and then it goes down and matures, and then the application of it is different than what you thought in the beginning, of course, but that’s AI—it’s very much relevant. But you know, the big message today in AI is physical AI, right? What is physical AI?

Ross Dawson: Well, just going back to the point—a lot of what I’m working on at the moment is the idea of appropriate trust. So you trust the models enough, but not too much, so that if they are going to give you bad results, you’re not relying on them, but if they are useful, you can use them. So we have to continue to calibrate for any particular model, which is different in every particular context. This is essentially a skill or a capability, where we need to know when and how to use models at any particular time, because they’re changing in whatever way. So that becomes a foundation of how we can trust them to the right degree—not too much, but enough that we can actually use them if they are useful. Which comes back to this frame of the human-AI collaboration, which you’ve been doing a lot of work on. So if AI can be useful in some contexts, how is it that we can best build effective human-AI collaboration?

Henrik von Scheel: I like this. Let’s play a little bit, right? Human evolution starts with the birth certificate: we go to kindergarten, we go to school, and we learn differently. Everybody’s an individual—we learn differently, right? It takes humans a long time to learn, to sense, to do all of this. And then you have AI, which is a supporting learning model for you to store information. But today you learn, and the model learns on you. You log in, and every time you learn, the model learns from you.
That means that all your information is captured there, right? So the next evolution of a model should be that the privacy of Ross persists throughout your last five years with large language models—you’ve studied Porter’s models, you’ve studied this and this. Well, if I ask you the next day about Porter’s model, you still forget it, but the machine should be able to help you to learn, to adopt the skills in your daily life. So it cannot be a machine knowledge learning that is owned somewhere else by a big company—it must be something that is attached to Ross throughout your life, so that you go from where you are now, and in five years, you’re somewhere else. The knowledge that you have searched and gained and adopted follows your life, right? This is, for me, AI—the real AI revolution happens in the bio revolution in 2030, because the biggest amount of data we have is biophysical data. So the interconnection between our body, the modules, the biosystem modules, the biophysical systems, how we eat food, how material, with their level, is coming all in there, and part of this is the knowledge center of you, Ross. So if you learn something, how does it follow your evolution? Do you learn the same way today as you learned 10 years ago?

Ross Dawson: And it’s a wonderful thing that we continue to learn and forget and evolve. We are the same person, sort of, but, you know, we are a different person at the same time.

Henrik von Scheel: I was talking yesterday to a psychiatrist who’s studying human evolution, and she’s called Trina Gondo, and I had this interesting discussion with her, because she says humans’ learning capacity changes throughout their life. So if we have learning modules that can support us throughout our life—to go through how conscious, how focused we are on things, how much stress we can take, because stress levels are also different, how much breadth you are covering in terms of your work, your private life, how you are in terms of setup, in terms of your spiritual life—all of this has something to do with your learning, because it’s your perspective you drive. It’s your values you drive. I actually developed with her a model of how the six aggregates of the brain work, to understand our human evolution. For the last eight months, I’ve been trying to map human evolution, to map it to AI—how it affects it, what we should regulate and how we should protect it, and how the human can monetize their own data, right? So just look at—

Ross Dawson: The initiative by Doc Searls. So there are a couple of really interesting initiatives. This is one where he worked originally on VRM, vendor relationship management—you own your own data and trade it effectively—and he is now building, or being instrumental in setting up, an AI initiative around your personal AI, where you own the data, you own the systems, and you’re able to evolve with them. There are some other interesting initiatives like this, but these are obviously very tiny compared with the ways in which most people are using AI—essentially giving their data away to other people. But this is certainly part of the potential: to build the structures and architectures where we do own our data and our models, and how they are used and what comes from them.

Henrik von Scheel: So let’s go back to one element, right? Originally, Ross, you and everybody else who lives in a society made an agreement with the government—a social agreement.
And the social agreement is, you’re protecting me, and I’m willing to pay tax somehow, right? So in reality, the government you made an agreement with should have the ability to protect you. However, in an AI model today, that’s not possible, because if they were to protect you from the very beginning and keep the store of your data and maintain your data, the amount of money they would need just to maintain your data is immense. So we need to define and find a model with governments where the government and the human being can, in co-ownership, hold the data structure—like in a blockchain, where you have a public and a private key, and both can hold the data, but the data is only unlocked by both together. Why? Because there’s a monetization model on your own data throughout your life. And when you die, your data goes on to your children, because that’s your DNA data, that’s your life history data, that’s all of it. So there should be an ability to monetize it. The challenge we face with this is the amount it will cost to maintain your data throughout your life, and we need to find—in the fourth industrial revolution, we’re going through the bio revolution, then we’re going to the consumer revolution, and then we go to the fusion revolution. And in the fusion revolution, the objective and the hope is that we find mechanisms to have cheap energy, because the amount of energy we use today in terms of data is literally crazy. It’s utterly, utterly crazy. We should be ashamed of ourselves when we see that, and that’s just for the amount of convenience. So if we find a model for our government to do this, we should actually work on this. This is what I’m trying to look at. I want to alert you to one interesting thing. My key field of study is patternicity with probabilities. So when you look at trends that are coming, you look at probabilities—not ChatGPT stuff, right? When you look at this, there’s one trend that emerged last week that hasn’t been emerging before—the trend of anarchy in Europe. Anarchy is an interesting aspect, because anarchy is your distrust in the government. And when anarchy comes, it’s just an equation of 25%. If 25% in a country like Germany or the UK or France will take it, 25% is a tipping point for everybody, because the petrol prices are too high, expenses for food are too high, they get too many promises that are never kept, and then they take the power into their own hands. When you look at it a little bit, you say, but anarchy—is that something new? No, the US is living in anarchy today. Trump is the true version of anarchy. They distrust the government, and they choose him, and he, from all aspects, says, okay, I’m doing something very different. I give all the power to the market. There’s been no time in history where all the power resides within the market—Elon Musk and Amazon, Apple, all of them have literally all the power. It’s totally, utterly crazy. This is the highest version of anarchy you can see in a country. And if we’re not careful, it’s spreading. Why am I discussing this in an AI human element? Because if the human is the centerpiece, what is the core element of human development? It’s that we have safety, security, and trust. If trust is broken, anarchy emerges. And if anarchy emerges, AI can take on very different versions that we don’t want in scenario thinking, but AI can also take on the version where it can support us in our evolution.

Ross Dawson: Well, just going to that—education. You are a professor. You are an educator.
You look at the future of education, and you alluded to that before. So in this world where AI is already and is becoming more significant, how do we reinvent education? How do we educate ourselves as individuals, as educational institutions, or as a society? How do we shape the education that we need for the exciting coming times?

Henrik von Scheel: I think one of our challenges with education is that when we as people go beyond eight years old, the key element we’re learning is reasoning, and our reasoning skills are learned by making mistakes, unfortunately. We never learn by getting an answer. If you study Porter’s model on ChatGPT, and you get all the answers from Porter’s model, and I ask you the next day, if you haven’t applied it, you haven’t learned it. If you apply it, you will learn it. You make mistakes, and it’s by making the mistakes, by putting yourself into the content, working with the content, and making mistakes, that you learn. Unfortunately, most of the stuff we learn today—now, human evolution in reasoning is by making mistakes. So we need to find a very smart way for AI to support us in this mistake-learning phase, because it’s the way that we are built to learn, right?

Ross Dawson: And I think that’s a critical thing—as individuals, we need to understand that if we delegate our thinking to AI, it’s not going to work; you’re going to be dumber rather than smarter. But if we can have the intent of using it to hone our thinking, and helping us to make mistakes, or be a Socratic dialog, or whatever, we can do that, but that requires individual intent. So again, we also need to frame this as educators, and also in organizations—which should be educational institutions in their own right, because they are learning organizations—this framing of the use of AI as a cognitive foil for us, as opposed to something to which we delegate our work, which is never going to get us anywhere good.

Henrik von Scheel: And where do you think we can use it in education?

Ross Dawson: The good thing is, you know, personalized education, where I think there is definitely this ability to address where individuals are and their understanding, the metaphors that will be relevant to them, the frames for that. But it never has to be in the form of giving the answer. So there’s always this complement of the human—as in, the educator needs to be inspiring. They need to help the person to find themselves. They have that relationship with them. So it’s this complement with the AI, which can guide to specific lessons or frames or examples that people resonate with, which can assist them. And so again, individuals need to understand they have to shape it for themselves. I think we can present things in the right way. And there’s very much a human plus AI educational frame.

Henrik von Scheel: I think you’re spot on with this. When you look at the five aggregates that we have in human evolution and in education phases, our sensory aggregate—our forming of ourselves to the outside world—is shaped quite early on, until we are maybe 12 years old, but quite early, the first two years. That means our sight, our smell, how we hear, how we taste, how we feel, and how our balance works—we learn quite fast. This is what AI is focusing on in physical AI today. They’re trying to come from a language model point of view out to the physical world. Then we have this cognitive version of us, which is the intellect version. It’s very different.
The intellect version of us is a version of awareness, a version of how we comprehend things, how we understand things, how our knowledge is conceived and given out. So it’s both communications, it’s storytelling, it’s our comprehension, it’s our perspective, it’s our reasoning, it’s our awareness. These things are never the same for the same person. I can have a room of 200 students, I can talk about the same element of Adam Smith’s first principle, and they will all understand it differently because of their different backgrounds. So this part of cognitive understanding, the intellect, is far more complex. Then you go to the versions of who we are as a person. Our memories—our memories are a whole element of our emotions, which is a hugely important part of our learning, because memories have nothing to do with truth. Large language models always look for the truth, but in our own memories, we are lying to ourselves to keep our sanity. We are partly, not consciously but unconsciously, lying to ourselves because we view it only from one perspective. So our reflection of our memories, or our impulses, are related to our memories or our conceptual things. All these elements are our emotional elements, in terms of how strongly we can link to knowledge, how strongly we can see the future, how we can see ourselves in the future—all of this. When you look at the crisis now, the memory bears on how resilient we are as people, how resilient we are in our learning phase, how comfortable we are with the unknown, how comfortable we are with learning. Then you have the next two. One is our mental formation, or our identity. This is the element we’re trying to protect in digitalization—how we form our opinions, our insight, our resolution, our understanding of ourselves, and our retentiveness, who we are. All of these things are being shaped as teenagers. We don’t want this to be in a social aspect. We want this to be a safe, secure element. So this is the identity you form. Then you have the consciousness. The consciousness is a strange thing. You have two layers running in your education. You have the layers that are running long term, and the unconsciousness that actually takes the decision—the analytical versions and the underlying elements. For example, why are you doing something? You come with purposes, you come with energy, you come with desire, or you come with willpower. Then you say, well, they’re more etheric. No, they’re not. Because, Ross, you wake up every morning with only so much energy. You can use it over the next eight hours you work. You can use it on emails the first four hours, but then you’re using your most precious willpower and energy right there. Take your willpower to train, for example, if you want to do training: in the evening, when your willpower is lower, you don’t want to train; you want to train early in the morning. So this willpower and energy is what we as humans, in our consciousness—what we are aware of, what we focus on, we magnify. So these are the five aggregates you’re using from the learning perspective. If we apply these, you and I, Ross, we would go into an initiative to say, how can we apply this to understand human evolution as we evolve? Because I’m nearly 60 years old now, and that means, for me, my concept of life, my experience of life, is different than when I was 30, than when I was 20. You cannot go to a young person who is 15 years old and say, let me tell you about love—there are four different phases of love.
They need to experience them themselves, because it’s not my job to take that away from them. And it’s not my job to tell a young man who wants to conquer and do, who wants to have freedom, Generation X and all of this: easy, easy, easy, you’ll find out. When you fall in love and you become a father, it changes you. Why does it change you? Because accountability moves into a man’s focus area, whereas before he was conquering. And with accountability, a man wants to be a caretaker of something, and that fulfills and magnifies a man. And then you might say, well, this is not part of the five aggregates—but it very much is, right? Because it’s part of human evolution. Ross, you have experienced that in your life. So then you ask, how do we connect that with our evolution and learning? Ross Dawson: Yeah, I think that’s a really important point around accountability—for ourselves, for those around us directly, and in the broader community. And I think that’s part of this big humans plus AI frame. We’re obviously just touching the surface of what we could dig into. But how can people find out more about your work, Henrik? Henrik von Scheel: I’m a public figure. I’m doing a lot of research projects with universities, I have a lot of PhD students, and I coach and support governments on policy initiatives. Currently, I’m focusing a lot in the Gulf region on strategic briefings and crisis management, doing strategic, tactical, and operational scenarios for the short and long term. But my passion is actually teaching, and this is far more a personal story about teaching. People always see me as the Industry 4.0 originator, through everything I have accomplished. But my true story is actually quite different. When I was young, I was dyslexic. I’m actually doubly dyslexic, and I stuttered. I had a very, very difficult time in school. That’s why I am a little bit passive-aggressive, because I’m always on the defensive; for many years I went through life as some sort of an outcast. In that phase, I had a very strong teacher who supported me and used time and effort to see my skills, and he helped me to overcome my dyslexia—which is not really true. You never overcome your dyslexia. You just get tools to work with it. So today I have written nine books, and five of them are bestsellers, but I cannot even read my own books aloud. So what is the message I’m giving? Every one of us is made different, and society is often built so that if you don’t fit the frame, then you’re not part of the frame. But I think AI opens up something for us—that the breadth of who we are as people is a beautiful thing. Because I cannot speak the same way; I have a good friend Tarek, who is also your friend, and he’s a gifted storyteller. My gift is that I can see patterns. So I believe that every human being should be able to see their superpower. Your gift, Ross, is a very different gift. You can gather communities, you can convey difficult things in a simple way, and you have an ability to put the human in the future, where everybody sits today and freaks out because they don’t see themselves as part of the future. So I think everybody has a future in that. To answer your question, I’m quite a reachable person. I believe the future looks like a good future for us, Ross. I believe this is the time for our educators to wake up out of their long sleep. We need to evolve our teaching material.
We need to evolve the way that we learn and teach. We have terrible lessons in terms of how boys and girls evolve in their learning, and we’re not doing anything about it. This is our chance with AI to change the learning mechanisms for boys and girls, and the learning mechanisms for those like me who don’t fit these templates, or who have special needs. We have the ability with AI to specialize ourselves in far more detail. One of the challenges we have with education today—when you go from primary school to higher education, and then beyond higher education—is that we have become generalists, and our generalism is actually inhibiting us from innovating. We’re not meeting some of the core challenges we have in science today, and we need to push the boundaries of where we take research to really become innovative. We need to push our boundaries in manufacturing, the energy sector, and so on, to specialize in particular fields. When you look at engineering schools, they have become more and more generalist across six fields, when they should become specialists in fields. So I think that’s where we need to really push the boundaries. Ross Dawson: Yeah, to your point, what I see as one of the ultimate possibilities from AI is that it amplifies our individuality. That’s an extraordinary possibility. So thank you so much for your time and your insights, Henrik. You’re sharing some great work, and we’ll share links in the show notes to one of your research papers and the work you do. Thank you. Henrik von Scheel: Okay, thanks a lot. Good. Goodbye. The post Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution (AC Ep37) appeared first on Humans + AI.

    Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36)

    Play Episode Listen Later Mar 18, 2026 34:04


“Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska About Dr Joanna Michalska Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years' experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC. Website: ethicagroup.ai LinkedIn Profile: Dr Joanna Michalska What you will learn How boards and executives can rethink governance and accountability in the age of AI The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption How to map and assign human accountability for both automated and hybrid AI-human decisions The decision architecture needed for scalable oversight, intervention, and escalation pathways Practical examples of effective AI oversight in areas like fraud detection and exception handling Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth Episode Resources Transcript Ross Dawson: Joanna, it’s a delight to have you on the show. Joanna Michalska: Well, thank you for having me, Ross. Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations. Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adopt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions, that really need to be enhanced; they need to be in place for us to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time. Ross Dawson: Yes, and I guess the more we can address those concerns upfront, the more it enables us to do.
I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it? Joanna Michalska: Absolutely! I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs to be looked at, discussed, and understood by all executives and across the organization, cross-functionally, to really work. Another important thing is to make sure executives have the right level of ownership and responsibility to ensure the conditions exist to enable that system to work. That’s a very difficult thing to do, because now you’re talking about having designed human oversight that doesn’t just become a “human in the loop,” but the right human in the right loop. By “right,” I mean: does this person, or these people, understand exactly what the output of the automated system is? How has this decision been made? Is there the right level of executive oversight when that decision is already made? How confident are we that we can say, with a level of certainty, “I’m comfortable with this, and this is not going to create negative consequences I’m not willing to accept”? That’s not an easy thing to do—to create those conditions of trust and safety. Ross Dawson: Particularly when there are so many decisions and outputs throughout the organization. Let’s go into decision making. I’ve built a little framework around going from humans-only through to AI-only decisions. Hopefully, there are no purely human decisions anymore; at least you can ask an AI, “Am I crazy or not?” even if it’s a human decision. Some decisions are already fully automated, but they still need oversight. You can bring in exceptions, conditional things, humans in the loop for approval, humans in the process, or build an explainability layer. There’s a whole array of different things. For every decision, you need to create the right way to implement it. In an organization with that profusion of different decisions and possible approaches, how can you actually make that happen? Joanna Michalska: Yeah, it’s a great question. Decisions are at the center of everything, and the quality of those decisions—and the whole architecture, how it’s designed for decisions to be made—is really important. It doesn’t stay static; it evolves as the organizational structure evolves. 
Questions like accountability—what does it look like, and what is the governance around accountability—are critical. Intervention capability is also very important, because with this level of automation, the whole design of how automated decisions are made raises multiple questions. Are these decisions made by old algorithms that are very simple, where the risk is determined by a set of rules? Is there clarity around who actually has the decision intervention rights in the organization, and how does that roll up to an executive layer? Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through. The quality of human decision-making, and determining when a human is able to review decisions made by complex systems—whether agentic or whatever structure the organization has—is critical at any level, whether it’s middle management, executive management, or board. There are different layers of how the architecture requires design and measurement. Escalation pathways are another one. People will not naturally escalate if they fear negative consequences, retaliation, or any type of fear created because there isn’t psychological safety or trust within the organization. Even if there is an escalation protocol in place within the decision architecture, how do we know that people will raise the problem? Ross Dawson: The accountability. Of course, only humans are accountable. Ultimately, the board and their executives are accountable. But what you’re suggesting, it sounds like, is that for every decision, there is somebody where you can say, “That person is accountable.” Obviously, it cascades up to who they’re reporting to, but there is human accountability for every decision made, even if it’s a thousand decisions where somebody has oversight and responsibility that those are the right decisions. I want to talk about escalation and how that might happen, but perhaps we can ground this with a couple of examples. What are some examples of decisions made in organizations—hopefully well-designed, or perhaps not so well-designed and haven’t worked out? Joanna Michalska: Yes, I have a couple of good examples where an automated system allows review of multiple false positives, where a human would spend months or weeks looking at exceptions. From an optimization perspective, that’s really valuable. For example, in fraud detection or sanction screening, you can design a process where your algorithm applies rules very quickly with specific risk tiering. You know which decisions need an additional level of checks—let’s say, automated checks. With a confidence of over 90%, your queue for checking and looking at exceptions—what would otherwise have to be done by a human—is not really necessary; it’s done by the algorithm. In terms of decisioning, from a human oversight perspective, you’re really looking at things that are very high risk and require additional human review, or exceptions to the usual flow that break the rule designed for the algorithm to execute. Then, somebody picks it up and looks at it. These are powerful examples where there’s potentially a high human risk of misinterpreting something, but if the algorithm is designed appropriately and has the right governance in place, it can really speed things up and make space for a human who otherwise would be involved in that process to actually develop and do something different. 
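To make the shape of this concrete: below is a minimal sketch, in Python, of the confidence-threshold exception routing Joanna describes for screening queues. The threshold value, risk tiers, and field names are illustrative assumptions, not any particular institution's system.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_CLEAR = "auto_clear"      # algorithm resolves the alert itself
    HUMAN_REVIEW = "human_review"  # exception queue for a human analyst
    ESCALATE = "escalate"          # high-risk cases go straight to senior review


@dataclass
class Alert:
    alert_id: str
    risk_tier: str     # e.g. "low", "medium", "high" from upstream risk tiering
    confidence: float  # model confidence that this is a false positive, 0.0 to 1.0


AUTO_CLEAR_THRESHOLD = 0.90  # illustrative; would be set and governed per risk tier


def route_alert(alert: Alert) -> Route:
    """Auto-clear confident false positives, escalate high-risk cases,
    and queue everything else for human review."""
    if alert.risk_tier == "high":
        # High risk always gets human eyes, whatever the model says.
        return Route.ESCALATE
    if alert.confidence >= AUTO_CLEAR_THRESHOLD:
        return Route.AUTO_CLEAR
    return Route.HUMAN_REVIEW


print(route_alert(Alert("A-1", "low", 0.97)))     # Route.AUTO_CLEAR
print(route_alert(Alert("A-2", "medium", 0.55)))  # Route.HUMAN_REVIEW
print(route_alert(Alert("A-3", "high", 0.99)))    # Route.ESCALATE
```

The design point is that every branch is explicit, so a named owner and an audit trail can be attached to each route.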
In the example I’m talking about, we’ve focused on retraining people to expand their roles and do something else, rather than just being involved in checking decisions or reviewing boring exceptions that were really false positives. Ross Dawson: What’s another different example? Joanna Michalska: Fraud detection is another really good one where— Ross Dawson: Because one of the things about fraud detection is there is an answer—as in, it is fraud or it isn’t fraud. You can get false positives and false negatives, but that’s kind of reductionist. There’s a whole array of decisions where you can’t necessarily say before the fact whether it’s a good decision or not. It’s interesting to look at these very different types of decisions, not just ones that can be very algorithmic because they’re data-based and there is a true or false. Many decisions don’t fit those parameters. Joanna Michalska: No, that’s very true. Actually, what I’ve seen, especially recently, is that there are a lot of questions being asked by the board or executives when they get to the point where a decision isn’t easy or clear. They look at sets of metrics that do not make a lot of sense, and then the question becomes, “Who can explain to me how this decision was made, such that this metric shows me X? Who in the organization can I go to, and how quickly, for them to explain that to me?” In my experience, especially recently, that’s a very difficult and uncomfortable question to ask and answer, because it’s not clear—especially when it comes to things that don’t have a clear accountability pathway, because more than one person is accountable. So the question is, is this a Risk Officer question, or is it a Data Officer, CTO, CSO—who actually is responsible? In these instances, it’s particularly important to have the right accountability that is understood at that level—who is accountable for what part of the process? It’s not easy, because it is quite complex and creates a lot of challenging discussions. Very often, it depends on the organizational maturity and the level of AI adoption. What systems do we have? Do we understand what is an AI system in the first place? That part is complicated too. Ross Dawson: Well, it is difficult and complicated and challenging, but that’s not very useful. Let’s map the pathway. Give me a roadmap for an organization: we’re going to assess our decisions, rank them in order of priority or risk or uncertainty, apply AI, and put accountability in place for all of these. This might take us sixteen years, but we’re going to start somewhere. Joanna Michalska: I think a really good example is the new EU AI Act, where there is a very clear starting point and clear requirements. For many organizations, that’s step one: what are we actually required by regulators to do? We look at what processes, systems, and outcomes we’ve got. That’s step one. Then we look at what level of risk each of our applications actually carries, and we tier them. We assign the right executives for the right processes. First, we identify where we are today, then work with our compliance or risk officers to understand where we think we are versus where we are according to the regulation. That regulation—the high-risk identification of those systems—is going live in August this year, so it’s a very pressured point for people to address.
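As a rough illustration of that "step one", here is a hedged sketch of what an AI-system inventory with risk tiering might look like as a data structure. The tier names follow the EU AI Act's broad risk categories; the fields, example systems, and owners are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Broad EU AI Act categories; actual legal classification is more nuanced.
    UNACCEPTABLE = 0
    HIGH = 1
    LIMITED = 2
    MINIMAL = 3


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier
    accountable_exec: str  # the named executive who owns this system
    gaps: list = field(default_factory=list)  # open gaps to target maturity


# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("sanction-screening", "transaction screening", RiskTier.HIGH,
             "Chief Risk Officer",
             ["document intervention rights", "update metrics thresholds"]),
    AISystem("marketing-copilot", "draft campaign copy", RiskTier.MINIMAL,
             "Chief Marketing Officer"),
]

# Transformation roadmap: systems with open gaps, highest risk first.
roadmap = sorted((s for s in inventory if s.gaps), key=lambda s: s.tier.value)
for system in roadmap:
    print(f"[{system.tier.name}] {system.name}: {', '.join(system.gaps)}"
          f" -> owner: {system.accountable_exec}")
```

The value of even a toy structure like this is that every system carries a tier, a named owner, and its gap list together, which is exactly what the prioritization step below works from.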
Once that’s done, there’s a clear inventory of the current state, a clear inventory of where we need to be, gap identification for which high-risk systems require transformation and to what extent, and then the right people need to be in the right places so the transformation roadmap is defined. There’s accountability for that transformation to occur, but often a lot of external advisors are invited to help. In that case, my work usually starts with an exposure review, where I speak to everyone accountable and get a view of where the organization is, maturity-wise, versus where it needs to be for the implementation deadline. Then, there’s a very clear prioritization roadmap: what are the impact and consequences if these processes or systems do not comply, and what are the next steps for compliance? Who needs to be in the process? What metrics need to exist? What’s the gap to the right level of maturity to demonstrate that we are compliant and can confidently talk to regulators, our organization, stakeholders, whoever, to demonstrate, “Here’s where we were or thought we were, these are the steps we’re taking, these are the people who are accountable, these are the decisions we’re going to make, and we’re going to demonstrate that we’ve taken them this way.” That’s how we design our journey. All of this sounds very simple, but the initial assessment is always very complicated, because everyone goes through the process and, in what I call a deep dive into documentation and governance structures, very often the outcomes show governance is not mature. It often needs to change—even the metrics: the thresholds in place are often very immature and very legacy. Ultimately, they apply to the old world, but not to the new one. Ross Dawson: Well, every organization needs to evolve, and potentially rapidly. Do you look at strategic decisions, or the role of AI in strategic decisions, or other very complex, high-level decisions? Joanna Michalska: Yes, and it’s interesting to see that there is, what I would call, a cognitive dissonance between where people would like to apply AI for strategic decisions versus the reality. AI is actually used much more for operationalization or speeding up optimization—very performative. How can we quickly improve performance? There’s a lot of discussion about it, and I see that people want to think about it, especially as boards are putting on a lot of pressure to raise strategic ambition and create a competitive advantage, which is well beyond just regulatory compliance. But it’s not very mature—let’s just put it that way. It’s much more integrated within improving operational performance. Ross Dawson: Yeah, well, we might loop back to that. Part of your background is neuroscience, and we’ve been talking a lot about decision making. There’s been a lot of wonderful work over the last seven or eight decades on the role of human cognition in decision making. Tell me about the way you see this understanding of neuroscience being applied to cognition, work, and decision making in a world where we have AI as part of those processes. Joanna Michalska: I think it’s a fascinating area of science, and we as humans, especially in this fast-paced environment, as leaders, really need to evolve our capability of not only managing organizations, but also leading the people side of things.
Historically, the human brain and mindset take time to change; it’s not something that changes overnight, and it typically requires a trigger, which is usually not very pleasant, for someone to take responsibility or ownership of that change. Now, when you think about the acceleration of decisions and what happens around the organization—because when we think about business, evolving organizations, or being accountable to shareholders, regulators, or society—we can’t just remain the same. That means our mindset and our ability to become more human really matter. Emotional intelligence, relationship-building skills, recognizing the importance of trust, building psychological safety so people can take responsibility at every level of the organization, and having the courage to say, “This is not working, I’ve seen this problem, something doesn’t feel right, I’m going to escalate to the right person because I know who the right person is”—all of that really comes from personal qualities, and owning those qualities just cannot be replaced by machines. There’s a gap between what machines are taking over in terms of processes and things that can be done easily, and, to my earlier example, that almost brings a higher weight and pressure onto us to become better—much more resilient, agile, responsible, and accountable. Maybe we weren’t really owning those qualities to the same extent in the past, because we were much more focused on performative activities. That will be required of us by all the external pressures, but also because we want to achieve better outcomes not just for organizations, but for broader society. That sense of responsibility for an impact that is much deeper and more long-lasting is very important. From a leadership perspective, it’s almost like role modeling becomes even more important for everyone else in the organization, and it creates a higher level of satisfaction, engagement, and happiness for everyone. Ross Dawson: So, this is about personal growth in a way—becoming, as you say, more human, building these human qualities that we need so much in this world. But at the same time, very few people are not experiencing pressure or stress, not least from the pace of change through AI and many other things. There are countervailing forces: we’re being called to be more human, to bring out more of ourselves, but the context is one of extraordinary challenge. What are specific things that leaders or organizations can do to help people draw out those capabilities? Joanna Michalska: I think there are a couple of things to start with. I would always say awareness is the first step, and leadership awareness of where the gap is and what’s actually required is very important. Integrity and honesty follow right after, because people know what’s happening—or even if they don’t know, they’ll fill in the blanks with probably not very positive things. Once fear starts to creep in, it erodes trust and confidence, and it also takes away from participation. No matter what transformation you’re trying to achieve in your organization, you need your people engaged. Executives need to be honest about what’s happening. I’ve seen a lot of examples where leadership is not honest—they say, “Don’t worry, you’re not going to lose your job, it’s going to be fine,” while everyone knows their job will either be lost or significantly changed. That level of honesty is important: having a uniform communication strategy to communicate honestly to people and say, “Things will change.
Things are already changing, but we will take care of you, and this is how it’s going to look.” I’m not saying to lie; you need to be appropriately honest and say, “Yes, there will probably be a reduction in roles, but this is the plan.” How do we communicate honestly to people and make sure that, if their job will be eliminated, they understand how they will be supported to develop the capabilities and skills to go into another role or do something else somewhere else? That’s a human thing to do—be honest and help people develop that capability. The second part of that plan is to have, whether it’s AI literacy or AI retraining, whatever the organization decides will help people develop the skill set they don’t have. That applies organizationally, but also, as someone said at an event I attended recently, “what used to be a soft skill now really becomes a hard skill,” because that’s at the heart of everything. As more process-driven tasks are taken away by machines, those human skills will become very, very important—and already are. Ross Dawson: Yeah, and arguably, that’s one of the possible benefits of AI—it helps us to become more human, or develop our intrinsically and distinctly human capabilities. So, Joanna, where can people go to find out more about your work? Joanna Michalska: They can reach out to me on LinkedIn, and I also have my website, ethicagroup.ai. I’m happy to connect on any topic related to what we’ve just discussed, especially executive authority, how we become more human, and how we can be at the center of what we can actually do within this very fast-moving environment. How do we, as executives and leaders, have more impact on changing this reality? Because, to your point earlier, if each of us doesn’t take that responsibility, nothing’s really going to change. Ross Dawson: Indeed. Thank you so much for your time and your insights, Joanna. Joanna Michalska: Thank you, Ross. Thank you for having me. The post Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36) appeared first on Humans + AI.

    Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)

    Play Episode Listen Later Mar 12, 2026 36:05


“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther About Cornelia C. Walther Cornelia C. Walther is Senior Fellow at Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is author of many books, with her latest book, Artificial Intelligence for Inspired Action (AI4IA), due out shortly. She was previously a humanitarian leader working for over 20 years at the United Nations driving social change globally. Website: pozebeingchange LinkedIn Profile: Cornelia C. Walther University Profile: knowledge.wharton What you will learn How the ‘hybrid tipping zone’ between humans and AI shapes society’s future The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence What defines ‘pro social AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’ Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of pro social AI initiatives Episode Resources Transcript Ross Dawson: Cornelia, it is fantastic to have you on the show. Cornelia Walther: Thank you for having me, Ross. Ross: Your work is wonderfully humans plus AI, looking at humans and humanity and how we can amplify the best of us as much as possible. One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is? Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage. Now, you have these four phenomena happening in parallel, mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation.
We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map? Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for us as humans, I would argue, it’s a very dangerous luxury to have. Ross: There’s a lot in there I want to dig into, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome? Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself. We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of pro social AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet. Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay. I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to see a positive impact on their cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because cognitive erosion—or agency decay, as you describe it, which I think is a great phrase—is absolutely a reality, not just a potential. So are there things we can do to move away from the widespread agency decay, which we are in danger of? Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration.
That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic intelligence, our AI. That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because neither of them happens in a vacuum anymore. Ross: Absolutely. So what you described—experimentation, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, so that experimentation leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame? Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though now it’s sometimes put in opposition to AI as the better one, is not perfect either. Probably you and most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our inability to think fully rationally. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as we are imperfect, AI, in many different ways, is also imperfect. And finally, the accountability. Because at the end of the day, no matter how powerful our tools are going to be, we as the human decision makers should consider ourselves accountable for the outcomes. Ross: Absolutely, that’s one of the points I make. We can’t obviously make machines accountable—ultimately, the accountability resides in humans. So we have to design systems, which I think provides a bit of a transition to pro social AI. So what is pro social AI, how do we build it, how do we deploy that, and how do we make that the center of AI development? Cornelia: Thank you for that. Pro social AI, in a way, is very simple. It’s the intent that matters, but it starts from scratch, so you have the regenerative intent embedded into the algorithmic architecture. It has four key elements that can be measured, tracked, and can also serve to sensitize those who use it and those who design it—tailored, trained, tested, targeted. The pro social AI index that I’ve been working on over the past months combines that with the quadruple bottom line: purpose, people, profit, planet.
Now all of a sudden, rather than talking in an airy-fairy way about ethical AI—which is great and necessary, but I would argue is not enough—we need to systematically think about how we can harness AI as a catalyst of positive transformation that comes with environmental dignity and seeks planetary health. How can we measure that? Ross: And so, what are we measuring? Are we measuring an AI system, or what is the assessment tool? What is it that is being assessed? Cornelia: It’s the how and the what for. For example, what data has been used? Is the data really representative? We know that the majority of AI tools are biased. And the other question is, is it used only for efficiency and effectiveness, and to what end? Ross: Yes, as we are seeing in current conversations around the use of models at Anthropic and OpenAI, there are tools, and there are questions around how they are used, not just what the tools are. Cornelia: Yes, so again, it comes back to the need for awareness and for hybrid intelligence, because at the end of the day, we can’t rely on companies whose purpose is to make money to give us systems that serve people and planet first and foremost. Ross: This goes on to another one of your wonderful framings, which is AI for IA—AI for inspired action—around this idea of how we amplify humans and humanity. Of course, this goes on to everything we’ve been discussing so far. But I think one of the very useful things there is AI, in a way, leading humans to take inspired action by envisaging what is possible. So, how can we inspire positive action by people in the framing we’ve discussed? Cornelia: AI for IA is the title of the new book that’s coming out next month. But also, as with most of the things I’m saying, it’s not about the technology—it’s about the human being. We can’t expect the technology of tomorrow to be better than the humans of today. As I said before, garbage in, garbage out, or values in, values out—it’s so simple and it’s so uncomfortable, it’s so cumbersome, right? Because we like quick fixes. But unfortunately, AI or technology in general is not going to save us from ourselves, and as it stands right now, we’re heading straight down the path of repeating the mistakes made during the first, second, and third industrial revolutions, where technology and innovation were driven primarily by commercial intent. Now, I would argue that this time around, we can’t leave it at that, because this fourth industrial revolution has such a strong impact on the way we think, feel, and interact, that we need to start in our very own little courtyard to think: what kind of me do I want to see amplified? Ross: Yes, yes. I’ve always thought that if AI amplifies us, or technology generally amplifies us, we will discover who we are, because the more we are amplified, the more we see ourselves writ large. But we have choices around, as you say, what aspects of who we are as individuals and as a society we can amplify. That’s the critical choice. So the question is, how do we bring awareness, to use your word, to what it is about us that we want to amplify, and how do we then selectively amplify that, rather than also amplify the negative aspects of humanity? Cornelia: The first thing, and that’s a simple one, is the A frame. I would argue that’s something everyone can integrate in their daily routine in a very simple way, to remind us of the four A’s: awareness, appreciation, acceptance, accountability.
The other one, at the institutional level, is the integration of double literacy. Right now, there’s a lot of hype in schools and at the governmental level about AI literacy and digital literacy. I think that’s only half of the equation. This is now an opportunity to take a step back and finally address this gap that has characterized education systems for many decades, where thinking and thinking about thinking—metacognition—is not taught in schools. Systems thinking, understanding cognitive biases, understanding interplays—now is the time to learn about that. If the future is going to be populated by humans interacting with artificial counterparts configured to address and exploit every single one of our human Achilles heels, then we are well advised to know those Achilles heels. So, I think these are two relatively simple ways moving forward that could take us to a better place. Ross: So this goes to one of your other books on human leadership for humane technology. Leadership, of course: everyone is a leader to those they touch. We also have more formal leaders of organizations, nations, political parties, NGOs, and so on. But just taking this into a business context, there are many leaders now trying to transform their organizations because they understand that the world is different, and they need to be a different organization. They still need to make money to pay for their staff and what they are doing to develop the organization, but they have multiple purposes and multiple stakeholders. So, just thinking from an organizational leader perspective, what does human leadership for humane technology mean? What does that look like? What are the behaviors? What are the signs that would show us? Cornelia: I think first, it’s a reframing away from this very narrow scope of return on investment, which has characterized the business scene for many decades, toward looking at return on values. What is the bigger picture that we are actually part of and shaping here? What’s the why at the end of the day? I think that matters for leaders who are in their place to guide others, and guidance is not just telling people what they have to do, but also inspiring them to want to do it. Inspiration, at the end of the day, is something that comes from the inside out, because you see in the other person something that you would like in yourself. Power and money are not it—it’s vision. I think this is maybe the one thing that is right now missing. We all tend to see the opportunity, but then we go with what everybody else is doing, because we don’t really take the time to step back and think, well, there is the path of everyone, and there’s another one—how should I explore that one? Especially amid AI, where just upscaling your company with additional tools is not really going to set you apart, it matters twice as much to think not just about how to do more of the same, faster and with less investment, but about what makes you unique, and how you can now use these artificial treasure chests to amplify that. Ross: Yes, yes. I think purpose is now well recognized beyond the business agenda. One of the critical aspects is that it attracts the most talented people, but also, over the years, we’ve had more and more opportunities to be different as an organization. Back in the late ’90s and so on, organizations looked more and more the same. Now there are more and more opportunities to be different.
The way in which AI and other technologies are brought into organizations gives an extraordinary array of possibilities to be unique, as you’ve described, and distinctive, which gives you a competitive position as well as being able to attract people who are aligned with your purpose. Cornelia: Yes, exactly. But for that, you need to know your purpose first. Ross: From everything we’ve just been talking about, or anything else, are there any examples of organizations or initiatives that you think are exemplars, or that show how we could be approaching this well? Cornelia: I think—this will now sound very biased—but I’m currently working with Sunway University, and I think they are the kind of academic institution that is showing a different path, seeking to leverage technology to be more sustainable, bringing in dimensions such as planetary health, like the Sunway Centre for Planetary Health, and thinking about business in a re-envisioned way, with the Institute for Global Strategy and Competitiveness. I think there are examples at the institutional level, there are examples at the individual level, and sometimes the most inspiring individuals are not those that make the headlines. Sorry, just on that: that’s maybe the most important takeaway for me. No matter where one sits in the social food chain, the essential thing is who you are and how you can inspire the person next to you to make it a better day, and to make a better future. Ross: Yes, in fact, that word “inspired,” as you mentioned before. So that’s Sunway University in Malaysia? Cornelia: I think they are definitely a very, very good illustration of that. Ross: Just pulling this back to the global frame, and this gets quite macro, but I think it is very important. It pulls together some of the things we’ve pointed to: the difference between the approaches of the United States, China, and Europe, which are essentially the leaders in AI, in how they’re going about it. And in the global south more generally, I think there are some interesting things happening. Arguably, there’s a far more positive attitude generally in the populations, a sense of the opportunity to transform themselves, but of course a very different orientation in how they want to use and apply AI and in creating value for individuals, nations, and society. So how would you frame those four—the US, China, Europe, and the global south—and how they are, or could be, approaching the development of AI? Cornelia: Thank you for that. I think right now there are three mainstream patterns—and I’m overly simplifying, and aware of that: the US path, which is business above all; the European model, which is regulation above all; and the Chinese model, which is state dominance. I would argue there’s a fourth path, and I think that’s where leaders in the global south can step in. You might know I’m working, on the one hand, in Malaysia and, on the other hand, in Morocco, on the development of a sort of national blueprint of what pro social AI can look like. I think now is the time—again, coming back to leadership—to think about how countries can walk a different path and be pioneers. Yes, AI has been around for decades, but the latest wave, the one that has been engulfing society since November 2022, is still relatively new.
So why not have nations in the global south that are very different from the West chart their own path and make it pro social, pro people, pro planet, and pro potential—the potential they themselves have, which sets them apart and makes them unique. Ross: Absolutely. Again, you mentioned Malaysia and Morocco. Looking around the world, of course, India is prominent, and there are some African nations which have done some very interesting things. The Middle East is quite different, because they’re wealthy, though they’re not among the major leaders, but there’s a whole array of different examples. Where would you point to for examples of these kinds of domestically born pro social initiatives, showing how we could be using pro social AI at a national or regional level? Cornelia: Unfortunately, right now, there is not one country where one could say they have taken it from A to Z, but I think there are very inspiring or positive examples. For example, Vietnam was the first country in ASEAN to endorse a law on AI ethics and regulation—I think that’s a very good one. Also, ASEAN has guidelines on ethics. All of these are points of departure. Switzerland offered a very nice example of what public AI can look like. So there are a lot of very good examples. The question is not so much about what to do, I think, but how to do it, and why. At the end of the day, it’s really that simple. What’s the intent behind it? What do we want the post-2030 agenda to look like? We know that the SDGs—the Sustainable Development Goals—are not going to be fulfilled between now and 2030. So are we learning from these lessons, or are we following the same track of doing more of the same and maybe throwing in a couple of additional indicators, or can we really take a step back and look ourselves and the world in the face and think, what have we missed? Now, frame it however you want, but think about hybrid development goals and ways in which means and ends—society and business—come together into a more holistic equation that respects planetary health. Because at the end of the day, our survival still depends on the survival and flourishing of planet Earth, and some might cherish the idea of emigrating to Mars, but I still think that overall the majority of us would prefer to stay here. Ross: Yes, planet Earth is beautiful, and it’d be nice to keep it that way. How can people find out more about your work? Could you tell people about your new book and any resources where people can find out more? Cornelia: Thank you so much. They are very welcome to reach out via LinkedIn. Also, I’m writing regularly on Psychology Today, on Knowledge at Wharton, and various other platforms. The new book that you mentioned is coming out next month, and there will be another one, hopefully by the end of the year. Overall, feel free to reach out. I really feel that the more people get into this different way of thinking, the better. But thank you so much for the opportunity. Ross: Thanks so much for all of your work, Cornelia. It’s very important. The post Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35) appeared first on Humans + AI.

    Ross Dawson on Humans + AI Agentic Systems (AC Ep34)

    Play Episode Listen Later Mar 4, 2026 19:12


“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and the audit trail is visible so you can always see what is going on.” –Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. LinkedIn Profile: Ross Dawson What you will learn How human-AI teams outperform human-only teams in productivity and efficiency The crucial role of understanding AI strengths and limitations when designing collaborative workflows Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction Episode Resources Transcript Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to share a bit of an update and then share insights from three recent research papers that dig into something I think is exceptionally important: how humans work with AI agentic systems. We’ll look at a few different layers of that, from how small humans-plus-agents teams work, through how we can delegate decisions to AI, to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s pretty hard even to see what the end of this year is going to look like. For me, I am doing my client work as usual. I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, with a few industry-specific ones in financial services and elsewhere. I’m also doing some work as an advisor on AI transformation programs, helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework to look at the phases, map those out, work out the issues, and guide and coach the leaders to do that effectively. But the rest of my time is focused on three ventures, and I’ll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI for strategy app. This was really about building a way in which we can capture the detailed nuance of the strategic thinking of the leaders of an organization, to disambiguate it, to clarify it, and to enable that to be built into strategic options and strategic hypotheses that can evolve effectively. That’ll be in beta soon. Please reach out if you’re interested in being part of the beta program before it goes to market. So I’m deeply involved in that.
We also have our Thought Weaver software, rebuilding previous software we had already built for AI-augmented thinking workflows. That’s more of an individual tool, and it will be going into beta in the next few weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, so stay posted for updates on that. And I’m also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve been through the phase of augmentation of individuals, and we still need to work hard at doing that better, but the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? It creates a whole different series of dynamics and calls for new skills and capabilities: how to participate in a humans plus AI team, and how to lead humans plus AI teams. That is going into the first few test organizations in the next month or so. So again, just let me know if you’re interested. So today what we’re going to look at is this theme: teams of humans working with AI agents. Not individual AI as in chat, but where we have many agents with various degrees of autonomy, and also agentic systems where these agents are interacting with each other as well as with humans. There are three papers I want to talk about and give you a quick overview of, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes. First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. This was an experiment with over 2,300 participants who were working on creating advertisements. They had a whole array of human-human teams and human-AI teams, quite small or just duos, working on creating ads, which were then assessed in terms of quality and how the teams worked. A few particularly interesting findings came from that. Simply having a human-AI team enhanced performance significantly compared to human-only teams: they were able to move faster and complete more of their tasks, and the quality was strong. But there’s a phrase commonly used about the jagged frontier of AI capability, and it was quite clear that there were some domains where AI does very well and others where it didn’t. So the design of the tasks, the design of the human-AI systems, and the understanding by the human users of what AI is good at or not are fundamental. In some domains, such as image quality, using AI actually decreased quality. We need to understand where and how to apply AI along this jagged frontier and design the systems around that. This changes the role of the humans, of course. Humans then tend to delegate more. One of the things they tested for is how you behave differently if you know your teammate is an AI, as opposed to not knowing whether it’s a human or an AI. And it changes: people become more task-oriented, they use fewer social cues to interact, and they essentially become more efficient. But some of these social cues, which are valuable in human-human collaboration, started to disappear.
And this automation process meant that there was not, in the end, as much creative diversity. Now, I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture—where the AI sits in the process: for example, whether it generates initial ideas which are then sorted and filtered by humans, or enters at a later stage. But in this particular structure, they found that humans plus AI teams started to create more and more similar outputs. This homogenization of outputs in human-AI teams was very notable and significant. So this again creates a design factor: how do we build human-AI systems which do not lead to homogeneous output, making sure that human diversity is maintained? Often that can be done by having human outputs first, so that AI does not blunt or narrow the breadth of humans’ creative outputs. The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. This addresses the point that we now have not just single AI agents to delegate decisions or problems to, but systems of AI, and that creates a different challenge. The key point is that when you are delegating tasks, it’s more than just saying, okay, which agent gets the task. You have to understand responsibility: where does accountability reside, and who is responsible? You need clarity around the roles of the agents and the boundaries of what they can and cannot do, clarity of intent and how that’s communicated and cascaded through the agents, and the critical role of trust and appropriate degrees of trust in the systems. This means we have to define the different characteristics of the task. The paper goes through quite a few characteristics, and some of the critical ones were: the degree of uncertainty around the task—obviously, if a task is very clear it can be appropriately delegated, but many tasks and problems are uncertain, and that creates a different dynamic; whether the task is verifiable, so that you know you have high-quality information; whether decisions are reversible; and the degree of subjectivity, because not everything is data-driven. Assessing these task characteristics starts to define where human judgment plays a role, how you create those checks, and how you build that. So intelligent delegation is not just how the humans delegate, but in turn the structure of how that cascades down through the agents. This requires dynamic assessment: you’re not just setting and forgetting, you are continuously reassessing what is happening with the context, what is changing in the stakes, and any uncertainty. So you come back to ensure there’s not just a single delegation structure; you change it over time, continue to adapt as you’re executing, and are able to monitor, replan, and adjust. Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on in those structures. And you need to be able to scale how you are coordinating the systems: if it’s just small scale that’s fine, but you want to build something that can work across many agents. 
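To make the delegation principles just described concrete, here is a minimal sketch of how assessed task characteristics—uncertainty, verifiability, reversibility, subjectivity—might map to a delegation level. The names, thresholds, and levels are illustrative assumptions of mine, not taken from the DeepMind paper.

```python
# Hypothetical sketch: map task characteristics to a coarse delegation level.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    uncertainty: float   # 0.0 (fully specified) to 1.0 (highly ambiguous)
    verifiable: bool     # can the output be checked against reliable information?
    reversible: bool     # can the decision be undone cheaply?
    subjectivity: float  # 0.0 (data-driven) to 1.0 (pure judgment call)

def delegation_level(task: Task) -> str:
    """Return a delegation level; thresholds are illustrative, not calibrated."""
    risk = task.uncertainty + task.subjectivity
    if not task.reversible and not task.verifiable:
        return "human-decides"      # irreversible and unverifiable: humans keep the decision
    if risk > 1.0:
        return "human-in-the-loop"  # AI proposes, a human approves
    if risk > 0.5:
        return "ai-with-audit"      # AI acts, every decision is logged for review
    return "ai-autonomous"          # clear, verifiable, reversible: safe to delegate

print(delegation_level(Task("rename report files", 0.1, True, True, 0.0)))   # ai-autonomous
print(delegation_level(Task("approve acquisition", 0.8, False, False, 0.9))) # human-decides
```

The design point such a rule illustrates: irreversible, unverifiable decisions stay with humans no matter how routine they appear, and everything the AI does autonomously remains auditable.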
Scaling in this way also requires being able to discover which agents are most appropriate and to establish the delegation of a particular task to them, again on a dynamic basis. And there is a final principle of systemic resilience, where you have to expect that things will go wrong: continuous monitoring, understanding that these systems can be attacked in various ways, and being able to recover. So, a very solid paper, quite deep, but giving some very good principles for how we can delegate to AI systems. The final of the three papers goes to a bit of a higher level. It’s called Agentic Interactions, and it’s from Alex Imas, Sanjog Misra of the University of Chicago, and Kevin Lee at the University of Michigan. What they’re looking at is what happens on a macro scale when decisions are increasingly delegated to AI agents. This is the agent economy that I’ve been talking about for a very long time, which is now very much coming to the fore. What they do is look at what happens when we start to delegate more and more economic decisions, such as buying and selling decisions. What they found is extraordinarily interesting. The AI agents do in fact behave very similarly to their human principals. You can observe differences in the agents from which you can infer the gender and the personality of the person delegating to the agent. Even though the agent is given no such information—it doesn’t even know the gender or the personality of its principal—these traits actually flow through. So agents represent us in the market, as it were, potentially very accurately. This goes directly to the second point, the idea of machine fluency. AI fluency is very much a term in vogue at the moment; the authors talk about machine fluency as how well a user can convey their intent to the agent so that the agent is aligned with them. They found very significant differences in this, and those people who are better at getting their agents to express their wishes could in fact amplify their economic outcomes. Related to that, they showed a correlation: higher educational levels meant you were able to delegate better to AI, and your AI agents performed better and gave you better returns. So again, we’re starting to see the potential for aggravation of differences in the agentic economy, when the agents who act for us in the economy start to reflect, among other things, educational differences or differing capabilities in how we express our goals and intentions through AI. There was one very interesting and, I suppose, counterintuitive result. Women get better outcomes in negotiation when using AI agents than they do in human-to-human interactions. Again, this is without the AI agents knowing whether they are representing a woman or not. What this shows is that, in terms of machine fluency, the way women instruct and put their intent into AI agents was, in this study, superior to that of men. In the real world there is, of course, unfortunately a bias towards male performance in negotiation, and that was inverted in the study. So, exceptionally interesting. 
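Circling back to the delegation paper’s point about discovering which agents are most appropriate and re-delegating dynamically, here is a hypothetical sketch of that matching loop. The agent registry, capability scores, and scoring rule are all illustrative assumptions of mine, not details from the paper.

```python
# Hypothetical sketch: pick the best-fitting agent for a task, and re-delegate
# to the next candidate if execution fails (a crude form of systemic resilience).

AGENTS = {
    "researcher": {"search": 0.9, "negotiation": 0.2},
    "negotiator": {"search": 0.3, "negotiation": 0.9},
    "generalist": {"search": 0.6, "negotiation": 0.6},
}

def match_score(capabilities: dict, needs: dict) -> float:
    # Weighted overlap between what the task needs and what the agent offers.
    return sum(capabilities.get(skill, 0.0) * weight for skill, weight in needs.items())

def delegate(needs: dict, execute) -> str:
    # Try agents in descending order of fit; move on when one fails.
    ranked = sorted(AGENTS, key=lambda name: match_score(AGENTS[name], needs), reverse=True)
    for name in ranked:
        if execute(name):
            return name
    raise RuntimeError("no agent could complete the task")

# Usage: a stub executor that 'fails' the top candidate to show re-delegation.
winner = delegate({"negotiation": 1.0}, execute=lambda name: name != "negotiator")
print(winner)  # generalist: the next-best fit after the top-ranked negotiator fails
```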
So, pulling back to some of the common themes of these three papers: we are increasingly in a world where humans have relationships with agents. We are starting to work with them in teams and systems. And we’re starting to build economies where humans are represented by agents. Essentially, our relationship to those agents and our ability to delegate effectively drives value, of course to the individual, but also across these agentic systems that are emerging. This is still early days; the realities of these human-agent systems are nascent at this point. But this starts to point to some of the potential, some of the challenges, some of the opportunities, and some of the work that we have to do. So I will be sharing more on these kinds of topics in my interviews with people and also of course on the Humans Plus AI website. So just go to humansplus.ai. Actually, to be frank, it hasn’t been updated a lot recently, but we will be sharing a lot more there. LinkedIn is actually where I share the most, and I’m getting back on Twitter as well if you’re interested. But I’ll be diving deep and trying to share what I find useful as well as interesting in helping us to create a world where humans come first and AI complements us. The reality is we are moving to humans plus AI systems, and if we design them well, with the right intentions, we can absolutely make this a world which drives human value first. So glad to have you on the journey. Have a wonderful rest of your day. The post Ross Dawson on Humans + AI Agentic Systems (AC Ep34) appeared first on Humans + AI.

    Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

    Play Episode Listen Later Feb 25, 2026 35:46


    “In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna About Davide Dell'Anna Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Website: davidedellanna.com LinkedIn Profile: Davide Dell'Anna University Profile: Davide Dell'Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. 
But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The types of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. That’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? 
What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we did was study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You very often see in human-animal interaction two species—basically two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of a common goal. There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams. 
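As a toy illustration of applying a Team-Diagnostic-Survey-style instrument to a hybrid team, the sketch below averages member ratings per dimension and flags the weakest one. The dimension names and the 1-5 ratings are invented for illustration; they are not the instrument’s actual items.

```python
# Hypothetical sketch: score a hybrid team on survey-style dimensions.
from statistics import mean

# Ratings on a 1-5 scale; AI members could be rated by their human teammates.
survey = {
    "interdependence":     [4, 5, 4, 3],
    "clear_mission":       [5, 5, 4, 5],
    "team_norms":          [3, 2, 3, 3],
    "member_satisfaction": [4, 3, 4, 4],
}

scores = {dimension: mean(ratings) for dimension, ratings in survey.items()}
weakest = min(scores, key=scores.get)
print(scores)
print(f"Focus area for this hybrid team: {weakest}")  # team_norms
```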
Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hussain Johari, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things looked at is the evaluation of human-AI teams. 
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. AI agents, if they are well designed, can learn by themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better-performing team, but it does require both the right attitudes from the humans and well-designed agents. Davide: Related to this—if I can interrupt you—I think it is very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values. 
We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary. This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society. Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members? Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols. 
These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with information needed about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together. Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research? Davide: Yeah, sure. You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master’s students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful. Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there are not enough people yet focusing on this area. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights. Davide: Thank you so much, Ross. Pleasure to meet you. The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

    Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32)

    Play Episode Listen Later Feb 18, 2026 38:18


    “You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar About Felipe Csaszar Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy. Website: papers.ssrn.com LinkedIn Profile: Felipe Csaszar University Profile: Felipe Csaszar What you will learn How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation. The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities. The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation. How AI assists in both visualizing strategists' mental models and expanding the complexity of strategic frameworks. Experimental findings showing AI's ability to generate and evaluate business strategies, often matching or outperforming humans. Emerging best practices and challenges in human-AI collaboration for more effective strategy processes. The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning. Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making. Episode Resources Transcript Ross Dawson: Felipe, it’s a delight to have you on the show. Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me. Ross Dawson: So many, many interesting things for us to dive into. But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving. Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something. 
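A minimal sketch of the aggregation idea Felipe describes—posing the same question to several virtual ‘board members’ and combining their votes by majority rule. The `ask` function is a stub standing in for an LLM call; the personas and the aggregation rule are illustrative assumptions, not the method of any specific study.

```python
# Hypothetical sketch: aggregate the votes of virtual personas by majority rule.
from collections import Counter

PERSONAS = [
    "You are a CFO focused on cash flow and risk.",
    "You are a CTO focused on technical feasibility.",
    "You are a CMO focused on market positioning.",
]

def ask(persona: str, question: str) -> str:
    # Stub standing in for an LLM API call prompted with the persona.
    return "reject" if "CFO" in persona else "approve"

def board_vote(question: str) -> str:
    votes = Counter(ask(persona, question) for persona in PERSONAS)
    return votes.most_common(1)[0][0]  # majority rule

print(board_vote("Should we run the bank like a supermarket?"))  # approve (2-1)
```

As Felipe notes, majority rule is only the simplest aggregation; in practice you might weight personas, take parts of each answer, or have the personas debate before voting.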
So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people. There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search. Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was to ask how good a bank would be if it were run like a supermarket. That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Mr. Merrill, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change the prompt, and immediately you will have the problem looked at from a different representation. And then the last one is aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy. Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. There are many ways of framing strategy and the strategy process, and that one is, I think, quite distinctive and very, very insightful, because it goes to the cognitive aspect of strategy. There’s a lot to dig into there, but I’d like to start with the representation. I think of it as the mental models, and you can have implicit mental models and explicit mental models, and also individual mental models and collective mental models, which goes to the aggregation piece. But when you talk about representation, to what degree—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, classic two by twos. There are also the mental models which were classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it? Felipe Csaszar: I think it’s important to distinguish—again, it’s three different things. There are three different types of representations. There are the internal representations: how people think in their minds about a given problem, and that usually people learn through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective. Part of the internal representations you can also learn at school, like the typical frameworks. Then there are external representations—things that are outside our mind that help us make decisions. 
In strategy, essentially everything that we teach is an external representation. The most famous one is called Porter’s Five Forces, and it’s a way of thinking about what affects the attractiveness of an industry in terms of five different things. This is useful to have as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five. External representations help you to expand the working memory that you have to think about strategy. Visuals in general, in strategy, are typical external representations. They play a very important role also because strategy usually involves multiple people, so you want everybody to be on the same page. A great way of doing that is by having a visual so that we all see the same thing. So we have internal—what’s in your mind; external—what you can draw, essentially, in strategy. And then there are distributed representations, where multiple people—and now, with AI, artifacts and software—share the whole representation among them, each holding parts of the representation. Then you need to aggregate those parts—partial representations; some of them can be internal, some of them are external, but they are aggregated in a given way. So representations are really core in strategic decision making. All strategic decisions come from a given set of representations. Ross Dawson: Yeah, that’s fantastic. So looking at—so again, so much to dive into—but thinking about the visual representations, again, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on. So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways you’ve seen AI be useful in that context? Felipe Csaszar: So I was very intrigued—as soon as LLMs became popular—ChatGPT was launched in November 2022—I started thinking there are so many ways in which this could be used. So my two co-authors, Hyunjin Kim and Harsh Ketkar, and I wrote a paper, one of the initial papers on how AI can be used in strategy. It’s published in Strategy Science, and in that paper, we explore many ways in which AI could be used in strategy. Of course, you can ask AI to come up with answers to questions that you may have. You can also use AI to use any of these frameworks that have been developed in strategy. It was very clear to us that it was usable. Then the question was, how good are those uses? What’s the quality of current AI at this type of task? So what we did is an experiment where we compared the performance of AI to the performance of humans. In strategy, there are two types of tasks: one is to generate alternatives, and the other is to select alternatives. You have a problem—the first thing you want to do is have possible solutions, and then you want to be able to pick the best out of those. So we had two experiments: one where we measured the ability of AI to generate alternatives, another to select. 
For generation, what we did is we got data from a business plan competition where people were applying with business plans that all had the same format. The important thing is that the first paragraph of that application had the problem—a problem that they thought was important. So we took all of those applications and removed everything except for the problem, and then we gave that problem to an AI and asked the AI, “Hey, complete the rest of the business plan.” So now we have business plans that are real, and the AI twins of those—business plans created by an AI that try to solve the same problem. Then we put both in a kind of business plan competition, where we had people with experience in investments ranking all of these business plans, and they didn’t know which ones were created by humans and which ones were created by AIs. We looked at their evaluations at the end of the day, and on average, the ones that were generated by the AI were ranked a little bit higher—7% higher—than the ones that were generated by humans. So at least in this very specific context of business plan competitions, there’s potential. We’re saying, hey, AI could generate things at a level that is comparable to the people applying to this type of business plan competition. That has a lot of potential. We could use it in different ways. The other part of this study was to measure the ability of AI to select strategies among strategies. There, what we did is use data from another business plan competition, where all of the business plans had been evaluated by venture capitalists according to 10 dimensions: how strong is the idea, how strong is the team, how strong is the technology, etc. Then we gave an AI the same rubric that the venture capitalists received and asked the AI to rank or grade each one of these startups according to these 10 dimensions. Then we compared how similar the evaluations of the LLM were to the evaluations of the venture capitalists, and we showed that they are quite similar—there’s a correlation of 52%. This, again, tells us that there is potential here. An AI could do things that are quite similar to an experienced human evaluating this type of startup. A very interesting result there is that the correlation between two venture capitalists is lower than that 52%. So if you want to predict what a venture capitalist is going to say about your business, you’re better off asking an LLM than asking another venture capitalist. Ross Dawson: Yes, which perhaps shows the broad distribution of VC opinions. So obviously, LLMs can play valuable roles in many aspects of the strategy process, but this brings us back to the humans plus AI role. There are many—again, a big topic—but rather than looking at them, comparing what humans and AI did, where do you see the primary opportunities for humans and AI to collaborate in the strategy process? Felipe Csaszar: Yeah, yeah. So I think that’s a fascinating question, and my guess is that the study of the strategy process will completely change in the next 10 to 20 years. So far, all of the strategy process has been to study what happens when you have multiple people making strategy decisions. In the past, we studied things like devil’s advocate, or we have studied the role of changing the size of the group of people making decisions, or the consensus level required. But in the future, there will be AIs in this process that will have completely different bounds or capacities than humans. 
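Returning to the selection experiment just described, here is a minimal sketch of the comparison it reports: score each startup twice—once by an LLM and once by a human evaluator, each against the same rubric—and compute the correlation between the two sets of scores. The numbers below are made up for illustration; only the method is implied by the study.

```python
# Hypothetical sketch: correlate LLM rubric scores with human evaluator scores.
from statistics import correlation  # Pearson's r; available in Python 3.10+

llm_scores = [7.1, 5.4, 8.2, 6.0, 4.8, 7.6]  # one aggregate score per startup
vc_scores  = [6.8, 5.9, 7.9, 5.2, 5.5, 7.1]  # the same startups, human-scored

r = correlation(llm_scores, vc_scores)
print(f"LLM-vs-VC Pearson correlation: {r:.2f}")
```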
So we will need to learn what’s the best way of collaborating with them and including them in the strategic decision-making process. Today, we don’t know much about it. We are beginning to learn things, like the study I mentioned—hey, in this task, it seems to be better—but there’s so much that we need to learn. I am working on some things, but it’s still early. Ross Dawson: Going back to the distributed representation—this is something where, of course, distributed representation can be spread across multiple people. Arguably, it can include human and AI agents, each holding different representations. But this goes, of course, to the aggregation piece, where the aggregation is—you have a board of directors, a group of executives, potentially a participative strategy process bringing more people in across the organization. What are the specific roles of AI in assisting or facilitating effective aggregation to form a cohesive strategy? Felipe Csaszar: Yeah, so the truth is, we don’t yet know. There’s not enough research. We’re starting to think about it. We can see many uses, and I think what people should be doing now is running experiments to see when those add value and when they don’t. It will be different for different companies in different industries, so probably there’s no one solution that’s the same for everybody. For example, one possible use in strategic decision making is predicting what your competitors would do. If I do this, what would be the most likely reaction of my competitor? That’s one. Another one is predicting consumers: if I launch this product with this set of characteristics, what would be the most likely response of my consumers? In strategy, something that has been very popular for the last 20 years is something called the Blue Ocean Strategy, which is a method to come up with new offerings, with new value propositions, but that requires a lot of creativity. With AI, you can automate part of that. At the end of the day, it’s a search process. You have to think about what would happen if I add this, or if I add this other thing, or if I increase this. Part of that can be automated—that would be another use. Or if you have different proposals—in this other study, we show, hey, AI is good at evaluating, so if you have the right rubric, this can automate the evaluation, or can automate the first part of that evaluation so that you only have to spend your time on the really complicated, more sophisticated decisions or alternatives. There are many, many things that can be done at this point. Ross Dawson: Which goes to, I think, one of the interesting points in your work—representational complexity. Some strategies are arguably simple; others you could call more sophisticated, but they are more complex—their representational complexity is greater. There are two things that are required for that. One is, of course, sophisticated thinking, but also, because strategy in any organization involves multiple people, it requires that there is an ability for a number of people together to hold a hopefully similar or very similar representation of a quite complex topic. What are ways in which AI can be used to enhance that development of more sophisticated or nuanced or complex representations that can support a better strategy? Felipe Csaszar: So that’s a great point. I have a paper from before this new round of AI called exactly that—representation complexity. 
There has been a long-standing discussion in strategy about whether it’s better to use a simple representation, a complex representation, or something in between. We tried to clarify when each one of these applies. But then came this new round of AI, and I think it changes things a lot. I talk a little bit about this in a chapter I uploaded recently—it’s called “Unbounding Rationality.” The key thing there is that humans—we have our own computer here, it’s the brain, and the brain has some constraints. One very important constraint for strategy is the capacity of our working memory. There’s this famous paper from the 1950s called “The Magical Number Seven,” which says we can hold seven plus or minus two items in our working memory—so between five and nine things we can keep in our mind at the same time. That’s why, for example, I think all strategy frameworks are very simple. The five forces fit within our working memory; the typical two by twos have four quadrants—they fit within our working memory. But AIs don’t have that bound. They are not constrained by the same working memory limit that we have. So I would expect that future frameworks will be much more complex, that representational complexity will increase because of AI. Of course, frameworks of the future won’t have a million things, because when you put too many things, you’re overfitting—it works well with things that happened in the past, but not in the future—but they will probably have more than five things. Also, another reason for not having a million things inside a framework is that at the end of the day, you will still need to communicate frameworks. You will need to convince the other people in the organization, the ones that are implementing the strategy, that this is the right strategy. You will need to convince them, so you don’t want to have something that’s extremely complex. But my guess would be that the complexity of frameworks and of strategies will increase with AI. Ross Dawson: So looking forward—you talked about 10 or 20 years. If we see the current pace of capability development of LLMs on a similar trajectory, where do you see the remaining role of humans as a complement to AI in shaping strategy? I think you mentioned this possibility of essentially AI forming strategy, but I think for a wide array of reasons, it will be human plus AI—humans will play a role as final decision maker or other things. So where do you see those fundamental human capabilities still being retained for the foreseeable future, as a complement to AI in strategy? Felipe Csaszar: So I think that for the next 10, 20, maybe 30 years, humans will be really busy coming up with how to use AI—all of these experiments that we mentioned, people will be running all of those things in all different industries, and that takes a while. That will require human ingenuity and trying things and really understanding strategy and understanding the capabilities of AI. So I don’t see AI replacing human strategists in the very short term. On the contrary, because of AI, strategists will be busier finding the best ways of using AI in their businesses. I think 10, 20, or 30 years is very reasonable. If you think about the previous technological revolution, which I could say was the Internet—the technology for the Internet, we could say, existed since around ’94. The World Wide Web is from ’94, browsers are from ’94, and there was bandwidth enough to send email. 
Essentially all of the technology that supports internet business today was in place by the mid to late ’90s. But businesses, and people, ended up using all of those things 10 or 20 years after that, because it takes a long time for people, for strategists, to come up with the idea—for someone to come up with the idea of, let’s say, Netflix or eBay or PayPal or Facebook—all of those things take time for people to understand as doable. Then it takes time to implement. Then it takes time for users to say, “Hey, this is useful.” There’s a lot of adaptation, and then there will be regulation. So the whole process takes a long time. I don’t think that businesses will change from one day to the next. It will be a relatively slow process that will take decades. When we look back 20 years from now, we will see, “Hey, everything changed,” but every year we will see just a little bit of change, like what happened with the Internet. So I imagine that people designing strategies, implementing strategies, will be very busy in the next 20 years. Ross Dawson: So to round out, I won’t ask you to make predictions, but maybe some hypotheses. What do you think are some interesting hypotheses that will inform your research—not just next year, but in the years beyond? Where do you think the interesting avenues are that we should be not just exploring and researching, but where there is a valid and useful hypothesis? Felipe Csaszar: Yeah, so many things, but one very important one—I think that strategy will be more about making the right predictions. The role of foresight. It turns out that when you want to train a machine learning algorithm, you need to have some signal that informs how you train the system. It’s called the gradient, or the objective function. So in strategy, we will need to make that more central, and then think, what are the best ways in which you can use AI to make the right predictions? That requires measuring the quality of predictions. So you change this in the business, and this ends up happening—we want an AI to be able to do that. So coming up with ways in which you can measure the quality of decisions will become more important, so that we can train those AIs. That’s one. And very related to that: the thing that’s generating the predictions is representations, so it’s about coming up with those more complex representations that are better at making decisions or better at discovering things that are hard for humans to discover. Those are the two main things. I think the future of strategy will be about finding ways of improving foresight and finding ways of improving the thing that creates that foresight, which is the representations. All of that will change what has been called the strategy process—how we make decisions in strategy. Ross Dawson: So I just need to pick up on that point around prediction. One of the challenges with external predictions is that, as a strategist, you then have to say: either I will build my strategy based on that prediction, or I question that prediction, think there are alternatives, or attribute probabilities to it. So even if a prediction machine gets better, it’s still very challenging—particularly cognitively, and in terms of accountability—for the strategist to incorporate a prediction when they don’t necessarily have the logic behind it, as with a machine learning model. So how can a strategist incorporate what may be a relatively black box prediction into an effective strategy? 
Felipe Csaszar: Yeah, well, and here we are in the conjecture part of this interview. So my answer is in that spirit. I think there are two ways out of this. One is that we will ask for explainable predictions. There’s a whole area of AI called Explainable AI, which is exactly trying to do this—not just say what’s the best prediction, but why the AI is saying that’s the right prediction. So that could develop, and probably that will develop, because humans will question whatever the AI will predict. That’s one way. The other is, imagine that the AI becomes very, very, very good at making predictions. Then at some point, it doesn’t matter if it can explain it or not—it’s just making very good predictions. It’s like, imagine you want to win at chess and you have this machine that can play chess very well. This machine wins at chess. You don’t need to exactly understand how that machine is making each one of those decisions. But if the machine is very good at it, and it’s consistently good at it, people will use it. In a sense, the market will decide. If this works better than a machine that provides an explanation for each one of the steps, people will just go with the one that’s making the right prediction. Ross Dawson: I think there’s all sorts of other places we can go to from there, but that’s fascinating. So where can people go to find out more about your work? Felipe Csaszar: Well, I upload all of my stuff to SSRN. So if you Google my name and SSRN, you will find all of my papers. In the near future, like in the next three months or so, I’ll have two things coming out. One is a Handbook of AI, written also with my co-editor Nan Jia from USC, that will have 20 chapters that will explore different ways in which AI will be affecting strategies—the Handbook of AI and Strategy, published by Elgar. And then around that same time, there will be a special issue of the Strategy Science journal where I’m one of the co-editors, which will be exactly about the same—about AI and strategic decision making. We already have accepted several of the papers for that special issue. Those papers will be pushing the frontier of what we know about AI and strategic decision making. Ross Dawson: That’s fantastic. I will certainly be following your work—very highly aligned with the humans plus AI movement. And thank you for all of the wonderful research and work you’re doing. Felipe Csaszar: Thank you so much, Ross. It’s been a pleasure. The post Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32) appeared first on Humans + AI.

    Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills (AC Ep31)

    Play Episode Listen Later Feb 11, 2026 38:05



    Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30)

    Play Episode Listen Later Jan 30, 2026 36:07



    Nikki Barua on reinvention, reframing problems, identity shifts for AI adoption, and the future workforce (AC Ep29)

    Play Episode Listen Later Jan 22, 2026 36:15



    Alexandra Samuel on her personal AI coach Viv, simulated personalities, catalyzing insights, and strengthening social interactions (AC Ep28)

    Play Episode Listen Later Jan 14, 2026 50:43



    Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI's impact on consulting (AC Ep27)

    Play Episode Listen Later Dec 17, 2025 37:18



    Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)

    Play Episode Listen Later Dec 10, 2025 37:20



    Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)

    Play Episode Listen Later Dec 3, 2025 37:23



    Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24)

    Play Episode Listen Later Nov 26, 2025 39:53



    Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23)

    Play Episode Listen Later Nov 19, 2025 40:08



    Sue Keay on prioritizing experimentation, new governance styles, sovereign AI, and the treasure of national data sets (AC Ep22)

    Play Episode Listen Later Nov 12, 2025 39:16



    Dominique Turcq on strategy stakeholders, AI for board critical thinking, ecology of mind, and amplifying cognition (AC Ep21)

    Play Episode Listen Later Nov 6, 2025 39:04



    Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift's pens (AC Ep20)

    Play Episode Listen Later Oct 29, 2025 35:15



    Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19)

    Play Episode Listen Later Oct 22, 2025 16:46



    Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18)

    Play Episode Listen Later Sep 10, 2025 36:30



    Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)

    Play Episode Listen Later Sep 3, 2025 39:49



    Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16)

    Play Episode Listen Later Aug 27, 2025 31:08



    Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15)

    Play Episode Listen Later Aug 20, 2025 42:07



    Paula Goldman on trust patterns, intentional orchestration, enhancing human connection, and humans at the helm (AC Ep14)

    Play Episode Listen Later Aug 13, 2025 34:24



    Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13)

    Play Episode Listen Later Aug 6, 2025 47:56



    Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12)

    Play Episode Listen Later Jul 30, 2025 39:17



    Tim O'Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11)

    Play Episode Listen Later Jul 23, 2025 41:00



    Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)

    Play Episode Listen Later Jul 16, 2025



    AI & The Future of Strategy (AC Ep9)

    Play Episode Listen Later Jul 9, 2025 12:29



    Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)

    Play Episode Listen Later Jun 25, 2025 34:28



    Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7)

    Play Episode Listen Later Jun 18, 2025 34:02



    Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6)

    Play Episode Listen Later Jun 4, 2025 34:21



    Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)

    Play Episode Listen Later May 28, 2025 35:56



    Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)

    Play Episode Listen Later May 21, 2025 26:14



    Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3)

    Play Episode Listen Later May 14, 2025 37:04



    Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)

    Play Episode Listen Later May 7, 2025 33:24



    HAI Launch episode

    Play Episode Listen Later Apr 30, 2025 13:07



    Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86)

    Play Episode Listen Later Apr 23, 2025 33:55



    Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)

    Play Episode Listen Later Apr 16, 2025 40:09



    Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)

    Play Episode Listen Later Apr 9, 2025 34:50



    Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83)

    Play Episode Listen Later Apr 2, 2025 32:16



    Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)

    Play Episode Listen Later Mar 26, 2025 38:58



    Amplifying Foresight Compilation (AC Ep81)

    Play Episode Listen Later Mar 19, 2025 30:36



    AI for Strategy Compilation (AC Ep80)

    Play Episode Listen Later Mar 12, 2025 32:02



    Collective Intelligence Compilation (AC Ep79)

    Play Episode Listen Later Mar 5, 2025 31:22



    Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78)

    Play Episode Listen Later Feb 19, 2025 34:56



    Human AI Symbiosis Compilation (AC Ep77)

    Play Episode Listen Later Feb 12, 2025 26:09



    Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76)

    Play Episode Listen Later Feb 5, 2025 33:10



    Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75)

    Play Episode Listen Later Jan 29, 2025 34:10



    Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74)

    Play Episode Listen Later Dec 18, 2024 34:45


