Thriving on Overload


Leading futurist and entrepreneur Ross Dawson speaks to the world’s best at creating value from infinite information. His guests share the insights and practices that will help you to thrive in a rapidly accelerating world.

Ross Dawson


    • Latest episode: Mar 12, 2026
    • New episodes: every other week
    • Average duration: 35m
    • Episodes: 189



    Latest episodes from Thriving on Overload

    Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)

    Mar 12, 2026 · 36:05


“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

About Cornelia C. Walther

Cornelia C. Walther is a Senior Fellow at the Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is the author of many books; her latest, Artificial Intelligence for Inspired Action (AI4IA), is due out shortly. She previously worked for over 20 years as a humanitarian leader at the United Nations, driving social change globally.

Website: pozebeingchange
LinkedIn Profile: Cornelia C. Walther
University Profile: knowledge.wharton

What you will learn

    • How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
    • The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
    • The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
    • Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
    • What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
    • The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
    • Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
    • Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

Episode Resources

Transcript

Ross Dawson: Cornelia, it is fantastic to have you on the show. Cornelia Walther: Thank you for having me, Ross. Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best in us as much as possible.
One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is? Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage. Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map?
Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But being human, I would argue, it’s a very dangerous luxury to have. Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome? Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself. We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of pro social AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet. Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay. 
I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do there. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to see a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because absolutely, there is not just the potential but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase. So are there things we can do to move away from the widespread agency decay which we are in danger of? Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic intelligence, our AI.
That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because neither of them happens in a vacuum anymore. Ross: Absolutely. So what you described—experimentation, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, and so that experimentation leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame? Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though now it’s sometimes put in opposition to AI as the better one, is not perfect either. Probably you and most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our limited capacity for truly rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as we are imperfect, AI, in many different ways, is also imperfect. And finally, the accountability.
Because at the end of the day, no matter how powerful our tools are going to be, we as the human decision makers should consider ourselves accountable for the outcomes. Ross: Absolutely, that’s one of the points I make. We obviously can’t make machines accountable—ultimately, the accountability resides in humans. So we have to design systems accordingly, which I think provides a bit of a transition to prosocial AI. So what is prosocial AI, how do we build it, how do we deploy it, and how do we make it the center of AI development? Cornelia: Thank you for that. Prosocial AI, in a way, is very simple. It’s the intent that matters, but it starts from scratch, so you have the regenerative intent embedded into the algorithmic architecture. It has four key elements that can be measured, tracked, and can also serve to sensitize those who use it and those who design it—tailored, trained, tested, targeted. The prosocial AI index that I’ve been working on over the past months combines that with the quadruple bottom line: purpose, people, profit, planet. Now all of a sudden, rather than talking in an airy-fairy way about ethical AI—which is great and necessary, but I would argue is not enough—we can systematically think about how to harness AI as a catalyst of positive transformation that respects environmental dignity and seeks planetary health. How can we measure that? Ross: And so, what are we measuring? Are we measuring an AI system, or what is the assessment tool? What is it that is being assessed? Cornelia: It’s the how and the what for. For example, what data has been used? Is the data really representative? We know that the majority of AI tools are biased. And the other question is, it may be used for efficiency and effectiveness, but to what end? Ross: Yes, as we are seeing in current conversations around the use of models at Anthropic and OpenAI, there are tools, and there are questions around how they are used, not just what the tools are.
Cornelia: Yes, so again, it comes back to the need for awareness and for hybrid intelligence, because at the end of the day, we can’t rely on companies whose purpose is to make money to deliver systems that serve people and planet first and foremost. Ross: This goes on to another one of your wonderful framings, which is AI for IA—AI for inspired action—around this idea of how we amplify humans and humanity. Of course, this connects to everything we’ve been discussing so far. But I think one of the things which is very useful there is AI, in a way, leading to humans taking action that is inspired by envisaging what is possible. So, how can we inspire positive action by people in the framing we’ve discussed? Cornelia: AI for IA is the title of the new book that’s coming out next month. But also, as with most of the things I’m saying, it’s not about the technology—it’s about the human being. We can’t expect the technology of tomorrow to be better than the humans of today. As I said before, garbage in, garbage out, or values in, values out—it’s so simple and it’s so uncomfortable, it’s so cumbersome, right? Because we like quick fixes. But unfortunately, AI or technology in general is not going to save us from ourselves, and as it is right now, we’re squarely on track to repeat the mistakes made during the first, second, and third industrial revolutions, where technology and innovation were driven primarily by commercial intent. Now, I would argue that this time around, we can’t leave it at that, because this fourth industrial revolution has such a strong impact on the way we think, feel, and interact, that we need to start in our very own little courtyard and think: what kind of me do I want to see amplified? Ross: Yes, yes. I’ve always thought that if AI amplifies us, or technology generally amplifies us, we will discover who we are, because the more we are amplified, the more we see ourselves writ large.
But we have choices around, as you say, what aspects of who we are as individuals and as a society we can amplify. That’s the critical choice. So the question is, how do we bring awareness to your word around what it is about us that we want to amplify, and how do we then selectively amplify that, rather than also amplify the negative aspects of humanity? Cornelia: The first thing, and that’s a simple one, is the A frame. I would argue that’s something everyone can integrate in their daily routine in a very simple way, to remind us of the four A’s: awareness, appreciation, acceptance, accountability. The other one, at the institutional level, is the integration of double literacy. Right now, there’s a lot of hype in schools and at the governmental level about AI literacy and digital literacy. I think that’s only half of the equation. This is now an opportunity to take a step back and finally address this gap that has characterized education systems for many decades, where thinking and thinking about thinking—metacognition—is not taught in schools. Systems thinking, understanding cognitive biases, understanding interplays—now is the time to learn about that. If the future will be populated by humans that interact with artificial counterparts configured to address and exploit every single one of our human Achilles heels, then we would be better advised to know those Achilles heels. So, I think these are two relatively simple ways moving forward that could take us to a better place. Ross: So this goes to one of your other books on human leadership for humane technology. So leadership of course, everyone is a leader in who they touch. We also have more formal leaders of organizations, nations, political parties, NGOs, and so on. But just taking this into a business context, there are many leaders now of organizations trying to transform their organizations because they understand that the world is different, and they need to be a different organization. 
They still need to make money to pay for their staff and what they are doing to develop the organization, but they have multiple purposes and multiple stakeholders. So, just thinking from an organizational leader perspective, what does human leadership for humane technology mean? What does that look like? What are the behaviors? What are the ways we can see that would show us? Cornelia: I think first, it’s a reframing away from this very narrow scope of return on investment, which has characterized the business scene for many decades, and looking at return on values. What is the bigger picture that we are actually part of and shaping here? What’s the why at the end of the day? I think that matters for leaders who are in their place to guide others, and guidance is not just telling people what they have to do, but also inspiring them to want to do it. Inspiration, at the end of the day, is something that comes from the inside out, because you see in the other person something that you would like in yourself. Power and money are not it—it’s vision. I think this is maybe the one thing that is right now missing. We all tend to see the opportunity, but then we go with what everybody else is doing, because we don’t really take the time to step back and think, well, there is the path of everyone, and there’s another one—how should I explore that one? Especially amidst AI, where just upscaling your company with additional tools is not really going to set you apart, it matters twice as much to not just think about how do I do more of the same with less investment and faster, but what makes me unique, and how can I now use the artificial treasure chests to amplify that? Ross: Yes, yes. I think purpose is now well recognized beyond the business agenda. One of the critical aspects is that it attracts the most talented people, but also, over the years, we’ve had more and more opportunities to be different as an organization. 
Back in the late ’90s and so on, organizations looked more and more the same. Now there are more and more opportunities to be different. The way in which AI and other technologies are brought into organizations gives an extraordinary array of possibilities to be unique, as you’ve described, and distinctive, which gives you a competitive position as well as being able to attract people who are aligned with your purpose. Cornelia: Yes, exactly. But for that, you need to know your purpose first. Ross: From everything we’ve just been talking about, or anything else, are there any examples of organizations or initiatives that you think are exemplars or support the way in which, or show how, we could be approaching this well? Cornelia: I think—this will now sound very biased—but I’m currently working with Sunway University, and I think they are the kind of academic institution that is showing a different path, seeking to leverage technology to be more sustainable, bringing in dimensions such as planetary health, like the Sunway Centre for Planetary Health, and thinking about business in a re-envisioned way, with the Institute for Global Strategy and Competitiveness. I think there are examples at the institutional level, there are examples at the individual level, and sometimes the most inspiring individuals are not those that make the headlines. That’s maybe, sorry, just on that, for me the most important takeaway: no matter which place one is in the social food chain, the essential thing is, who are you and how can you inspire the person next to you to make it a better day, to make it a better future. Ross: Yes, in fact, that word “inspired,” as you mentioned before. So that’s Sunway University in Malaysia? Cornelia: I think they are definitely a very, very good illustration of that. Ross: Just pulling this back to the global frame, and this gets quite macro, but I think it is very important. 
It pulls together some of the things we’ve pointed to—the differences between the approaches of the United States, China, and Europe, which are essentially the leaders in AI, and how they’re going about it. But in the global south more generally, I think there are some interesting things. Arguably, there’s a far more positive attitude generally in the populations, a sense of the opportunity to transform themselves, but of course a very different orientation in how they want to use and apply AI in creating value for individuals, nations, and society. So how would you frame those four—the US, China, Europe, and the global south—and how they are, or could be, approaching the development of AI? Cornelia: Thank you for that. I think right now there are three mainstream patterns—and I’m overly simplifying, and aware of that: the US path, which is business above all; the European model, which is regulation above all; and the Chinese model, which is state dominance. I would argue there’s a fourth path, and I think that’s where leaders in the global south can step in. You might know I’m working, on the one hand, in Malaysia and, on the other hand, in Morocco, on the development of a sort of national blueprint of what prosocial AI can look like. I think now is the time—again, coming back to leadership—to think about how countries can walk a different path and be pioneers in a field that, yes, has been around for various decades, but the latest wave that has been engulfing society since November 2022 is still relatively new. So why not have nations in the global south that are very different from the West chart their own path and make it prosocial, pro-people, pro-planet, and pro-potential—the potential that they have themselves, which sets them apart and makes them unique. Ross: Absolutely. Again, you mentioned Malaysia, Morocco. Looking around the world, of course, India is prominent.
There are some African nations which have done some very interesting things. Just trying to think, where are other examples of these kinds of domestically born prosocial initiatives happening? Of course, the Middle East—it’s quite different, because they’re wealthy, though they’re not among the major leaders, but there’s a whole array of different examples. Where would you point to as things which show how we could be using prosocial AI at a national or regional level? Cornelia: Unfortunately, right now, there is not one country where one could say they have taken it from A to Z, but I think there are very inspiring or positive examples. For example, Vietnam was the first country in ASEAN to endorse a law on AI ethics and regulation—I think that’s a very good one. Also, ASEAN has guidelines on ethics. All of these are points of departure. Switzerland provided a very nice example of what public AI can look like. So there are a lot of very good examples. The question is not so much about what to do, I think, but how to do it, and why. At the end of the day, it’s really that simple. What’s the intent behind it? What do we want the post-2030 agenda to look like? We know that the SDGs—the Sustainable Development Goals—are not going to be fulfilled between now and 2030. So are we learning from these lessons, or are we following the same track of doing more of the same and maybe throwing in a couple of additional indicators, or can we really take a step back and look ourselves and the world in the face and think, what have we missed? Now, frame it however you want, but think about hybrid development goals and ways in which means and ends—society and business—come together into a more holistic equation that respects planetary health. Because at the end of the day, our survival still depends on the survival and flourishing of planet Earth, and some might cherish the idea of emigrating to Mars, but I still think that overall the majority of us would prefer to stay here.
Ross: Yes, planet Earth is beautiful, and it’d be nice to keep it that way. How can people find more about your work? Could you just tell people about your new book and any resources where people can find out more? Cornelia: Thank you so much. They are very welcome to reach out via LinkedIn. Also, I’m writing regularly on Psychology Today, on Knowledge at Wharton, and various other platforms. The new book that you mentioned is coming out next month, and there will be another one, hopefully by the end of the year. Overall, feel free to reach out. I really feel that the more people get into this different trend of thinking, the better. But thank you so much for the opportunity. Ross: Thanks so much for all of your work, Cornelia. It’s very important. The post Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35) appeared first on Humans + AI.

    Ross Dawson on Humans + AI Agentic Systems (AC Ep34)

    Mar 4, 2026 · 19:12


“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson

About Ross Dawson

Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and founder of the Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of five books, most recently Thriving on Overload.

LinkedIn Profile: Ross Dawson

What you will learn

    • How human-AI teams outperform human-only teams in productivity and efficiency
    • The crucial role of understanding AI strengths and limitations when designing collaborative workflows
    • Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
    • Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
    • Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
    • How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
    • The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
    • Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

Episode Resources

Transcript

Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to share a bit of an update and then share insights from three recent research papers that dig into something I think is exceptionally important: how humans work with AI agentic systems.
And we’ll look at a few different layers of that, from how small humans-plus-agent teams work, through how we can delegate decisions to AI, to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s pretty hard even to see what the end of this year is going to look like. So for me, I am doing my client work as usual. I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, plus a few industry-specific ones in financial services and elsewhere. I’m also doing some work as an advisor on AI transformation programs, helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework: looking at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively. But the rest of my time is focused on three ventures, and I’ll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI-for-strategy app. This is really about building a way in which we can capture the detailed nuance of the strategic thinking of the leaders of an organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options and strategic hypotheses, and to evolve effectively. That’ll be in beta soon (please reach out if you’re interested in being part of the beta program) and then go to market. So I’m deeply involved in that. We also have our Thought Weaver software, rebuilding previous software built around AI-augmented thinking workflows. That’s more of an individual tool, and it will be going into beta in the next weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, or stay posted for updates on that.
And also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better. But the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? It creates a whole different series of dynamics and new skills and capabilities. It really calls for learning how to participate in, and how to lead, humans plus AI teams. That is again going into the first few test organizations in the next month or so. So again, just let me know. So today what we’re going to look at is this theme: teams of humans working with AI agents. Not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, and also agentic systems where these agents are interacting with each other as well as with humans. There are three papers which I want to talk about, just to give you a quick overview, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes. First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. This was an experiment with over 2,300 participants who were working on creating advertisements. They had a whole array of teams, human-human and human-AI, quite small, often just duos, working on creating those ads, which were then assessed in terms of quality and how the teams worked. A few particularly interesting findings came from that. Individually, having a human-AI team enhanced performance significantly compared to human-only teams: they were able to move faster and to complete more of their tasks, and the quality was strong.
But there’s a phrase which is commonly used, the jagged frontier of AI capability, and it was quite clear that there were some domains where AI did very well and others where it didn’t. So the design of the tasks, the design of the human-AI systems, and also the human users’ understanding of what AI is and isn’t good at are fundamental. In some domains, such as image quality, using AI actually decreased quality. So we need to understand where and how to apply AI along this jagged frontier and design the systems around that. This changes the role of the humans, of course. Humans then tend to delegate more. One of the things they tested for is how people behave differently when they know their teammate is an AI, as opposed to not knowing whether it is a human or an AI. And it changes. They become more task-oriented. They use fewer social cues to interact, and they essentially become more efficient. But some of the social cues which are valuable in human-human collaboration started to disappear. And this process meant that there was not, in the end, as much creative diversity. Now, I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture: does the AI generate the initial ideas, which are then filtered by humans, or where else does it sit in that process? But in this particular structure, they found that humans plus AI teams started to create more and more similar outputs. This homogenization of outputs in human-AI teams was very notable and significant. So this again creates a design factor for how we build human-AI systems that do not lead to homogeneous output, making sure that human diversity is maintained.
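As an illustrative aside, this kind of output homogenization can be quantified as the average pairwise similarity across a set of team outputs. The sketch below is a toy measure of my own, not the metric used in the paper: it uses a naive bag-of-words cosine similarity, where higher scores mean the outputs are converging on similar wording.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def homogenization(outputs: list[str]) -> float:
    """Mean pairwise similarity across a set of outputs.

    Higher values suggest the outputs are converging on similar
    wording, i.e. less creative diversity across teams.
    """
    bags = [Counter(text.lower().split()) for text in outputs]
    pairs = list(combinations(bags, 2))
    if not pairs:
        return 0.0
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Hypothetical example outputs, purely for illustration.
human_only = ["bold neon poster with a skyline", "quiet minimalist ad in one word"]
human_ai = ["clean modern ad with bold headline", "clean modern poster with bold headline"]
print(homogenization(human_only) < homogenization(human_ai))
```

In practice one would use semantic embeddings rather than raw word counts, but the shape of the measurement, comparing diversity of outputs across team configurations, is the same.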
Often that can be done by having human outputs first, without AI then blunting or narrowing the breadth of the creative outputs of humans. The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. So this is the point where we now have not just single AI agents to delegate decisions or problems to, but in fact systems of AI. And so this creates a different challenge. The key point is that when you are delegating tasks, it’s more than just saying, okay, which agent gets the task. You have to understand responsibility. Where does accountability reside? Who is responsible for that? You need clarity around the roles of the agents, the boundaries of what they can and cannot do, clarity of intent and how that’s communicated and cascaded through the agents, and the critical role of trust and appropriate degrees of trust in the systems. So this means that we have to define the different characteristics of the task. The paper goes through quite a few different characteristics, and a few of the critical ones were the degree of uncertainty around the task. Obviously, if a task is very clear, it can be appropriately delegated, but many tasks and problems are uncertain, and so this creates a different dynamic. Is the task verifiable, so you know you have high-quality information? Are decisions reversible? What is the degree of subjectivity, because not everything is data-driven? Assessing these task characteristics starts to define where human judgment plays a role, how you create those checks, and how you build that. So intelligent delegation is not just how the humans delegate, but in turn the structure of how that cascades down through the agents. This requires the idea of dynamic assessment: you’re not just setting and forgetting.
You are continuously reassessing what is happening with the context, what is changing in the stakes, and any uncertainty. So you’re coming back to ensure there’s not just a single delegation structure; you’re changing it over time, continuing to adapt as you’re executing, and able to monitor and replan. Transparency has to be built into the structure so that you can see where decisions are made and what authorizations are given, and the audit trail is visible, so you can always see what is going on in those structures. You also need to be able to scale how you are coordinating the systems. If it’s just small scale, that’s fine, but you want to build something which can work across many agents. So this requires a way to discover which agents are most appropriate and to establish the delegation of a particular task to them, again on a dynamic basis. And finally there is the principle of systemic resilience, where you have to expect that things will go wrong. So there’s continuous monitoring, understanding that these systems can be attacked in various ways, and being able to recover. So, a very solid paper, quite deep, but giving some very good principles for how we can delegate to AI systems. The final of the three papers goes to a bit of a higher level. It’s called Agentic Interactions, and it’s from Alex Imas and Sanjog Misra of the University of Chicago, and Kevin Lee at the University of Michigan. What they’re looking at is what happens on a macro scale when increasingly decisions are delegated to AI agents. So this is the agent economy that I’ve been talking about for a very long time, which is now very much coming to the fore. What they do is look at what happens when we start to delegate more and more economic decisions, such as buying and selling decisions. And what they found is extraordinarily interesting.
They found that the AI agents in fact behave very similarly to their human creators. What you can observe is that there are differences in the agents from which you can infer the gender and the personality of the person who is delegating to the agent. Even though there is no explicit information, and the agent doesn’t even know what the gender or the personality is, these traits actually flow through. So in fact agents represent us in the market, as it were, potentially very accurately. But this goes directly to the second point, this idea of machine fluency. AI fluency is very much a term in vogue at the moment, and the authors talk about machine fluency, which is how well a user can express their intent so that the agent is aligned with them. They found very significant degrees of difference here. Those people who are better at getting their agents to express their wishes could in fact amplify their economic outcomes. And related to that, they showed there was a correlation: higher educational levels meant that you were better able to delegate to AI, and your AI agents performed better and gave you better returns. So again this points to ways in which we’re starting to see the potential for aggravation of differences in the agentic economy, when the agents who act for us in the economy start to reflect, among other things, educational differences or capabilities in how we express our wishes and our intentions through AI. There was one very interesting and, I suppose, counterintuitive result. Women get better outcomes in negotiation when using AI agents than they do in human-to-human interactions. Again, this is without the AI agents knowing whether they are representing a woman or not.
But in fact this shows that, in terms of machine fluency, the style and the ways in which women are able to instruct AI agents and put their intent into them were, in this study, superior to those of males. In the real world there is of course, unfortunately, a bias towards male performance in negotiation, and that was inverted in the study. So, exceptionally interesting. Just pulling back some of the common themes of these three papers. We are increasingly in a world where humans have relationships with agents. We are starting to work with them in teams and systems. And we’re starting to build economies where humans are represented by agents. Essentially our relationship to those agents and our ability to delegate effectively is driving value, of course to the individual, but also across these agentic systems that are emerging. This is early on, because these human-agent systems are still at an early stage. But this starts to point to some of the potential, some of the challenges, some of the opportunities, and some of the work that we have to do. So I will be sharing more on these kinds of topics in my interviews with people and also of course on the Humans Plus AI website. So just go to humansplus.ai. Actually, to be frank, it hasn’t been updated a lot recently, but we will be sharing a lot more there. Or LinkedIn is where I share the most, actually, and I’m getting back on Twitter as well if you’re interested. But I’ll be diving deep and trying to share what I find useful as well as interesting in helping us to create a world where humans are first and AI complements us. The reality is we are moving to humans plus AI systems, and if we design that well, with the right intentions, we can make this one which drives human value first. So glad to have you on the journey. Have a wonderful rest of your day. The post Ross Dawson on Humans + AI Agentic Systems (AC Ep34) appeared first on Humans + AI.

    Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

    Play Episode Listen Later Feb 25, 2026 35:46


“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna About Davide Dell'Anna Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Website: davidedellanna.com LinkedIn Profile: Davide Dell'Anna University Profile: Davide Dell'Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question.
Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. 
When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. 
While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. I think that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Centre, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Centre is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams.
It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we did was study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal.
There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams. Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hossein Jarrahi, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do.
They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things you’ve looked at is the evaluation of human-AI teams.
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. 
That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require both the right attitudes from the humans and well-designed agents. Davide: Related to this—if I can interrupt you—I think it’s very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values. We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary.
This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society. Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members? Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. 
This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols. These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with information needed about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together. Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research? Davide: Yeah, sure. 
You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful. Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there are not enough people focusing on the area yet. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights. Davide: Thank you so much, Ross. Pleasure to meet you. The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

    Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32)

    Play Episode Listen Later Feb 18, 2026 38:18


“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar About Felipe Csaszar Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy. Website: papers.ssrn.com LinkedIn Profile: Felipe Csaszar University Profile: Felipe Csaszar What you will learn How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation. The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities. The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation. How AI assists in both visualizing strategists' mental models and expanding the complexity of strategic frameworks. Experimental findings showing AI's ability to generate and evaluate business strategies, often matching or outperforming humans. Emerging best practices and challenges in human-AI collaboration for more effective strategy processes. The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning. Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making. Episode Resources Transcript Ross Dawson: Felipe, it’s a delight to have you on the show. Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me. Ross Dawson: So many, many interesting things for us to dive into.
But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving. Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something. So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people. 
There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search. Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was how good a bank would look if it was like a supermarket. That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Mr. Merrill Lynch, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change the prompt, and immediately you will have a problem looked at from a different representation. And then the last one was aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy. Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. There are many ways of framing strategy and the strategy process, and that is, I think, quite distinctive and very, very insightful, because it goes to the cognitive aspect of strategy. 
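Felipe's virtual-board example can be made concrete with a toy sketch. Everything here is a hypothetical illustration, not drawn from his research: in practice each persona's pick would come from a separate LLM call with a role-specific prompt, while here the votes are hard-coded and aggregated by simple majority rule.

```python
from collections import Counter

# Hypothetical picks: each simulated "board member" persona backs one
# candidate strategy. In practice these would come from separate LLM calls.
persona_votes = {
    "finance_persona":    "expand_online",
    "marketing_persona":  "expand_online",
    "operations_persona": "cut_costs",
    "technology_persona": "expand_online",
    "legal_persona":      "cut_costs",
}

def majority_vote(votes):
    """Aggregate one strategy choice per persona by simple majority rule."""
    tally = Counter(votes.values())
    choice, count = tally.most_common(1)[0]
    return choice, count

choice, count = majority_vote(persona_votes)
print(choice, count)  # expand_online 3
```

Majority rule is only the simplest aggregation Felipe mentions; the same structure extends to weighted votes or to picking and combining parts of different proposals.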
There’s a lot to dig into there, but I’d like to start with the representation. I think of it as the mental models, and you can have implicit mental models and explicit mental models, and also individual mental models and collective mental models, which goes to the aggregation piece. But when you talk about representation, to what degree—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, classic two by twos. There are also the mental models which were classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it? Felipe Csaszar: I think it’s important to distinguish—again, it’s three different things. There are three different types of representations. There are the internal representations: how people think in their minds about a given problem, and that usually people learn through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective. Part of the internal representations you can learn at school, also, like the typical frameworks. Then there are external representations—things that are outside our mind that help us make decisions. In strategy, essentially everything that we teach are external representations. The most famous one is called Porter’s Five Forces, and it’s a way of thinking about what affects the attractiveness of an industry in terms of five different things. This is useful to have as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five. External representations help you to expand the memory, the working memory that you have to think about strategy. 
Visuals in general, in strategy, are typical external representations. They play a very important role also because strategy usually involves multiple people, so you want everybody to be on the same page. A great way of doing that is by having a visual so that we all see the same. So we have internal—what’s in your mind; external—what you can draw, essentially, in strategy. And then there are distributed representations, where multiple people—and now with AI, artifacts and software—among all of them, they share the whole representation, so they have parts of the representation. Then you need to aggregate those parts—partial representations; some of them can be internal, some of them are external, but they are aggregated in a given way. So representations are really core in strategic decision making. All strategic decisions come from a given set of representations. Ross Dawson: Yeah, that’s fantastic. So looking at—so again, so much to dive into—but thinking about the visual representations, again, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on. So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways you’ve seen AI be useful in that context? Felipe Csaszar: So I was very intrigued—as soon as LLMs became popular, were launched—yeah, ChatGPT, that was in November 2022—I started thinking, there are so many ways in which this could be used. So myself and two co-authors, Hyunjin Kim and Harsh Ketkar, we wrote a paper, one of the initial papers on how AI can be used in strategy. 
It’s published in Strategy Science, and in that paper, we explore many ways in which AI could be used in strategy. Of course, you can ask AI about coming up with answers to questions that you may have. You can also use AI to use any of these frameworks that have been developed in strategy. It was very clear to us that it was usable. Then the question was, how good are those uses? What’s the quality of current AI doing this type of task? So what we did is an experiment where we compared the performance of AI to the performance of humans. In strategy, there are two types of tasks: one is to generate alternatives, and the other is to select alternatives. You have a problem—the first thing you want to do is have possible solutions, and then you want to be able to pick the best out of those. So we had two experiments: one where we measured the ability of AI to generate alternatives, another to select. For generation, what we did is we got data from a business plan competition where people were applying with business plans that all had the same format. The important thing is that the first paragraph of that application had the problem—a problem that they thought was important. So we took all of those applications and removed everything except for the problem, and then we gave that problem to an AI and asked the AI, “Hey, complete the rest of the business plan.” So now we have business plans that are real, and the AI twins of those—business plans created by an AI that try to solve the same problem. Then we put both in a kind of business plan competition, where we had people with experience in investments ranking all of these business plans, and they didn’t know which ones were created by humans and which ones were created by AIs. We looked at their evaluations at the end of the day, and on average, the ones that were generated by the AI were ranked a little bit higher—7% higher—than the ones that were generated by humans. 
So at least in this very specific context of business plan competitions, there’s potential. We’re saying, hey, AI could generate things at a level that is comparable to the people applying to this type of business plan competition. That has a lot of potential. We could use it in different ways. The other part of this study was to measure the ability of AI to select strategies among strategies. There, what we did is use data from another business plan competition, where all of the business plans had been evaluated by venture capitalists according to 10 dimensions: how strong is the idea, how strong is the team, how strong is the technology, etc. Then we gave an AI the same rubric that the venture capitalists received and asked the AI to rank or grade each one of these startups according to these 10 dimensions. Then we compared how similar the evaluations of the LLM were to the evaluations of the venture capitalists, and we showed that they are quite similar—there’s a correlation of 52%. This, again, tells us that there is potential here. An AI could do things that are quite similar to an experienced human evaluating this type of startup. A very interesting result there is that the correlation between two venture capitalists is lower than that 52%. So if you want to predict what a venture capitalist is going to say about your business, you’re better off asking an LLM than asking another venture capitalist. Ross Dawson: Yes, which perhaps shows the broad distribution of VC opinions. So obviously, LLMs can play valuable roles in many aspects of the strategy process, but this brings us back to the humans plus AI role. There are many—again, a big topic—but rather than looking at them, comparing what humans and AI did, where do you see the primary opportunities for humans and AI to collaborate in the strategy process? Felipe Csaszar: Yeah, yeah. 
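The 52% figure is a Pearson correlation between two sets of grades, which is straightforward to compute. A minimal sketch, with made-up placeholder scores rather than data from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-10 grades for ten startups, one from a VC, one from an LLM.
vc_scores  = [7, 5, 8, 6, 9, 4, 7, 6, 8, 5]
llm_scores = [6, 5, 9, 6, 8, 5, 7, 7, 8, 4]

print(round(pearson(vc_scores, llm_scores), 2))  # 0.87
```

Felipe's benchmark comparison works the same way: compute the LLM-vs-VC correlation and the VC-vs-VC correlation, and ask which is higher.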
So I think that’s a fascinating question, and my guess is that the study of the strategy process will completely change in the next 10 to 20 years. So far, all of the study of the strategy process has been about what happens when you have multiple people making strategy decisions. In the past, we studied things like devil’s advocate, or we have studied the role of changing the size of the group of people making decisions, or the consensus level required. But in the future, there will be AIs in this process that will have completely different bounds or capacities than humans. So we will need to learn what’s the best way of collaborating with them and including them into the strategic decision making process. Today, we don’t know much about it. We are beginning to learn things, like the study I mentioned—hey, in this task, it seems to be better—but there’s so much that we need to learn. I am working on some things, but it’s still early. Ross Dawson: Going back to the distributed representation—this is something where, of course, distributed representation can be in multiple people. Arguably, it can include human and AI agents as each having different representations. But this goes, of course, to the aggregation piece, where the aggregation is—you have a board of directors, group of executives, potentially a participative strategy process bringing more people into the organization. What are the specific roles of AI in assisting or facilitating effective aggregation to form a cohesive strategy? Felipe Csaszar: Yeah, so the truth is, we don’t yet know. There’s not enough research. We’re starting to think about it. We can see many uses, and I think what people should be doing now is running experiments to see when those add value and when they don’t. It will be different for different companies in different industries, so probably there’s no one solution that’s the same for everybody. 
For example, one possible use in strategic decision making is predicting what your competitors would do. If I do this, what would be the most likely reaction of my competitor? That’s one. Another one is predicting consumers: if I launch this product with this set of characteristics, what would be the most likely response of my consumers? In strategy, something that has been very popular for the last 20 years is something called the Blue Ocean Strategy, which is a method to come up with new offerings, with new value propositions, but that requires a lot of creativity. With AI, you can automate part of that. At the end of the day, it’s a search process. You have to think about what would happen if I add this, or if I add this other thing, or if I increase this. Part of that can be automated—that would be another use. Or if you have different proposals—in this other study, we show, hey, AI is good at evaluating, so if you have the right rubric, this can automate the evaluation, or can automate the first part of that evaluation so that you only have to spend your time among the really complicated, more sophisticated decisions or alternatives. There are many, many things that can be done at this point. Ross Dawson: Which goes to, I think, one of the interesting points in your work—representational complexity. Some strategies are arguably simple; other strategies, you can call them more sophisticated, but they are more complex. The representation of complexity is greater. There are two things that are required for that. One is, of course, sophisticated thinking, but also, because strategy in any organization involves multiple people, it requires that there is an ability for a number of people together to hold a hopefully similar or very similar representation of a quite complex topic. What are ways in which AI can be used to enhance that development of more sophisticated or nuanced or complex representations that can support a better strategy? 
Felipe Csaszar: So that’s a great point. I have a paper from before this new round of AI called exactly that—representation complexity. There has been a long-standing discussion in strategy about whether you want to use a simple representation, a complex representation, or something in between. We tried to clarify when each one of these applies. But then came this new round of AI, and I think it changes things a lot. I talk a little bit about this in a chapter I uploaded recently—it’s called “Unbounding Rationality.” The key thing there is that humans—we have our own computer here, it’s the brain, and the brain has some constraints. One very important one for strategy is the capacity of our working memory. There’s this famous paper from the 1950s called “The Magical Number Seven,” which says that we can hold in our working memory seven plus or minus two items—so between five and nine things we can keep at the same time in our mind. That’s why, for example, I think all strategy frameworks are very simple. There’s the five forces—it fits within our working memory—or these typical two by twos, they have four quadrants—it fits within our working memory. But AIs don’t have that bound. They are not constrained by the same working memory constraint that we have. So I would expect that future frameworks will be much more complex, that representational complexity will increase because of AI. Of course, frameworks of the future won’t have a million things, because when you put too many things, you’re overfitting—it works well with things that happened in the past, but not in the future—but they will probably have more than five things. Also, another reason for not having a million things inside a framework is that at the end of the day, you will still need to communicate frameworks. You will need to convince the other people in the organization, the ones that are implementing the strategy, that this is the right strategy. 
You will need to convince them, so you don’t want to have something that’s extremely complex. But my guess would be that the complexity of frameworks and of strategies will increase with AI. Ross Dawson: So looking forward—you talked about 10 or 20 years. If we see the current pace of capability development of LLMs on a similar trajectory, where do you see the remaining role of humans as a complement to AI in shaping strategy? I think you mentioned this possibility of essentially AI forming strategy, but I think for a wide array of reasons, it will be human plus AI—humans will play a role as final decision maker or other things. So where do you see those fundamental human capabilities still being retained for the foreseeable future, as a complement to AI in strategy? Felipe Csaszar: So I think that for the next 10, 20, maybe 30 years, humans will be really busy coming up with how to use AI—all of these experiments that we mentioned, people will be running all of those things in all different industries, and that takes a while. That will require human ingenuity and trying things and really understanding strategy and understanding the capabilities of AI. So I don’t see AI replacing human strategists in the very short term. On the contrary, because of AI, strategists will be more busy finding what are the best ways of using AI in their businesses. I think 10, 20, or 30 years is very reasonable. If you think about the previous technological revolution, which I could say was the Internet—the technology for the Internet, we could say, existed since around ’94. The World Wide Web is from ’94, browsers are from ’94, bandwidth enough to send email. Essentially all of the technology that supports internet business today was mostly in place in the mid to late ’90s. 
But the businesses, or people, ended up using all of those things 10 or 20 years after that, because it takes a long time for people, for strategists, to come up with the idea—for someone to come up with the idea of, let’s say, Netflix or eBay or PayPal or Facebook—all of those things, they take time for people to understand this is doable. Then it takes time to implement. Then it takes time for users to say, “Hey, this is useful.” There’s a lot of adaptation, and then there will be regulation. So the whole process takes a long time. I don’t think that businesses will change from one day to the next. It will be a relatively slow process that will take decades. When we look back in 20 years from now, we will see, “Hey, everything changed,” but every year we will see just a little bit of change, like what happened with the Internet. So I imagine that people designing strategies, implementing strategies, they will be very busy in the next 20 years. Ross Dawson: So to round out, I won’t ask you to make predictions, but maybe some hypotheses. What do you think are some interesting hypotheses that will inform your research—not just next year, but in the years beyond? Where do you think are the interesting avenues that we should be not just exploring and researching, but where there is a valid and useful hypothesis? Felipe Csaszar: Yeah, so many things, but one very important—I think that strategy will be more about making the right predictions. The role of foresight. It turns out that when you want to train a machine learning algorithm, you need to have some signal that informs how you train the system. It’s called the gradient, or the objective function. So in strategy, we will need to make that more central, and then think, what are the best ways in which you can use AI to make the right predictions? That requires measuring the quality of predictions. So you change this in the business, and this ends up happening. We want an AI to be able to do that. 
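One standard way to turn "quality of predictions" into the kind of measurable training signal Felipe describes is a proper scoring rule; the Brier score is a common choice, though he does not name one in the episode. The sketch below, with made-up numbers, scores probabilistic forecasts of competitor reactions against what actually happened:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; 0.0 is a perfect forecaster."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities that a competitor responds to each of four moves,
# scored against observed outcomes (1 = responded, 0 = did not).
forecasts = [0.9, 0.2, 0.7, 0.4]
outcomes  = [1,   0,   1,   1]

print(brier_score(forecasts, outcomes))  # 0.125
```

A score like this could serve as the objective function for comparing, or training, competing strategy representations on their foresight.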
So coming up with ways in which you can measure the quality of decisions will become more important, so that we can train those AIs. That’s one. And very related to that is, well, the thing that’s generating the predictions are representations, and then it’s coming up with those more complex representations that are better at making decisions or are better at discovering things that are hard for humans to discover. Those are the two main things. I think the future of strategy will be about finding ways of improving foresight and finding ways of improving the thing that creates that foresight, which are the representations. All of that will change what has been called the strategy process—how we make decisions in strategy. Ross Dawson: So I just need to pick up on that point around prediction. One of the challenges with external predictions is that, then, as a strategist, you have to say, either I will build my strategy based on that prediction, or I question that prediction. I think there are alternatives or attribute probabilities to it. So even if a prediction machine gets better, it’s still very challenging, particularly cognitively, in terms of accountability for the strategist to incorporate a prediction where you don’t necessarily have all of the logic behind the prediction as a machine learning model to incorporate. So how can a strategist incorporate what may be a relatively black box prediction into an effective strategy? Felipe Csaszar: Yeah, well, and here we are in the conjecture part of this interview. So my answer is in that spirit. I think there are two ways out of this. One is that we will ask for explainable predictions. There’s a whole area of AI called Explainable AI, which is exactly trying to do this—not just say what’s the best prediction, but why the AI is saying that’s the right prediction. So that could develop, and probably that will develop, because humans will question whatever the AI will predict. That’s one way. 
The other is, imagine that the AI becomes very, very, very good at making predictions. Then at some point, it doesn’t matter if it can explain it or not—it’s just making very good predictions. It’s like, imagine you want to win at chess and you have this machine that can play chess very well. This machine wins at chess. You don’t need to exactly understand how that machine is making each one of those decisions. But if the machine is very good at it, and it’s consistently good at it, people will use it. In a sense, the market will decide. If this works better than a machine that provides an explanation for each one of the steps, people will just go with the one that’s making the right prediction. Ross Dawson: I think there’s all sorts of other places we can go to from there, but that’s fascinating. So where can people go to find out more about your work? Felipe Csaszar: Well, I upload all of my stuff to SSRN. So if you Google my name and SSRN, you will find all of my papers. In the near future, like in the next three months or so, I’ll have two things coming out. One is a Handbook of AI, written also with my co-editor Nan Jia from USC, that will have 20 chapters that will explore different ways in which AI will be affecting strategies—the Handbook of AI and Strategy, published by Elgar. And then around that same time, there will be a special issue of the Strategy Science journal where I’m one of the co-editors, which will be exactly about the same—about AI and strategic decision making. We already have accepted several of the papers for that special issue. Those papers will be pushing the frontier of what we know about AI and strategic decision making. Ross Dawson: That’s fantastic. I will certainly be following your work—very highly aligned with the humans plus AI movement. And thank you for all of the wonderful research and work you’re doing. Felipe Csaszar: Thank you so much, Ross. It’s been a pleasure. 
The post Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32) appeared first on Humans + AI.

    Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills (AC Ep31)

    Play Episode Listen Later Feb 11, 2026 38:05


    The post Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills (AC Ep31) appeared first on Humans + AI.

    Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30)

    Play Episode Listen Later Jan 30, 2026 36:07


    The post Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30) appeared first on Humans + AI.

    Nikki Barua on reinvention, reframing problems, identity shifts for AI adoption, and the future workforce (AC Ep29)

    Play Episode Listen Later Jan 22, 2026 36:15


    The post Nikki Barua on reinvention, reframing problems, identity shifts for AI adoption, and the future workforce (AC Ep29) appeared first on Humans + AI.

    Alexandra Samuel on her personal AI coach Viv, simulated personalities, catalyzing insights, and strengthening social interactions (AC Ep28)

    Play Episode Listen Later Jan 14, 2026 50:43


    The post Alexandra Samuel on her personal AI coach Viv, simulated personalities, catalyzing insights, and strengthening social interactions (AC Ep28) appeared first on Humans + AI.

    Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI's impact on consulting (AC Ep27)

    Play Episode Listen Later Dec 17, 2025 37:18


    The post Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI's impact on consulting (AC Ep27) appeared first on Humans + AI.

    Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)

    Play Episode Listen Later Dec 10, 2025 37:20


    The post Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26) appeared first on Humans + AI.

    Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)

    Play Episode Listen Later Dec 3, 2025 37:23


    The post Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25) appeared first on Humans + AI.

    Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24)

    Play Episode Listen Later Nov 26, 2025 39:53


    The post Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24) appeared first on Humans + AI.

    Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23)

    Play Episode Listen Later Nov 19, 2025 40:08


    The post Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23) appeared first on Humans + AI.

    Sue Keay on prioritizing experimentation, new governance styles, sovereign AI, and the treasure of national data sets (AC Ep22)

    Play Episode Listen Later Nov 12, 2025 39:16


    The post Sue Keay on prioritizing experimentation, new governance styles, sovereign AI, and the treasure of national data sets (AC Ep22) appeared first on Humans + AI.

    Dominique Turcq on strategy stakeholders, AI for board critical thinking, ecology of mind, and amplifying cognition (AC Ep21)

    Play Episode Listen Later Nov 6, 2025 39:04


    The post Dominique Turcq on strategy stakeholders, AI for board critical thinking, ecology of mind, and amplifying cognition (AC Ep21) appeared first on Humans + AI.

    Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift's pens (AC Ep20)

    Play Episode Listen Later Oct 29, 2025 35:15


    The post Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift's pens (AC Ep20) appeared first on Humans + AI.

    Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19)

    Play Episode Listen Later Oct 22, 2025 16:46


    The post Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19) appeared first on Humans + AI.

    Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18)

    Play Episode Listen Later Sep 10, 2025 36:30


    The post Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18) appeared first on Humans + AI.

    Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)

    Play Episode Listen Later Sep 3, 2025 39:49


    The post Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17) appeared first on Humans + AI.

    Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16)

    Play Episode Listen Later Aug 27, 2025 31:08


    The post Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16) appeared first on Humans + AI.

    Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15)

    Play Episode Listen Later Aug 20, 2025 42:07


    The post Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15) appeared first on Humans + AI.

    Paula Goldman on trust patterns, intentional orchestration, enhancing human connection, and humans at the helm (AC Ep14)

    Play Episode Listen Later Aug 13, 2025 34:24


    The post Paula Goldman on trust patterns, intentional orchestration, enhancing human connection, and humans at the helm (AC Ep14) appeared first on Humans + AI.

    Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13)

    Play Episode Listen Later Aug 6, 2025 47:56


    The post Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13) appeared first on Humans + AI.

    Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12)

    Play Episode Listen Later Jul 30, 2025 39:17


    The post Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12) appeared first on Humans + AI.

    Tim O'Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11)

    Play Episode Listen Later Jul 23, 2025 41:00


    The post Tim O'Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11) appeared first on Humans + AI.

    Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)

    Play Episode Listen Later Jul 16, 2025


    The post Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10) appeared first on Humans + AI.

    AI & The Future of Strategy (AC Ep9)

    Play Episode Listen Later Jul 9, 2025 12:29


    The post AI & The Future of Strategy (AC Ep9) appeared first on Humans + AI.

    Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)

    Play Episode Listen Later Jun 25, 2025 34:28


    The post Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8) appeared first on Humans + AI.

    Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7)

    Play Episode Listen Later Jun 18, 2025 34:02


    The post Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7) appeared first on Humans + AI.

    Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6)

    Play Episode Listen Later Jun 4, 2025 34:21


    The post Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6) appeared first on Humans + AI.

    Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)

    Play Episode Listen Later May 28, 2025 35:56


    The post Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5) appeared first on Humans + AI.

    Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)

    Play Episode Listen Later May 21, 2025 26:14


    The post Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4) appeared first on Humans + AI.

    Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3)

    Play Episode Listen Later May 14, 2025 37:04


    The post Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3) appeared first on Humans + AI.

    Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)

    Play Episode Listen Later May 7, 2025 33:24


    The post Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2) appeared first on Humans + AI.

    HAI Launch episode

    Play Episode Listen Later Apr 30, 2025 13:07


    The post HAI Launch episode appeared first on Humans + AI.

    Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86)

    Play Episode Listen Later Apr 23, 2025 33:55


    The post Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86) appeared first on Amplifying Cognition.

    Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)

    Play Episode Listen Later Apr 16, 2025 40:09


    The post Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85) appeared first on Amplifying Cognition.

    Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)

    Play Episode Listen Later Apr 9, 2025 34:50


    The post Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84) appeared first on Amplifying Cognition.

    Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83)

    Play Episode Listen Later Apr 2, 2025 32:16


    The post Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83) appeared first on Amplifying Cognition.

    Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)

    Play Episode Listen Later Mar 26, 2025 38:58


    The post Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82) appeared first on Amplifying Cognition.

    Amplifying Foresight Compilation (AC Ep81)

    Play Episode Listen Later Mar 19, 2025 30:36


    The post Amplifying Foresight Compilation (AC Ep81) appeared first on Amplifying Cognition.

    AI for Strategy Compilation (AC Ep80)

    Play Episode Listen Later Mar 12, 2025 32:02


    The post AI for Strategy Compilation (AC Ep80) appeared first on amplifyingcognition.

    Collective Intelligence Compilation (AC Ep79)

    Play Episode Listen Later Mar 5, 2025 31:22


    The post Collective Intelligence Compilation (AC Ep79) appeared first on amplifyingcognition.

    Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78)

    Play Episode Listen Later Feb 19, 2025 34:56


    The post Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78) appeared first on amplifyingcognition.

    Human AI Symbiosis Compilation (AC Ep77)

    Play Episode Listen Later Feb 12, 2025 26:09


    The post Human AI Symbiosis Compilation (AC Ep77) appeared first on amplifyingcognition.

    Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76)

    Play Episode Listen Later Feb 5, 2025 33:10


    The post Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76) appeared first on amplifyingcognition.

    Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75)

    Play Episode Listen Later Jan 29, 2025 34:10


    The post Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75) appeared first on amplifyingcognition.

    Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74)

    Play Episode Listen Later Dec 18, 2024 34:45


    The post Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74) appeared first on amplifyingcognition.

    Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)

    Play Episode Listen Later Dec 11, 2024 34:26


    The post Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73) appeared first on amplifyingcognition.

    Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72)

    Play Episode Listen Later Dec 4, 2024 35:03


    The post Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72) appeared first on amplifyingcognition.

    Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71)

    Play Episode Listen Later Nov 27, 2024 35:28


    The post Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71) appeared first on amplifyingcognition.
