Leading futurist and entrepreneur Ross Dawson speaks to the world’s best at creating value from infinite information. His guests share the insights and practices that will help you to thrive in a rapidly accelerating world.

“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna

About Davide Dell'Anna

Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.

Website: davidedellanna.com
LinkedIn Profile: Davide Dell'Anna
University Profile: Davide Dell'Anna

What you will learn

The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
How lessons from human-human and human-animal teams inform better design of human-AI collaboration
Key differences between humans and AI in teams, such as accountability, replaceability, and identity
The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

Episode Resources

Transcript

Ross Dawson: Hi Davide. It’s wonderful to have you on the show.

Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me.

Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence?

Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence.

Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment.
But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation?

Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences.

For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that.

Ross: Yes, yes. That’s what I call the framing. The framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants?
What are some of the things that are the same, and what are some of the things that are different?

Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Centre, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Centre is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing.

What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You very often see, in human-animal interaction, basically two species—two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me.

So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal. There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams.
Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hossein Jarrahi, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example.

Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When working on this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study.

But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case where a new version of ChatGPT replaced the previous one, and people complained because they had become attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully.

Ross: So one of the other things you’ve looked at is the evaluation of human-AI teams.
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors.

Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure that satisfaction with the process and the relationships stays high over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, with the usual forming, storming, norming, and performing, that often plays out over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time.

Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require both the right attitudes from the humans and well-designed agents.

Davide: Related to this—if I can interrupt you—I think it’s very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values.
We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary. This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society.

Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members?

Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations on putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols.
These are skills required for the AI assistants, but also for the organization itself, to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with the information they need about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together.

Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research?

Davide: Yeah, sure. You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful.

Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there are not enough people yet focusing on this area. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights.

Davide: Thank you so much, Ross. Pleasure to meet you.

The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar

About Felipe Csaszar

Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy.

Website: papers.ssrn.com
LinkedIn Profile: Felipe Csaszar
University Profile: Felipe Csaszar

What you will learn

How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation.
The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities.
The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation.
How AI assists in both visualizing strategists' mental models and expanding the complexity of strategic frameworks.
Experimental findings showing AI's ability to generate and evaluate business strategies, often matching or outperforming humans.
Emerging best practices and challenges in human-AI collaboration for more effective strategy processes.
The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning.
Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making.

Episode Resources

Transcript

Ross Dawson: Felipe, it’s a delight to have you on the show.

Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me.

Ross Dawson: So many, many interesting things for us to dive into. But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving.

Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens the possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use majority rule, or sometimes unanimity rule. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something.
So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people. There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search.

Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea was: how good would a bank look if it was like a supermarket? That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank. That was the big idea of Charles Merrill, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change the prompt, and immediately you will have the problem looked at from a different representation.

And then the last one was aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy.

Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. There are many ways of framing strategy and the strategy process, and that is, I think, quite distinctive and very, very insightful, because it goes to the cognitive aspect of strategy. There’s a lot to dig into there, but I’d like to start with the representation. I think of it as the mental models, and you can have implicit mental models and explicit mental models, and also individual mental models and collective mental models, which goes to the aggregation piece. But when you talk about representation, to what degree—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, classic two by twos. There are also the mental models which were classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it?

Felipe Csaszar: I think it’s important to distinguish—again, there are three different things, three different types of representations. There are the internal representations: how people think in their minds about a given problem, and those are usually learned through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective. Part of the internal representations you can learn at school, also, like the typical frameworks. Then there are external representations—things that are outside our mind that help us make decisions.
In strategy, essentially everything that we teach is an external representation. The most famous one is called Porter’s Five Forces, and it’s a way of thinking about what affects the attractiveness of an industry in terms of five different things. This is useful to have as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five. External representations help you to expand the memory, the working memory that you have to think about strategy. Visuals in general, in strategy, are typical external representations. They play a very important role also because strategy usually involves multiple people, so you want everybody to be on the same page. A great way of doing that is by having a visual so that we all see the same. So we have internal—what’s in your mind—and external—what you can draw, essentially, in strategy. And then there are distributed representations, where multiple people—and now, with AI, artifacts and software—share the whole representation among them, so each holds parts of the representation. Then you need to aggregate those parts—partial representations; some of them can be internal, some of them are external, but they are aggregated in a given way. So representations are really core in strategic decision making. All strategic decisions come from a given set of representations.

Ross Dawson: Yeah, that’s fantastic. So looking at—so again, so much to dive into—but thinking about the visual representations, again, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on. So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways you’ve seen AI be useful in that context?

Felipe Csaszar: So I was very intrigued as soon as LLMs were launched and became popular—ChatGPT, that was in November 2022—and I started thinking, there are so many ways in which this could be used. So myself and two co-authors, Hyunjin Kim and Harsh Ketkar, we wrote a paper, one of the initial papers on how AI can be used in strategy. It’s published in Strategy Science, and in that paper, we explore many ways in which AI could be used in strategy. Of course, you can ask AI to come up with answers to questions that you may have. You can also use AI to apply any of these frameworks that have been developed in strategy. It was very clear to us that it was usable. Then the question was, how good are those uses? What’s the quality of current AI doing this type of task? So what we did is an experiment where we compared the performance of AI to the performance of humans. In strategy, there are two types of tasks: one is to generate alternatives, and the other is to select alternatives. You have a problem—the first thing you want to do is have possible solutions, and then you want to be able to pick the best out of those. So we had two experiments: one where we measured the ability of AI to generate alternatives, another to select.
For generation, what we did is we got data from a business plan competition where people were applying with business plans that all had the same format. The important thing is that the first paragraph of that application had the problem—a problem that they thought was important. So we took all of those applications and removed everything except for the problem, and then we gave that problem to an AI and asked the AI, “Hey, complete the rest of the business plan.” So now we have business plans that are real, and the AI twins of those—business plans created by an AI that try to solve the same problem. Then we put both in a kind of business plan competition, where we had people with experience in investments ranking all of these business plans, and they didn’t know which ones were created by humans and which ones were created by AIs. We looked at their evaluations at the end of the day, and on average, the ones that were generated by the AI were ranked a little bit higher—7% higher—than the ones that were generated by humans. So at least in this very specific context of business plan competitions, there’s potential. We’re saying, hey, AI could generate things at a level that is comparable to the people applying to this type of business plan competition. That has a lot of potential. We could use it in different ways.

The other part of this study was to measure the ability of AI to select strategies among strategies. There, what we did is use data from another business plan competition, where all of the business plans had been evaluated by venture capitalists according to 10 dimensions: how strong is the idea, how strong is the team, how strong is the technology, etc. Then we gave an AI the same rubric that the venture capitalists received and asked the AI to rank or grade each one of these startups according to these 10 dimensions. Then we compared how similar the evaluations of the LLM were to the evaluations of the venture capitalists, and we showed that they are quite similar—there’s a correlation of 52%. This, again, tells us that there is potential here. An AI could do things that are quite similar to an experienced human evaluating this type of startup. A very interesting result there is that the correlation between two venture capitalists is lower than that 52%. So if you want to predict what a venture capitalist is going to say about your business, you’re better off asking an LLM than asking another venture capitalist.

Ross Dawson: Yes, which perhaps shows the broad distribution of VC opinions. So obviously, LLMs can play valuable roles in many aspects of the strategy process, but this brings us back to the humans plus AI role. There are many—again, a big topic—but rather than comparing what humans and AI did, where do you see the primary opportunities for humans and AI to collaborate in the strategy process?

Felipe Csaszar: Yeah, yeah. So I think that’s a fascinating question, and my guess is that the study of the strategy process will completely change in the next 10 to 20 years. So far, all of the research on the strategy process has been to study what happens when you have multiple people making strategy decisions. In the past, we studied things like devil’s advocate, or we have studied the role of changing the size of the group of people making decisions, or the consensus level required. But in the future, there will be AIs in this process that will have completely different bounds or capacities than humans.
So we will need to learn what’s the best way of collaborating with them and including them in the strategic decision making process. Today, we don’t know much about it. We are beginning to learn things, like the study I mentioned—hey, in this task, it seems to be better—but there’s so much that we need to learn. I am working on some things, but it’s still early.

Ross Dawson: Going back to the distributed representation—this is something where, of course, distributed representation can be in multiple people. Arguably, it can include human and AI agents as each having different representations. But this goes, of course, to the aggregation piece, where the aggregation is—you have a board of directors, a group of executives, potentially a participative strategy process bringing more people into the organization. What are the specific roles of AI in assisting or facilitating effective aggregation to form a cohesive strategy?

Felipe Csaszar: Yeah, so the truth is, we don’t yet know. There’s not enough research. We’re starting to think about it. We can see many uses, and I think what people should be doing now is running experiments to see when those add value and when they don’t. It will be different for different companies in different industries, so probably there’s no one solution that’s the same for everybody. For example, one possible use in strategic decision making is predicting what your competitors would do. If I do this, what would be the most likely reaction of my competitor? That’s one. Another one is predicting consumers: if I launch this product with this set of characteristics, what would be the most likely response of my consumers? In strategy, something that has been very popular for the last 20 years is something called the Blue Ocean Strategy, which is a method to come up with new offerings, with new value propositions, but that requires a lot of creativity. With AI, you can automate part of that. At the end of the day, it’s a search process. You have to think about what would happen if I add this, or if I add this other thing, or if I increase this. Part of that can be automated—that would be another use. Or if you have different proposals—in this other study, we show, hey, AI is good at evaluating, so if you have the right rubric, this can automate the evaluation, or can automate the first part of that evaluation so that you only have to spend your time on the really complicated, more sophisticated decisions or alternatives. There are many, many things that can be done at this point.

Ross Dawson: Which goes to, I think, one of the interesting points in your work—representational complexity. Some strategies are arguably simple; other strategies, you can call them more sophisticated, but they are more complex. The representational complexity is greater. There are two things that are required for that. One is, of course, sophisticated thinking, but also, because strategy in any organization involves multiple people, it requires that there is an ability for a number of people together to hold a hopefully similar or very similar representation of a quite complex topic. What are ways in which AI can be used to enhance that development of more sophisticated or nuanced or complex representations that can support a better strategy?

Felipe Csaszar: So that’s a great point. I have a paper from before this new round of AI called exactly that—representational complexity.
There has been a long-standing discussion in strategy about when you want to use a simple representation, when it’s better to use a complex representation, and when something in between. We tried to clarify when each one of these applies. But then came this new round of AI, and I think it changes things a lot. I talk a little bit about this in a chapter I uploaded recently—it’s called “Unbounding Rationality.” The key thing there is that humans—we have our own computer here, it’s the brain, and the brain has some constraints. One that is very important for strategy is the capacity of our working memory. There’s this famous paper from the 1950s called “The Magical Number Seven,” which says that we can hold in our working memory seven plus or minus two items—so between five and nine things we can keep at the same time in our mind. That’s why, for example, I think all strategy frameworks are very simple. There are the five forces—that fits within our working memory—or these typical two by twos with four quadrants—that fits within our working memory. But AIs don’t have that bound. They are not constrained by the same working memory constraint that we have. So I would expect that future frameworks will be much more complex, that representational complexity will increase because of AI. Of course, frameworks of the future won’t have a million things, because when you put too many things, you’re overfitting—it works well with things that happened in the past, but not in the future—but they will probably have more than five things. Also, another reason for not having a million things inside a framework is that at the end of the day, you will still need to communicate frameworks. You will need to convince the other people in the organization, the ones that are implementing the strategy, that this is the right strategy. You will need to convince them, so you don’t want to have something that’s extremely complex. But my guess would be that the complexity of frameworks and of strategies will increase with AI.

Ross Dawson: So looking forward—you talked about 10 or 20 years. If we see the current pace of capability development of LLMs on a similar trajectory, where do you see the remaining role of humans as a complement to AI in shaping strategy? I think you mentioned this possibility of essentially AI forming strategy, but I think for a wide array of reasons, it will be human plus AI—humans will play a role as final decision maker or other things. So where do you see those fundamental human capabilities still being retained for the foreseeable future, as a complement to AI in strategy?

Felipe Csaszar: So I think that for the next 10, 20, maybe 30 years, humans will be really busy coming up with how to use AI—all of these experiments that we mentioned, people will be running all of those things in all different industries, and that takes a while. That will require human ingenuity and trying things and really understanding strategy and understanding the capabilities of AI. So I don’t see AI replacing human strategists in the very short term. On the contrary, because of AI, strategists will be busier finding the best ways of using AI in their businesses. I think 10, 20, or 30 years is very reasonable. If you think about the previous technological revolution, which I could say was the Internet—the technology for the Internet, we could say, existed since around ’94. The World Wide Web is from ’94, browsers are from ’94, and there was bandwidth enough to send email.
Essentially all of the technology that supports internet business today was mostly in place in the mid to late ’90s. But the businesses, or people, ended up using all of those things 10 or 20 years after that, because it takes a long time for people, for strategists, to come up with the idea—for someone to come up with the idea of, let’s say, Netflix or eBay or PayPal or Facebook—all of those things, they take time for people to understand this is doable. Then it takes time to implement. Then it takes time for users to say, “Hey, this is useful.” There’s a lot of adaptation, and then there will be regulation. So the whole process takes a long time. I don’t think that businesses will change from one day to the next. It will be a relatively slow process that will take decades. When we look back 20 years from now, we will see, “Hey, everything changed,” but every year we will see just a little bit of change, like what happened with the Internet. So I imagine that people designing strategies, implementing strategies, they will be very busy in the next 20 years.

Ross Dawson: So to round out, I won’t ask you to make predictions, but maybe some hypotheses. What do you think are some interesting hypotheses that will inform your research—not just next year, but in the years beyond? Where do you think are the interesting avenues that we should be not just exploring and researching, but where there is a valid and useful hypothesis?

Felipe Csaszar: Yeah, so many things, but one very important one—I think that strategy will be more about making the right predictions. The role of foresight. It turns out that when you want to train a machine learning algorithm, you need to have some signal that informs how you train the system. It’s called the gradient, or the objective function. So in strategy, we will need to make that more central, and then think, what are the best ways in which you can use AI to make the right predictions? That requires measuring the quality of predictions. So you change this in the business, and this ends up happening. We want an AI to be able to do that. So coming up with ways in which you can measure the quality of decisions will become more important, so that we can train those AIs. That’s one. And very related to that: the things that generate the predictions are representations, and so it’s about coming up with those more complex representations that are better at making decisions or are better at discovering things that are hard for humans to discover. Those are the two main things. I think the future of strategy will be about finding ways of improving foresight and finding ways of improving the thing that creates that foresight, which is the representations. All of that will change what has been called the strategy process—how we make decisions in strategy.

Ross Dawson: So I just need to pick up on that point around prediction. One of the challenges with external predictions is that, then, as a strategist, you have to say, either I will build my strategy based on that prediction, or I question that prediction, think there are alternatives, or attribute probabilities to it. So even if a prediction machine gets better, it’s still very challenging, particularly cognitively and in terms of accountability, for the strategist to incorporate a prediction from a machine learning model where you don’t necessarily have all of the logic behind the prediction. So how can a strategist incorporate what may be a relatively black box prediction into an effective strategy?
Felipe Csaszar: Yeah, well, here we are in the conjecture part of this interview. So my answer is in that spirit. I think there are two ways out of this. One is that we will ask for explainable predictions. There’s a whole area of AI called Explainable AI, which is exactly trying to do this—not just say what’s the best prediction, but why the AI is saying that’s the right prediction. So that could develop, and probably that will develop, because humans will question whatever the AI will predict. That’s one way. The other is, imagine that the AI becomes very, very, very good at making predictions. Then at some point, it doesn’t matter if it can explain it or not—it’s just making very good predictions. It’s like, imagine you want to win at chess and you have this machine that can play chess very well. This machine wins at chess. You don’t need to exactly understand how that machine is making each one of those decisions. But if the machine is very good at it, and it’s consistently good at it, people will use it. In a sense, the market will decide. If this works better than a machine that provides an explanation for each one of the steps, people will just go with the one that’s making the right prediction.

Ross Dawson: I think there’s all sorts of other places we can go to from there, but that’s fascinating. So where can people go to find out more about your work?

Felipe Csaszar: Well, I upload all of my stuff to SSRN. So if you Google my name and SSRN, you will find all of my papers. In the near future, like in the next three months or so, I’ll have two things coming out. One is a handbook, edited with my co-editor Nan Jia from USC, that will have 20 chapters exploring different ways in which AI will be affecting strategy—the Handbook of AI and Strategy, published by Elgar. And then around that same time, there will be a special issue of the Strategy Science journal where I’m one of the co-editors, which will be exactly about the same topic—about AI and strategic decision making. We have already accepted several of the papers for that special issue. Those papers will be pushing the frontier of what we know about AI and strategic decision making.

Ross Dawson: That’s fantastic. I will certainly be following your work—very highly aligned with the humans plus AI movement. And thank you for all of the wonderful research and work you’re doing.

Felipe Csaszar: Thank you so much, Ross. It’s been a pleasure.

The post Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32) appeared first on Humans + AI.
