Podcasts about responsible AI

  • 697 podcasts
  • 1,245 episodes
  • 36m average duration
  • 1 new episode daily
  • Latest: Mar 4, 2026

Popularity trend: 2019–2026


Best podcasts about responsible AI


Latest podcast episodes about responsible AI

Pondering AI
A Student's Perspective with Seth Rabinowitz

Mar 4, 2026 · 39:17


Seth Rabinowitz uses AI with intent by studiously prioritizing learning, actively resisting dependency, promoting ethical practices, and seeing people in the data. Seth and Kimberly discuss his shift from fearing AI to fearing (some) people using AI; expertise and critical thinking; how different cohorts use AI; resisting dependency and intentional use; the role of educators; developing soft skills; not confusing AI's learning with your own; stewarding AI; business ethics and data privacy; and prioritizing AI fundamentals and putting people first.

Seth Rabinowitz is pursuing a Master's degree in Data Science and Business Analytics at UNC Charlotte. A transcript of this episode is here.

ServiceNow Podcasts
The Human in the Loop | Ethical AI with Di Le

Mar 4, 2026 · 29:03


The Human in the Loop | Ethical AI with Di Le, ServiceNow Insights Podcast, hosted by Bobby Brill.

What does it actually mean to build AI responsibly? Not the buzzword version. The real version. In our latest episode, I sat down with Di Le — AI Ethicist and Human-Centered AI Strategist at ServiceNow — and she broke it down in a way I hadn't heard before. Most people use Ethical AI, Responsible AI, and Human-Centered AI interchangeably, and Di breaks down exactly where each one lives and how they apply to building AI that aligns with our societal values. Fairness. Transparency. Bias. Beyond evaluation and technical talking points, these are also design decisions with real consequences for real people — and operationalizing them is harder than most organizations want to admit.

One line from Di that stopped me: "People have crossed oceans and built monuments in honor of our capability to think. And I just want people to preserve that and not surrender it so freely." That's the episode in one sentence.

To learn more about Ethical AI and research from Di Le and more:
https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1020&context=sighci2025
https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1025&context=sighci2024
https://www.youtube.com/watch?v=QhVY-85A-Wk&t=5s

ServiceNow Insights Podcast

Outgrow's Marketer of the Month
Snippet- Rakesh Doddamane, Leader – Generative AI / Responsible AI & User Experience at Philips, Reflects on The Evolution of AI Adoption.

Mar 3, 2026 · 0:42


CanadianSME Small Business Podcast
Rose Genele on Building Ethical & Scalable AI for the Enterprise

Mar 3, 2026 · 16:43


Welcome to the CanadianSME Small Business Podcast, hosted by Maheen Bari, where we explore the strategies and innovations helping businesses master the age of Artificial Intelligence. Today, we're diving into AI Governance and Enterprise Transformation, spotlighting how organizations can deploy AI safely, ethically, and at scale. Our guest, Rose Genele, CEO of The Opening Door (TOD), is an award-winning AI transformationalist guiding organizations in Responsible AI strategy, governance, and human-centered automation.

Key Highlights:
  • Enterprise AI Maturity: Rose explains how organizations can advance from curiosity to structured adoption through responsible, scalable frameworks.
  • Foundations of Responsible AI: She shares how her roots in ML, combined with Law, Business, and Ethics, shape her holistic approach to ethical AI.
  • Human-Centered AI Agents: Rose breaks down how TOD's specialized agents deliver measurable ROI while keeping humans in control.
  • AI Literacy & Education: She discusses the urgent need for balanced AI education and how her AI Foundations certificate helps leaders adopt safely.
  • Global Vision & Future Impact: Rose reflects on her industry recognition, the future of Black women in tech, and TOD's mission to advance responsible AI globally.

Special Thanks to Our Partners:
  • UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
  • Google: https://www.google.ca/
  • A1 Global College: https://a1globalcollege.ca/
  • ADP Canada: https://www.adp.ca/en.aspx

For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!

Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.

Spark of Ages
The Data Moat: A Google Veteran's Investment Thesis for AI/David Yakobovitch ~ Spark of Ages Ep 58

Feb 28, 2026 · 58:38 · Transcription Available


We chart how AI leapt from chat to code, why product is now the leverage point, and how startups can market to algorithms without losing trust. David Yakobovitch shares hard-won views on moats, data, defense tech, and the immigrant energy powering American dynamism.

• leaders and market share across Google, OpenAI, Anthropic
• vibe coding benefits, code quality risks, review loops
• prompt libraries, agent swarms, PRD automation
• weekly shipping pace and the SaaS squeeze
• marketing to algorithms, buyer agents, bot traffic control
• pilot to production gap, rise of forward-deployed engineers
• moats beyond models via domain, workflow, and proprietary data
• China's progress, open source, and on-device AI bets
• defense tech, swarms, and physical AI opportunities
• endurance mindset, yoga discipline, and founder stamina
• personal workflows across Gemini, Claude, and OpenAI
• investing across seed and growth with outcome focus

The model wars aren't theoretical anymore—they're shaping how software gets built, shipped, and sold. We sit down with David Yakobovitch, GP at Data Power Capital and former global product lead at Google, to map where AI is actually working in 2026: vibe coding that shrinks teams, agent swarms that harden quality, and product-led moats that outlast model churn. David pulls back the curtain on how Claude, OpenAI, and Google now compete neck and neck on code and content, why prompt engineering as a job vanished while prompts became more valuable, and how forward-deployed engineers bridge the stubborn pilot-to-production gap that has haunted data projects for a decade.

We explore go-to-market in a world where buyer agents screen your pitch before a human blinks. That means structuring materials for machines, tuning sites for humans and crawlers, and building demos that agents can evaluate safely. We also go into what happens as models commoditize: the moat shifts to domain depth, proprietary offline data, secure connectors, and measurable workflow outcomes. From small language models running on CPUs in air‑gapped containers to Apple's on-device bet, the edge is back—especially for Europe's sovereignty demands and public sector buyers.

Then we widen the lens. Defense and "physical AI" blend hardware and autonomy: swarms, hypersonics, and resilient edge compute that must perform in the real world. David shares why he's backing both the silicon and the software, and how American dynamism—powered by immigrants and impatient builders—remains a durable advantage. Along the way, we trade notes on multi-model workflows, open source momentum, China's narrowed gap, and the endurance mindset that carries teams through the disappointment dip after the first shiny demo.

David Yakobovitch: https://www.linkedin.com/in/davidyakobovitch/

David Yakobovitch is a General Partner and Managing Director of DataPower Capital, a New York City-based venture capital firm investing across Applied AI, Inference Infrastructure, and DeepTech. With a portfolio of over 36 companies, David is an investor in the most defining frontier technology firms of our era, including OpenAI, Anthropic, xAI, Neuralink, Databricks, Groq, Crusoe, Anduril and SpaceX. David is a leading voice as the host of HumAIn, a podcast focused on Applied and Responsible AI. Previously, David served as a Global Product Lead at Google.

Website: https://www.position2.com/podcast/
Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/
Sandeep Parikh: https://www.instagram.com/sandeepparikh/
Email us with any feedback for the show: sparkofages.podcast@position2.com

HPE Tech Talk
How can we coexist with AI?

Feb 26, 2026 · 23:41


How has the idea of ethics been affected by the rise of AI? This week, Technology Now is exploring the ideas of ethical and responsible AI. We examine how integrated into society AI has become, we ask how we co-exist with AI, and we look into how regular people, organisations, and governments are having to respond to the increasing adoption of AI. Kay Firth-Butterfield, CEO of Good Tech Advisory LLC and the world's first Chief AI Ethics Officer, tells us more.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Sam Jarrell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations. This episode is available in both video and audio formats.

About Kay: https://kayfirthbutterfield.com

Sources:
https://www.bbc.co.uk/news/business-66807456
https://www.bbc.co.uk/news/world-us-canada-65735769
https://www.bbc.co.uk/news/articles/cq808px90wxo
https://www.npr.org/2025/05/07/g-s1-64640/ai-impact-statement-murder-victim
https://www.academia.edu/123541578/The_Clinical_Chemist

CallumConnects Podcast
Santanu Sengupta - The habit that's been critical to my success.

Feb 25, 2026 · 2:42


Santanu Sengupta is a seasoned Board and Global Banking Leader with three decades of experience, shaping business growth, enterprise-scale governance, strategy, and risk oversight across leading financial institutions. As the former Managing Director and APAC South Head at Wells Fargo Bank, Singapore, he led a diverse team across multiple countries, driving sustainable growth through risk-aligned business transformation. Currently, he advises Boards and founders of technology-enabled businesses, helping them navigate complexity and create long-term value by aligning capital strategy, risk discipline, ESG priorities, and Responsible AI into a cohesive, future-ready governance framework.

LinkedIn: https://www.linkedin.com/in/santanu-sengupta
X/Twitter: https://x.com/ssg2211india

CallumConnects Micro-Podcast is your daily dose of wholesome leadership inspiration. Hear from many different leaders in just 5 minutes what hurdles they have faced, how they overcame them, and what their key learning is. Be inspired, subscribe, leave a comment, go and change the world!

Thriving on Overload
Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

Feb 25, 2026 · 35:46


“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell’Anna

About Davide Dell’Anna

Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.

Website: davidedellanna.com
LinkedIn Profile: Davide Dell’Anna
University Profile: Davide Dell’Anna

What you will learn

• The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
• Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
• How lessons from human-human and human-animal teams inform better design of human-AI collaboration
• Key differences between humans and AI in teams, such as accountability, replaceability, and identity
• The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
• Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
• The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
• New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

Episode Resources: Transcript

Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. 
But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. I don’t think that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? 
What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal. There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams. 
Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hussain Johari, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things looked at is the evaluation of human-AI teams. 
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require both the attitudes of the humans as well as the agents. Davide: Related to this—if I can interrupt you—I think it’s very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values. 
We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary. This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society. Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members? Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols. 
These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with information needed about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together. Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research? Davide: Yeah, sure. You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful. Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there’s not enough people yet focusing in the area. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights. Davide: Thank you so much, Ross. Pleasure to meet you. The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

CallumConnects Podcast
Santanu Sengupta - The advice I give most often.

Feb 24, 2026 · 2:55


Santanu Sengupta is a seasoned Board and Global Banking Leader with three decades of experience, shaping business growth, enterprise-scale governance, strategy, and risk oversight across leading financial institutions. As the former Managing Director and APAC South Head at Wells Fargo Bank, Singapore, he led a diverse team across multiple countries, driving sustainable growth through risk-aligned business transformation. Currently, he advises Boards and founders of technology-enabled businesses, helping them navigate complexity and create long-term value by aligning capital strategy, risk discipline, ESG priorities, and Responsible AI into a cohesive, future-ready governance framework.

LinkedIn: https://www.linkedin.com/in/santanu-sengupta
X/Twitter: https://x.com/ssg2211india

CallumConnects Micro-Podcast is your daily dose of wholesome leadership inspiration. Hear from many different leaders in just 5 minutes what hurdles they have faced, how they overcame them, and what their key learning is. Be inspired, subscribe, leave a comment, go and change the world!

What's Next, Agencies?
#175, Thomas Knüwer, Chief Creative Officer of Accenture Song

Feb 24, 2026 · 52:34


Episode #175, Thomas Knüwer, Chief Creative Officer of Accenture Song. Topic: The new creatives. "I believe mindset is becoming much more important again, and I think that's great. Right now, the shelf life of a skill set is about that of a carton of milk. Tomorrow the next tool arrives; the day after tomorrow we'll be doing something else again. This turnaround is so fast that in the end, mindset is what decides." What Thomas Knüwer, Chief Creative Officer of Accenture Song, is describing is a fundamental shift in the self-image of creative work. When tools and production capabilities become ever more accessible and skills become outdated week by week, it is no longer enough to be "just" creative. What matters is who contextualizes, curates, and creates impact, for brands, for teams, and for the culture in which creative work is done. In the new episode of #WhatsNextCreatives, Kim Alexandra Notz and Bärbel Egli-Unckrich talk with Thomas about new creative roles and how they are currently being redefined. The conversation covers Creative Consultants, who mediate between business, technology, and brand; Inventors, who come up with ideas and deliberately multiply their strengths with AI; and Tastemakers, who do not select individual assets but curate the holistic voice of a brand and give it direction. At the center is the question of why relying on craft excellence alone is no longer enough, why mindset is becoming more important than skill set, and why creative leadership means creating conditions in which different profiles can have impact and keep developing. A central theme is the role of AI. Thomas describes it as a democratization of output and at the same time warns against shortcut thinking. What matters is not whether technology is used, but how consciously. Responsible AI, critical thinking, and a living feedback culture are becoming central factors for creative excellence.

CallumConnects Podcast
Santanu Sengupta - My biggest hurdle as a leader.

Feb 23, 2026 · 3:29


Santanu Sengupta is a seasoned Board and Global Banking Leader with three decades of experience, shaping business growth, enterprise-scale governance, strategy, and risk oversight across leading financial institutions. As the former Managing Director and APAC South Head at Wells Fargo Bank, Singapore, he led a diverse team across multiple countries, driving sustainable growth through risk-aligned business transformation. Currently, he advises Boards and founders of technology-enabled businesses, helping them navigate complexity and create long-term value by aligning capital strategy, risk discipline, ESG priorities, and Responsible AI into a cohesive, future-ready governance framework.

LinkedIn: https://www.linkedin.com/in/santanu-sengupta
X/Twitter: https://x.com/ssg2211india

CallumConnects Micro-Podcast is your daily dose of wholesome leadership inspiration. Hear from many different leaders in just 5 minutes what hurdles they have faced, how they overcame them, and what their key learning is. Be inspired, subscribe, leave a comment, go and change the world!

Cisco TechBeat
S7 E1: Talking sovereign critical infrastructure, AI, and the EMEA Moment with Gordon Thomson

Feb 23, 2026 · 14:26


AB sits down with Gordon Thomson, Cisco's SVP EMEA Sales and EMEA President, for a great talk about The EMEA Moment, sovereign critical infrastructure, and how AI is a tool that is helping unlock human potential.

Data Culture Podcast
Success factors for GenAI – with Valerie De Naeyer, DPG Media België

Feb 23, 2026 · 29:01


"You need to have both the bottom-up experimentation to learn what's possible, and the top-down business view from executive level -- what do we want to do now and in the future?"

CareTalk Podcast: Healthcare. Unfiltered.
One Giant Leap for Healthcare AI w/ Dr. Robert Wachter, Author, A Giant Leap

Feb 20, 2026 · 38:52 · Transcription Available


Healthcare has long promised a digital revolution, yet many clinicians feel more burdened than empowered. With AI now accelerating at a rapid pace, can this moment finally deliver on that promise?

Dr. Robert Wachter, author of A Giant Leap, joins host John Driscoll to discuss how AI is evolving clinical workflows and decision-making, why "better than human" is good enough in our overburdened system, and the leadership choices that will determine whether AI reduces burnout or deepens healthcare's existing failures.

Black Woman Leading
S8E17: Reimagining Responsible AI with Dr. Brandeis Marshall

Feb 19, 2026 · 60:42


In this conversation, Laura and Dr. Brandeis Marshall explore the concept of responsible AI and the critical need to reframe our understanding of it. Dr. Marshall's insights shed light on how leaders and everyday users can navigate this complex terrain with a focus on ethics and responsibility. Key takeaways from this discussion include the importance of informed leadership, mindful AI usage, and the power of community support in driving responsible AI initiatives. Whether you're overseeing AI in your organization or using it personally, this conversation will reshape how you approach AI ethically, legally, and practically.

About Dr. Marshall
Brandeis Marshall is founder and CEO of DataedX Group™, a data & AI governance consulting agency. Formerly a college professor, she speaks, writes, teaches and consults on how to move slower and build better people-first tech. Dr. Marshall helps cross-functional teams close gaps amongst data strategy, human decision-making competencies and AI adoption activities. She guides them in effectively executing responsible AI and data tactics and implementations. She also founded Black Women in Data in 2020 to broaden awareness, support and retain senior-level Black women whose expertise intersects with the data industry. Dr. Marshall is the author of Data Conscience: Algorithmic Siege on our Humanity (Wiley, 2022), co-editor of Mitigating Bias in Machine Learning (McGraw-Hill, 2024) and contributing author in The Black Agenda (Macmillan, 2022). She holds a Ph.D. and Master of Science in Computer Science from Rensselaer Polytechnic Institute and a Bachelor of Science in Computer Science from the University of Rochester. Dr. Marshall recently obtained her EMBA from Quantic School of Business and Technology.

Connect with Dr. Marshall
Website: https://dataedx.com/
LinkedIn: http://www.linkedin.com/in/brandeis-marshall

BWL Resources:
Join us at the 2026 Black Woman Leading LIVE! Conference & Retreat, May 11-14, 2026 in Myrtle Beach, SC. Save your seat at www.BWLretreat.com
Full podcast episodes are now on YouTube. Subscribe to the BWL channel today!
Check out the BWL theme song here: https://www.youtube.com/watch?v=l68EqEJjXq0
Check out the BWL line dance tutorial here: https://www.youtube.com/watch?v=Eui89AmJwUg
Download the free Black Woman Leading Career Reset Kit: https://blackwomanleading.com/career-reset-kit/

Credits:
Learn about all Black Woman Leading® programs, resources, and events at www.blackwomanleading.com
Learn more about our consulting work with organizations at https://knightsconsultinggroup.com/
Email Laura: info@knightsconsultinggroup.com
Connect with Laura on LinkedIn: https://www.linkedin.com/in/lauraeknights/
Follow BWL on LinkedIn: https://www.linkedin.com/showcase/blackwomanleading
Instagram: @blackwomanleading
Facebook: @blackwomanleading
YouTube: @blackwomanleading

Podcast Music & Production: Marshall Knights, https://marshallknights.com/
Graphics: Dara Adams

Listen and follow the podcast on all major platforms: Apple Podcasts, Spotify, Stitcher, iHeartRadio, Audible, Podbay

AI Tool Report Live
How to save $100M in Tariffs with 1 Platform | Peter Swartz, Altana

Feb 19, 2026 · 48:17


In this episode, Peter Swartz, Co-Founder and Chief Science Officer at Altana, reveals how the company's AI-powered supply chain knowledge graph has helped stop hundreds of millions of dollars in forced labor goods from crossing borders and contributed to some of the largest counter-narcotics seizures in investigators' careers. Peter shares the real-world impact Altana is making across both the public and private sectors.

Peter breaks down how Altana's multi-tier supply chain visibility works to trace forced labor cotton through global networks, how dual-use chemicals are being diverted into fentanyl production, and how the platform helps governments and enterprises collaborate to avoid billions of dollars in trade disruptions while saving hundreds of millions in tariff fees.

Key Topics Covered
- How Altana blocked hundreds of millions of dollars in forced labor goods at U.S. borders
- The role of AI knowledge graphs in mapping multi-tier global supply chains
- How Altana supports CBP enforcement of the Uyghur Forced Labor Prevention Act
- Product passports and how they expedite legitimate goods through customs
- The difference between forced labor entering legit supply chains vs. legit goods entering illicit ones
- How logistics companies use Altana to prevent their networks from being misused
- Proactive vs. reactive approaches to supply chain risk using probabilistic AI models
- Scenario modeling for geopolitical disruptions including Taiwan and global conflicts
- Saving billions in supply chain disruptions and hundreds of millions in tariff fees

Episode Timestamps
00:00 - Introduction and overview of Altana's real-world impact
00:41 - Understanding forced labor as a multi-tier supply chain problem
03:09 - Hundreds of millions in forced labor goods stopped at borders
03:45 - How the AI knowledge graph maps global supply chain connections
04:15 - Working with CBP on the Uyghur Forced Labor Prevention Act
04:35 - Product passports and expediting goods through customs
04:51 - Counter-narcotics and the dual-use chemical problem
05:45 - Helping logistics companies stop network misuse
06:27 - From alert to action and the system handoff process
06:49 - Responsible AI and the role of human-in-the-loop decisions
07:33 - Proactive vs. reactive supply chain intelligence
08:08 - Scenario modeling for geopolitical disruptions and resiliency

About Peter Swartz
Peter Swartz is Co-Founder and Chief Science Officer at Altana. He has spoken on global trade, supply chains, and machine learning at the World Trade Organization, the World Customs Organization, the U.S. Court of International Trade, and the National Academies of Medicine. Previously, Peter was Head of Data Science at Panjiva, listed as one of Fast Company's most innovative data science companies in 2018 and later acquired by S&P Global. He holds patents in machine learning and global trade, and completed his education at Yale, MIT, and EPFL.

About Altana
Altana is the world's first Value Chain Management System, providing AI-powered supply chain intelligence to governments, enterprises, and logistics providers. The platform is built on a proprietary knowledge graph comprising more than 2.8 billion shipments, tracking over 500 million companies and 850 million facilities globally. Altana covers more than 50% of global trade, making it the most comprehensive and accurate supply chain map available.

Resources Mentioned
- Altana Atlas platform and AI knowledge graph
- U.S. Customs and Border Protection (CBP)
- Uyghur Forced Labor Prevention Act (UFLPA)
- Product passports for cross-border compliance
- Altana's disruption and tariff scenario modeling tools

Peter's Socials:
LinkedIn — https://www.linkedin.com/in/pgswartz/

Partner Links
Book Enterprise Training — https://www.upscaile.com/

LawPod
AI, Accountability, and Civilian Harm

Feb 19, 2026 · 43:03


In this episode, Mae Thompson speaks with Prof Luke Moffett, Dr Jessica Dorsey, and Chris Rogers about how artificial intelligence is already reshaping military decision making and what that means for civilian harm, accountability, and redress. The guests distinguish AI‑enabled decision support from lethal autonomy, unpack the cognitive risks of automation bias, anchoring, and de‑skilling, and consider how AI might responsibly support civilian‑harm tracking and investigations through data fusion and triage. They discuss the “triple black box” of accountability (model opacity, military secrecy, and diffused responsibility), the importance of lawful‑by‑design guardrails across the AI lifecycle, and why NGOs must pair new tools with people‑centred documentation. Looking ahead, they reflect on opportunities for a UK statutory redress scheme to deliver prompt acknowledgement, amends, and mitigation—keeping accountability pace with capability while centring affected communities. Prof Luke Moffett — Chair of Human Rights and International Humanitarian Law, Queen's University Belfast; author of Algorithms of War: The Human Cost of AI and Conflict (forthcoming, Bristol University Press). Dr Jessica Dorsey — Assistant Professor of International Law, Utrecht University; Director of the Realities of Algorithmic Warfare; expert member of the Global Commission on Responsible AI in the Military Domain; Ambassador for the Lawful by Design initiative; Executive Board Member at Airwars. Chris Rogers — Senior Fellow at the Reiss (Reese) Center on Law and Security, New York University School of Law; former Branch Chief and Law & Policy Advisor at the U.S. Department of Defense's Civilian Protection Center of Excellence. This podcast is the sixth in a series of episodes on Civilian Harm in Conflict – hosted by Mae Thompson, advocacy officer at Ceasefire. The podcast is an output of the AHRC‑funded ‘Reparations during Armed Conflict' project with Queen's University Belfast, University College London and Ceasefire, led by Professor Luke Moffett.

Pondering AI
Orchestrating Public Sector AI with Taka Ariga

Feb 18, 2026 · 56:53


Taka Ariga hits all the right notes for AI at scale: clarity of purpose, strong foundations, sustainable innovation, engaged ownership, and a confident workforce. Taka and Kimberly discuss going beyond novel AI prototypes; the limits of automation; context building; data sovereignty and integrity; the unstructured data deluge; the unique sensitivities and needs of public agencies; valuing ownership and viable ways to scale; plagiarizing for good; foundations for AI success; wanting innovation without change; rethinking governance; enabling confident AI use; making space for reinvention; and being a skeptical AI advocate.

Taka Ariga is a heretical technologist and the founder of Sol Imagination. He focuses on AI strategy design, implementation, and value capture. Taka served the US Office of Personnel Management (OPM) as CDO and CAIO and the US Government Accountability Office (GAO) as Chief Data Scientist and Director of the Innovation Lab.

Related Resources:
Sol Imagination (company): https://sol-imagination.ai/

A transcript of this episode is here.

FundraisingAI
Episode 78 - Rethinking Intelligence: Why Adaptability Beats Expertise

Feb 18, 2026 · 39:54


The future belongs to those who are willing to see past what worked before and design for what comes next. With intelligence becoming abundant, organizations are forced to face a new challenge: turning powerful tools into meaningful, ethical impact. Responsible AI use has come to depend more on culture, operations, and leadership mindset than on technology. For this to work, curiosity must be expected, experimentation must be safe, and systems must translate insight into action. With a combination of people, processes, and purpose, AI becomes a force multiplier rather than a checkbox. Those who are willing to rethink talent, reward better questions, and build operational guardrails that allow innovation without chaos have a real advantage in implementing AI successfully.

Join Nathan and Scott in this live conversation to discover why AI is a cultural shift, not a technology purchase; why organizations succeed with AI when they prioritize psychological safety and learning, and build systems to support action rather than only investing in tools; how adaptability becomes an advantage now that intelligence has become a commodity; how learning scales through transparency, and how safe failure builds confidence and accelerates responsible adoption across an organization; and the importance of addressing AI fatigue through manageable experimentation and visible learning.

HIGHLIGHTS
[00.55] The origin story and initial success of Fundraising.ai.
[03.46] The challenges of AI adoption in nonprofits.
[07.10] The importance of using AI as a thought partner.
[11.00] Cultural readiness, operational structure, and technical solutions.
[15.38] Cultural & talent adaptation in the AI era.
[18.44] Building a culture of curiosity that rewards experimentation, questioning, and learning.
[25.26] The need for acceptable use policies to ensure safe and effective AI adoption.
[32.02] Highlighting iteration, mistakes, and learning.
[36.23] The importance of addressing AI fatigue.

Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/

Innovation in Compliance with Tom Fox
Navigating AI: Governance, Risk with some Culture Thrown in with Matt Kunkel

Feb 17, 2026 · 29:00


Innovation spans many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, host Tom Fox interviews Matt Kunkel, CEO and Co-Founder at LogicGate, about the company's governance, risk, and compliance (GRC) platform and current market trends. Matt recounts his path into regulatory risk and compliance work that led to founding LogicGate and launching its Risk Cloud platform in 2015.

A major focus is AI governance. Tom and Matt explore how and why senior management is asking compliance teams to provide governance frameworks despite the absence of a single standard (e.g., NIST/ISO/SOC). Matt explains that organizations need scalable processes to triage and route large volumes of AI usage requests, apply guardrails based on data sensitivity and criticality, and avoid becoming a bottleneck to innovation. He emphasizes training and culture to address employee misuse, highlighting the risks of exposing proprietary data and the need to define what information is acceptable to input into AI models.

The discussion turns to LogicGate's culture and how it has been sustained during rapid, organic growth (no acquisitions). Matt outlines LogicGate's six values: Be as One, Embrace Your Curiosity, Empower Customers, Raise the Bar, Own It, and Do the Right Thing. For evaluating AI and modernizing compliance programs, he frames value in three outcomes: making money, reducing costs, or reducing risk, and describes LogicGate's value realization framework that translates efficiency and ROI into business terms. He also describes Risk Cloud as an orchestration layer for compliance programs and anticipates more “intentional AI” and selective use of agentic capabilities rather than fully autonomous end-to-end program execution.

Key highlights:
- From Consulting to GRC: Coding, Madoff Investigation, and Founding LogicGate
- Why AI Is Supercharging the “G” in GRC
- LogicGate's Culture Playbook: Values That Scale with Hypergrowth
- How to Evaluate AI Tools in Compliance: Proving Value, ROI, and “Intentional AI”
- Cybersecurity in 2026: AI-Powered Social Engineering, Deepfakes, and Risk Mapping
- What's Next for GRC by 2030: Agents, Responsible AI, and Tech as the Glue

Resources:
Matt Kunkel on LinkedIn
LogicGate

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Faces of Digital Health
What GTM Strategy Should Digital Health Startups Have in 2026? (Ruchi Dass)

Feb 13, 2026 · 51:51


Digital health is no longer in its honeymoon phase. The funding boom is over. AI hype is everywhere. Health systems are overwhelmed. And startups can no longer survive on compelling pitch decks alone. In this episode of Faces of Digital Health, Tjaša Zajc speaks with Ruchi Dass, a former dental surgeon turned public health leader, policy contributor, investor, and advisor to startups scaling across the US, UK, India, Africa, and the Middle East. Ruchi describes a fundamental change in go-to-market (GTM) strategy: workflow integration is non-negotiable (standalone apps struggle), reimbursement clarity is critical, and regulatory strategy is part of GTM, not an afterthought.

Time stamps:
00:06 – Introduction: startups, global markets, and unconventional careers
01:18 – From dental surgery to global public health and digital health
03:05 – The GTM shift: from promise to proof
04:49 – Staying investable: the four pillars
08:22 – AI ROI: clinical vs operational value
12:17 – Enterprise scaling and “sell to the mindset”
15:05 – Responsible AI: transparency, bias, and lifecycle regulation
19:56 – Predictability vs black-box AI in medicine
22:44 – Global innovation differences: Europe, India, Middle East, Africa
26:21 – Pilotitis: why pilots fail to scale
28:40 – Designing pilots for commercialization
30:26 – Capital flows, geopolitics, and reverse innovation
34:25 – The $1 teleconsultation model in India
37:56 – Digital health and equity: design vs digitization
42:43 – How regulators can keep up with AI
46:03 – Advice for Gen Z and Gen Alpha in digital health
48:50 – Grassroots realities shaping policy

Watch the full discussion: https://youtu.be/bmvPzz3Ffp4
www.facesofdigitalhealth.com
Newsletter: https://fodh.substack.com/

Behind the Blue
February 13, 2026 - Ky State Senator Amanda Mays Bledsoe (UK, Kentucky, and responsible AI development)

Behind the Blue

Play Episode Listen Later Feb 12, 2026 27:45


LEXINGTON, Ky. (February 13, 2026) – Artificial intelligence is moving fast — and Kentucky lawmakers are working to make sure the state can take advantage of new tools without sacrificing transparency, privacy or public trust. On this episode of 'Behind the Blue', Kentucky State Senator Amanda Mays Bledsoe — a Lexington native and University of Kentucky alum — joins host Kody Kiser to talk about her path into public service, what she's hearing from constituents in Senate District 12, and how she views UK's land-grant mission of service to communities across the Commonwealth.
Bledsoe represents parts of Fayette County along with Woodford, Mercer and Boyle counties. In the conversation, she points to infrastructure — including roads and aging water and wastewater systems — as a major concern for the region, while also highlighting the role higher education, signature industries and health care play in central Kentucky's future.
The interview also explores Bledsoe's emerging leadership on technology policy, including Kentucky Senate Bill 4, which she describes as a framework for "responsible AI governance" within state government. Bledsoe explains that the goal is not to regulate every minor use of technology, but to establish guardrails for higher-risk, decision-making tools — including creating transparency around where and how AI is used, and building oversight to ensure accountability.
"AI is not spellcheck," Bledsoe said, emphasizing the need for stronger scrutiny when government systems generate new outputs or influence decisions. She also discusses concerns around deceptive AI-generated political content and the importance of ensuring voters can trust what they see — particularly in the final days leading up to an election.
Looking ahead, Bledsoe points to a wide range of challenges and opportunities — from consumer protection and privacy to safeguarding minors online — and says Kentucky will likely need to keep refining its approach as the technology evolves. She also describes how institutions like UK can help shape the state's AI future through research, workforce preparation and teaching students to be critical, responsible users of these tools.
'Behind the Blue' is available via a variety of podcast providers, including Apple Podcasts, YouTube and Spotify. Subscribe to receive new episodes each week, featuring UK's latest medical breakthroughs, research, artists, writers and the most important news impacting the university.
'Behind the Blue' is a production of the University of Kentucky. Transcripts for most episodes are now embedded in the audio file and can be accessed in many podcast apps during playback. Transcripts for older episodes remain available on the show's blog page.
To discover how the University of Kentucky is advancing our Commonwealth, click here. This interview has been edited for time and clarity.

Irish Tech News Audio Articles
Learnovate launches RAIL initiative on responsible Artificial Intelligence for teaching and learning

Irish Tech News Audio Articles

Play Episode Listen Later Feb 11, 2026 5:54


Learnovate, a leading global future of work and learning research hub at Trinity College Dublin, is leading a new Community of Practice for AI implementers and practitioners involved in teaching and learning. The Responsible AI for Learning (RAIL) initiative will allow practitioners to share knowledge, interpret guidelines, and comply with AI regulations.
The initiative is made up of professionals from all four education domains: schools, higher education, vocational education and training, and professional education. It also includes representatives from the Department of Education, teaching unions, and other sectors.
RAIL was formed in November last year when more than 50 professionals in the education sector came together in Trinity College Dublin to discuss the need for a collective interpretation of the AI Advisory Council's guidelines on the use of AI in education. There was also agreement at the meeting on the need for a facility to share knowledge, discuss the opportunities and risks accompanying the use of AI in education, and support each other in complying with the EU AI Act.
RAIL will host its inaugural meeting on February 24, 2026. The one-hour event is one of three virtual meetings set to take place this year, with a fourth in-person event to follow in November. Those wishing to attend the free event can register at www.learnovatecentre.org/events
The February 24 meeting will be led by Dr Gill Ferrell, Executive Director for Europe of 1EdTech, a global organisation promoting and supporting education standards and protocols for K-12 through to higher education and professional education. She will deliver a presentation entitled 'A European and Global Perspective on AI in Education: Opportunity, Risk, and a Vision for the Future'.
Dr Ferrell's expertise is in understanding, managing and guiding the use of technology in learning. She has held senior roles with Jisc, the agency that manages shared services for education institutions and provides advice and guidance to UK education, and has published research in curriculum, student data, social media, assessment and feedback, and design of learning spaces. She has also worked with the Universities and Colleges Information Systems Association (UCISA) and the European University Information Systems Association (EUNIS).
The Community of Practice will be chaired in 2026 by Jonathan Dempsey, Commercial Lead for Diotima, an AI-enabled platform for formative assessment and feedback. Diotima supports teaching practice using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers.
In 2025, Diotima received €500,000 in funding from the Enterprise Ireland Commercialisation Fund, which helps third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima partnered with Learnovate in February last year and will spin out of Trinity College Dublin as a company in 2026.
The Learnovate Centre at Trinity College Dublin is a leading global future of work and learning research hub funded by Enterprise Ireland and IDA Ireland.
Learnovate Managing Director Nessa McEniff said: "Learnovate is delighted to lead the formation of Responsible AI for Learning, a new Community of Practice. The group was formed following the publication of the guidelines on the use of AI in education by the AI Advisory Council. 
Rather than try to interpret those guidelines in a silo, implementers and practitioners came together to establish a collective interpretation, share knowledge, and ensure compliance with AI regulations. We look forward to the inaugural virtual meeting of RAIL on February 24, 2026, the first of four planned for 2026, including one in-person meeting in November." RAIL Chair and Diotima Commercial Lead Jonathan Dempsey said: "Everyone involved in schools, highe...

TechFirst with John Koetsier
SLMs vs LLMs: 10% of the cost, 100% of the accuracy?

TechFirst with John Koetsier

Play Episode Listen Later Feb 10, 2026 18:17


Large language models have dominated the AI conversation — but are small language models (SLMs) actually the future? In this episode of TechFirst, host John Koetsier sits down with Andy Markus, SVP & Chief Data and AI Officer at AT&T, to unpack how small language models are delivering enterprise-grade accuracy at a fraction of the cost and latency of massive LLMs.
Andy explains how AT&T uses SLMs for:
• Contract analysis at massive scale
• Network analytics and outage root-cause analysis
• Fraud detection and enterprise knowledge systems
• AI-driven “field coding” and agent-based workflows
They also dive into the rise of agentic AI, how structured “archetypes” replace risky vibe coding, and why the future of software development may be humans supervising autonomous AI systems rather than writing every line of code.
If you're building AI for real-world, high-scale use cases — especially in enterprise environments — this conversation is essential.
⸻
Guest
Andy Markus
SVP & Chief Data and AI Officer, AT&T
Former SVP at Time Warner Media
⸻

The Business Development Podcast
How AMII Helps Businesses Adapt to AI with Adam Danyleyko

The Business Development Podcast

Play Episode Listen Later Feb 8, 2026 60:55


Episode 314 features Adam Danyleyko from AMII (the Alberta Machine Intelligence Institute) breaking down what AMII actually does and how they help organizations move from AI curiosity to real adoption. Adam explains AMII's foundation in world-class research and how the institute translates that research into industry impact by supporting everyone from startups to large corporations through training, shared AI language inside teams, roadmap building, and hands-on proof of concept work.
The real lesson of the episode is that adapting to AI starts with clarity, not hype. Adam walks through how the “right tool for the problem” mindset changes everything, why data strategy matters especially for startups, and why AI projects often require experimentation with no guaranteed outcome the way a typical software build might. He also touches on where AI is headed next through more efficient models, edge computing, and practical real-world constraints, plus how AMII screens work through a principled AI lens focused on impact, fairness, and responsible use.
Additional note: This episode also marks three years of The Business Development Podcast.
Follow Adam Danyleyko on LinkedIn: https://www.linkedin.com/in/adam-danyleyko/
Learn more about AMII: https://www.amii.ca
Key Takeaways:
AI is not a strategy on its own; it only works when it supports a clearly defined business problem.
Starting with the tool instead of the bottleneck almost always leads to wasted time and stalled initiatives.
Businesses need a shared AI language internally before they can successfully adopt or scale it.
Data readiness matters more than model choice when it comes to real-world AI outcomes.
AI projects often require experimentation, iteration, and learning rather than guaranteed deliverables.
The right AI solution depends on context, constraints, and environment, not what is trending.
Building internal capability is more sustainable than outsourcing all AI decision-making.
Responsible AI requires intentional choices around fairness, impact, and long-term use.
AI works best as an amplifier of good processes, not a fix for broken ones.
Organizations that adapt to AI successfully treat it as infrastructure, not a magic product.
This episode of The Business Development Podcast is proudly sponsored by Hypervac Technologies and Hyperfab, our 2026 Title Sponsors. We're incredibly grateful for their continued support of the show and the work they do building world-class industrial solutions right here in Canada. Hypervac and Hyperfab represent innovation, reliability, and execution at the highest level, and we genuinely appreciate them being part of this journey.
If you're in the industrial space, we highly encourage you to check them out at www.hypervac.com.
If you're the kind of...

The Brand Called You
Looi Teck Kheong, President, Global Council for Responsible AI (Singapore): Leading with Law, Technology & Ethical AI

The Brand Called You

Play Episode Listen Later Feb 7, 2026 107:34


Welcome to The Brand Called You! In this insightful episode, host Stephen Ibaraki sits down with Looi Teck Kheong, President of the Singapore Chapter of the Global Council for Responsible AI, to explore his remarkable journey across law, technology, public policy, cybersecurity, and AI governance.
From a curiosity-driven childhood to a defining pivot from engineering to law, Looi Teck Kheong shares how multidisciplinary thinking shaped his 25+ year career spanning private legal practice, ASEAN policy work, journalism, consulting, and teaching. The conversation dives deep into responsible AI, regional and global governance, South–South collaboration, and the growing impact of quantum computing on cybersecurity.
This episode also tackles pressing questions around AI ethics, existential risks, data privacy, and how societies can embrace rapid technological change while keeping humans at the center of innovation.
Whether you're a technologist, policymaker, entrepreneur, or lifelong learner, this conversation offers practical insights and thoughtful reflections on building a future guided by ethics, collaboration, and curiosity.

Leveraging Thought Leadership with Peter Winick
The Tech Humanist Playbook for Responsible AI | 693 | Kate O'Neill

Leveraging Thought Leadership with Peter Winick

Play Episode Listen Later Feb 5, 2026 21:46


What happens when your AI strategy moves faster than your team's ability to trust it, govern it, or explain it? In this episode of Leveraging Thought Leadership, Peter Winick sits down with Kate O'Neill—Founder & CEO of KO Insights, author of "What Matters Next", and globally recognized as a "tech humanist"—to unpack what leaders are getting dangerously wrong about digital transformation right now.
Kate challenges the default mindset that tech exists to serve the business first and humans second. She reframes the entire conversation as a three-way relationship between business, humans, and technology. That shift matters, because "human impact" isn't a nice-to-have. It's the core variable that determines whether innovation scales sustainably or collapses under backlash, risk, and regret.
You'll hear why so many companies are racing into AI with confidence on the surface and fear underneath. Boards want speed. Markets reward bold moves. But many executives privately admit they don't fully understand the complexity or consequences of the decisions they're being pressured to make. Kate gives language for that tension and practical frameworks for "future-ready" leadership that doesn't sacrifice long-term resilience for short-term acceleration.
The conversation gets real about what trust and risk actually mean in an AI-driven world. Kate argues that leaders need a better taxonomy of both—because without it, AI becomes a multiplier of bad decisions, not a generator of better ones. Faster isn't automatically smarter. And speed without wisdom is just expensive chaos.
Finally, Kate shares the larger mission behind her work: influencing the decisions that impact millions of people downstream. Her "10,000 Boardrooms for 1 Billion People" initiative is built around one big idea—if we want human-friendly tech at scale, we need better thinking at the top. Not performative ethics. Not buzzwords. Better decisions, made earlier, by the people with the power to set direction.
If you lead strategy, product, innovation, or culture—and you're feeling the pressure to "move faster" with AI—this episode gives you the language, frameworks, and leadership posture to move responsibly without losing momentum.
Three Key Takeaways:
• Human impact isn't a soft metric—it's a strategy decision. Kate reframes transformation as a three-way relationship between business, humans, and technology. If you don't design for the human outcome, the business outcome eventually breaks.
• AI speed without trust creates risk. Leaders feel pressure to move fast, but trust, governance, and clarity lag behind. Without a shared understanding of risk and responsibility, AI becomes a multiplier of bad decisions.
• Better decisions upstream create better outcomes at scale. Kate's "10,000 Boardrooms for 1 Billion People" idea drives home that the biggest lever isn't the tool—it's leadership judgment. The earlier the thinking improves at the top, the safer and more scalable innovation becomes.
If Kate's "tech humanist" lens made you rethink how you're leading AI and transformation, your next listen should be our episode 149 with Brian Solis. Brian goes deep on what most leaders miss—the human side of digital change, the behavioral ripple effects of technology, and why transformation only works when it's designed for people, not just performance. Queue it up now and pair the two episodes back-to-back for a powerful executive playbook: Kate helps you decide what matters next—Brian helps you understand what your customers and employees will do next.

Mission Matters Podcast with Adam Torres
MissionHealth.io Launch: Bobby Clark on Responsible AI for Small Health Teams

Mission Matters Podcast with Adam Torres

Play Episode Listen Later Feb 5, 2026 13:17


In this episode, Adam Torres interviews Bobby Clark, Founder & Executive Director of MissionHealth.io. Bobby shares the story behind launching MissionHealth.io, his focus on helping small mission-driven health organizations build capacity, and practical insights on responsible AI adoption—from AI literacy and governance to low-risk pilots that support long-term scale.
About Bobby Clark
Bobby Clark founded missionhealth.io after more than twenty years working across the health ecosystem, from federal policymaking to multinational companies, consultancies, and academia. His career has included senior roles in government, where he advised the U.S. Secretary of Health and Human Services and Congress on complex national health issues. He has also worked closely with the leadership teams of national nonprofits, foundations, and advocacy groups that shoulder major responsibilities and missions.
About MissionHealth.io
missionhealth.io is a capacity-building organization that strengthens individuals and teams working to improve health across the United States. They believe purpose-driven institutions and individuals are essential to advancing health and well-being. Their role is to magnify their impact by investing in the skills, systems, connections, and strategies that power their work. As a capacity-building partner, they focus on developing the core capabilities that help people grow, adapt, lead, and deliver results.
Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up-to-date information on book releases and tour schedule.
Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/
Visit our website: https://missionmatters.com/
More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Mission Matters Innovation
MissionHealth.io Launch: Bobby Clark on Responsible AI for Small Health Teams

Mission Matters Innovation

Play Episode Listen Later Feb 5, 2026 13:17


In this episode, Adam Torres interviews Bobby Clark, Founder & Executive Director of MissionHealth.io. Bobby shares the story behind launching MissionHealth.io, his focus on helping small mission-driven health organizations build capacity, and practical insights on responsible AI adoption—from AI literacy and governance to low-risk pilots that support long-term scale.
About Bobby Clark
Bobby Clark founded missionhealth.io after more than twenty years working across the health ecosystem, from federal policymaking to multinational companies, consultancies, and academia. His career has included senior roles in government, where he advised the U.S. Secretary of Health and Human Services and Congress on complex national health issues. He has also worked closely with the leadership teams of national nonprofits, foundations, and advocacy groups that shoulder major responsibilities and missions.
About MissionHealth.io
missionhealth.io is a capacity-building organization that strengthens individuals and teams working to improve health across the United States. They believe purpose-driven institutions and individuals are essential to advancing health and well-being. Their role is to magnify their impact by investing in the skills, systems, connections, and strategies that power their work. As a capacity-building partner, they focus on developing the core capabilities that help people grow, adapt, lead, and deliver results.
Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up-to-date information on book releases and tour schedule.
Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/
Visit our website: https://missionmatters.com/
More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Pondering AI
Navigating AI in Banking with Theodora Lau

Pondering AI

Play Episode Listen Later Feb 4, 2026 52:20


Theodora Lau banks on AI becoming our financial GPS and OS but flags required waypoints to protect consumer data rights, maintain trust and close the digital divide.
Theo and Kimberly discuss the progression toward a financial GPS powered by AI; consumer data rights and trust; the billion dollar question for 2026; analog identity verification; reducing risk and improving the customer experience; valuing people above transactions; the widening digital divide; upskilling and reskilling; cultivating curiosity and reclaiming time; financial security as the foundation for health; agentic commerce and AI as the financial OS; and always being human.
Theodora Lau is the Founder of Unconventional Ventures. A prolific speaker, author and advisor, Theo is one of American Banker's Top 20 Influential Women in FinTech. Recognizing that health and financial security are innately entwined, Theo works to spark innovation in the public and private sectors to meet the needs of underrepresented consumers.
Related Resources:
Banking on (Artificial) Intelligence (book)
One Vision Podcast (RSS feed)
Unconventional Ventures (company)
A transcript of this episode is here.

The Recruiting Brainfood Podcast
Brainfood Live On Air - Ep359 - What Responsible AI Means for Recruiters?

The Recruiting Brainfood Podcast

Play Episode Listen Later Feb 4, 2026 62:08


WHAT RESPONSIBLE AI MEANS FOR RECRUITERS?
What is Responsible AI for Talent Acquisition Teams? is a practical, straight-talking podcast for recruiters who are already using AI—and now need to make sure they're using it properly.
AI is no longer a future experiment in hiring. It's embedded in sourcing, screening, assessment, and workforce planning. The real question facing TA leaders today is not whether to use AI, but how to use it in a way that stands up to governance scrutiny, fairness expectations, and growing legal risk. Regulators, candidates, and internal stakeholders are all paying closer attention—and the margin for error is shrinking.
This podcast explores the reality behind “responsible AI” in talent acquisition, cutting through vague principles and focusing on what recruiters actually need to know. We'll examine why so many organisations still lack formal AI governance, why confidence in bias reduction remains low, and what that means for teams deploying AI at scale. Drawing on 2024–2025 data and real-world TA use cases, the discussion will unpack the tension between automation, efficiency, and human accountability.
Key areas covered include:
• What responsible AI really means in a TA context
• Governance frameworks recruiters should understand—even if legal owns them
• Bias, fairness, and explainability in screening and assessment tools
• Legal and regulatory risk, including emerging obligations under the EU AI Act and employment law
• The role of recruiters as AI operators, not just end users
• How to balance speed, cost savings, and candidate trust
• What “human-in-the-loop” looks like in practice
Listeners will learn how to evaluate their current AI stack, ask better questions of vendors, reduce risk exposure, and build hiring processes that are efficient, defensible, and fair.
We're with Martyn Redstone, Head of Responsible AI & Industry Engagement (Warden.AI) & friends on Wednesday 4th February, 12pm GMT. Register by clicking on the green button (save my spot) and follow the channel here (recommended).
Ep359 is sponsored by Oleeo
AI is now used by 62% of companies for hiring, but rapid efficiency shouldn't come at the expense of fairness. Oleeo and Aptitude Research's new report highlights a major gap: only 20% of employers have fully established AI governance frameworks, which can lead to unintended bias. To keep things fair and compliant, 85% of recruiters demand final decision-making authority.
Download 'Setting the Standard for Responsible AI: A Guide For Modern Recruiters' today to build a transparent, human-led strategy that uses AI responsibly.

Data Science Salon Podcast
Beyond the Model: Building Scalable, Responsible AI Systems

Data Science Salon Podcast

Play Episode Listen Later Feb 3, 2026 27:47


Dushyanth shares his journey into AI, the challenges of building complex pipelines, and how to integrate responsible and ethical practices into machine learning workflows.
Key Highlights:
Scaling AI Systems: How to design and deploy pipelines that handle real-time inference, multimodal data, and production-level demands.
Model Interpretability & Explainability: Strategies for making complex models understandable and accountable.
Optimizing AI for Real-World Impact: Balancing performance, robustness, and human oversight in AI systems.
Responsible AI Practices: Embedding ethics, fairness, and transparency in machine learning workflows.

ITSPmagazine | Technology. Cybersecurity. Society
AI Art vs Human Creativity — The Real Difference and why AI Cannot Be An Artist | A Conversation with AI Expert Andrea Isoni, PhD, Chief AI Officer, AI speaker | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Feb 2, 2026 30:14


The Last Touch: Why AI Will Never Be an Artist
I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.
Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.
His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.
That final touch is everything.
Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.
AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.
This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.
We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.
That's not a being we can relate to. And that's the point.
Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.
The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, experience. AI starts from zero. Every time.
Andrea put it simply: AI contextualization today is close to zero.
I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.
Stay curious. 
Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.
Marco Ciappelli
Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Disruption Now
Disruption Now Episode 191 | Your Refrigerator Is More Cyber Secure than Your Computer?

Disruption Now

Play Episode Listen Later Jan 30, 2026 50:16


On this episode 191 of the Disruption Now podcast: What happens when an algorithm knows more about your health than your doctor ever will? When AI can process threats faster than any human operator? When China, Russia, Iran, and North Korea are probing our systems 24/7?
Dr. Richard Harknett has spent 30+ years answering these questions at the highest levels. As the first Scholar-in-Residence at US Cyber Command and NSA, a key architect of the US Cybersecurity Strategy 2023, and Fulbright Professor in Cyber Studies at Oxford, he's one of the few people who's seen how cyber threats actually unfold—and what we're doing (or not doing) about them.
In this conversation, Richard breaks down:

Beyond The Valley
Can We Control AI? DeepMind's Plan for Responsible AI

Beyond The Valley

Play Episode Listen Later Jan 29, 2026 44:25


Google DeepMind's Dawn Bloxwich and Tom Lue join "The Tech Download" to explore one of the biggest questions in technology today: Can we control AI? They break down how DeepMind is building safeguards, stress-testing its models and working with global regulators to ensure advanced AI develops responsibly.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

BCG on Compliance
Beyond AI Policies: The Ethical Nightmare Challenge with Reid Blackman

BCG on Compliance

Play Episode Listen Later Jan 29, 2026 23:06


AI risk is moving faster than most organizations can write, let alone implement, new policies.
By the time a “Responsible AI” policy is approved, Generative AI has shifted the goalposts and “Agentic AI” is already raising the stakes for compliance professionals.
In this episode, our host Hanjo Seibert sits down with Reid Blackman, author of The Ethical Nightmare Challenge. This book unpacks a more innovative, scalable approach to AI governance: one that's designed to be rapidly implementable, adaptable, and pilotable, without turning governance into a bottleneck.
You'll also hear why his approach is meant to work across levels (i.e. project, department, enterprise), and why cross-functional collaboration is non-negotiable when AI touches everything.
We're kicking off a brand new series of BCG on Compliance with our 2026 opener and we're excited to bring you more stories from the ever-evolving world of compliance.
On BCG on Compliance, we dive deep into the extraordinary minds driving change—from ethics champions to risk innovators, bringing rich insights from global players shaping the future of compliance, in a dynamic and compact episode.
Whether you're a seasoned pro or new to the field, BCG on Compliance is your quick, comprehensive guide. Connect with us at bcgoncompliance@bcg.com. New episodes are released monthly. Listen wherever you get your podcasts.
Episode Links:
Reid Blackman LinkedIn
The Ethical Nightmare Challenge (book)
Hanjo Seibert LinkedIn
BCG Website
BCG LinkedIn
Apple: https://podcasts.apple.com/ca/podcast/bcg-on-compliance/id1716794444

The Cloud Pod
340: Azure releases a new SQL AI Assistant… Jimmy Droptables

The Cloud Pod

Play Episode Listen Later Jan 27, 2026 73:07


Welcome to episode 340 of The Cloud Pod, where the forecast is always cloudy! It's a full house (eventually) with Justin, Jonathan, Ryan, and Matt all on board for today's episode. We've got a lot of announcements, from Gemini for Gov (no more CamoGPT!) to Route 52 and Claude. Let's get started!
Titles we almost went with this week:
Claude's Pricing Tiers: Free, Pro, and Maximum Overdrive
GitHub Copilot Learns Database Schema: Finally an AI That Understands Your Joins
SSMS Gets a Copilot: Your T-SQL Now Writes Itself While You Grab Coffee
Too Many Cooks in the Cloud Kitchen: How 32 GPUs Outcooked the Big Tech Industrial Kitchens
Uncle Sam Gets a Gemini Twin: Google's AI Goes Federal
Route 53 Gets Domain of Its Own: .ai Joins the Party
Thai One On: Google Cloud Plants Its Flag in Bangkok
NAT So Fast: Azure's Gateway Gets a V2 Glow-Up
Beware Azure's SQL Assistant doesn't smoke your joints.
AI Is Going Great, Or How ML Makes Money
30:10 Announcing BlackIce: A Containerized Red Teaming Toolkit for AI Security Testing | Databricks Blog
Databricks released BlackIce, an open-source containerized toolkit that bundles 14 AI security testing tools into a single Docker image available on Docker Hub as databricksruntime/blackice:17.3-LTS. The toolkit addresses common red teaming challenges, including conflicting dependencies, complex setup requirements, and the fragmented landscape of AI security tools, by providing a unified command-line interface similar to how Kali Linux works for traditional penetration testing.
The toolkit includes tools covering three main categories: responsible AI, security testing, and classical adversarial ML, with capabilities mapped to MITRE ATLAS and the Databricks AI Security Framework. Tools are organized as either static (simple CLI-based with minimal programming needed) or dynamic (Python-based with customization options), with static tools isolated in separate virtual environments and dynamic tools in a global environment with managed dependencies.
BlackIce integrates directly with Databricks Model Serving endpoints through custom patches applied to several tools, allowing security teams to test for vulnerabilities like prompt injections, data leakage, hallucination detection, jailbreak attacks, and supply chain security issues. Users can deploy it via Databricks Container Services by specifying the Docker image URL when creating compute clusters.
The release includes a demo notebook showing how to orchestrate multiple security tools in a single environment, with all build artifacts, tool documentation, and examples available in the GitHub repository. The CAMLIS Red Paper provides additional technical details on tool selection criteria and the Docker image architecture.
04:30 Ryan – “It's very difficult to feel confident in your AI security practice or patterns. I feel like it's just bleeding edge, and I

The Tech Blog Writer Podcast
3564: Why Banking Is the Ultimate Test for Responsible AI

The Tech Blog Writer Podcast

Play Episode Listen Later Jan 23, 2026 34:15


If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical.
We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue; it can damage trust, disrupt livelihoods, and undermine confidence in an institution.
A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation.
We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control.
As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability.
If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?

Disruption Now
Disruption Now Episode 190 | What Is Explainable AI?

Disruption Now

Play Episode Listen Later Jan 23, 2026 48:41


Dr. Kelly Cohen is a Professor of Aerospace Engineering at the University of Cincinnati and a leading authority in explainable, certifiable AI systems. With more than 31 years of experience in artificial intelligence, his research focuses on fuzzy logic, safety-critical systems, and responsible AI deployment in aerospace and autonomous environments. His lab's work has received international recognition, with students earning top global research awards and building real-world AI products used in industry.
In this episode 190 of the Disruption Now Podcast,

Disruption Now
Disruption Now Episode 189 | WHY AI AGENTS NEED CRYPTO

Disruption Now

Play Episode Listen Later Jan 23, 2026 43:06


Bitcoin and blockchain are reshaping money, trust, and the future of AI. In Episode 189 of the Disruption Now Podcast, Rob Richardson sits down with Andrew Burchwell, Executive Director of the Ohio Blockchain Council, to break down why blockchain matters more than ever—and why understanding it now is critical for anyone navigating the next decade of technology.
This conversation dives into Bitcoin's core value as a trust engine, why blockchain is essential for the AI era, how decentralized systems empower individuals and communities, and the massive economic transformation coming to states like Ohio. Andrew shares the personal story behind his leap from a secure energy-tech career into full-time blockchain advocacy, why his faith guided the transition, and how local policy can unlock global innovation.
We unpack the realities behind Bitcoin's volatility, long-term value, inflation, the S-curve of exponential tech adoption, and why blockchain should be seen as a utility—not a gamble. You'll learn how agentic AI will depend on blockchain rails for payments, how on-chain verification combats deepfakes, and why crypto is a once-in-a-generation opportunity for regions left behind by globalization.
Andrew also shares why privacy matters, what people misunderstand about crypto, where regulation should (and shouldn't) go, and why the next five years will be the fastest technological pivot in human history.
If you've been curious, skeptical, or overwhelmed by crypto, this is the conversation that makes it all click.
What You'll Learn:
Why Bitcoin is “better, faster, more secure money”
How blockchain + AI together solve trust, speed, and verification gaps
Why inflation quietly erodes wealth and how Bitcoin counters it
The real difference between gambling memes vs. real digital assets
How agentic AI will need blockchain for payments and micro-transactions
Why Ohio is emerging as a national leader in blockchain policy
How decentralized tech can help rebuild forgotten communities
Where privacy, transparency, and security intersect in Web3
Chapters:
00:00 Welcome & Andrew's story
03:15 Why he left a secure career for blockchain
09:45 The meaning of Bitcoin as sound money
14:20 Inflation, trust, and why blockchain matters
19:30 Blockchain + AI: the critical connection
26:40 Privacy, regulation & misuse: what's real
33:10 Meme coins vs. real utility
38:20 The next 5 years: “The Pivot”
42:15 What's ahead for Ohio & crypto
Quick Q&A:
Q: Why does Bitcoin matter today?
A: It creates trust, speed, and financial sovereignty in a system where inflation and centralization reduce purchasing power.
Q: How do AI and blockchain work together?
A: AI creates speed; blockchain creates trust and verification. Together they enable secure agentic automation.
Q: What do people misunderstand most about crypto?
A: They confuse speculation with utility. Blockchain's long-term value is in its function, not its hype.
Connect with Andrew Burchwell:
Website / Organization: https://ohioblockchain.org/
X (Twitter): https://x.com/AndrewBurchwell
LinkedIn: https://www.linkedin.com/in/andrew-burchwell-a7284994/
Ohio Blockchain Council (organization page): https://ohioblockchain.org/
Be a guest on the podcast or subscribe to our newsletter
All our links - https://linktr.ee/disruptionnow
#Blockchain #Bitcoin #Web3 #aiagents
Music credit: calm before storm - moñoñaband

Edtech Insiders
John Gamba on What EdTech Needs to Get Right About AI, Scale, and Learning Outcomes at Catalyst @ Penn GSE

Edtech Insiders

Play Episode Listen Later Jan 23, 2026 51:51 Transcription Available


John Gamba is Entrepreneur in Residence at Catalyst @ Penn GSE, where he mentors education entrepreneurs and leads the Milken-Penn GSE Education Business Plan Competition. Over 17 years, the competition has awarded $2M to ventures that have gone on to raise more than $200M in follow-on funding, with a strong focus on equity and research-to-practice impact.

High-Impact Growth
Beyond “AI for Good”: Building Responsible AI

High-Impact Growth

Play Episode Listen Later Jan 22, 2026 50:17


AI is moving fast in global health and development, but good intentions alone don't guarantee good outcomes. In this episode of High-Impact Growth, we sit down with Genevieve Smith, Founding Director of the Responsible AI Initiative at UC Berkeley's AI Research Lab, to unpack what it really means to build and deploy AI responsibly in contexts that matter most.
Drawing on a decade of experience in international development and cutting-edge research on AI bias, Genevieve explains why labeling a project “AI for good” isn't enough. She introduces five critical lenses – fairness, privacy, security, transparency, and accountability – that program managers and product leaders must apply to avoid reinforcing existing inequalities or creating new risks.
The conversation explores real-world examples, from AI-driven credit assessments that unintentionally disadvantage women, to the challenges of deploying generative AI in low-resource and multilingual settings. Genevieve also shares emerging alternatives, like data cooperatives, that give communities governance over how their data is used, shifting power toward trust, agency, and long-term impact.
This episode offers practical insights for anyone navigating the hype, pressure, and promise of AI in development, and looking to get it right.
Responsible AI Initiative – UC Berkeley AI Research Lab – A multidisciplinary initiative advancing research and practice around responsible, trustworthy AI.
Mitigating Bias in Artificial Intelligence – A playbook for business leaders who build & use AI to unlock value responsibly & equitably.
UC Berkeley AI Research (BAIR) – A leading AI research lab focused on advancing the science and real-world impact of artificial intelligence.
Fairness, Accountability, and Transparency (FAccT) Conference – A major interdisciplinary conference on ethical and responsible AI systems.
UN Women – An organization referenced in Genevieve's background, focused on gender equality and women's empowerment globally.
International Center for Research on Women (ICRW) – A research organization mentioned in the episode, specializing in gender, equity, and inclusive development.
Sign up to our newsletter, and stay informed of Dimagi's work.
We are on social media - follow us for the latest from Dimagi: LinkedIn, Twitter, Facebook, Youtube
If you enjoy this show, please leave us a 5-Star Review and share your favorite episodes with friends.
Hosts: Jonathan Jackson and Amie Vaccaro

Pondering AI
AI Is As Data Does with Gretchen Stewart

Pondering AI

Play Episode Listen Later Jan 21, 2026 47:03


Gretchen Stewart knows she doesn't know it all, always asks why, challenges oversimplified AI stories, champions multi-disciplinary teams and doubles down on data.
Gretchen and Kimberly discuss conflating GenAI with AI, data as the underpinning for all things AI, workflow engineering, AI as a team sport, organizational and data siloes, programming as a valued skill, agentic AI and workforce reductions, the complexity inherent in an interconnected world, data volume vs. quality, backsliding on governance, not knowing it all and diversity as a force multiplier.
Gretchen Stewart is a Principal Engineer at Intel. She serves as the Chief Data Scientist for the public sector and is a member of the enterprise HPC and AI architecture team. A self-professed human to geek translator, Gretchen was recently nominated as a Top 100 Data and AI Leader by OnConferences.
A transcript of this episode is here.

HerCsuite™ Radio - For Women Leaders On The Move
Balancing Trust, Security, and AI's Potential with Tina Lampe

HerCsuite™ Radio - For Women Leaders On The Move

Play Episode Listen Later Jan 21, 2026 20:47


As AI tools become essential for efficiency, they also introduce new vulnerabilities for founders and companies. Tina Lampe, a cybersecurity expert and Responsible AI advocate, joins host Natalie Benamou to discuss risks and data exposure. They reveal why AI note-takers should never be used for any board meeting and the legal risks of data discoverability. Tina shares how to balance these risks against all the potential AI can create in the world.
Listen in to future-proof your business while staying secure.
Keep shining your light bright. The world needs you.
About Our Guest Tina Lampe
Tina Lampe is a Technology and Digital Risk Executive with extensive expertise as a speaker, board director and AI and Cybersecurity leader. Tina is an AI Luminary in the NEXT2LEAD AI Council powered by HerCsuite®.
Tina Lampe on LinkedIn
HerCsuite® is a leadership network where women build what's next. Our members land board roles, grow businesses, lead the AI conversation, and live their best portfolio career with our programs. Join us at HerCsuite.com. Connect with host Natalie Benamou on LinkedIn.

AI for Kids
Fan Favorites Replay: How a Puzzle-Loving Kid Became an Expert in AI and Robotics (Middle+)

AI for Kids

Play Episode Listen Later Jan 20, 2026 34:01 Transcription Available


This week, we're sharing a fan-favorite replay, an episode that ranks in the top ten of our all-time most listened-to episodes. In this week's replay episode we unlock the secrets of building adaptive, personalized robots with Dr. Randi Williams, a leading figure in AI and robotics, as she shares her journey from a math-obsessed child inspired by Jimmy Neutron to a pioneering expert aiming to make technology fairer and more inclusive. Dr. Williams takes us behind the scenes of her work at the Algorithmic Justice League (AJL), discussing the triumphs and challenges of creating robots that can truly engage with humans. Through the lens of projects like PopBots, you'll discover how even preschoolers can grasp foundational AI concepts and start innovating from an early age.
Hear the inspiring story of a young learner who programmed a multilingual robot, and explore the engaging tools and platforms like MIT's Playground that make learning AI fun and accessible. Finally, we tackle the crucial issue of algorithmic bias and the importance of diverse data sets in AI training. This episode underscores how creativity and a passion for learning can drive meaningful advancements in AI and robotics.
Resources for parents and kids:
Preschool-Oriented Programming (POP) Platform PopBots
Playground Raise MIT
Day of AI
Turing Test Game
Unmasking AI
Coded Bias
Personal Robots Group
Scratch
National Coding Week
Hey parents and teachers, if you want to stay on top of the AI news shaping your kids' world, subscribe to our weekly AI for Kids Substack: https://aiforkidsweekly.substack.com/
Help us become the #1 podcast for AI for Kids and best AI podcast for kids, parents, teachers, and families.
Buy our debut book “AI… Meets… AI”
Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Books on Amazon or Free AI Worksheets
Listen, rate, and subscribe!
Apple Podcasts
Amazon Music
Spotify
YouTube
Other
Like o...

Manager Memo podcast
Glia: Automate and Elevate

Manager Memo podcast

Play Episode Listen Later Jan 18, 2026 36:15


Rick DeLisi is an author and Lead Research Analyst at Glia, an online leader in Digital Customer Service. Rick shares his expertise on integrating AI into work processes to achieve effortless interaction. Enjoy the listen.
Along the way we discuss AI for All (5:00), AI and new product launch (10:00), AI Pre-Op (15:30), the Chainsaw Analogy (17:30), Communicating with Bots (19:00), flying the plane (21:00), dealing with the skeptics (21:45), Glia, Data Security, and Responsible AI (25:15), and the AI 24/7 Focus Group (31:15).
Empower your teams and drive revenue @ Glia, AI Built for Community Impact.
This podcast is partnered with LukeLeaders1248, a nonprofit that provides scholarships for the children of military Veterans. Send a donation, large or small, through PayPal @LukeLeaders1248; Venmo @LukeLeaders1248; or our website @ www.lukeleaders1248.com.
Music intro and outro from the creative brilliance of Kenny Kilgore. Lowriders and Beautiful Rainy Day.

ITSPmagazine | Technology. Cybersecurity. Society
CES 2026 Recap | AI, Robotics, Quantum, And Renewable Energy: The Future Is More Practical Than You Think | A Conversation with CTA Senior Director and Futurist Brian Comiskey | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jan 17, 2026 23:55


CES 2026 Just Showed Us the Future. It's More Practical Than You Think.
CES has always been part crystal ball, part carnival. But something shifted this year.
I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.
Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.
The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.
Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.
Then there's energy—the quiet enabler hiding behind the AI headlines.
Korea Hydro Nuclear Power demonstrated small modular reactors. Next-generation nuclear that could cleanly power cities with minimal waste. A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.
Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.
Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a 2030s defining technology, but we're in the back half of the 2020s now. The runway is shorter than we thought.
His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.
Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.
CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle.
That's a future worth watching.
This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage. 
Subscribe to stay informed as technology and humanity continue to intersect.
Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Trust Issues
EP 23 - Red teaming AI governance: catching model risk early

Trust Issues

Play Episode Listen Later Jan 14, 2026 34:37


AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.
Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.
The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.
Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.

Breakfast Leadership
Deep Dive: When AI Becomes More Than a Tool — How AI Predictions from PwC Signal a New Era for Work, Culture, and Leadership

Breakfast Leadership

Play Episode Listen Later Jan 9, 2026 14:57


Introduction

In this Deep Dive episode, we explore PwC's latest AI Business Predictions — a roadmap offering insight into how companies can harness artificial intelligence not just for efficiency, but as a strategic lever to reshape operations, workforce, and long-term growth. We examine why "AI adoption" is now about more than technology: it's about vision, leadership, and rethinking what work and human potential look like in a rapidly shifting landscape.

Key Insights from PwC

AI success is as much about vision as about adoption. According to PwC, what separates companies that succeed with AI from those that merely dabble is leadership clarity and strategic alignment. Firms that view AI as central to their business model — rather than as an add-on — are more likely to reap measurable gains.

AI agents can meaningfully expand capacity — even double workforce impact. One bold prediction: with AI agents and automation, a smaller human team can produce work at a scale that might resemble having a much larger workforce — without proportionally increasing staff size. For private firms especially, this means you can "leapfrog" traditional growth limitations.

From pilots to scale: real ROI is emerging, but it requires discipline. While many organizations experimented with AI in 2023–2024, PwC argues that 2025 and 2026 are about turning experiments into engines of growth. The companies that succeed are those that pick strategic, high-impact areas, double down, and avoid spreading efforts too thin.

Workforce composition will shift, with the rise of the "AI generalist." As AI agents take over more routine, data-heavy, or repetitive tasks, human roles will trend toward design, oversight, strategy, and creative judgment. The "AI generalist" — someone who can bridge human judgment, organizational culture, and AI tools — will become increasingly valuable.

Responsible AI, governance, and sustainability are non-negotiable. PwC insists that success with AI isn't just about technology rollout; it's also about embedding ethical governance, sustainability, and data integrity. Organizations that treat AI as a core piece of long-term strategy — not a flashy add-on — will be the ones that unlock lasting value.

What This Means for Leaders, Culture & Burnout (Especially for Humans, Not Just AI)

An opportunity to reimagine roles — more meaning, less drudgery. As AI takes over repetitive, transactional work, human roles can shift toward creativity, strategy, mentorship, emotional intelligence, and leadership. That aligns with our mission around workplace culture and "Burnout-Proof" leadership: done thoughtfully, this could reduce burnout rather than worsen it.

Culture becomes the strategic differentiator. As more companies adopt similar AI tools, organizational vision, values, psychological safety, and human connection may become the real competitive edge. Leaders who "get culture right" will be ahead — not because of tech, but because of people.

Upskilling, transparency, and trust are essential. With AI in the mix, employees need clarity, training, and trust. Mismanaged adoption could lead to fear, resistance, or misalignment. Leaders must shepherd not just the technology, but the human transition.

AI-driven efficiency must be balanced with empathy and human-centered leadership. The automation and "workforce multiplier" potential is seductive — but if leaders lose sight of human needs, purpose, and wellbeing, there's a risk of burnout, disengagement, or erosion of cultural integrity.

For small and private companies: a chance to leapfrog giants — but only with clarity and discipline. Smaller firms often lack the resources of large enterprises, but according to PwC, those constraints may shrink when AI is used strategically. For mission-driven companies, this creates an opportunity to scale impact — provided leadership stays grounded in purpose and values.

Why This Topic Matters for the Breakfast Leadership Network & Our Audience

Given our work in leadership development, burnout prevention, workplace culture, and coaching, PwC's predictions offer a crucial lens. Organizations can no longer afford to ignore AI. The question isn't "Will we use AI?" but "How will we use AI — and who do we become in the process?" For founders, people leaders, and HR strategists, this is a call to be intentional: to lead with vision grounded in human values, and to design workplaces that thrive in the AI era rather than suffer in it.

Questions for Reflection

- What parts of your organization's workflow could be transformed by AI — and what human strengths should those tools free up rather than replace?
- How might embracing AI shift your organizational culture and the expectations for leaders?
- What ethical, psychological, or human-impact considerations must you address before "going all in" on AI?
- As a leader, how will you ensure the "AI generalists" — employees blending tech fluency with empathy, creativity, and human judgment — are cultivated and supported?
- How do you prevent burnout and disconnection while dramatically increasing capacity and output via AI?

Learn more at https://BreakfastLeadership.com/blog

Research: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html

Product Talk
Cognizant Senior Director on Building Responsible AI Gateways for Healthcare at Scale

Product Talk

Play Episode Listen Later Jan 7, 2026 38:22


How do you scale generative AI in healthcare without sacrificing trust, transparency, or governance? In this episode, hosted by Mphasis Vice President of Products Chenny Solaiyappan, Cognizant Senior Director Elliot Papadakis shares how Cognizant is building and operationalizing an AI gateway that sits at the center of its responsible AI strategy. Elliot discusses embedding generative AI into payer workflows, designing human-in-the-loop guardrails, and using AI orchestration to unlock productivity gains across complex healthcare systems, all while keeping accountability and patient impact front and center.