In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use. Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.
Topics Discussed:
- EdTech Thought: Chris debates the “Tech or No Tech” question in modern classrooms
- EdTech Recommendations:
  - https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
  - https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
  - https://whataicandotoday.com/ - We've analysed 16362 AI Tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83054 tasks of what AI can do today.
- Why classrooms need a responsible AI policy
- A five-part framework to build your AI classroom policy:
  1. Define What AI Is (and Isn't)
  2. Clarify When and How AI Can Be Used
  3. Promote Transparency and Attribution
  4. Include Privacy and Tool Approval Guidelines
  5. Make It Collaborative and Flexible
- The importance of modeling digital citizenship and AI literacy
- Free editable AI policy template by Chris for grades K–12
Mentions:
- Mike Brilla – The Inspired Teacher podcast
- Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book
"If you're going to be running a very elite research institution, you have to have the best people. To have the best people, you have to trust them and empower them. You can't hire a world expert in some area and then tell them what to do. They know more than you do. They're smarter than you are in their area. So you've got to trust your people. One of our really foundational commitments to our people is: we trust you. We're going to work to empower you. Go do the thing that you need to do. If somebody in the labs wants to spend 5, 10, 15 years working on something they think is really important, they're empowered to do that." - Doug Burger Fresh out of the studio, Doug Burger, Technical Fellow and Corporate Vice President at Microsoft Research, joins us to explore Microsoft's bold expansion into Southeast Asia with the recent launch of the Microsoft Research Asia lab in Singapore. From there, Doug shares his accidental journey from academia to leading global research operations, reflecting on how Microsoft Research's open collaboration model empowers over thousands of researchers worldwide to tackle humanity's biggest challenges. Following on, he highlights the recent breakthroughs from Microsoft Research for example, the quantum computing breakthrough with topological qubits, the evolution from lines of code to natural language programming, and how AI is accelerating innovation across multiple scaling dimensions beyond traditional data limits. Addressing the intersection of three computing paradigms—logic, probability, and quantum—he emphasizes that geographic diversity in research labs enables Microsoft to build AI that works for everyone, not just one region. Closing the conversation, Doug shares his vision of what great looks like for Microsoft Research with researchers driven by purpose and passion to create breakthroughs that advance both science and society. 
Episode Highlights: [00:00] Quote of the Day by Doug Burger [01:08] Doug Burger's journey from academia to Microsoft Research [02:24] Career advice: Always seek challenges, move when feeling restless or comfortable [03:07] Launch of Microsoft Research Asia in Singapore: Tapping local talent and culture for inclusive AI development [04:13] Singapore lab focuses on foundational AI, embodied AI, and healthcare applications [06:19] AI detecting seizures in children and assessing Parkinson's motor function [08:24] Embedding Southeast Asian societal norms and values into Foundational AI research [10:26] Microsoft Research's open collaboration model [12:42] Generative AI's rapid pace accelerating technological innovation and research tools [14:36] AI revolutionizing computer architecture by creating completely new interfaces [16:24] Open versus closed source AI models debate and Microsoft's platform approach [18:08] Reasoning models enabling formal verification and correctness guarantees in AI [19:35] Multiple scaling dimensions in AI beyond traditional data scaling laws [21:01] Project Catapult and Brainwave: Building configurable hardware acceleration platforms [23:29] Microsoft's 17-year quantum computing journey with topological qubits breakthrough [26:26] Balancing blue-sky foundational research with application-driven initiatives at scale [29:16] Three computing paradigms: logic, probability (AI), and quantum superposition [32:26] Microsoft Research's exploration-to-exploitation playbook for breakthrough discoveries [35:26] Research leadership secret: Curiosity across fields enables unexpected connections [37:11] Hidden Mathematical Structures in the Transformer Architecture of LLMs [40:04] Microsoft Research's vision: Becoming the Bell Labs of the AI era [42:22] Steering AI models for mental health and critical thinking conversations Profile: Doug Burger, Technical Fellow and Corporate Vice President, Microsoft Research LinkedIn: https://www.linkedin.com/in/dcburger/ Microsoft Research Profile: https://www.microsoft.com/en-us/research/people/dburger/ Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast. Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Analyse Asia Threads: https://www.threads.net/@analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Justin DiPietro, Co-Founder & Chief Strategy Officer of Glia, shares how they are leveraging AI to enhance the customer experience in the highly regulated world of financial institutions.
Topics Include:
- Glia provides voice, digital, and AI services for customer-facing and internal operations
- Built on "channel-less architecture" unlike traditional contact centers that added channels sequentially
- One interaction can move seamlessly between channels (voice, chat, SMS, social)
- AI applies across all channels simultaneously rather than per individual channel
- 700 customers, primarily banks and credit unions, 370 employees, headquartered in New York
- Targets 3,500 banks and credit unions across the United States market
- Focuses exclusively on financial services and other regulated industries
- AI for regulated industries requires different approach than non-regulated businesses
- Traditional contact centers had trade-off between cost and quality of service
- AI enables higher quality while simultaneously decreasing costs for contact centers
- Number one reason people call banks: "What's my balance?" (20% of calls)
- Financial services require 100% accuracy, not 99.999%, due to trust requirements
- Uses AWS exclusively for security, reliability, and future-oriented technology access
- Real-time system requires triple-hot redundancy; seconds matter for live calls
- Works with Bedrock team; customers certify Bedrock rather than individual features
- Showed examples of competitors' AI giving illegal million-dollar loans at 0%
- "Responsible AI" separates probabilistic understanding from deterministic responses to customers
- Uses three model types: client models, network models, and protective models
- Traditional NLP had 50% accuracy; their LLM approach achieves 100% understanding
- Policy is "use Nova unless" they can't, primarily for speed benefits
Participants:
- Justin DiPietro – Co-Founder & Chief Strategy Officer, Glia
Further Links:
- Glia Website
- Glia AWS Marketplace
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
In this thought leadership session, ITSPmagazine co-founders Sean Martin and Marco Ciappelli moderate a dynamic conversation with five industry leaders offering their take on what will dominate the show floor and side-stage chatter at Black Hat USA 2025.
Leslie Kesselring, Founder of Kesselring Communications, surfaces how media coverage is shifting in real time—no longer driven solely by talk submissions but now heavily influenced by breaking news, regulation, and public-private sector dynamics. From government briefings to cyberweapon disclosures, the pressure is on to cover what matters, not just what's scheduled.
Daniel Cuthbert, member of the Black Hat Review Board and Global Head of Security Research at Banco Santander, pushes back on the hype. He notes that while tech moves fast, security research often revisits decades-old bugs. His sharp observation? “The same bugs from the ‘90s are still showing up—sometimes discovered by researchers younger than the vulnerabilities themselves.”
Michael Parisi, Chief Growth Officer at Steel Patriot Partners, shifts the conversation to operational risk. He raises concern over Model-Chained Prompting (MCP) and how AI agents can rewrite enterprise processes without visibility or traceability—especially alarming in environments lacking kill switches or proper controls.
Richard Stiennon, Chief Research Analyst at IT-Harvest, offers market-level insights, forecasting AI agent saturation with over 20 vendors already present in the expo hall. While excited by real advancements, he warns of funding velocity outpacing substance and cautions against the cycle of overinvestment in vaporware.
Rupesh Chokshi, SVP & GM at Akamai Technologies, brings the product and customer lens—framing the security conversation around how AI use cases are rolling out fast while security coverage is still catching up. From OT to LLMs, securing both AI and with AI is a top concern.
This episode is not just about placing bets on buzzwords. It's about uncovering what's real, what's noise, and what still needs fixing—no matter how long we've been talking about it.
___________
Guests:
Leslie Kesselring, Founder at Cyber PR Firm Kesselring Communications | On LinkedIn: https://www.linkedin.com/in/lesliekesselring/
“This year, it's the news cycle—not the sessions—that's driving what media cover at Black Hat.”
Daniel Cuthbert, Black Hat Training Review Board and Global Head of Security Research for Banco Santander | On LinkedIn: https://www.linkedin.com/in/daniel-cuthbert0x/
“Why are we still finding bugs older than the people presenting the research?”
Richard Stiennon, Chief Research Analyst at IT-Harvest | On LinkedIn: https://www.linkedin.com/in/stiennon/
“The urge to consolidate tools is driven by procurement—not by what defenders actually need.”
Michael Parisi, Chief Growth Officer at Steel Patriot Partners | On LinkedIn: https://www.linkedin.com/in/michael-parisi-4009b2261/
“Responsible AI use isn't a policy—it's something we have to actually implement.”
Rupesh Chokshi, SVP & General Manager at Akamai Technologies | On LinkedIn: https://www.linkedin.com/in/rupeshchokshi/
“The business side is racing to deploy AI—but security still hasn't caught up.”
Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com
___________
Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
BlackCloak: https://itspm.ag/itspbcweb
Akamai: https://itspm.ag/akamailbwc
DropzoneAI: https://itspm.ag/dropzoneai-641
Stellar Cyber: https://itspm.ag/stellar-9dj3
___________
Resources
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Multimodal interfaces. Real-time personalization. Data privacy. Content ownership. Responsible AI. In this episode, Eve Sangenito of global consultancy Perficient offers a grounded, enterprise lens on the evolving demands of AI-powered customer experience—and what leaders (and the partners who support them) need to understand right now. Eve and Sarah explore how generative AI is reshaping customer expectations, guiding tech investments, and redefining experience delivery at scale. For anyone driving digital transformation, building AI strategy, or modernizing enterprise CX, this conversation is a timely look at what's shifting—and what's ahead.
In this episode, our guest is Xiaochen Zhang, a global innovation leader and the driving force behind AI 2030 and FinTech for Good. Xiaochen shares his journey from the World Bank to founding organisations that champion responsible technology. He dives into the six-pillar framework of responsible AI—transparency, accountability, fairness, safety, security, sustainability, and privacy—and discusses the risks of digital divide, ethical AI design, and the future of collaborative intelligence. He also highlights the transformative potential of AI across climate, financial inclusion, and renewable energy, underscoring the urgency of responsible leadership and inclusive innovation. A fascinating conversation bridging technology, ethics, and global impact. Connect with Sohail Hasnie: Facebook @sohailhasnie X (Twitter) @shasnie LinkedIn @shasnie ADB Blog Sohail Hasnie YouTube @energypreneurs Instagram @energypreneurs Tiktok @energypreneurs Spotify Video @energypreneurs
In schools with limited resources, large class sizes, and wide differences in student ability, individualized learning has become a necessity. Artificial intelligence offers powerful tools to help meet those needs, especially in underserved communities. But the way we introduce those tools matters.
This week, Matt Kirchner talks with Sam Whitaker, Director of Social Impact at StudyFetch, about how AI can support literacy, comprehension, and real learning outcomes when used with purpose. Sam shares his experience bringing AI education to a rural school in Uganda, where nearly every student had already used AI without formal guidance. The results of a two-hour project surprised everyone and revealed just how much potential exists when students are given the right tools.
The conversation covers AI as a literacy tool, how to design platforms that encourage learning rather than shortcutting, and why student-facing AI should preserve creativity, curiosity, and joy. Sam also explains how responsible use of AI can reduce educational inequality rather than reinforce it.
This is a hopeful, practical look at how education can evolve—if we build with intention.
Listen to learn:
- Surprising lessons from working with students at a rural Ugandan school using artificial intelligence
- What different MIT studies suggest about the impacts of AI use on memory and productivity
- How AI can help U.S. literacy rates, and what far-reaching implications that will have
- What China's AI education policy for six-year-olds might signal about the global race for responsible, guided AI use
3 Big Takeaways:
1. Responsible AI use must be taught early to prevent misuse and promote real learning. Sam compares AI to handing over a car without driver's ed—powerful but dangerous without structure. When AI is used to do the thinking for students, it stifles creativity and long-term retention instead of developing it.
2. AI can help close educational gaps in schools that lack the resources for individualized learning. In many underserved districts, large class sizes make one-on-one instruction nearly impossible. AI tools can adapt to students' needs in real time, offering personalized learning that would otherwise be out of reach.
3. AI can play a key role in addressing the U.S. literacy crisis. Sam points out that 70% of U.S. inmates read at a fourth-grade level or below, and 85% of juvenile offenders can't read. Adaptive AI tools are now being developed to assess, support, and gradually improve literacy for students who have been left behind.
Resources in this Episode:
- To learn about StudyFetch, visit: www.studyfetch.com
Other resources:
- MIT Study "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence"
- MIT Study "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task"
- Learn more about the Ugandan schools mentioned: African Rural University (ARU) and Uganda Rural Development an
We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
On this episode of Embracing Erosion, Devon sits down with marketing executive and AI advisor, Liza Adams. Liza has held senior marketing leadership roles at major companies like Smartsheet, Juniper Networks, and Pure Storage, and now helps organizations accelerate growth through applied AI strategies at GrowthPath Partners.
They discuss all things AI, including what marketing leaders are missing today, the future of the org chart, what tomorrow's roles might look like, tactical tips on how to elevate your role using AI, and much more. Enjoy the conversation!
The rise of Artificial Intelligence ignites age-old questions about ethics, responsibility, and the nature of decision-making. As AI systems become more embedded in daily life, shaping what we see, buy, and believe, the call for "Ethical AI" grows louder. But what does that really mean? Is it about aligning machine behavior with human values? Can reasonable professionals agree on a set of standards to safely shepherd business into the AI Epoch? Register for this special two-hour DM Radio to find out! Host @eric_kavanagh will interview several industry luminaries, including: Andy Hannah, Founder of Blue Street Data, and Chairperson for the University of Pittsburgh's Responsible AI Advisory Board. Also joining will be Michael Colaresi, Associate Vice Provost for Data Science at the University of Pittsburgh. Another expert on the call will be Jessica Talisman, who draws on her background in library and information science to champion structured, ethical AI systems. Rounding out the panel will be Mench.ai Founder, Nikolai Mentchoukov, who created AI Agents before that name was even born!
Prepare for game-changing AI insights! Join Noelle Russell, CEO of the AI Leadership Institute and author of Scaling Responsible AI: From Enthusiasm to Execution. Noelle, an AI pioneer, shares her journey from the early Alexa team with Jeff Bezos, where her unique perspective shaped successful mindfulness apps. We'll explore her "I Love AI" community, which has taught over 3.4 million people. Unpack responsible, profitable AI, from the "baby tiger" analogy for AI development and organizational execution, to critical discussions around data bias and the cognitive cost of AI over-reliance.
Key Moments:
- Journey into AI: From Jeff Bezos to Alexa (03:13): Noelle describes how she "stumbled into AI" after receiving an email from Jeff Bezos inviting her to join a new team at Amazon, later revealed to be the early Alexa team. She highlights that while she lacked inherent AI skills, her "purpose and passion" fueled her journey.
- "I Love AI" Community & Learning (11:02): After leaving Amazon and experiencing a personal transition, Noelle created the "I Love AI" community. This free, neurodiverse space offers a safe environment for people, especially those laid off or transitioning careers, to learn AI without feeling alone, fundamentally changing their life trajectories.
- The "Baby Tiger" Analogy (17:21): Noelle introduces her "baby tiger" analogy for early AI model development. She explains that in the "peak of enthusiasm" (baby tiger mode), people get excited about novel AI models, but often fail to ask critical questions about scale, data needs, long-term care, or what happens if the model isn't wanted anymore.
- Model Selection & Explainability (32:01): Noelle stresses the importance of a clear rubric for model selection and evaluation, especially given rapid changes. She points to Stanford's HELM project (Holistic Evaluation of Language Models) as an open-source leaderboard that evaluates models on "toxicity" beyond just accuracy.
- Avoiding Data Bias (40:18): Noelle warns against prioritizing model selection before understanding the problem and analyzing the data landscape, as this often leads to biased outcomes and the "hammer-and-nail" problem.
- Cognitive Cost of AI Over-Reliance (44:43): Referencing recent industry research, Noelle warns about the potential "atrophy" of human creativity due to over-reliance on AI.
Key Quotes:
- "Show don't tell... It's more about understanding what your review board does and how they're thinking and what their backgrounds are... And then being very thoughtful about your approach." - Noelle Russell
- "When we use AI as an aid rather than as writing the whole thing or writing the title, when we use it as an aid, like, can you make this title better for me? Then our brain actually is growing. The creative synapses are firing away." - Noelle Russell
- "Most organizations, most leaders... they're picking their model before they've even figured out what the problem will be... it's kind of like, I have a really cool hammer, everything's a nail, right?" - Noelle Russell
Mentions:
- "I Love AI" Community
- Scaling Responsible AI: From Enthusiasm to Execution - Noelle Russell
- "Your Brain on ChatGPT" - MIT Media Lab
- Power to Truth: AI Narratives, Public Trust, and the New Tech Empire - Stanford
- Meta-learning, Social Cognition and Consciousness in Brains and Machines
- HELM - A Reproducible and Transparent Framework for Evaluating Foundation Models
Guest Bio: Noelle Russell is a multi-award-winning speaker, author, and AI Executive who specializes in transforming businesses through strategic AI adoption.
She is a revenue growth + cost optimization expert, a 4x Microsoft Responsible AI MVP, and was named the #1 Agentic AI Leader in 2025. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, is a consistent champion for data and AI literacy, and is the founder of the "I ❤️ AI" Community, teaching responsible AI for everyone. She is also the founder of the AI Leadership Institute and empowers business owners to grow and scale with AI. In the last year, she has been named an awardee of the AI and Cyber Leadership Award from DCALive, the #1 Thought Leader in Agentic AI, and a Top 10 Global Thought Leader in Generative AI by Thinkers360. Hear more from Cindi Howson here. Sponsored by ThoughtSpot.
Dietmar Offenhuber reflects on synthetic data's break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact. Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis. Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction. Related Resources Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390 Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/ Reservoirs of Venice (project): https://res-venice.github.io/ Website: https://offenhuber.net/ A transcript of this episode is here.
The term ‘Responsible AI' is more than a buzzword; it's a call to action. When we talk about responsible AI, it's not about some fancy tech tools; it's about power, ethics, leadership, and long-term consequences. The question we need to ask is ‘Who defines what's safe? Who decides what's ethical?'. At present, a handful of tech giants shape the answers to those questions. While using AI might seem to have only a moderate impact, developing these models comes with a great environmental cost. We live in a world that moves at hyperspeed where it's not healthy to treat AI as just a tech tool. The time has come for nonprofits and leaders to step up and lead with responsibility. In this week's episode, Scott and Nathan talk about the ever-evolving landscape of AI and the foundation of AI governance. AI technologies are generally developing at a remarkable rate, but the governance aspect is only slowly progressing. This mismatch shows how the regulations are trying to catch up instead of preventing harm. Starting the conversation, Nathan shares his thoughts on the need for an adaptive, forward-thinking governance framework for AI that is able to anticipate risk, not just respond to it. Next, Nathan and Scott discuss the technological and geopolitical power in AI, where a handful of tech giants control the system, leaving the rest of us to decide if their definition of ‘Responsible AI' matches ours. Nathan kindly explains why responsible AI should be everyone's responsibility. We are in dire need of drawing ethical lines, defining values, and demanding a transparent AI governance system before harm scales beyond our grasp. Further down in the conversation, Nathan and Scott pay attention to the following topics: challenges in AI governance, guardrails for using AI, the role of leadership in responsible AI use, the environmental impact of developing AI, and more. HIGHLIGHTS [01:06] Governance and AI. [04:04] The lack of progress in the governance framework. [09:11] Challenges in AI governance. [13:02] The concentration of technological and geopolitical power in AI. [15:10] The importance of having guardrails for how to use AI. [20:21] The role of leadership in responsible AI. [25:31] The choice between acting in the fog or becoming irrelevant. [27:50] Navigating the ethics and safety in AI. [30:15] Environmental impact of AI. RESOURCES Laying the Foundation for AI Governance. aiforgood.itu.int/event/building-secure-and-trustworthy-ai-foundations-for-ai-governance/ Connect with Nathan and Scott: LinkedIn (Nathan): linkedin.com/in/nathanchappell/ LinkedIn (Scott): linkedin.com/in/scott-rosenkrans Website: fundraising.ai/
Today's guest is Miranda Jones, SVP of Data & AI Strategy at Emprise Bank. Miranda returns to discuss the evolving reality of responsible AI in the financial services sector. As generative and agentic systems mature, Jones emphasizes the importance of creating safe, low-risk environments where employees can experiment, learn prompt engineering, and develop a critical understanding of model limitations. She explores why domain-specific models outperform generalized foundational models in banking—where context, compliance, and communication style are essential to trust and performance. The episode also examines the strategic value of maintaining a deliberate pace in adopting agentic AI, ensuring human oversight and alignment with regulatory expectations. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!
"[Question: So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance?] Based on my interactions and conversations, now suddenly they think they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, oh, we have now this whole additional compliance cost because we have to do all these new things. The reason I see that as a bit of a misconception, because building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance. It's an integration of that because otherwise guess what's gonna happen? We all know that this is the next thing around the corner that's gonna pop up, whatever it's gonna be called. Are you gonna have to set up a whole new committee just because of that? Then the next thing, another one." - David Hardoon Fresh out of the studio, David Hardoon, Global Head of AI Enablement at Standard Chartered Bank, joins us in a conversation to explore how financial institutions can adopt AI responsibly at scale. He shares his unique journey from academia to government to global banking, reflecting on his fascination with human behavior that originally drew him to artificial intelligence. David explains how his time at Singapore's Monetary Authority shaped the groundbreaking FAIR principles, emphasizing how proper AI governance actually accelerates rather than inhibits innovation. He highlights real-world implementations from autonomous cash reconciliation agents to transaction monitoring systems, showcasing how banks are transforming operations while maintaining strict regulatory compliance. Addressing the biggest misconceptions about AI governance, he emphasizes the importance of integrating AI frameworks into existing structures rather than creating entirely new bureaucracies, while advocating for use-case-based approaches that build essential trust. Closing the conversation, David shares his philosophy that AI success ultimately depends on understanding human behavior and asks the fundamental question every organization should consider: "Why are we doing this?" Episode Highlights: [00:00] Quote of the Day by David Hardoon #QOTD - "AI governance isn't new bureaucracy." [00:46] Introduction: David Hardoon from Standard Chartered Bank. [02:02] How David's AI journey started with human behavior curiosity. [07:26] Governance accelerates innovation, like traffic rules enable fast driving. [10:31] FAIR principles in MAS Singapore born from lunches with compliance officers. [14:23] Don't reinvent governance wheel for AI implementations. [24:17] Banks already manage risk; apply same discipline to AI. [28:40] AI adoption problem is trust, not technology. [34:21] Autonomous AI agents handle cash reconciliation with bank IDs. [36:00] AI reduces transaction monitoring false positives by 50%. [39:54] AI requires full supply chain from infrastructure to translators. [41:52] Organizations must reward intelligent failure in AI innovation. [44:47] AI hallucination is a feature, not bug for innovation. [47:35] Measure AI ROI differently for innovation versus implementation teams. [56:27] Final wisdom: People always ask "why" about AI initiatives. Profile: David Hardoon, Global Head of AI Enablement, Standard Chartered Bank Personal Site: https://davidroihardoon.com/ LinkedIn: https://www.linkedin.com/in/davidrh/ Podcast Information: Bernard Leong hosts and produces the show. 
The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast. Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Analyse Asia Threads: https://www.threads.net/@analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Tony chats with Lokesh Ballenahalli, Founder, and Sunil Shivappa, COO, at Enkefalos Technology, a research-first global AI company focused on building AI solutions for the insurance industry. They combine deep research in LLMs, Responsible AI, and domain expertise to develop what they call the AI operating system for insurance. It has 3 core layers: Insurance GPT, Agentic AI Applications, and Monitoring of that AI.
Lokesh Ballenahalli: https://www.linkedin.com/in/lokesh-ballenahalli/
Sunil Shivappa: https://www.linkedin.com/in/sunil-m-shivappa-86273519a/
Enkefalos Technology: https://www.enkefalos.com/
Video Version: https://youtu.be/u3_xWZyPDEg
AI is reshaping how boards operate—but most leaders aren't ready. In this episode, we outline ten strategic actions every board must take to govern AI effectively and responsibly. From embedding AI into board agendas to modernising risk oversight and leadership structures, this is essential listening for executives navigating AI transformation. Learn how to align AI with business value, scale responsibly, and strengthen decision-making at the top.
She's not waiting for permission.
Flavilla Fongang went from the ghettos of Paris to building Black Rise—one of the UK's boldest tech tribes. She's mixing identity, data, and storytelling to scale Black power through business.
In this episode:
- Why storytelling beats pitching in tech
- How she turned oil & gas into a launchpad
- Her strategy for building community-first platforms
- Why AI is non-negotiable for Black excellence
- The real ROI of diverse ecosystems
Timestamps:
00:00 Intro — Paris to Power
01:42 Childhood in the Paris ghettos
03:55 Moving to London & early struggles
06:18 From oil & gas to fashion to tech
09:44 Founding 3 Colours Rule
12:30 How storytelling became her weapon
15:02 Why she built GTA Black Women in Tech
17:50 Launching Black Rise — the AI-powered tribe
21:33 Scaling community and credibility
25:01 Diversity with data, not feelings
28:40 What leadership looks like in 2025
31:09 Her advice to future founders
34:00 Closing thoughts + how to connect
About Flavilla Fongang:
Multi-award-winning entrepreneur. Founder of 3 Colours Rule, GTA Black Women in Tech, and now Black Rise. Former oil & gas exec turned tech community builder. UN Brand Partner. Named UK's Most Influential Woman in Tech (Computer Weekly), Global Top 100 MIPAD Innovator, and Entrepreneur of the Year (BTA 2023). She also serves as an entrepreneurship expert at Oxford University Saïd Business School.
Watch this if you lead with identity and build with vision.
Follow Flavilla Fongang:
LinkedIn: https://www.linkedin.com/in/flavillafongang
Twitter (X): https://x.com/FlavillaFongang
Instagram: https://www.instagram.com/flavillafongang
TikTok: https://www.tiktok.com/@flavillaf
Website: https://www.flavillafongang.com
Black Rise: https://www.theblackrise.com
GTA Black Women in Tech: https://theblackwomenintech.com
3 Colours Rule: https://www.3coloursrule.com
#blackfounder #techstorytelling #communitytech
Disruption Now: Disrupting the status quo, making emerging tech human-centric and Accessible to all.
Website https://disruptionnow.com/
Apply to get on the Podcast https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Music: Powerful Beat - Oleksandr Stepanov
She was told she wasn't tall enough, thin enough, pretty enough.
Now she's rewriting the rules of beauty for everyone else.
In episode 183 of the Disruption Now Podcast, Sian Bitner-Kearney, founder of Rock Your Beauty, reveals how perfectionism, filters, and social media lies damage self-worth—and what it takes to break free. Her nonprofit is helping women embrace their authentic selves, one fashion show and workshop at a time.
Laid off or lost in the noise? You're not alone.
Episode 182: Nicole Dunbar—Cornell-trained strategist and viral LinkedIn voice—joins Disruption Now to dissect the “white collar recession.” From AI displacement to hiring freezes, she breaks down how even top-tier pros are invisible in today's job market.
Learn:
- 6 proven job search strategies (only 1 involves job boards)
- Why resumes alone fail—and what recruiters actually trust
- The mindset shift every mid-career pro must make now
- How to “network” without small talk or selling out
- The emotional trap that sabotages high performers
The hiring game changed. Here's how to stay in it.
How can companies harness Responsible AI without losing the human touch? PepsiCo's VP of People Solutions, Mark Sankarsingh, shares how the company boosts productivity and streamlines HR—while safeguarding trust, ethics, and human judgment. Learn how to lead with integrity as you scale teams and adopt AI in a digital-first world.
In this episode of the Disruption Now Podcast, host Rob Richardson sits down with Jacob D. Frankel (Kobi), the visionary Founder and CEO of Beyond Alpha Ventures. Jacob shares his journey from managing over $355 million in assets on Wall Street to leading a multi-strategy family office and hedge fund that invests in transformative technologies like AI, quantum computing, and cybersecurity. He discusses the importance of investing with purpose, the future of venture capital, and how Beyond Alpha Ventures is shaping the infrastructure of an AI-driven future. Tune in for an insightful conversation on strategic investing, innovation, and building a legacy.
Top 3 Things You'll Learn from This Episode:
- Investing with Purpose – Jacob emphasizes the significance of deploying capital in ways that make a tangible difference, focusing on ventures that combine financial opportunity with real-world impact.
- Navigating Market Volatility – Learn how Beyond Alpha Ventures capitalizes on market dislocations and geopolitical shifts to identify high-conviction investment opportunities.
- The Future of Venture Capital – Discover Jacob's insights on the evolving landscape of venture capital, including the rise of AI, quantum computing, and the importance of ethical foresight in investment strategies.
Jacob's Social Media Pages:
LinkedIn: https://www.linkedin.com/in/jacobfrankelprivateequity
Website: https://www.beyondalphaventures.com/
Disruption Now: Building a Fair Share for Culture and Media. Join us and disrupt.
Website https://bit.ly/2VUO9sf
Apply to get on the Podcast https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Facebook https://bit.ly/303IU8j
Instagram https://bit.ly/2YOLl26
Twitter https://bit.ly/2KfLaTf
Website https://bit.ly/2VUO9sf
Co-hosts Mark Thompson and Steve Little examine the controversial rise of AI image "restoration" and discuss how entirely new images are being generated, rather than the original photos being restored. This is raising concerns about the preservation of authentic family photos.
They discuss Mark's reconsideration of canceling his Perplexity subscription after rediscovering its unique strengths for supporting research.
The hosts analyze recent court rulings that permit AI training on legally acquired content, plus Disney's ongoing case against Midjourney.
This week's Tip of the Week explores how project workspaces in ChatGPT and Claude can greatly simplify your genealogical research.
In RapidFire, the hosts cover Meta's aggressive AI hiring spree, the proliferation of AI tools in everyday software, including a new genealogy transcription tool from Dan Maloney, and the importance of reading AI news critically.
Timestamps:
In the News:
06:50 The Pros and Cons of "Restoring" Family Photos with AI
23:58 Mark is Cancelling Perplexity... Maybe
32:33 AI Copyright Cases Are Starting to Work Their Way Through the Courts
Tip of the Week:
40:09 How Project Workspaces Help Genealogists Stay Organized
RapidFire:
48:51 Meta Goes on a Hiring Spree
56:09 AI Is Everywhere!
01:06:00 Reading AI News Responsibly
Resource Links
OpenAI: Introducing 4o Image Generation https://openai.com/index/introducing-4o-image-generation/
Perplexity https://www.perplexity.ai/
How does Perplexity work? https://www.perplexity.ai/help-center/en/articles/10352895-how-does-perplexity-work
Anthropic wins key US ruling on AI training in authors' copyright lawsuit https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
Meta wins AI copyright lawsuit as US judge rules against authors https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
Disney, Universal sue image creator Midjourney for copyright infringement https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Disney and Universal Sue A.I. Firm for Copyright Infringement https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html
Projects in ChatGPT https://help.openai.com/en/articles/10169521-projects-in-chatgpt
Meta shares hit all-time high as Mark Zuckerberg goes on AI hiring blitz https://www.cnbc.com/2025/06/30/meta-hits-all-time-mark-zuckerberg-ai-blitz.html
Here's What Mark Zuckerberg Is Offering Top AI Talent https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/
Genealogy Assistant AI Handwritten Text Recognition Tool https://www.genea.ca/htr-tool/
Borland Genetics https://borlandgenetics.com/
Illusion of Thinking https://machinelearning.apple.com/research/illusion-of-thinking
Simon Willison: Seven replies to the viral Apple reasoning paper -- and why they fall short https://simonwillison.net/2025/Jun/15/viral-apple-reasoning-paper/
MIT: Your Brain on ChatGPT https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
Guiding Principles for Responsible AI in Genealogy https://craigen.org/
Tags
Artificial Intelligence, Genealogy, Family History, AI Tools, Image Generation, AI Ethics, Perplexity, ChatGPT, Claude, Meta, Copyright Law, AI Training, Photo Restoration, Project Management, AI Development, Research Tools, Responsible AI Use, GRIP, AI News Analysis, Vibe Coding, Coalition for Responsible AI in Genealogy, AI Hiring, Dan Maloney, Handwritten Text Recognition
In this episode of The Greener Way, host Michelle Baltazar discusses the governance risks posed by AI with Elfreda Jonker from Alphinity Investment Management.
They explore the impact of AI on cybersecurity and data privacy, as highlighted in Alphinity's latest sustainability report. The conversation covers the importance of a Responsible AI framework, how companies including Netflix and Wesfarmers address these risks, and the need for better investor disclosures by fund managers on how they tackle AI risks.
01:38 Overview of Alphinity's Investment Management
02:54 Highlights from the Sustainability Report
04:20 What did Netflix do?
08:35 AI as a governance risk
11:09 Opportunities and challenges
13:54 Conclusion
Link: https://www.alphinity.com.au/
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy
Are we on the brink of an AI revolution that could reshape our lives in unimaginable ways? Are we worried about losing our jobs and our usual ways of doing things? This is a very real concern that can affect our emotional well-being. This week, we sit down with Kristof Horompoly, Head of AI Risk Management at ValidMind and former Head of Responsible AI for JP Morgan Chase, to tackle the biggest questions surrounding artificial intelligence. Kristof, with his deep expertise in the field, helps us navigate the promises and perils of AI. We explore a profound paradox: what if AI could unlock new realms of time, creativity, and even reignite our humanity, allowing us to focus on what truly matters? But conversely, what happens when we hand the steering wheel over to intelligent machines and they take us somewhere entirely unintended? In a world where machines can think, write, and create with increasing sophistication, we wonder: what is left for us to do? Should we be worried, or is there a path to embrace this future? Kristof provides thoughtful insights on how we can prepare for this evolving landscape, offering a grounded perspective on responsible AI development and what it means for our collective future. Tune in for an essential conversation on understanding, harnessing, and preparing for the age of AI. Topics covered: AI, artificial intelligence, Kristof Horompoly, ValidMind, JP Morgan Chase, AI risk management, responsible AI, future of AI, AI ethics, human-AI interaction, AI impact, technology, innovation, podcast, digital transformation, AI challenges, AI opportunities Video link: https://youtu.be/MGELXPkYMUU Did you enjoy this episode and would you like to share some love?
Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials. TRANSCRIPT Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of the episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here. Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today. Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI? Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is one of those that's focused on one principle to me, which is clinical purpose is first, instead of the algorithm or whatever technology we're going to be using. If we look at the best models in the world, they're really irrelevant unless we really solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair or making decision support. Currently, what I'm doing the most is focusing on solutions that are saving us time to be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're leveraging certain tools to assess for potential admission or readmission of patients who have certain conditions as well. And it's all about combining the listening of physicians like ourselves who are end users, those who create those algorithms, data scientists, and patient advocates, and even regulators, before they even write any single line of code. I felt that on my own, you know, entrepreneurial aspects, but I think it's an ethos that we should all follow. And I think that AI shouldn't be just bolted on later. We always have to look at workflows and try to look, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure that first, it's easier to access for patients, that oncologists like myself can go into the interface and be able to pull the data in real time when you really need it, and you don't get all this fatigue alerts. To me, that's the responsible way of doing so. Those are like the opportunities, right? So, the challenge is how we can make this happen in a meaningful way – we're just not reacting to like a black box suggestion or something that we have no idea why it came up to be. So, in terms of success – and I can tell you probably two stories of things that we know we're seeing successful – we all work closely with radiation oncologists, right? 
So, there are now these tools, for example, of automated contouring in radiation oncology, and some of these solutions were brought up in different meetings, including the last ASCO meeting. But overall, we know that transformer-based segmentation tools; transformer is just the specific architecture of the machine learning algorithm that has been able to dramatically reduce the time for colleagues to spend allotting targets for radiation oncology. So, comparing the target versus the normal tissue, which sometimes it takes many hours, now we can optimize things over 60%, sometimes even in minutes. So, this is not just responsible, but it's also an efficiency win, it's a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that and, you know, I don't want to preach to the choir here, but having the ability for us to structure data in real time using these tools, being able to extract information on biomarkers, and then show that multi-agentic AI is superior to what we call zero-shot or just throwing it into ChatGPT or any other algorithm, but using the same tools but just fine-tuned to the point that we can be very efficient and actually reliable to the level of almost like a research coordinator, is not just theory. Now, it can change lives because we can get patients enrolled in clinical trials and be activated in different places wherever the patient may be. I know it's like a long answer on that, but, you know, as we talk about responsible AI, that's important. And in terms of what keeps me up at night on this: data drift and biases, right? So, imaging protocols, all these things change, the lab switch between different vendors, or a patient has issues with new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, then the output can be really inaccurate. So, the idea is to become a collaborative approach where we can use federated learning and patient-centricity so we can be much more efficient in developing those models that account for all the populations, and any retraining that is used based on data can be diverse enough that it represents all of us and we can be treated in a very good, appropriate way. So, if a clinician doesn't understand why a recommendation is made, as you probably know, you probably don't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things. Dr. Paul Hanona: Absolutely. And even the part about the clinical trials, I want to dive a little bit more into in a few questions. I just kind of wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it seems like it's improving at a pretty dramatic speed as well. I wonder how quickly that'll get adopted by the majority of physicians or practitioners in general throughout the country. And you also mentioned things with AI tools regarding helping regulators move things quicker, even the radiation oncologist, helping them in their workflow with contouring and what else they might have to do. And again, the clinical trials thing will be quite interesting to get into. The first question I had subsequent to that is just more so when you have large datasets. 
And this pertains to two things: the paper that you published recently regarding different ways to use AI in the space of oncology referred to drug development, the way that we look at how we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps that you have to take to design something, to make sure that one chemical will fit into the right chemical or the structure of the molecule, that takes a lot of time to tinker with. What are your thoughts on AI tools to help accelerate drug development? Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail and something that I feel we should dedicate as much time and effort as possible because it relies on multimodality. It cannot be solved by just looking at patient histories. It cannot be solved by just looking at the tissue alone. It's combining all these different datasets and being able to understand the microenvironment, the patient condition and prior treatments, and how dynamic changes that we do through interventions and also exposome – the things that happen outside of the patient's own control – can be leveraged to determine like what's the best next step in terms of drugs. So, the ones that we heard the news the most is, for example, the Nobel Prize-winning [for Chemistry awarded to Demis Hassabis and John Jumper for] AlphaFold, an AI system that predicts protein structures right? So, we solved this very interesting concept of protein folding where, in the past, it would take the history of the known universe, basically – what's called the Levinthal's paradox – to be able to just predict on amino acid structure alone or the sequence alone, the way that three-dimensionally the proteins will fold. So, with that problem being solved and the Nobel Prize being won, the next step is, “Okay, now we know how this protein is there and just by sequence, how can we really understand any new drug that can be used as a candidate and leverage all the data that has been done for many years of testing against a specific protein or a specific gene or knockouts and what not?” So, this is the future of oncology and where we're probably seeing a lot of investments on that. The key challenge here is mostly working on the side of not just looking at pathology, but leveraging this digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There's a number of efforts currently being done. One isn't just H&E, like hematoxylin and eosin, slides alone, but with whole imaging, now we can use expression profiles, spatial transcriptomics, and gene whole exome sequencing in the same space and use this transformer technology in a multimodality approach that we know already the slide or the pathology, but can we use that to understand, like, if I knock out this gene, how is the microenvironment going to change to see if an immunotherapy may work better, right? If we can make a microenvironment more reactive towards a cytotoxic T cell profile, for example. So, that is the way where we're really seeing the field moving forward, using multimodality for drug discovery. So, the FDA now seems to be very eager to support those initiatives, so that's of course welcome. 
And now the key thing is the investment to do this in a meaningful way so we can see those candidates that we're seeing from different companies now being leveraged for rare disease, for things that are going to be almost impossible to collect enough data, and make it efficient by using these algorithms that sometimes, just with multiple masking – basically, what they do is they mask all the features and force the algorithm to find solutions based on the specific inputs or prompts we're doing. So, I'm very excited about that, and I think we're going to be seeing that in the future. Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass that's surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass that's surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment or the grass that's surrounding the dandelion and better tailor therapy, come up with therapy. Otherwise, like you said, to truly generate a drug, this would take years and years. We just don't have the throughput to get to answers like that unless we have something like AI to help us. Dr. Arturo Loaiza-Bonilla: Correct. Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials. You don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of, but the fact that the majority of patients who should be on clinical trials are never given the chance to be on clinical trials, whether that's because of proximity, right, they might live somewhere that's far from the institution, or for whatever reason, they don't qualify for the clinical trial, they don't meet the strict inclusion criteria. But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And then even if you are aware of those clinical trials, to actually find the sites and put in the time could take hours. And so, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available. We just can't access them. So, if we have a tool that helps with access, wouldn't that be huge? Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. And for those who know me and follow me and we've spoke about it in different settings, that's something that I think we can solve. This other paradox, which is the clinical trial enrollment paradox, right? We have tens of thousands of clinical trials available with millions of patients eager to learn about trials, but we don't enroll enough and many trials close to accrual because of lack of enrollment. It is completely paradoxical and it's because of that misalignment because patients don't know where to go for trials and sites don't know what patients they can help because they haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. 
And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs, where we, in a patient-centric manner, the same way we look for Uber, Instacart, any solution that you may think of that you want something that can be leveraged in real time, we can use these real-world data streams from the patient directly, from hospitals, from pathology labs, from genomics companies, to continuously screen patients who can match to the inclusion/exclusion criteria of unique trials. So, when the patient walks into the clinic, the system already knows if there's a trial and alerts the site proactively. The patient can actually also do decentralization. So, there's a number of decentralized clinical trial solutions that are using what I call the “click and mortar” approach, which is basically the patient is checking digitally and then goes to the site to activate. We can also have the click and mortar in the bidirectional way where the patient is engaged in person and then you give the solution like the ones that are being offered on things that we're doing at Massive Bio and beyond, which is having the patient to access all that information and then they make decisions and enroll when the time is right. As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. So, having that real-time alerts using tools that are able already to extract data from summarization that we already have in different settings and doing this natural language ingestion, we can not only solve this issue with manual chart review, which is extremely cumbersome and takes forever and takes to a lot of one-time assessments with very high screen failures, to a real-time dynamic approach where the patient, as they get closer to that eligibility criteria, they get engaged. And those tools can be built to activate trials, audit trials, and make them better and accessible to patients. And something that we know is, for example, 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we can potentially activate certain of those trials in those locations. So, there's a number of pharmacies, special pharmacies, Walgreens, and sometimes CVS trying to do some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with corporate groups, they're all interested in those efforts as well, to getting patients digitally enabled and then activate the same way we activate the NCTN network of the corporate groups, that are almost just-in-time. You can activate a trial the patient is eligible for and we get all these breakthroughs from the NIH and NCI, just activate it in my site within a week or so, as long as we have the understanding of the protocol. So, using clinical trial matching in a digitally enabled way and then activate in that same fashion, but not only for NCTN studies, but all the studies that we have available will be the key of the future through those prescreening hubs. 
So, I think now we're at this very important time where collaboration is the important part and having this silo-breaking approach with interoperability where we can leverage data from any data source and from any electronic medical records and whatnot is going to be essential for us to move forward because now we have the tools to do so with our phones, with our interests, and with the multiple clinical trials that are coming into the pipelines. Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps that are happening in the background. But just as a clarifier, how much time is it taking now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient by the time the manual chart review happens, by the time the matching happens, the calls go out, the sign-up, all this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that? Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So one is the matching, the other one is the enrollment, which, as you mentioned, is very important. So, it can take, from, as you said, probably between 4 days to sometimes 30 days. Sometimes that's how long it takes for all the things to be parsed out in terms of logistics and things that could be done now agentically. So, we can use agents to solve those different steps that may take multiple individuals. We can just do it as a supply chain approach where all those different steps can be done by a single agent in a simultaneous fashion and then we can get things much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data in ASCO as well – you can do 5,000 patients in an hour, right? So, just enrolling is going to be between an hour and maximum enrollment, it could be 7 days for those 5,000 patients if it was done at scale in a multi-level approach where we have all the trials available. Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to people who need it? I'm very much looking for the future of that. One of the last questions I want to ask you is another prevalent way that people use AI is just simply looking up questions, right? So, traditionally, the workflow for oncologists is maybe going on national guidelines and looking up the stage of the cancer and seeing what treatments are available and then referencing the papers and looking at who was included, who wasn't included, the side effects to be aware of, and sort of coming up with a decision as to how to treat a cancer patient. But now, just in the last few years, we've had several tools become available that make getting questions easier, make getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence or even ASCO has a Guidelines Assistant as well that is drawing from their own guidelines as to how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What do you think is the role that they're going to play with patient care? Dr. 
Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools, they're coming left and right and becoming increasingly common in our daily workflows and things that we're doing. So, traditionally, when we go and we look for national guidelines, we try to understand the context ourselves and then we make treatment decisions accordingly. But that is a lot of a process that now AI is helping us to solve. So, at face value, it seems like an efficiency win, but in many cases, I personally evaluate platforms as the chief of hem/onc at St. Luke's and also having led the digital engagement things through Massive Bio and trying to put things together, I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot really take a lot of shortcuts in this case or have unverified output. So, the tools are helpful, but it has to be grounded in truth, in trusted data sources, and they need to be continuously updated with, like, ASCO and NCCN and others. So, the reason why the ASCO Guidelines Assistant, for instance, works is because it builds on all these recommendations, is assessed by end users like ourselves. So, that kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated. So, the role of human expert validation is really actually more important, not less important. You know, generalist LLMs, even when fine-tuned, they may not be enough. You can pull a few API calls from PubMed, etc., but what we need now is specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, something that we are continuing to check on and very relevant to have entities and bodies like ASCO looking into this so they can help us to be really efficient and really help our patients. Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI, things that we should be cautious about, and things that we should be optimistic about? Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but always, there's a few priorities that I think – 3 of them, I think – we need to tackle head-on. First is algorithmic equity. So, most AI tools today are trained on data from academic medical centers but not necessarily from community practices or underrepresented populations, particularly when you're looking at radiology, pathology, and what not. So, those blind spots, they need to be filled, and we can eliminate a lot of disparities in cancer care. So, those frameworks to incentivize while keeping the data sharing using federated models and things that we can optimize is key. The second one is the governance on the lifecycle. So, you know, AI is not really static. So, unlike a drug that is approved and it just, you know, works always, AI changes. So, we need to make sure that we have tools that are able to retrain and recall when things degrade or models drift. So, we need to use up-to-date AI for clinical practice, so we are going to be in constant revalidation and make it really easy to do. And lastly, the human-AI interface. You know, clinicians don't need more noise or we don't need more black boxes. We need decision support that is clear, that we can interpret, and that is actionable. “Why are you using this? Why did we choose this drug? Why this dose? 
Why now?” So, all these things are going to help us and that allows us to trace evidence with a single click. So, I always call it back to the Moravec's paradox where we say, you know, evolution gave us so much energy to discern in the sensory-neural and dexterity. That's what we're going to be taking care of patients. We can use AI to really be a force to help us to be better clinicians and not to really replace us. So, if we get this right and we decide for transparency with trust, inclusion, etc., it will never replace any of our work, which is so important, as much as we want, we can actually take care of patients and be personalized, timely, and equitable. So, all those things are what get me excited every single day about these conversations on AI. Dr. Paul Hanona: All great thoughts, Dr. Bonilla. I'm very excited to see how this field evolves. I'm excited to see how oncologists really come to this field. I think with technology, there's always a bit of a lag in adopting it, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements that you've made in your own career in the field of AI and oncology and just ultimately with the hopeful outcomes of improving patient care, especially cancer patients. Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona. Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts. Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement. More on today's speakers: Dr. Arturo Loaiza-Bonilla @DrBonillaOnc Dr. Paul Hanona @DoctorDiscover on YouTube Follow ASCO on social media: @ASCO on Twitter ASCO on Facebook ASCO on LinkedIn ASCO on BlueSky Disclosures: Paul Hanona: No relationships to disclose. Dr. Arturo-Loaiza-Bonilla: Leadership: Massive Bio Stock & Other Ownership Interests: Massive Bio Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, CardinalHealth, Pfizer, AstraZeneca, Medscape Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera
In this episode of The Broadband Bunch, host Brad Hine sits down with Chris Draper, Board Chair at SafetAI, recorded live at day two of the Community Broadband Action Network (CBAN) conference in Ames, Iowa. With broadband providers increasingly overwhelmed by promises of “AI-infused” solutions, Chris brings clarity and expertise to the conversation around artificial intelligence in utilities and broadband. Drawing from his experience in high-risk technology environments—from rocket science to compliance in legal and government tech—Chris discusses the need for intentional, ethical AI implementation. He explores the "art of the possible" while highlighting the real-world risks of AI systems operating faster than human oversight can manage. Listeners will gain insight into how AI can amplify human action when deployed responsibly—especially in rural broadband and utility environments—and why now is the time to establish ethical frameworks before regulatory mandates catch up. Chris also shares his philosophy on data responsibility, automation pitfalls, the importance of transparency, and how SafetAI is helping organizations make informed decisions about AI adoption.
Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required. Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger's eerily precise predictions, the skill of critical thinking, and why it's not really about the questions at all. Pia Lauritzen, PhD is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member Pia is on a mission to democratize the power of questions. Related Resources Questions (Book): https://www.press.jhu.edu/books/title/23069/questions TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions Question Jam: www.questionjam.com Forbes Column: forbes.com/sites/pialauritzen LinkedIn Learning: www.Linkedin.com/learning/pialauritzen Personal Website: pialauritzen.dk A transcript of this episode is here.
"You can try to develop self-awareness and take a beginner's mind in all things. This includes being open to feedback and truly listening, even when it might be hard to receive. I think that's been something I've really tried to practice. The other area is recognizing that just like a company or country, as humans we have many stakeholders. You may wear many hats in different ways. So as we think of the totality of your life over time, what's your portfolio of passions? How do you choose—as individuals, as society, as organizations, as humans and families with our loved ones and friends—to not just spend your time and resources, but really invest your time, resources, and spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So it's not so much advice, but more like a north star." - Sabastian V. Niles Fresh out of the studio, Sabastian Niles, President and Chief Legal Officer at Salesforce Global, joins us to explore how trust and responsibility shape the future of enterprise AI. He shares his journey from being a high-tech corporate lawyer and trusted advisor to leading AI governance at a company whose number one value is trust, reflecting on the evolution from automation to agentic AI that can reason, plan, and execute tasks alongside humans. Sabastian explains how Agentforce 3.0 enables agent-to-agent interactions and human-AI collaboration through command centers and robust guardrails. He highlights how organizations are leveraging trusted AI for personalized customer experiences, while Salesforce's Office of Ethical and Humane Use operationalizes trust through transparency, explainability, and auditability. Addressing the black box problem in AI, he emphasizes that guardrails provide confidence to move faster rather than creating barriers. Closing the conversation, Sabastian shares his vision on what great looks like for trusted agentic AI at scale. Episode Highlights [00:00] Quote of the Day by Sabastian Niles: "Portfolio of passions - invest your spirit into areas that bring meaning" [01:02] Introduction: Sabastian Niles, President and Chief Legal Officer of Salesforce Global [02:29] Sabastian's Career Journey [04:50] From Trusted Advisor to SalesForce whose number one value is trust [08:09] Salesforce's 5 core values: Trust, Customer Success, Innovation, Equality, Sustainability [10:25] Defining Agentic AI: humans with AI agents driving stakeholder success together [13:13] Trust paradigm shift: trusted approaches become an accelerant, not obstacle [17:33] Agent interactions: not just human-to-agent, but agent-to-agent-to-agent handoffs [23:35] Enterprise AI requires transparency, explainability, and auditability [28:00] Trust philosophy: "begins long before prompt, continues after output" [34:06] Office of Ethical and Humane Use operationalizes trust values [40:00] Future vision: AI helps us spend time on uniquely human work [45:17] Governance philosophy: Guardrails provide confidence to move faster [48:24] What does great look like for Salesorce for Trust & Responsibility in the Era of AI? [50:16] Closing Profile: Sabastian V. Niles, President & Chief Legal Officer, LinkedIn: https://www.linkedin.com/in/sabastian-v-niles-b0175b2/ Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast. 
Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/
Co-hosts Mark Thompson and Steve Little discuss recent updates from Google Gemini and Anthropic Claude that are reshaping AI capabilities for genealogists: Google's Gemini 2.5 Pro, with its massive context window, and Claude 4's hybrid reasoning models, which excel at both writing and document analysis. They share insights from the RootsTech panel on responsible AI use in genealogy, and introduce the Coalition's five core principles for the responsible use of AI. The episode features an interview with Jessica Taylor, president of Legacy Tree Genealogists, who discusses how her company is thoughtfully experimenting with AI tools. In RapidFire, they preview ChatGPT 5's anticipated summer release, Meta's $14 billion acquisition to stay competitive, and Adobe Acrobat AI's new multi-document capabilities.
Timestamps:
In the News:
03:45 Google Gemini 2.5 Pro: Massive Context Windows Transform Document Analysis
15:09 Claude 4 Opus and Sonnet: Hybrid Reasoning Models for Writing and Research
26:30 RootsTech Panel: Coalition for Responsible AI in Genealogy
Interview:
31:28 Jessica Taylor, CEO of Legacy Tree Genealogists, on her cautious approach to AI Adoption
RapidFire:
45:07 ChatGPT 5 Coming Soon: One Model to Rule Them All
51:08 Meta's $14.8 Billion Scale AI Acquisition
56:42 Adobe Acrobat AI Assistant Adds Multi-Document Analysis
Resource Links
Google I/O Conference Highlights: https://blog.google/technology/ai/google-io-2025-all-our-announcements/
Anthropic Announces Claude 4: https://www.anthropic.com/news/claude-4
Anthropic's new Claude 4 AI models can reason over many steps: https://techcrunch.com/2025/05/22/anthropics-new-claude-4-ai-models-can-reason-over-many-steps/
Coalition for Responsible AI in Genealogy: https://craigen.org/
Jessica M. Taylor: https://www.apgen.org/users/jessica-m-taylor
Legacy Tree Genealogists: https://www.legacytree.com/
Rootstech: https://www.familysearch.org/en/rootstech/
ChatGPT 5 is Coming Soon: https://www.tomsguide.com/ai/chatgpt/chatgpt-5-is-coming-soon-heres-what-we-know
Meta's $14.8 billion Scale AI deal latest test of AI partnerships: https://www.reuters.com/sustainability/boards-policy-regulation/metas-148-billion-scale-ai-deal-latest-test-ai-partnerships-2025-06-13/
A frustrated Zuckerberg makes his biggest AI bet: https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html
Adobe upgrades Acrobat AI chatbot to add multi-document analysis: https://www.androidauthority.com/adobe-ai-assistant-acrobat-3451988/
Tags
Artificial Intelligence, Genealogy, Family History, AI Tools, Google Gemini, Claude AI, OpenAI, ChatGPT, Meta AI, Adobe Acrobat, Responsible AI, Coalition for Responsible AI in Genealogy, RootsTech, AI Ethics, Document Analysis, AI Writing Tools, Hybrid Reasoning Models, Context Windows, Professional Genealogy, Legacy Tree Genealogists, Jessica Taylor, AI Integration, Multi-Document Analysis, AI Acquisitions
Multi-agentic AI is rewriting the future of work... but are we racing ahead without checking for warning signs? Microsoft's new agent systems can split up work, make choices, and act on their own. The possibilities? Massive. But it's not without risks, which is why you NEED to listen to Sarah Bird. She's the Chief Product Officer of Responsible AI at Microsoft and is constantly building out safer agentic AI. So what's really at stake when AIs start making decisions together? And how do you actually stay in control? We're pulling back the curtain on the 3 critical risks of multi-agentic AI and unveiling the playbook to navigate them safely.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation
Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs
Keywords: Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode of the TribalHub Podcast, we're joined by SoCal Regional speaker Cheryl Goodman, author of How to Win Friends and Influence Robots. Cheryl shares insights from her session, AI Roadmap: Building Blocks and Best Practices, diving into what it really means to develop a thoughtful, sustainable AI strategy. We explore her professional journey, the evolving definition of intelligence in the age of AI, and what “responsible AI” looks like in real-world practice. Purchase How to Win Friends and Influence Robots and connect with Cheryl on LinkedIn.
In this episode, Alta Charo, emerita professor of law and bioethics at the University of Wisconsin–Madison, joins Sullivan for a conversation on the evolving landscape of genome editing and its regulatory implications. Drawing on decades of experience in biotechnology policy, Charo emphasizes the importance of distinguishing between hazards and risks and describes the field's approach to regulating applications of technology rather than the technology itself. The discussion also explores opportunities and challenges in biotech's multi-agency oversight model and the role of international coordination. Later, Daniel Kluttz, a partner general manager in Microsoft's Office of Responsible AI, joins Sullivan to discuss how insights from genome editing could inform more nuanced and robust governance frameworks for emerging technologies like AI.
The Big Unlock Podcast · Designing AI-Native Healthcare with Innovation, Automation, and Responsible AI. – Podcast with Sara Vaezy In this episode, Sara Vaezy, Chief Transformation Officer at Providence, discusses Providence's strategic approach to digital transformation, consumer engagement, and responsible AI adoption to improve both patient and caregiver experiences. Sara highlights the importance of delivering personalized, frictionless, and proactive healthcare experiences across digital touchpoints. At Providence, a standout initiative is the use of conversational AI to enable ‘message deflection' which reduces the volume of patient messages sent to physicians by helping patients resolve queries instantly through intelligent chatbots. Sara emphasizes building a digital workforce not just to automate routine tasks, but to rethink and redesign workflows creatively. With foundational investments in cloud infrastructure, unified data systems, and interoperability, Providence is well-positioned to scale AI use cases like ambient documentation and care navigation. Sara also shares how Providence has incubated and spun off innovative startups like DexCare and Praia Health to address critical gaps in supply-demand matching and patient personalization. She advocates for ethical AI governance, better observability tools, and designing AI-native healthcare processes that go beyond simply replacing human tasks. Take a listen.
In this episode of the FEI Podcast, Sam Peterson, EY's Global Innovation Leader for Financial Accounting Advisory Services, explores the transformative impact of generative AI on the finance function. From the evolution of AI since its breakout moment to real-world use cases in FP&A, treasury, and financial reporting, Sam shares insights on how finance leaders can harness AI to drive efficiency, accuracy, and strategic value. The conversation also dives into emerging trends like agentic AI and reasoning models, and how professionals can prepare their teams for the future of work. Special Guest: Sam Peterson.
In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.
Featuring past guests:
Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034
✅ What You'll Learn:
What it means to design AI with safety, transparency, and human oversight in mind
How leading enterprises approach responsible AI development at scale
Why data privacy and permissions are critical to safe AI deployment
How to detect and mitigate bias in predictive models
Why responsible AI requires balancing speed with long-term impact
How trust, explainability, and compliance shape the future of enterprise AI
Resources
Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe
Other special compilation episodes:
Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder
It's been three years of Environment Variables! What a landmark year for the Green Software Foundation. From launching behind-the-scenes Backstage episodes, to covering the explosive impact of AI on software emissions, to broadening our audience through beginner-friendly conversations, this retrospective showcases our mission to create a trusted ecosystem for sustainable software. Here's to many more years of EV!
Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well. Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system. Michael Strange is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures). Related Resources If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/ Beyond ‘Our product is trusted!' – A processual approach to trust in AI healthcare (paper) https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539 Michael Strange (website): https://mau.se/en/persons/michael.strange/ A transcript of this episode is here.
The integration of Artificial Intelligence (AI) in healthcare presents both opportunities and challenges that demand careful consideration. The complex interplay between innovation, regulation, and ethical governance is a central theme at the heart of global discussions on health AI. This dialogue was brought to the forefront in a recent conversation with Ricardo Baptista Leite, CEO of Health AI - Global Agency for Responsible AI in Healthcare.
Understanding Health AI and Its Mission
Health AI, the global agency for responsible AI in health, is at the forefront of steering the development and adoption of AI solutions through collaborative regulatory mechanisms and global standards. www.facesofdigitalhealth.com https://fodh.substack.com/ Youtube:
“Climate activist in a suit”. This is how Rainer Karcher describes himself. It is an endless debate between people advocating for the system to change from the outside and those willing to change it from the inside. In this episode, Gaël Duez welcomes a strong advocate of moving the corporate world in the right direction from within. Having spent 2 decades in companies such as Siemens or Allianz, Rainer Karcher knows the corporate world well, which he now advises on sustainability. In this Green IO episode, they analyse the current backlash against ESG in our corporate world and what can be done to keep big companies aligned with the Paris agreement, but also caring about biodiversity and human rights across their supply chains. Many topics were covered, such as: Why ESG has nothing to do with “saving the planet”, 3 tips to tackle the end-of-the-month vs end-of-the-world dilemma, Embracing a global perspective on ESG and why the current backlash is a Western-world-only issue, Knowing the price we pay for AI and how to avoid the rebound effect, The challenge with shadow AI and why training is pivotal, and yes, they also talked about whales and many more things!
In this episode of The Wisdom Of ... Show, host Simon Bowen speaks with Dr. Catriona Wallace, a world-renowned AI pioneer, founder of the Responsible Metaverse Alliance, and one of the most influential voices in ethical technology development. With over two decades in AI, long before most people knew it existed, Catriona brings a unique perspective that bridges cutting-edge technology with ancient indigenous wisdom. As Chair of Boab AI, co-author of Checkmate Humanity, and a former Shark on Shark Tank Australia, Catriona has consistently been at the forefront of responsible technology development. But what makes this conversation extraordinary is her integration of plant medicine practices, indigenous community wisdom, and the "Seven Generations Principle" into the most advanced AI discussions of our time.
Ready to transform your leadership approach? Join Simon's exclusive masterclass on The Models Method. Learn how to articulate your unique value and create scalable impact: https://thesimonbowen.com/masterclass
Episode Breakdown
00:00 Introduction and Catriona's journey from wanting to be a farmer to becoming an AI pioneer
05:45 Why Australia risks becoming an "AI backwater" and the urgent need for responsible AI adoption
12:30 The difference between AI ethics and responsible AI and why most leaders get this wrong
18:15 The "evolutionary tipping point" toward transhumanism and what it means for business
25:20 Plant medicine journeys and their impact on tech leaders' understanding of regenerative economics
32:45 The Seven Generations Principle: How indigenous wisdom guides AI decision-making
38:30 From extraction to regeneration: Why business models must fundamentally transform
44:15 The eight principles of responsible AI and how to implement them in organizations
50:30 "Rapid Transformation" and the five-step process for evolving leadership consciousness
56:45 The intersection of technology love and nature love in shaping the future of humanity
About Dr. Catriona Wallace
Dr. Catriona Wallace has been recognized as a Top Global Power Woman by the Centre of Economic & Leadership Development and as the Most Influential Woman in Business & Entrepreneurship by the Australian Financial Review. In 2023, she was a Shark on the hit TV series Shark Tank Australia. Catriona is the founder of the Responsible Metaverse Alliance and Chair of Boab AI, Artesian Capital's AI Accelerator and VC fund. She was also the founder of Ethical AI Advisory (now part of the Gradient Institute) and co-author of Checkmate Humanity: The How and Why of Responsible AI. As founder of AI company Flamingo AI (which exited in 2020), Catriona led only the second woman-led business ever to list on the Australian Stock Exchange. She's an international keynote speaker, one of the world's most cited experts on AI and the Metaverse, and has been recognized by Onalytica as one of the world's top AI speakers. With a PhD in Organizational Behaviour: Technology Substituting for Human Leaders and an Honorary Doctorate in Business, Dr. Wallace was inducted into the Royal Institution of Australia as one of Australia's most pre-eminent scientists. She is also a human rights activist, mother of five, trained Plant Medicine Guide, and strong advocate of the Psychedelic Renaissance.
Connect with Dr. Catriona Wallace
LinkedIn: Dr. Catriona Wallace
Website: Responsible Metaverse Alliance
Personal Website:
Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on Generative AI training, what counts as infringement in AI outputs, and what is sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution. Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association. Transcript Thomson Reuters Wins Key Fair Use Fight With AI Startup Dale Cendali - 2024 Law360 MVP Copyright Office Report on Generative AI Training
In this episode of AI Answers, Paul Roetzer and Cathy McPhillips tackle 20 of the most pressing questions from our 48th Intro to AI class—covering everything from building effective AI roadmaps and selecting the right tools, using GPTs, navigating AI ethics, understanding great prompting, and more. Access the show notes and show links here Timestamps: 00:00:00 — Intro 00:08:46 — Question #1: How do you define a “human-first” approach to AI? 00:11:33 — Question #2: What uniquely human qualities do you believe we must preserve in an AI-driven world? 00:15:55 — Question #3: Where do we currently stand with AGI—and how close are OpenAI, Anthropic, Google, and Meta to making it real? 00:17:53 — Question #4: If AI becomes smarter, faster, and more accessible to all—how do individuals or companies stand out? 00:23:17 — Question #5: Do you see a future where AI agents can collaborate like human teams? 00:28:40 — Question #6: For those working with sensitive data, when does it make sense to use a local LLM over a cloud-based one? 00:30:50 — Question #7: What's the difference between ChatGPT Projects and Custom GPTs? 00:32:36 — Question #8: If an agency or consultant is managing dozens of GPTs, what are your best tips for organizing workflows, versioning, and staying sane at scale? 00:36:12 — Question #9: How do you personally decide which AI tools to use—and do you see a winner emerging? 00:38:53 — Question #10: What tools or platforms in the agent space are actually ready for production today? 00:43:10 — Question #11: For companies just getting started, how do you recommend they identify the right pain points and build their AI roadmap? 00:45:34 — Question #12: What AI tools do you believe deliver the most value to marketing leaders right now? 00:46:20 — Question #13: How is AI forcing agencies and consultants to rethink their models, especially with rising efficiency and lower costs? 00:51:14 — Question #14: What does great prompting actually look like? And how should employers think about evaluating that skill in job candidates? 00:54:40 — Question #15: As AI reshapes roles, does age or experience become a liability—or can being the most informed person in the room still win out? 00:56:52 — Question #16: What kind of changes should leaders expect in workplace culture as AI adoption grows? 01:00:54 — Question #17: What is ChatGPT really storing in its “memory,” and how persistent is user data across sessions? 01:02:11 — Question #18: How can businesses safely use LLMs while protecting personal or proprietary information? 01:02:55 — Question #19: Why do you think some companies still ban AI tools internally—and what will it take for those policies to shift? 01:04:13 — Question #20: If AI tools are free or low-cost, does that make us the product? Or is there a more optimistic future where creators and users both win This week's episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Tess Posner is the CEO and founding leader of AI4ALL, a nonprofit that works to ensure the next generation of AI leaders is diverse and well-equipped to innovate. Since joining in 2017, she has focused on embedding ethics, responsibility, and real-world impact into AI education. Her work connects students from underrepresented backgrounds to hands-on projects and mentorships that prepare them to lead in tech. Beyond her role at AI4ALL, Tess is a musician whose 2023 EP Alchemy has over 600,000 streams on Spotify. She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame Honoree and holds degrees from St. John's University and Columbia University.
In this conversation, we discuss:
Why AI literacy is becoming essential for everyone, from casual users to future developers
The role of project-based learning in helping students see the real-world impact of AI
What it takes to expand AI access for underrepresented communities
How AI can either reinforce bias or drive real change, depending on who's leading its development
Why schools should stop penalizing AI use and instead teach students to use it with curiosity and responsibility
Tess's views on balancing optimism and caution in the development of AI tools
Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Tess on LinkedIn or learn more about AI4ALL
AI fun fact article
On How To Build and Activate a Powerful Network
Past episodes mentioned in this conversation:
[With Tess in 2020] - About what leaders do in a crisis
[With Tess in 2019] - About how to mitigate AI bias and hiring best practices
[With Chris Caren, Turnitin CEO] - On Using AI to Prevent Students from Cheating
[With Marcus "Bellringer" Bell] - On Creating North America's First AI Artist
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field. Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum. Transcript AI Audits: Who, When, How...Or Even If? Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents. Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word). Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer. Related Resources The Hundred Page Language Models Book: https://thelmbook.com/ The Hundred Page Machine Learning Book: https://themlbook.com/ True Positive Weekly (newsletter): https://aiweekly.substack.com/ A transcript of this episode is here.
How is sustainability covered in main tech conferences? Sure, cybersecurity, DevOps, and anything related to SRE are covered at length. Not to mention AI… But what room is left for the environmental impact of our job? And which trends make their way from specialized Green IT conferences such as Green IO, GreenTech Forum or eco-compute to generic tech conferences? To talk about it, Gaël Duez sat down in this latest Green IO episode with Erica Pisani, who was the MC of the Performance and Sustainability track at QCon London this year. Together they discussed:
- The inspiring speakers in the track
- Why QCon didn't become AIcon
- How to get C-level buy-in by highlighting the new environmental risk
- The limits of efficiency: finding the balance between hardware stress and usage optimization
- Why performance and sustainability are tightly linked in technology
- Why assessing Edge computing's positive and negative impact is tricky
And much more! ❤️ Subscribe, follow, like, ... stay connected the way you want to never miss an episode, twice a month, on Tuesday!
Eric Brown Jr. is the founder of ELVTE Coaching and Consulting and a Generative AI innovation lead at Microsoft. In this powerful conversation with Rob Richardson, he unpacks how early adversity became fuel for legacy. From mentoring underserved youth to helping enterprise teams align tech with purpose, Eric proves that impact isn't just about innovation — it's about elevation.
Disruption Now Episode 180
Inside This Episode:
- Life Hacker Mindset: How reframing pain unlocks potential
- AI with Empathy: Why tech that doesn't center people fails
- The Power of Context: Making technology relatable and actionable for all
Connect with Eric Brown Jr.:
LinkedIn: www.linkedin.com/in/ericbrownjr
Forbes Council: councils.forbes.com/profile/Eric-Brown-Jr-Founder-%7C-Chief-Transformation-Officer-ELVTE-Coaching-and-Consulting/440ec31a-0e0d-4650-ae7c-a2b401148572
Thought Leadership: linkedin.com/pulse/empowering-dreams-lessons-learned-from-any-fellow-eric-brown-jr
Disruption Now
Apply to be a guest: form.typeform.com/to/Ir6Agmzr
Watch more episodes: podcast.disruptionnow.com
Disruption Now: Building a fair share for the Culture and Media. Join us and disrupt.
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
LinkedIn: https://www.linkedin.com/in/robrichardsonjr/
Instagram: https://www.instagram.com/robforohio/
Website: https://podcast.disruptionnow.com/
with Audrey Watters | Episode 903 | Tech Tool Tuesday
Are we racing toward an AI future without asking the right questions? Author and ed-tech critic Audrey Watters joins me to show teachers how to hit pause, get thoughtful, and keep classroom relationships at the center.
Sponsored by Rise Vision
Did you know the same solution that powers my AI classroom also drives campus-wide emergency alerts and digital signage? See how Rise Vision can save your school thousands: RiseVision.com/10MinuteTeacher
Highlights Include
Why “human first” still beats the newest AI tool: Audrey explains how relationships drive real learning.
Personalized learning myths busted: How algorithmic “solutions” can isolate students.
Practical guardrails for AI: Three reflection questions every teacher should ask before hitting “assign.”
We only talk about the upside of agentic AI. But why don't we talk about the risks? As AI agents grow exponentially more capable, so too does the likelihood of something going wrong. So how can we take advantage of agentic AI while also addressing the risks head-on? Join us to learn from a global leader on Responsible AI.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation
Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs
Keywords: Agentic AI, multi agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner