Advancing Digital Transformation To Pioneer Global Health Solutions. In this first episode of Narratives of Purpose's special series from the 2025 HIMSS European Health Conference, host Claire Murigande speaks with Hal Wolf, the President and CEO of HIMSS. HIMSS (Healthcare Information and Management Systems Society) is a non-profit organization with a strong commitment to advancing global health through technology, supporting the transformation of the health ecosystem and fostering health equity. In this interview, Hal emphasizes the necessity for a significant leap forward in our approach to healthcare, particularly in the context of artificial intelligence and its transformative potential to propel us towards more effective care delivery. Be sure to visit our podcast website for the full episode transcript.

LINKS:
Article covering HIMSS Europe 2025: AI capacity building in healthcare
LinkedIn posts covering HIMSS Europe 2025: The future of the healthcare workforce | Responsible AI in health | Cybersecurity | The European Health Data Space | Women in Health IT | Women's Health in focus
Learn more about HIMSS activities and events at himss.org
Follow HIMSS on their social media channels: LinkedIn | Facebook | Instagram
Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse. Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn's Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn's seminal Safety by Design paper, bookmark the Research Center to stay updated, and support Thorn's critical work by donating here.

Related Resources:
Thorn's Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report

A transcript of this episode is here.
What are the hidden dangers lurking beneath the surface of vibe coded apps and hyped-up CEO promises? And what is Influence Ops?

I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky.

We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term.

Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself.

Find Disesdi Susanna Cox:
Substack: https://disesdi.substack.com/
Socials (LinkedIn, X, etc.): @Disesdi

KEY MOMENTS:
00:26 - Who is Disesdi Susanna Cox?
03:52 - What are Red, Blue, and Purple Teams in Security?
07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash
12:32 - How GenAI Security is Different (and Worse) than Classical ML
14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App
18:34 - The Unethical Problem with "Vibe Coding"
24:32 - "Vibe Companies": The Gaslighting from CEOs About AI
30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing
33:13 - Deconstructing the "Woke AI" Panic
44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops
52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn
Welcome back to The Power Lounge, a space dedicated to meaningful conversations with industry leaders. In today's episode, "Empower Your Team with Responsible AI," host Amy Vaughan, Together Digital's Chief Empowerment Officer, explores a critical challenge for digital teams: adopting AI responsibly without compromising ethical standards.

Joining Amy is Nikki Ferrell, Associate Director of Online Enrollment and Marketing Communications at Miami University. Nikki has been instrumental in launching an AI steering committee to manage the swift integration of generative AI in higher education. Together, they examine the potential risks of unmanaged AI use, the importance of establishing clear policies, and how continuous learning and experimentation can cultivate ethical and innovative teams.

Whether you're a team leader, a business owner, or simply interested in the complexities of AI, this episode offers a practical framework for implementing technology that prioritizes people, purpose, and ethics.
Gain actionable insights and hear real-world experiences right here on The Power Lounge.

Chapters:
00:00 - Introduction
01:24 - AI's Impact: Unprepared Marketing Practices
05:08 - Creating AI Steering Committees
09:32 - Normalize Open AI Use at Work
14:42 - Adopting AI for Organizational Success
16:30 - Take Initiative to Lead
21:00 - Cautious Marketing on Mother's Day
25:25 - AI in Education: Gen Z & Alpha Hesitations
29:19 - "AI as Amplifying Tool"
30:55 - AI's Impact on Cognitive Skills
36:31 - AI Augments, Not Replaces, Workforce
38:30 - "Embracing Tech Amidst Red Tape"
41:45 - "Responsible AI Adoption Insights"
44:19 - AI Use Case Library Development
48:03 - Embracing AI for Strategic Future
51:01 - Exploring AI for Everyday Tasks
54:58 - AI-Assisted Strategy Development
58:51 - Subscribe for Updates & Community
59:45 - Outro

Quotes:
"Empowerment begins when we stop being afraid of new technology and start building community around it." - Amy Vaughan
"You don't need a title to lead the way with AI—start small, learn together, and let your curiosity spark real change." - Nikki Ferrell

Key Takeaways:
Start Small, Stay Grounded in Research
Policies Aren't Optional—They Empower
Openness Over Going Underground
You Don't Need a Title to Lead
Align with Mission and Values
Build a Culture of Experimentation
Transparency Builds Trust (and Avoids Backlash)
AI Augments, Not Replaces
Meet People Where They Are
The Future is Collaborative

Connect with Nikki Ferrell:
LinkedIn: https://www.linkedin.com/in/nferrell/
Website: https://miamioh.edu/

Connect with the host Amy Vaughan:
LinkedIn: http://linkedin.com/in/amypvaughan
Podcast: Power Lounge Podcast - Together Digital

Learn more about Together Digital and consider joining the movement by visiting Home - Together Digital

Support the show
What does it take to build intelligent systems that are not only AI-powered but also secure, scalable, and grounded in real-world needs? In this episode of Tech Talks Daily, I speak with Srinivas Chippagiri, a senior technology leader and author of Building Intelligent Systems with AI and Cloud Technologies. With over a decade of experience spanning Wipro, GE Healthcare, Siemens, and now Tableau at Salesforce, Srinivas offers a practical view into how AI and cloud infrastructure are evolving together. We explore how AI is changing cloud-native development through predictive maintenance, automated DevOps pipelines, and developer co-pilots. But this is not just about technology. Srinivas highlights why responsible AI needs to be part of every system design, sharing examples from his own research into anomaly detection, fuzzy logic, and explainable models that support trust in regulated industries. The conversation also covers the rise of hybrid and edge computing, the real challenges of data fragmentation and compute costs, and how teams are adapting with new skills like prompt engineering and model observability. Srinivas gives a thoughtful view on what ethical AI deployment looks like in practice, from bias audits to AI governance boards. For those looking to break into this space, his advice is refreshingly clear. Start with small, end-to-end projects. Learn by doing. Contribute to open-source communities. And stay curious. Whether you're scaling AI systems, building a career in cloud tech, or just trying to keep pace with fast-moving trends, this episode offers a grounded and insightful guide to where things are heading next. Srinivas's book is available on Amazon under Building Intelligent Systems with AI and Cloud Technologies, and you can connect with him on LinkedIn to continue the conversation.
GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it.
Generative AI and the Future of the Digital Commons
David Sacks on X: "A BEST CASE SCENARIO FOR AI? The Doomer narratives were wrong. Predicated on a "rapid take-off" to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence. Instead, we" / X
A taxonomy of hallucinations (see table 2)
Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' for Enterprise
Medicare will test using AI to help decide whether patients get coverage — which could delay or deny care, critics warn
Podcasting's 'Serial' Era Ends as Video Takes Over
Sara Kehaulani Goo named President of the Creator Network
What Happened When Mark Zuckerberg Moved In Next Door
Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments
Two-mile suspension bridge
Will Giz allow the Skee-ballers to make this their next outing?

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Tulsee Doshi

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
spaceship.com/twit
Melissa.com/twit
As AI becomes deeply embedded in every industry, building AI systems that are secure, responsible, and privacy-centric is more crucial than ever. But where do you begin? At the strategy level? Design? Or implementation? How do organizations tackle the challenges of AI risks, data governance, and compliance while keeping pace with innovation?

Join us for an insightful conversation with Punit Bhatia and Santosh Kaveti, CEO of ProArch, as we explore the evolving landscape of responsible AI, key foundational steps, and practical approaches to secure AI deployment.

If you're looking to understand how to build AI systems that are not only innovative but also secure and trustworthy, this episode is for you!

KEY CONVERSATION:
00:01:58 Responsible AI
00:04:30 AI Strategy
00:11:43 Role of Standards and Approach
00:15:35 Good Practices of Data Governance
00:19:55 AI Talent
00:23:10 ProArch's Role with Customers
00:25:00 Contact Information of Santosh

ABOUT GUEST
Santosh Kaveti is the CEO and Founder of ProArch, a purpose-driven enterprise that accelerates value and increases resilience for its clients with consulting and technology services, enabled by cloud, guided by data, fueled by apps, and secured by design. A technologist, entrepreneur, investor, and advisor with over 18 years of experience, Santosh has propelled ProArch to become a dominant force in key industry verticals such as Energy, Healthcare & Lifesciences, and Manufacturing, where he leverages his expertise in manufacturing process improvement, mentoring, and consulting.
Operationalizing AI: From Strategy to Execution
Navigating AI Risks: Ensuring Security and Compliance
Prioritizing AI Initiatives: Aligning with Business Goals
Attracting and Retaining Top AI Talent
Integrating AI into Core Business Functions
The Data Foundation: Governance, Quality, and Culture in AI

Santosh's journey is marked by resilience, ambition, and self-awareness: he has learned from his successes and failures and continuously evolved his skills and perspective. He has traveled across 23 countries, gaining insights into the global diversity and interconnectedness of human experiences. He is passionate about blending technology with a human-centric approach and making a meaningful societal impact through his support for initiatives that uplift underprivileged children, assist disadvantaged families, and promote social awareness.

Santosh's ethos extends to his investments in and mentorship of promising startups, as well as his role as Chairman of the Board at Enhops and iV4, two ProArch companies.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts, working independently with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals.

Punit is the author of the books "Be Ready for GDPR" (rated the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker who believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares.
Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/santoshkaveti/, https://www.proarch.com/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
Follow Alexandra on LinkedIn and X! Follow us on Instagram and on LinkedIn!

Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the base for the futuristic concepts built in line with the studio's mission of solving urban, social and environmental problems through intelligent designs.

Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or you can simply tell your friends about the show!

Don't forget to join us next week for another episode. Thank you for listening!
If there's one key takeaway from Season 4 of The Tea on Cybersecurity, it's that cybersecurity is a shared responsibility. With this in mind, host Jara Rowe wraps up the season by sharing valuable insights that everyone can use. She reflects on the most impactful lessons about compliance, AI, and penetration testing.

Key takeaways:
The importance of vCISOs and cyber engineers
How to approach penetration testing and PTaaS
Why transparency and training are essential for AI safety

Episode highlights:
(00:00) Today's topic: Key insights from this season
(01:16) The role of vCISOs and cyber engineers
(02:47) Responsible AI use
(03:51) Penetration testing and PTaaS for small teams

Connect with the host:
Jara Rowe's LinkedIn - @jararowe

Connect with Trava:
Website - www.travasecurity.com
Blog - www.travasecurity.com/learn-with-trava/blog
LinkedIn - @travasecurity
YouTube - @travasecurity
In this episode of Environment Variables, host Chris Adams welcomes Scott Chamberlin, co-founder of Neuralwatt and ex-Microsoft Software Engineer, to discuss energy transparency in large language models (LLMs). They explore the challenges of measuring AI emissions, the importance of data center transparency, and projects that work to enable flexible, carbon-aware use of AI. Scott shares insights into the current state of LLM energy reporting, the complexities of benchmarking across vendors, and how collaborative efforts can help create shared metrics to guide responsible AI development.
Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI.

Hiwot Tesfaye is a Technical Advisor in Microsoft's Office of Responsible AI and a Loomis Council Member at the Stimson Center, where she helped launch the Global Perspectives: Responsible AI Fellowship.

Related Resources:
#35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye

A transcript of this episode is here.
In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use.

Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.

Topics Discussed:
EdTech Thought: Chris debates the "Tech or No Tech" question in modern classrooms

EdTech Recommendations:
https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
https://whataicandotoday.com/ - We've analysed 16362 AI Tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83054 tasks of what AI can do today.
Why classrooms need a responsible AI policy

A five-part framework to build your AI classroom policy:
Define What AI Is (and Isn't)
Clarify When and How AI Can Be Used
Promote Transparency and Attribution
Include Privacy and Tool Approval Guidelines
Make It Collaborative and Flexible

The importance of modeling digital citizenship and AI literacy
Free editable AI policy template by Chris for grades K–12

Mentions:
Mike Brilla – The Inspired Teacher podcast
Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book
"If you're going to be running a very elite research institution, you have to have the best people. To have the best people, you have to trust them and empower them. You can't hire a world expert in some area and then tell them what to do. They know more than you do. They're smarter than you are in their area. So you've got to trust your people. One of our really foundational commitments to our people is: we trust you. We're going to work to empower you. Go do the thing that you need to do. If somebody in the labs wants to spend 5, 10, 15 years working on something they think is really important, they're empowered to do that." - Doug Burger Fresh out of the studio, Doug Burger, Technical Fellow and Corporate Vice President at Microsoft Research, joins us to explore Microsoft's bold expansion into Southeast Asia with the recent launch of the Microsoft Research Asia lab in Singapore. From there, Doug shares his accidental journey from academia to leading global research operations, reflecting on how Microsoft Research's open collaboration model empowers over thousands of researchers worldwide to tackle humanity's biggest challenges. Following on, he highlights the recent breakthroughs from Microsoft Research for example, the quantum computing breakthrough with topological qubits, the evolution from lines of code to natural language programming, and how AI is accelerating innovation across multiple scaling dimensions beyond traditional data limits. Addressing the intersection of three computing paradigms—logic, probability, and quantum—he emphasizes that geographic diversity in research labs enables Microsoft to build AI that works for everyone, not just one region. Closing the conversation, Doug shares his vision of what great looks like for Microsoft Research with researchers driven by purpose and passion to create breakthroughs that advance both science and society. 
Episode Highlights:
[00:00] Quote of the Day by Doug Burger
[01:08] Doug Burger's journey from academia to Microsoft Research
[02:24] Career advice: Always seek challenges, move when feeling restless or comfortable
[03:07] Launch of Microsoft Research Asia in Singapore: Tapping local talent and culture for inclusive AI development
[04:13] Singapore lab focuses on foundational AI, embodied AI, and healthcare applications
[06:19] AI detecting seizures in children and assessing Parkinson's motor function
[08:24] Embedding Southeast Asian societal norms and values into Foundational AI research
[10:26] Microsoft Research's open collaboration model
[12:42] Generative AI's rapid pace accelerating technological innovation and research tools
[14:36] AI revolutionizing computer architecture by creating completely new interfaces
[16:24] Open versus closed source AI models debate and Microsoft's platform approach
[18:08] Reasoning models enabling formal verification and correctness guarantees in AI
[19:35] Multiple scaling dimensions in AI beyond traditional data scaling laws
[21:01] Project Catapult and Brainwave: Building configurable hardware acceleration platforms
[23:29] Microsoft's 17-year quantum computing journey with topological qubits breakthrough
[26:26] Balancing blue-sky foundational research with application-driven initiatives at scale
[29:16] Three computing paradigms: logic, probability (AI), and quantum superposition
[32:26] Microsoft Research's exploration-to-exploitation playbook for breakthrough discoveries
[35:26] Research leadership secret: Curiosity across fields enables unexpected connections
[37:11] Hidden Mathematical Structures Transformers Architecture in LLMs
[40:04] Microsoft Research's vision: Becoming Bell Labs for AI era
[42:22] Steering AI models for mental health and critical thinking conversations

Profile: Doug Burger, Technical Fellow and Corporate Vice President, Microsoft Research
LinkedIn: https://www.linkedin.com/in/dcburger/
Microsoft Research Profile: https://www.microsoft.com/en-us/research/people/dburger/

Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format.

Here are the links to watch or listen to our podcast:
Analyse Asia Main Site: https://analyse.asia
Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl
Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245
Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia
Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/
Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia
Analyse Asia Threads: https://www.threads.net/@analyseasia
Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup
Subscribe Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Justin DiPietro, Co-Founder & Chief Strategy Officer of Glia, shares how they are leveraging AI to enhance the customer experience in the highly regulated world of financial institutions.

Topics Include:
Glia provides voice, digital, and AI services for customer-facing and internal operations
Built on "channel-less architecture" unlike traditional contact centers that added channels sequentially
One interaction can move seamlessly between channels (voice, chat, SMS, social)
AI applies across all channels simultaneously rather than per individual channel
700 customers, primarily banks and credit unions; 370 employees; headquartered in New York
Targets 3,500 banks and credit unions across the United States market
Focuses exclusively on financial services and other regulated industries
AI for regulated industries requires a different approach than non-regulated businesses
Traditional contact centers had a trade-off between cost and quality of service
AI enables higher quality while simultaneously decreasing costs for contact centers
Number one reason people call banks: "What's my balance?" (20% of calls)
Financial services require 100% accuracy, not 99.999%, due to trust requirements
Uses AWS exclusively for security, reliability, and future-oriented technology access
Real-time system requires triple-hot redundancy; seconds matter for live calls
Works with the Bedrock team; customers certify Bedrock rather than individual features
Showed examples of competitors' AI giving illegal million-dollar loans at 0%
"Responsible AI" separates probabilistic understanding from deterministic responses to customers
Uses three model types: client models, network models, and protective models
Traditional NLP had 50% accuracy; their LLM approach achieves 100% understanding
Policy is "use Nova unless" they can't, primarily for speed benefits

Participants:
Justin DiPietro – Co-Founder & Chief Strategy Officer, Glia

Further Links:
Glia Website
Glia AWS Marketplace

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
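The separation of probabilistic understanding from deterministic responses mentioned above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Glia's actual implementation: all names here (VETTED_RESPONSES, classify_intent, respond) are hypothetical. The idea is that a probabilistic model only proposes an intent, while the text sent to the customer always comes from a fixed set of vetted templates, so the model can never improvise something like an illegal 0% loan offer.

```python
# Hypothetical sketch: a probabilistic classifier proposes an intent,
# but customer-facing text is drawn only from deterministic, vetted templates.
VETTED_RESPONSES = {
    "balance_inquiry": "Your current balance is {balance}.",
    "human_handoff": "Let me connect you with a representative.",
}

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stand-in for a probabilistic model (an LLM in practice).
    Returns an (intent, confidence) pair."""
    if "balance" in utterance.lower():
        return "balance_inquiry", 0.97
    return "human_handoff", 0.30

def respond(utterance: str, balance: str, threshold: float = 0.9) -> str:
    """Deterministic gate: only a high-confidence, known intent may select
    a vetted template; anything else escalates to a human."""
    intent, confidence = classify_intent(utterance)
    if confidence < threshold or intent not in VETTED_RESPONSES:
        return VETTED_RESPONSES["human_handoff"]
    return VETTED_RESPONSES[intent].format(balance=balance)
```

The design choice is that the model output never reaches the customer directly; it only indexes into a reviewed response set, which is one way to reconcile probabilistic understanding with the 100% accuracy bar the episode describes.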
In this thought leadership session, ITSPmagazine co-founders Sean Martin and Marco Ciappelli moderate a dynamic conversation with five industry leaders offering their take on what will dominate the show floor and side-stage chatter at Black Hat USA 2025.

Leslie Kesselring, Founder of Kesselring Communications, surfaces how media coverage is shifting in real time—no longer driven solely by talk submissions but now heavily influenced by breaking news, regulation, and public-private sector dynamics. From government briefings to cyberweapon disclosures, the pressure is on to cover what matters, not just what's scheduled.

Daniel Cuthbert, member of the Black Hat Review Board and Global Head of Security Research at Banco Santander, pushes back on the hype. He notes that while tech moves fast, security research often revisits decades-old bugs. His sharp observation? “The same bugs from the ‘90s are still showing up—sometimes discovered by researchers younger than the vulnerabilities themselves.”

Michael Parisi, Chief Growth Officer at Steel Patriot Partners, shifts the conversation to operational risk. He raises concern over Model-Chained Prompting (MCP) and how AI agents can rewrite enterprise processes without visibility or traceability—especially alarming in environments lacking kill switches or proper controls.

Richard Stiennon, Chief Research Analyst at IT-Harvest, offers market-level insights, forecasting AI agent saturation with over 20 vendors already present in the expo hall. While excited by real advancements, he warns of funding velocity outpacing substance and cautions against the cycle of overinvestment in vaporware.

Rupesh Chokshi, SVP & GM at Akamai Technologies, brings the product and customer lens—framing the security conversation around how AI use cases are rolling out fast while security coverage is still catching up. From OT to LLMs, securing both AI and with AI is a top concern.

This episode is not just about placing bets on buzzwords. It's about uncovering what's real, what's noise, and what still needs fixing—no matter how long we've been talking about it.

___________
Guests:
Leslie Kesselring, Founder at Cyber PR Firm Kesselring Communications | On LinkedIn: https://www.linkedin.com/in/lesliekesselring/
“This year, it's the news cycle—not the sessions—that's driving what media cover at Black Hat.”
Daniel Cuthbert, Black Hat Training Review Board and Global Head of Security Research for Banco Santander | On LinkedIn: https://www.linkedin.com/in/daniel-cuthbert0x/
“Why are we still finding bugs older than the people presenting the research?”
Richard Stiennon, Chief Research Analyst at IT-Harvest | On LinkedIn: https://www.linkedin.com/in/stiennon/
“The urge to consolidate tools is driven by procurement—not by what defenders actually need.”
Michael Parisi, Chief Growth Officer at Steel Patriot Partners | On LinkedIn: https://www.linkedin.com/in/michael-parisi-4009b2261/
“Responsible AI use isn't a policy—it's something we have to actually implement.”
Rupesh Chokshi, SVP & General Manager at Akamai Technologies | On LinkedIn: https://www.linkedin.com/in/rupeshchokshi/
“The business side is racing to deploy AI—but security still hasn't caught up.”

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

___________
Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
BlackCloak: https://itspm.ag/itspbcweb
Akamai: https://itspm.ag/akamailbwc
DropzoneAI: https://itspm.ag/dropzoneai-641
Stellar Cyber: https://itspm.ag/stellar-9dj3

___________
Resources
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Multimodal interfaces. Real-time personalization. Data privacy. Content ownership. Responsible AI. In this episode, Eve Sangenito of global consultancy Perficient offers a grounded, enterprise lens on the evolving demands of AI-powered customer experience—and what leaders (and the partners who support them) need to understand right now. Eve and Sarah explore how generative AI is reshaping customer expectations, guiding tech investments, and redefining experience delivery at scale. For anyone driving digital transformation, building AI strategy, or modernizing enterprise CX, this conversation is a timely look at what's shifting—and what's ahead.
In this episode, our guest is Xiaochen Zhang, a global innovation leader and the driving force behind AI 2030 and FinTech for Good. Xiaochen shares his journey from the World Bank to founding organisations that champion responsible technology. He dives into the six-pillar framework of responsible AI—transparency, accountability, fairness, safety and security, sustainability, and privacy—and discusses the risks of the digital divide, ethical AI design, and the future of collaborative intelligence. He also highlights the transformative potential of AI across climate, financial inclusion, and renewable energy, underscoring the urgency of responsible leadership and inclusive innovation. A fascinating conversation bridging technology, ethics, and global impact. Connect with Sohail Hasnie: Facebook @sohailhasnie X (Twitter) @shasnie LinkedIn @shasnie ADB Blog Sohail Hasnie YouTube @energypreneurs Instagram @energypreneurs TikTok @energypreneurs Spotify Video @energypreneurs
In schools with limited resources, large class sizes, and wide differences in student ability, individualized learning has become a necessity. Artificial intelligence offers powerful tools to help meet those needs, especially in underserved communities. But the way we introduce those tools matters.

This week, Matt Kirchner talks with Sam Whitaker, Director of Social Impact at StudyFetch, about how AI can support literacy, comprehension, and real learning outcomes when used with purpose. Sam shares his experience bringing AI education to a rural school in Uganda, where nearly every student had already used AI without formal guidance. The results of a two-hour project surprised everyone and revealed just how much potential exists when students are given the right tools.

The conversation covers AI as a literacy tool, how to design platforms that encourage learning rather than shortcutting, and why student-facing AI should preserve creativity, curiosity, and joy. Sam also explains how responsible use of AI can reduce educational inequality rather than reinforce it.

This is a hopeful, practical look at how education can evolve—if we build with intention.

Listen to learn:
- Surprising lessons from working with students at a rural Ugandan school using artificial intelligence
- What different MIT studies suggest about the impacts of AI use on memory and productivity
- How AI can help U.S. literacy rates, and what far-reaching implications that will have
- What China's AI education policy for six-year-olds might signal about the global race for responsible, guided AI use

3 Big Takeaways:
1. Responsible AI use must be taught early to prevent misuse and promote real learning. Sam compares AI to handing over a car without driver's ed—powerful but dangerous without structure. When AI is used to do the thinking for students, it stifles creativity and long-term retention instead of developing it.
2. AI can help close educational gaps in schools that lack the resources for individualized learning. In many underserved districts, large class sizes make one-on-one instruction nearly impossible. AI tools can adapt to students' needs in real time, offering personalized learning that would otherwise be out of reach.
3. AI can play a key role in addressing the U.S. literacy crisis. Sam points out that 70% of U.S. inmates read at a fourth-grade level or below, and 85% of juvenile offenders can't read. Adaptive AI tools are now being developed to assess, support, and gradually improve literacy for students who have been left behind.

Resources in this Episode:
To learn about StudyFetch, visit: www.studyfetch.com
Other resources:
MIT Study "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence"
MIT Study "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task"
Learn more about the Ugandan schools mentioned: African Rural University (ARU) and Uganda Rural Development an

We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
On this episode of Embracing Erosion, Devon sits down with marketing executive and AI advisor Liza Adams. Liza has held senior marketing leadership roles at major companies like Smartsheet, Juniper Networks, and Pure Storage, and now helps organizations accelerate growth through applied AI strategies at GrowthPath Partners. They discuss all things AI, including what marketing leaders are missing today, the future of the org chart, what tomorrow's roles might look like, tactical tips on how to elevate your role using AI, and much more. Enjoy the conversation!
The rise of Artificial Intelligence ignites age-old questions about ethics, responsibility, and the nature of decision-making. As AI systems become more embedded in daily life, shaping what we see, buy, and believe, the call for "Ethical AI" grows louder. But what does that really mean? Is it about aligning machine behavior with human values? Can reasonable professionals agree on a set of standards to safely shepherd business into the AI Epoch? Register for this special two-hour DM Radio to find out! Host @eric_kavanagh will interview several industry luminaries, including: Andy Hannah, Founder of Blue Street Data, and Chairperson for the University of Pittsburgh's Responsible AI Advisory Board. Also joining will be Michael Colaresi, Associate Vice Provost for Data Science at the University of Pittsburgh. Another expert on the call will be Jessica Talisman, who draws on her background in library and information science to champion structured, ethical AI systems. Rounding out the panel will be Mench.ai Founder, Nikolai Mentchoukov, who created AI Agents before that name was even born!
Prepare for game-changing AI insights! Join Noelle Russell, CEO of the AI Leadership Institute and author of Scaling Responsible AI: From Enthusiasm to Execution. Noelle, an AI pioneer, shares her journey from the early Alexa team with Jeff Bezos, where her unique perspective shaped successful mindfulness apps. We'll explore her "I Love AI" community, which has taught over 3.4 million people. Unpack responsible, profitable AI, from the "baby tiger" analogy for AI development and organizational execution, to critical discussions around data bias and the cognitive cost of AI over-reliance.

Key Moments:
Journey into AI: From Jeff Bezos to Alexa (03:13): Noelle describes how she "stumbled into AI" after receiving an email from Jeff Bezos inviting her to join a new team at Amazon, later revealed to be the early Alexa team. She highlights that while she lacked inherent AI skills, her "purpose and passion" fueled her journey.
"I Love AI" Community & Learning (11:02): After leaving Amazon and experiencing a personal transition, Noelle created the "I Love AI" community. This free, neurodiverse space offers a safe environment for people, especially those laid off or transitioning careers, to learn AI without feeling alone, fundamentally changing their life trajectories.
The "Baby Tiger" Analogy (17:21): Noelle introduces her "baby tiger" analogy for early AI model development. She explains that in the "peak of enthusiasm" (baby tiger mode), people get excited about novel AI models, but often fail to ask critical questions about scale, data needs, long-term care, or what happens if the model isn't wanted anymore.
Model Selection & Explainability (32:01): Noelle stresses the importance of a clear rubric for model selection and evaluation, especially given rapid changes. She points to Stanford's HELM project (Holistic Evaluation of Language Models) as an open-source leaderboard that evaluates models on "toxicity" beyond just accuracy.
Avoiding Data Bias (40:18): Noelle warns against prioritizing model selection before understanding the problem and analyzing the data landscape, as this often leads to biased outcomes and the "hammer-and-nail" problem.
Cognitive Cost of AI Over-Reliance (44:43): Referencing recent industry research, Noelle warns about the potential "atrophy" of human creativity due to over-reliance on AI.

Key Quotes:
"Show don't tell... It's more about understanding what your review board does and how they're thinking and what their backgrounds are... And then being very thoughtful about your approach." - Noelle Russell
"When we use AI as an aid rather than as writing the whole thing or writing the title, when we use it as an aid, like, can you make this title better for me? Then our brain actually is growing. The creative synapses are firing away." - Noelle Russell
"Most organizations, most leaders... they're picking their model before they've even figured out what the problem will be... it's kind of like, I have a really cool hammer, everything's a nail, right?" - Noelle Russell

Mentions:
"I Love AI" Community
Scaling Responsible AI: From Enthusiasm to Execution - Noelle Russell
"Your Brain on ChatGPT" - MIT Media Lab
Power to Truth: AI Narratives, Public Trust, and the New Tech Empire - Stanford
Meta-learning, Social Cognition and Consciousness in Brains and Machines
HELM - A Reproducible and Transparent Framework for Evaluating Foundation Models

Guest Bio: Noelle Russell is a multi-award-winning speaker, author, and AI Executive who specializes in transforming businesses through strategic AI adoption. She is a revenue growth + cost optimization expert, 4x Microsoft Responsible AI MVP, and named the #1 Agentic AI Leader in 2025. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, is a consistent champion for data and AI literacy, and is the founder of the "I ❤️ AI" Community, teaching responsible AI for everyone. She is also the founder of the AI Leadership Institute and empowers business owners to grow and scale with AI. In the last year, she has been named an awardee of the AI and Cyber Leadership Award from DCALive, the #1 Thought Leader in Agentic AI, and a Top 10 Global Thought Leader in Generative AI by Thinkers360.

Hear more from Cindi Howson here. Sponsored by ThoughtSpot.
Dietmar Offenhuber reflects on synthetic data's break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact. Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis. Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction. Related Resources Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390 Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/ Reservoirs of Venice (project): https://res-venice.github.io/ Website: https://offenhuber.net/ A transcript of this episode is here.
The term ‘Responsible AI' is more than a buzzword; it's a call to action. When we talk about responsible AI, it's not about some fancy tech tools; it's about power, ethics, leadership, and long-term consequences. The questions we need to ask are ‘Who defines what's safe? Who decides what's ethical?'. At present, a handful of tech giants shape the answers to those questions. And while using AI might feel low-impact, developing these models comes at a great environmental cost. We live in a world that moves at hyperspeed, where it's not healthy to treat AI as just a tech tool. The time has come for nonprofits and leaders to step up and lead with responsibility.

In this week's episode, Scott and Nathan talk about the ever-evolving landscape of AI and the foundation of AI governance. AI technologies are developing at a remarkable rate, but governance is only slowly progressing. This mismatch shows how regulation is trying to catch up instead of preventing harm. Starting the conversation, Nathan shares his thoughts on the need for an adaptive, forward-thinking governance framework for AI that is able to anticipate risk, not just respond to it. Next, Nathan and Scott discuss the concentration of technological and geopolitical power in AI, where a handful of tech giants control the system, leaving the rest of us to decide if their definition of ‘Responsible AI' matches ours. Nathan explains why responsible AI should be everyone's responsibility. We are in dire need of drawing ethical lines, defining values, and demanding a transparent AI governance system before harm scales beyond our grasp. Further down in the conversation, Nathan and Scott cover the following topics: challenges in AI governance, guardrails for using AI, the role of leadership in responsible AI use, the environmental impact of developing AI, and more.

HIGHLIGHTS
[01:06] Governance and AI.
[04:04] The lack of progress in the governance framework.
[09:11] Challenges in AI governance.
[13:02] The concentration of technological and geopolitical power in AI.
[15:10] The importance of having guardrails for how to use AI.
[20:21] The role of leadership in responsible AI.
[25:31] The choice between acting in the fog or becoming irrelevant.
[27:50] Navigating the ethics and safety in AI.
[30:15] Environmental impact of AI.

RESOURCES
Laying the Foundation for AI Governance: aiforgood.itu.int/event/building-secure-and-trustworthy-ai-foundations-for-ai-governance/

Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
Today's guest is Miranda Jones, SVP of Data & AI Strategy at Emprise Bank. Miranda returns to discuss the evolving reality of responsible AI in the financial services sector. As generative and agentic systems mature, Jones emphasizes the importance of creating safe, low-risk environments where employees can experiment, learn prompt engineering, and develop a critical understanding of model limitations. She explores why domain-specific models outperform generalized foundational models in banking—where context, compliance, and communication style are essential to trust and performance. The episode also examines the strategic value of maintaining a deliberate pace in adopting agentic AI, ensuring human oversight and alignment with regulatory expectations. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!
"[Question: So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance?] Based on my interactions and conversations, now suddenly they think they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, oh, we have now this whole additional compliance cost because we have to do all these new things. The reason I see that as a bit of a misconception, because building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance. It's an integration of that because otherwise guess what's gonna happen? We all know that this is the next thing around the corner that's gonna pop up, whatever it's gonna be called. Are you gonna have to set up a whole new committee just because of that? Then the next thing, another one." - David Hardoon Fresh out of the studio, David Hardoon, Global Head of AI Enablement at Standard Chartered Bank, joins us in a conversation to explore how financial institutions can adopt AI responsibly at scale. He shares his unique journey from academia to government to global banking, reflecting on his fascination with human behavior that originally drew him to artificial intelligence. David explains how his time at Singapore's Monetary Authority shaped the groundbreaking FAIR principles, emphasizing how proper AI governance actually accelerates rather than inhibits innovation. He highlights real-world implementations from autonomous cash reconciliation agents to transaction monitoring systems, showcasing how banks are transforming operations while maintaining strict regulatory compliance. Addressing the biggest misconceptions about AI governance, he emphasizes the importance of integrating AI frameworks into existing structures rather than creating entirely new bureaucracies, while advocating for use-case-based approaches that build essential trust. 
Closing the conversation, David shares his philosophy that AI success ultimately depends on understanding human behavior and asks the fundamental question every organization should consider: "Why are we doing this?"

Episode Highlights:
[00:00] Quote of the Day by David Hardoon #QOTD - "AI governance isn't new bureaucracy."
[00:46] Introduction: David Hardoon from Standard Chartered Bank.
[02:02] How David's AI journey started with human behavior curiosity.
[07:26] Governance accelerates innovation, like traffic rules enable fast driving.
[10:31] FAIR principles at MAS Singapore born from lunches with compliance officers.
[14:23] Don't reinvent the governance wheel for AI implementations.
[24:17] Banks already manage risk; apply the same discipline to AI.
[28:40] AI adoption problem is trust, not technology.
[34:21] Autonomous AI agents handle cash reconciliation with bank IDs.
[36:00] AI reduces transaction monitoring false positives by 50%.
[39:54] AI requires a full supply chain from infrastructure to translators.
[41:52] Organizations must reward intelligent failure in AI innovation.
[44:47] AI hallucination is a feature, not a bug, for innovation.
[47:35] Measure AI ROI differently for innovation versus implementation teams.
[56:27] Final wisdom: People always ask "why" about AI initiatives.

Profile: David Hardoon, Global Head of AI Enablement, Standard Chartered Bank
Personal Site: https://davidroihardoon.com/
LinkedIn: https://www.linkedin.com/in/davidrh/

Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio formats. Here are the links to watch or listen to our podcast.
Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia YouTube: https://www.youtube.com/@Analys1eAsia Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Analyse Asia Threads: https://www.threads.net/@analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Tony chats with Lokesh Ballenahalli, Founder, and Sunil Shivappa, COO, at Enkefalos Technology, a research-first global AI company focused on building AI solutions for the insurance industry. They combine deep research in LLMs, Responsible AI, and domain expertise to develop what they call the AI operating system for insurance. It has three core layers: Insurance GTP, Agentic AI Applications, and monitoring of that AI.

Lokesh Ballenahalli: https://www.linkedin.com/in/lokesh-ballenahalli/
Sunil Shivappa: https://www.linkedin.com/in/sunil-m-shivappa-86273519a/
Enkefalos Technology: https://www.enkefalos.com/
Video Version: https://youtu.be/u3_xWZyPDEg
AI is reshaping how boards operate—but most leaders aren't ready. In this episode, we outline ten strategic actions every board must take to govern AI effectively and responsibly. From embedding AI into board agendas to modernising risk oversight and leadership structures, this is essential listening for executives navigating AI transformation. Learn how to align AI with business value, scale responsibly, and strengthen decision-making at the top.
Laid off or lost in the noise? You're not alone.

Episode 182: Nicole Dunbar—Cornell-trained strategist and viral LinkedIn voice—joins Disruption Now to dissect the “white collar recession.” From AI displacement to hiring freezes, she breaks down how even top-tier pros are invisible in today's job market.

Learn:
- 6 proven job search strategies (only 1 involves job boards)
- Why resumes alone fail—and what recruiters actually trust
- The mindset shift every mid-career pro must make now
- How to “network” without small talk or selling out
- The emotional trap that sabotages high performers

The hiring game changed. Here's how to stay in it.
She's not waiting for permission.

Flavilla Fongang went from the ghettos of Paris to building Black Rise—one of the UK's boldest tech tribes. She's mixing identity, data, and storytelling to scale Black power through business.

In this episode:
- Why storytelling beats pitching in tech
- How she turned oil & gas into a launchpad
- Her strategy for building community-first platforms
- Why AI is non-negotiable for Black excellence
- The real ROI of diverse ecosystems

Timestamps:
00:00 Intro — Paris to Power
01:42 Childhood in the Paris ghettos
03:55 Moving to London & early struggles
06:18 From oil & gas to fashion to tech
09:44 Founding 3 Colours Rule
12:30 How storytelling became her weapon
15:02 Why she built GTA Black Women in Tech
17:50 Launching Black Rise — the AI-powered tribe
21:33 Scaling community and credibility
25:01 Diversity with data, not feelings
28:40 What leadership looks like in 2025
31:09 Her advice to future founders
34:00 Closing thoughts + how to connect

About Flavilla Fongang:
Multi-award-winning entrepreneur. Founder of 3 Colours Rule, GTA Black Women in Tech, and now Black Rise. Former oil & gas exec turned tech community builder. UN Brand Partner. Named UK's Most Influential Woman in Tech (Computer Weekly), Global Top 100 MIPAD Innovator, and Entrepreneur of the Year (BTA 2023). She also serves as an entrepreneurship expert at Oxford University Saïd Business School.

Watch this if you lead with identity and build with vision.

Follow Flavilla Fongang:
LinkedIn: https://www.linkedin.com/in/flavillafongang
Twitter (X): https://x.com/FlavillaFongang
Instagram: https://www.instagram.com/flavillafongang
TikTok: https://www.tiktok.com/@flavillaf
Website: https://www.flavillafongang.com
Black Rise: https://www.theblackrise.com
GTA Black Women in Tech: https://theblackwomenintech.com
3 Colours Rule: https://www.3coloursrule.com

#blackfounder #techstorytelling #communitytech

Disruption Now: Disrupting the status quo, making emerging tech human-centric and accessible to all.
Website: https://disruptionnow.com/
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Music: Powerful Beat - Oleksandr Stepanov
She was told she wasn't tall enough, thin enough, pretty enough. Now she's rewriting the rules of beauty for everyone else. In episode 183 of the Disruption Now Podcast, Sian Bitner-Kearney, founder of Rock Your Beauty, reveals how perfectionism, filters, and social media lies damage self-worth — and what it takes to break free. Her nonprofit is helping women embrace their authentic selves, one fashion show and workshop at a time.
How can companies harness Responsible AI without losing the human touch? PepsiCo's VP of People Solutions, Mark Sankarsingh, shares how the company boosts productivity and streamlines HR—while safeguarding trust, ethics, and human judgment. Learn how to lead with integrity as you scale teams and adopt AI in a digital-first world.
In this episode of the Disruption Now Podcast, host Rob Richardson sits down with Jacob D. Frankel (Kobi), the visionary Founder and CEO of Beyond Alpha Ventures. Jacob shares his journey from managing over $355 million in assets on Wall Street to leading a multi-strategy family office and hedge fund that invests in transformative technologies like AI, quantum computing, and cybersecurity. He discusses the importance of investing with purpose, the future of venture capital, and how Beyond Alpha Ventures is shaping the infrastructure of an AI-driven future. Tune in for an insightful conversation on strategic investing, innovation, and building a legacy.

Top 3 Things You'll Learn from This Episode:
Investing with Purpose – Jacob emphasizes the significance of deploying capital in ways that make a tangible difference, focusing on ventures that combine financial opportunity with real-world impact.
Navigating Market Volatility – Learn how Beyond Alpha Ventures capitalizes on market dislocations and geopolitical shifts to identify high-conviction investment opportunities.
The Future of Venture Capital – Discover Jacob's insights on the evolving landscape of venture capital, including the rise of AI, quantum computing, and the importance of ethical foresight in investment strategies.

Jacob's Social Media Pages:
LinkedIn: https://www.linkedin.com/in/jacobfrankelprivateequity
Website: https://www.beyondalphaventures.com/

Disruption Now: Building a Fair Share for Culture and Media. Join us and disrupt.
Website: https://bit.ly/2VUO9sf
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com
Facebook: https://bit.ly/303IU8j
Instagram: https://bit.ly/2YOLl26
Twitter: https://bit.ly/2KfLaTf
Co-hosts Mark Thompson and Steve Little examine the controversial rise of AI image "restoration" and discuss how entirely new images are being generated, rather than the original photos being restored. This is raising concerns about the preservation of authentic family photos.

They discuss Mark's reconsideration of canceling his Perplexity subscription after rediscovering its unique strengths for supporting research.

The hosts analyze recent court rulings that permit AI training on legally acquired content, plus Disney's ongoing case against Midjourney.

This week's Tip of the Week explores how project workspaces in ChatGPT and Claude can greatly simplify your genealogical research.

In RapidFire, the hosts cover Meta's aggressive AI hiring spree, the proliferation of AI tools in everyday software, including a new genealogy transcription tool from Dan Maloney, and the importance of reading AI news critically.

Timestamps:
In the News:
06:50 The Pros and Cons of "Restoring" Family Photos with AI
23:58 Mark is Cancelling Perplexity... Maybe
32:33 AI Copyright Cases Are Starting to Work Their Way Through the Courts
Tip of the Week:
40:09 How Project Workspaces Help Genealogists Stay Organized
RapidFire:
48:51 Meta Goes on a Hiring Spree
56:09 AI Is Everywhere!
01:06:00 Reading AI News Responsibly

Resource Links:
OpenAI: Introducing 4o Image Generation https://openai.com/index/introducing-4o-image-generation/
Perplexity https://www.perplexity.ai/
How does Perplexity work? https://www.perplexity.ai/help-center/en/articles/10352895-how-does-perplexity-work
Anthropic wins key US ruling on AI training in authors' copyright lawsuit https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
Meta wins AI copyright lawsuit as US judge rules against authors https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
Disney, Universal sue image creator Midjourney for copyright infringement https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Disney and Universal Sue A.I. Firm for Copyright Infringement https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html
Projects in ChatGPT https://help.openai.com/en/articles/10169521-projects-in-chatgpt
Meta shares hit all-time high as Mark Zuckerberg goes on AI hiring blitz https://www.cnbc.com/2025/06/30/meta-hits-all-time-mark-zuckerberg-ai-blitz.html
Here's What Mark Zuckerberg Is Offering Top AI Talent https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/
Genealogy Assistant AI Handwritten Text Recognition Tool https://www.genea.ca/htr-tool/
Borland Genetics https://borlandgenetics.com/
Illusion of Thinking https://machinelearning.apple.com/research/illusion-of-thinking
Simon Willison: Seven replies to the viral Apple reasoning paper -- and why they fall short https://simonwillison.net/2025/Jun/15/viral-apple-reasoning-paper/
MIT: Your Brain on ChatGPT https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
Guiding Principles for Responsible AI in Genealogy https://craigen.org/

Tags: Artificial Intelligence, Genealogy, Family History, AI Tools, Image Generation, AI Ethics, Perplexity, ChatGPT, Claude, Meta, Copyright Law, AI Training, Photo Restoration, Project Management, AI Development, Research Tools, Responsible AI Use, GRIP, AI News Analysis, Vibe Coding, Coalition for Responsible AI in Genealogy, AI Hiring, Dan Maloney, Handwritten Text Recognition
In this episode of The Greener Way, host Michelle Baltazar discusses the governance risks posed by AI with Elfreda Jonker from Alphinity Investment Management. They explore the impact of AI on cybersecurity and data privacy, as highlighted in Alphinity's latest sustainability report. The conversation covers the importance of a Responsible AI framework, how companies including Netflix and Wesfarmers address these risks, and the need for better investor disclosures by fund managers on how they tackle AI risks.

Timestamps:
01:38 Overview of Alphinity's Investment Management
02:54 Highlights from the Sustainability Report
04:20 What did Netflix do
08:35 AI as a governance risk
11:09 Opportunities and challenges
13:54 Conclusion

Link: https://www.alphinity.com.au/
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy
Are we on the brink of an AI revolution that could reshape our lives in unimaginable ways? Are we worried about losing our jobs and our usual ways of doing things? This is a very real concern that can affect our emotional well-being. This week, we sit down with Kristof Horompoly, Head of AI Risk Management at ValidMind and former Head of Responsible AI for JP Morgan Chase, to tackle the biggest questions surrounding artificial intelligence. Kristof, with his deep expertise in the field, helps us navigate the promises and perils of AI. We explore a profound paradox: what if AI could unlock new realms of time, creativity, and even reignite our humanity, allowing us to focus on what truly matters? But conversely, what happens when we hand the steering wheel over to intelligent machines and they take us somewhere entirely unintended? In a world where machines can think, write, and create with increasing sophistication, we wonder: what is left for us to do? Should we be worried, or is there a path to embrace this future? Kristof provides thoughtful insights on how we can prepare for this evolving landscape, offering a grounded perspective on responsible AI development and what it means for our collective future. Tune in for an essential conversation on understanding, harnessing, and preparing for the age of AI.

Topics covered: AI, artificial intelligence, Kristof Horompoly, ValidMind, JP Morgan Chase, AI risk management, responsible AI, future of AI, AI ethics, human-AI interaction, AI impact, technology, innovation, podcast, digital transformation, AI challenges, AI opportunities

Video link: https://youtu.be/MGELXPkYMUU

Did you enjoy this episode and would like to share some love?
Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials. TRANSCRIPT Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of the episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here. Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today. Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI? Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is one of those that's focused on one principle to me, which is clinical purpose is first, instead of the algorithm or whatever technology we're going to be using. If we look at the best models in the world, they're really irrelevant unless we really solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair or making decision support. 
Currently, what I'm doing the most is focusing on solutions that are saving us time to be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're leveraging certain tools to assess for potential admission or readmission of patients who have certain conditions as well. And it's all about combining the listening of physicians like ourselves who are end users, those who create those algorithms, data scientists, and patient advocates, and even regulators, before they even write any single line of code. I felt that on my own, you know, entrepreneurial aspects, but I think it's an ethos that we should all follow. And I think that AI shouldn't be just bolted on later. We always have to look at workflows and try to look, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure that first, it's easier to access for patients, that oncologists like myself can go into the interface and be able to pull the data in real time when you really need it, and you don't get all this fatigue alerts. To me, that's the responsible way of doing so. Those are like the opportunities, right? So, the challenge is how we can make this happen in a meaningful way – we're just not reacting to like a black box suggestion or something that we have no idea why it came up to be. So, in terms of success – and I can tell you probably two stories of things that we know we're seeing successful – we all work closely with radiation oncologists, right? So, there are now these tools, for example, of automated contouring in radiation oncology, and some of these solutions were brought up in different meetings, including the last ASCO meeting. 
But overall, we know that transformer-based segmentation tools; transformer is just the specific architecture of the machine learning algorithm that has been able to dramatically reduce the time for colleagues to spend allotting targets for radiation oncology. So, comparing the target versus the normal tissue, which sometimes it takes many hours, now we can optimize things over 60%, sometimes even in minutes. So, this is not just responsible, but it's also an efficiency win, it's a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that and, you know, I don't want to preach to the choir here, but having the ability for us to structure data in real time using these tools, being able to extract information on biomarkers, and then show that multi-agentic AI is superior to what we call zero-shot or just throwing it into ChatGPT or any other algorithm, but using the same tools but just fine-tuned to the point that we can be very efficient and actually reliable to the level of almost like a research coordinator, is not just theory. Now, it can change lives because we can get patients enrolled in clinical trials and be activated in different places wherever the patient may be. I know it's like a long answer on that, but, you know, as we talk about responsible AI, that's important. And in terms of what keeps me up at night on this: data drift and biases, right? So, imaging protocols, all these things change, the lab switch between different vendors, or a patient has issues with new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, then the output can be really inaccurate. 
So, the idea is to become a collaborative approach where we can use federated learning and patient-centricity so we can be much more efficient in developing those models that account for all the populations, and any retraining that is used based on data can be diverse enough that it represents all of us and we can be treated in a very good, appropriate way. So, if a clinician doesn't understand why a recommendation is made, as you probably know, you probably don't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things. Dr. Paul Hanona: Absolutely. And even the part about the clinical trials, I want to dive a little bit more into in a few questions. I just kind of wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it seems like it's improving at a pretty dramatic speed as well. I wonder how quickly that'll get adopted by the majority of physicians or practitioners in general throughout the country. And you also mentioned things with AI tools regarding helping regulators move things quicker, even the radiation oncologist, helping them in their workflow with contouring and what else they might have to do. And again, the clinical trials thing will be quite interesting to get into. The first question I had subsequent to that is just more so when you have large datasets. And this pertains to two things: the paper that you published recently regarding different ways to use AI in the space of oncology referred to drug development, the way that we look at how we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps that you have to take to design something, to make sure that one chemical will fit into the right chemical or the structure of the molecule, that takes a lot of time to tinker with. 
What are your thoughts on AI tools to help accelerate drug development? Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail and something that I feel we should dedicate as much time and effort as possible because it relies on multimodality. It cannot be solved by just looking at patient histories. It cannot be solved by just looking at the tissue alone. It's combining all these different datasets and being able to understand the microenvironment, the patient condition and prior treatments, and how dynamic changes that we do through interventions and also exposome – the things that happen outside of the patient's own control – can be leveraged to determine like what's the best next step in terms of drugs. So, the ones that we heard the news the most is, for example, the Nobel Prize-winning [for Chemistry awarded to Demis Hassabis and John Jumper for] AlphaFold, an AI system that predicts protein structures right? So, we solved this very interesting concept of protein folding where, in the past, it would take the history of the known universe, basically – what's called the Levinthal's paradox – to be able to just predict on amino acid structure alone or the sequence alone, the way that three-dimensionally the proteins will fold. So, with that problem being solved and the Nobel Prize being won, the next step is, “Okay, now we know how this protein is there and just by sequence, how can we really understand any new drug that can be used as a candidate and leverage all the data that has been done for many years of testing against a specific protein or a specific gene or knockouts and what not?” So, this is the future of oncology and where we're probably seeing a lot of investments on that. The key challenge here is mostly working on the side of not just looking at pathology, but leveraging this digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There's a number of efforts currently being done. 
One isn't just H&E, like hematoxylin and eosin, slides alone, but with whole imaging, now we can use expression profiles, spatial transcriptomics, and gene whole exome sequencing in the same space and use this transformer technology in a multimodality approach that we know already the slide or the pathology, but can we use that to understand, like, if I knock out this gene, how is the microenvironment going to change to see if an immunotherapy may work better, right? If we can make a microenvironment more reactive towards a cytotoxic T cell profile, for example. So, that is the way where we're really seeing the field moving forward, using multimodality for drug discovery. So, the FDA now seems to be very eager to support those initiatives, so that's of course welcome. And now the key thing is the investment to do this in a meaningful way so we can see those candidates that we're seeing from different companies now being leveraged for rare disease, for things that are going to be almost impossible to collect enough data, and make it efficient by using these algorithms that sometimes, just with multiple masking – basically, what they do is they mask all the features and force the algorithm to find solutions based on the specific inputs or prompts we're doing. So, I'm very excited about that, and I think we're going to be seeing that in the future. Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass that's surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass that's surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment or the grass that's surrounding the dandelion and better tailor therapy, come up with therapy. Otherwise, like you said, to truly generate a drug, this would take years and years. 
We just don't have the throughput to get to answers like that unless we have something like AI to help us. Dr. Arturo Loaiza-Bonilla: Correct. Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials. You don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of, but the fact that the majority of patients who should be on clinical trials are never given the chance to be on clinical trials, whether that's because of proximity, right, they might live somewhere that's far from the institution, or for whatever reason, they don't qualify for the clinical trial, they don't meet the strict inclusion criteria. But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And then even if you are aware of those clinical trials, to actually find the sites and put in the time could take hours. And so, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available. We just can't access them. So, if we have a tool that helps with access, wouldn't that be huge? Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. And for those who know me and follow me and we've spoken about it in different settings, that's something that I think we can solve. This other paradox, which is the clinical trial enrollment paradox, right? We have tens of thousands of clinical trials available with millions of patients eager to learn about trials, but we don't enroll enough and many trials close to accrual because of lack of enrollment.
It is completely paradoxical and it's because of that misalignment because patients don't know where to go for trials and sites don't know what patients they can help because they haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs, where we, in a patient-centric manner, the same way we look for Uber, Instacart, any solution that you may think of that you want something that can be leveraged in real time, we can use these real-world data streams from the patient directly, from hospitals, from pathology labs, from genomics companies, to continuously screen patients who can match to the inclusion/exclusion criteria of unique trials. So, when the patient walks into the clinic, the system already knows if there's a trial and alerts the site proactively. The patient can actually also do decentralization. So, there's a number of decentralized clinical trial solutions that are using what I call the “click and mortar” approach, which is basically the patient is checking digitally and then goes to the site to activate. We can also have the click and mortar in the bidirectional way where the patient is engaged in person and then you give the solution like the ones that are being offered on things that we're doing at Massive Bio and beyond, which is having the patient to access all that information and then they make decisions and enroll when the time is right. As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. 
So, having that real-time alerts using tools that are able already to extract data from summarization that we already have in different settings and doing this natural language ingestion, we can not only solve this issue with manual chart review, which is extremely cumbersome and takes forever and takes to a lot of one-time assessments with very high screen failures, to a real-time dynamic approach where the patient, as they get closer to that eligibility criteria, they get engaged. And those tools can be built to activate trials, audit trials, and make them better and accessible to patients. And something that we know is, for example, 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we can potentially activate certain of those trials in those locations. So, there's a number of pharmacies, special pharmacies, Walgreens, and sometimes CVS trying to do some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with corporate groups, they're all interested in those efforts as well, to getting patients digitally enabled and then activate the same way we activate the NCTN network of the corporate groups, that are almost just-in-time. You can activate a trial the patient is eligible for and we get all these breakthroughs from the NIH and NCI, just activate it in my site within a week or so, as long as we have the understanding of the protocol. So, using clinical trial matching in a digitally enabled way and then activate in that same fashion, but not only for NCTN studies, but all the studies that we have available will be the key of the future through those prescreening hubs. 
So, I think now we're at this very important time where collaboration is the important part and having this silo-breaking approach with interoperability where we can leverage data from any data source and from any electronic medical records and whatnot is going to be essential for us to move forward because now we have the tools to do so with our phones, with our interests, and with the multiple clinical trials that are coming into the pipelines. Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps that are happening in the background. But just as a clarifier, how much time is it taking now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient by the time the manual chart review happens, by the time the matching happens, the calls go out, the sign-up, all this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that? Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So one is the matching, the other one is the enrollment, which, as you mentioned, is very important. So, it can take, from, as you said, probably between 4 days to sometimes 30 days. Sometimes that's how long it takes for all the things to be parsed out in terms of logistics and things that could be done now agentically. So, we can use agents to solve those different steps that may take multiple individuals. We can just do it as a supply chain approach where all those different steps can be done by a single agent in a simultaneous fashion and then we can get things much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data in ASCO as well – you can do 5,000 patients in an hour, right? 
So, just enrolling is going to be between an hour and maximum enrollment, it could be 7 days for those 5,000 patients if it was done at scale in a multi-level approach where we have all the trials available. Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to people who need it? I'm very much looking forward to the future of that. One of the last questions I want to ask you is another prevalent way that people use AI is just simply looking up questions, right? So, traditionally, the workflow for oncologists is maybe going on national guidelines and looking up the stage of the cancer and seeing what treatments are available and then referencing the papers and looking at who was included, who wasn't included, the side effects to be aware of, and sort of coming up with a decision as to how to treat a cancer patient. But now, just in the last few years, we've had several tools become available that make getting questions easier, make getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence or even ASCO has a Guidelines Assistant as well that is drawing from their own guidelines as to how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What do you think is the role that they're going to play with patient care? Dr. Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools, they're coming left and right and becoming increasingly common in our daily workflows and things that we're doing. So, traditionally, when we go and we look for national guidelines, we try to understand the context ourselves and then we make treatment decisions accordingly. But that is a lot of a process that now AI is helping us to solve.
So, at face value, it seems like an efficiency win, but in many cases, I personally evaluate platforms as the chief of hem/onc at St. Luke's and also having led the digital engagement things through Massive Bio and trying to put things together, I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot really take a lot of shortcuts in this case or have unverified output. So, the tools are helpful, but it has to be grounded in truth, in trusted data sources, and they need to be continuously updated with, like, ASCO and NCCN and others. So, the reason why the ASCO Guidelines Assistant, for instance, works is because it builds on all these recommendations, is assessed by end users like ourselves. So, that kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated. So, the role of human expert validation is really actually more important, not less important. You know, generalist LLMs, even when fine-tuned, they may not be enough. You can pull a few API calls from PubMed, etc., but what we need now is specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, something that we are continuing to check on and very relevant to have entities and bodies like ASCO looking into this so they can help us to be really efficient and really help our patients. Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI, things that we should be cautious about, and things that we should be optimistic about? Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but always, there's a few priorities that I think – 3 of them, I think – we need to tackle head-on. First is algorithmic equity. 
So, most AI tools today are trained on data from academic medical centers but not necessarily from community practices or underrepresented populations, particularly when you're looking at radiology, pathology, and what not. So, those blind spots, they need to be filled, and we can eliminate a lot of disparities in cancer care. So, those frameworks to incentivize while keeping the data sharing using federated models and things that we can optimize is key. The second one is the governance on the lifecycle. So, you know, AI is not really static. So, unlike a drug that is approved and it just, you know, works always, AI changes. So, we need to make sure that we have tools that are able to retrain and recall when things degrade or models drift. So, we need to use up-to-date AI for clinical practice, so we are going to be in constant revalidation and make it really easy to do. And lastly, the human-AI interface. You know, clinicians don't need more noise or we don't need more black boxes. We need decision support that is clear, that we can interpret, and that is actionable. “Why are you using this? Why did we choose this drug? Why this dose? Why now?” So, all these things are going to help us and that allows us to trace evidence with a single click. So, I always call it back to the Moravec's paradox where we say, you know, evolution gave us so much energy to discern in the sensory-neural and dexterity. That's what we're going to be taking care of patients. We can use AI to really be a force to help us to be better clinicians and not to really replace us. So, if we get this right and we decide for transparency with trust, inclusion, etc., it will never replace any of our work, which is so important, as much as we want, we can actually take care of patients and be personalized, timely, and equitable. So, all those things are what get me excited every single day about these conversations on AI. Dr. Paul Hanona: All great thoughts, Dr. Bonilla. 
I'm very excited to see how this field evolves. I'm excited to see how oncologists really come to this field. I think with technology, there's always a bit of a lag in adopting it, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements that you've made in your own career in the field of AI and oncology and, ultimately, for the hopeful outcomes of improving patient care, especially for cancer patients. Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona. Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on the ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts. Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement. More on today's speakers: Dr. Arturo Loaiza-Bonilla @DrBonillaOnc Dr. Paul Hanona @DoctorDiscover on YouTube Follow ASCO on social media: @ASCO on Twitter ASCO on Facebook ASCO on LinkedIn ASCO on BlueSky Disclosures: Paul Hanona: No relationships to disclose. Dr. Arturo Loaiza-Bonilla: Leadership: Massive Bio Stock & Other Ownership Interests: Massive Bio Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, CardinalHealth, Pfizer, AstraZeneca, Medscape Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera
In this episode of The Broadband Bunch, host Brad Hine sits down with Chris Draper, Board Chair at SafetAI, recorded live at day two of the Community Broadband Action Network (CBAN) conference in Ames, Iowa. With broadband providers increasingly overwhelmed by promises of “AI-infused” solutions, Chris brings clarity and expertise to the conversation around artificial intelligence in utilities and broadband. Drawing from his experience in high-risk technology environments—from rocket science to compliance in legal and government tech—Chris discusses the need for intentional, ethical AI implementation. He explores the "art of the possible" while highlighting the real-world risks of AI systems operating faster than human oversight can manage. Listeners will gain insight into how AI can amplify human action when deployed responsibly—especially in rural broadband and utility environments—and why now is the time to establish ethical frameworks before regulatory mandates catch up. Chris also shares his philosophy on data responsibility, automation pitfalls, the importance of transparency, and how SafetAI is helping organizations make informed decisions about AI adoption.
Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required. Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger's eerily precise predictions, the skill of critical thinking, and why it's not really about the questions at all. Pia Lauritzen, PhD is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member Pia is on a mission to democratize the power of questions.
Related Resources
Questions (Book): https://www.press.jhu.edu/books/title/23069/questions
TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
Question Jam: www.questionjam.com
Forbes Column: forbes.com/sites/pialauritzen
LinkedIn Learning: www.Linkedin.com/learning/pialauritzen
Personal Website: pialauritzen.dk
A transcript of this episode is here.
"You can try to develop self-awareness and take a beginner's mind in all things. This includes being open to feedback and truly listening, even when it might be hard to receive. I think that's been something I've really tried to practice. The other area is recognizing that just like a company or country, as humans we have many stakeholders. You may wear many hats in different ways. So as we think of the totality of your life over time, what's your portfolio of passions? How do you choose—as individuals, as society, as organizations, as humans and families with our loved ones and friends—to not just spend your time and resources, but really invest your time, resources, and spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So it's not so much advice, but more like a north star." - Sabastian V. Niles Fresh out of the studio, Sabastian Niles, President and Chief Legal Officer at Salesforce Global, joins us to explore how trust and responsibility shape the future of enterprise AI. He shares his journey from being a high-tech corporate lawyer and trusted advisor to leading AI governance at a company whose number one value is trust, reflecting on the evolution from automation to agentic AI that can reason, plan, and execute tasks alongside humans. Sabastian explains how Agentforce 3.0 enables agent-to-agent interactions and human-AI collaboration through command centers and robust guardrails. He highlights how organizations are leveraging trusted AI for personalized customer experiences, while Salesforce's Office of Ethical and Humane Use operationalizes trust through transparency, explainability, and auditability. Addressing the black box problem in AI, he emphasizes that guardrails provide confidence to move faster rather than creating barriers. Closing the conversation, Sabastian shares his vision on what great looks like for trusted agentic AI at scale. 
Episode Highlights
[00:00] Quote of the Day by Sabastian Niles: "Portfolio of passions - invest your spirit into areas that bring meaning"
[01:02] Introduction: Sabastian Niles, President and Chief Legal Officer of Salesforce Global
[02:29] Sabastian's Career Journey
[04:50] From trusted advisor to Salesforce, whose number one value is trust
[08:09] Salesforce's 5 core values: Trust, Customer Success, Innovation, Equality, Sustainability
[10:25] Defining Agentic AI: humans with AI agents driving stakeholder success together
[13:13] Trust paradigm shift: trusted approaches become an accelerant, not an obstacle
[17:33] Agent interactions: not just human-to-agent, but agent-to-agent-to-agent handoffs
[23:35] Enterprise AI requires transparency, explainability, and auditability
[28:00] Trust philosophy: "begins long before prompt, continues after output"
[34:06] Office of Ethical and Humane Use operationalizes trust values
[40:00] Future vision: AI helps us spend time on uniquely human work
[45:17] Governance philosophy: Guardrails provide confidence to move faster
[48:24] What does great look like for Salesforce for Trust & Responsibility in the Era of AI?
[50:16] Closing
Profile: Sabastian V. Niles, President & Chief Legal Officer, LinkedIn: https://www.linkedin.com/in/sabastian-v-niles-b0175b2/
Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast.
Analyse Asia Main Site: https://analyse.asia
Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl
Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245
Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia
Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/
Co-hosts Mark Thompson and Steve Little discuss recent updates from Google Gemini and Anthropic Claude that are reshaping AI capabilities for genealogists: Google's Gemini 2.5 Pro, with its massive context window, and Claude 4's hybrid reasoning models, which excel at both writing and document analysis. They share insights from the RootsTech panel on responsible AI use in genealogy and introduce the Coalition's five core principles for the responsible use of AI. The episode features an interview with Jessica Taylor, president of Legacy Tree Genealogists, who discusses how her company is thoughtfully experimenting with AI tools. In RapidFire, they preview ChatGPT 5's anticipated summer release, Meta's $14.8 billion Scale AI acquisition to stay competitive, and Adobe Acrobat AI's new multi-document capabilities.
Timestamps:
In the News:
03:45 Google Gemini 2.5 Pro: Massive Context Windows Transform Document Analysis
15:09 Claude 4 Opus and Sonnet: Hybrid Reasoning Models for Writing and Research
26:30 RootsTech Panel: Coalition for Responsible AI in Genealogy
Interview:
31:28 Jessica Taylor, CEO of Legacy Tree Genealogists, on her cautious approach to AI adoption
RapidFire:
45:07 ChatGPT 5 Coming Soon: One Model to Rule Them All
51:08 Meta's $14.8 Billion Scale AI Acquisition
56:42 Adobe Acrobat AI Assistant Adds Multi-Document Analysis
Resource Links
Google I/O Conference Highlights: https://blog.google/technology/ai/google-io-2025-all-our-announcements/
Anthropic Announces Claude 4: https://www.anthropic.com/news/claude-4
Anthropic's new Claude 4 AI models can reason over many steps: https://techcrunch.com/2025/05/22/anthropics-new-claude-4-ai-models-can-reason-over-many-steps/
Coalition for Responsible AI in Genealogy: https://craigen.org/
Jessica M. Taylor: https://www.apgen.org/users/jessica-m-taylor
Legacy Tree Genealogists: https://www.legacytree.com/
RootsTech: https://www.familysearch.org/en/rootstech/
ChatGPT 5 is Coming Soon: https://www.tomsguide.com/ai/chatgpt/chatgpt-5-is-coming-soon-heres-what-we-know
Meta's $14.8 billion Scale AI deal latest test of AI partnerships: https://www.reuters.com/sustainability/boards-policy-regulation/metas-148-billion-scale-ai-deal-latest-test-ai-partnerships-2025-06-13/
A frustrated Zuckerberg makes his biggest AI bet: https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html
Adobe upgrades Acrobat AI chatbot to add multi-document analysis: https://www.androidauthority.com/adobe-ai-assistant-acrobat-3451988/
Tags: Artificial Intelligence, Genealogy, Family History, AI Tools, Google Gemini, Claude AI, OpenAI, ChatGPT, Meta AI, Adobe Acrobat, Responsible AI, Coalition for Responsible AI in Genealogy, RootsTech, AI Ethics, Document Analysis, AI Writing Tools, Hybrid Reasoning Models, Context Windows, Professional Genealogy, Legacy Tree Genealogists, Jessica Taylor, AI Integration, Multi-Document Analysis, AI Acquisitions
Multi-agentic AI is rewriting the future of work... but are we racing ahead without checking for warning signs? Microsoft's new agent systems can split up work, make choices, and act on their own. The possibilities? Massive. But it's not without risks, which is why you NEED to listen to Sarah Bird. She's the Chief Product Officer of Responsible AI at Microsoft and is constantly building out safer agentic AI. So what's really at stake when AIs start making decisions together? And how do you actually stay in control? We're pulling back the curtain on the 3 critical risks of multi-agentic AI and unveiling the playbook to navigate them safely.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Responsible AI: Evolution and Challenges
Agentic AI's Ethical Implications
Multi-Agentic AI Responsibility Shift
Microsoft's AI Governance Strategies
Testing Multi-Agentic Risks and Patterns
Agentic AI: Future Workforce Skills
Observability in Multi-Agentic Systems
Three Risk Categories in AI Implementation
Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs
Keywords: Agentic AI, multi-agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode, Alta Charo, emerita professor of law and bioethics at the University of Wisconsin–Madison, joins Sullivan for a conversation on the evolving landscape of genome editing and its regulatory implications. Drawing on decades of experience in biotechnology policy, Charo emphasizes the importance of distinguishing between hazards and risks and describes the field's approach to regulating applications of technology rather than the technology itself. The discussion also explores opportunities and challenges in biotech's multi-agency oversight model and the role of international coordination. Later, Daniel Kluttz, a partner general manager in Microsoft's Office of Responsible AI, joins Sullivan to discuss how insights from genome editing could inform more nuanced and robust governance frameworks for emerging technologies like AI.
The Big Unlock Podcast · Designing AI-Native Healthcare with Innovation, Automation, and Responsible AI. – Podcast with Sara Vaezy In this episode, Sara Vaezy, Chief Transformation Officer at Providence, discusses Providence's strategic approach to digital transformation, consumer engagement, and responsible AI adoption to improve both patient and caregiver experiences. Sara highlights the importance of delivering personalized, frictionless, and proactive healthcare experiences across digital touchpoints. At Providence, a standout initiative is the use of conversational AI to enable ‘message deflection' which reduces the volume of patient messages sent to physicians by helping patients resolve queries instantly through intelligent chatbots. Sara emphasizes building a digital workforce not just to automate routine tasks, but to rethink and redesign workflows creatively. With foundational investments in cloud infrastructure, unified data systems, and interoperability, Providence is well-positioned to scale AI use cases like ambient documentation and care navigation. Sara also shares how Providence has incubated and spun off innovative startups like DexCare and Praia Health to address critical gaps in supply-demand matching and patient personalization. She advocates for ethical AI governance, better observability tools, and designing AI-native healthcare processes that go beyond simply replacing human tasks. Take a listen.
In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.
Featuring past guests:
Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034
✅ What You'll Learn:
What it means to design AI with safety, transparency, and human oversight in mind
How leading enterprises approach responsible AI development at scale
Why data privacy and permissions are critical to safe AI deployment
How to detect and mitigate bias in predictive models
Why responsible AI requires balancing speed with long-term impact
How trust, explainability, and compliance shape the future of enterprise AI
Resources
Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe
Other special compilation episodes:
Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder