In this special episode of Essential ESG, Phoebe Wynn-Pope and James North discuss the fast-evolving landscape of responsible artificial intelligence (AI) governance. As AI technologies continue to transform industries, the legal and regulatory frameworks surrounding them are shifting just as rapidly. With AI regulation in flux globally, Phoebe and James explore why proactive AI governance is critical – not only for managing legal risks and navigating emerging regulations, but also for unlocking AI's productivity potential and building stakeholder trust.
Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society. Henrik and Kimberly discuss AI's impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google's experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward. Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at the University of Oslo. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources:
Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

A transcript of this episode is here.
In this week's episode of the Sustainable Supply Chain Podcast, I sat down with fellow Irishman Paul Byrnes, CEO of Mavarick AI, to explore how manufacturers can use AI and data to tackle the notoriously difficult challenge of Scope 3 emissions.

Paul brings a unique perspective, rooted in both deep data science and hands-on manufacturing experience, and he didn't shy away from the hard truths: most companies still struggle with messy, unreliable data and limited supplier engagement. We unpack why primary data will soon become table stakes, why spend-based estimates can be 40% off the mark, and how engaging suppliers requires answering a simple but often overlooked question: what's in it for them?

We also discussed where AI genuinely moves the needle:
- Boosting confidence in data accuracy by identifying gaps and “contaminated” entries
- Providing personalised training to help suppliers meet sustainability requests
- Uncovering and prioritising decarbonisation levers with clear ROI

Paul shared real-world examples, from medical devices to automotive, showing how targeted projects, rather than trying to tackle all 15 Scope 3 categories at once, deliver the best results. We also touched on the environmental footprint of AI itself (energy, water, rare materials) and how responsible computing and smaller, purpose-built models can reduce the impact.

For leaders wrestling with emissions strategy, Paul's advice is simple: start by mapping your data landscape. Know where you're rich, where you're poor, and build from there. This is a practical, candid conversation about making sustainability and profitability work hand-in-hand, and why efficiency wins are so often sustainability wins.

Elevate your brand with the ‘Sustainable Supply Chain' podcast, the voice of supply chain sustainability. Last year, this podcast's episodes were downloaded over 113,000 times by senior supply chain executives around the world. Become a sponsor. Lead the conversation. Contact me for sponsorship opportunities and turn downloads into dialogues. Act today. Influence the future.

Podcast supporters
I'd like to sincerely thank this podcast's generous Subscribers: Alicia Farag and Kieran Ognev. And remember, you too can become a Sustainable Supply Chain+ subscriber - it is really easy and hugely important, as it will enable me to continue to create more excellent episodes like this one and give you access to the full back catalog of over 460 episodes.

Podcast Sponsorship Opportunities:
If you or your organisation is interested in sponsoring this podcast, I have several options available. Let's talk!

Finally
If you have any comments, suggestions or questions for the podcast, feel free to send me a direct message on LinkedIn or a text message using this link. If you liked this show, please don't forget to rate and/or review it. It makes a big difference and helps new people discover it. Thanks for listening.
Merage Ghane, Ph.D., Director of Responsible AI in Health at the Coalition for Health AI (CHAI), discusses the consequences of AI misuse and the importance of building trust in clinical applications of AI. She highlights the need for human-centered solutions, emphasizing ethics in healthcare, and discusses the evolution of the healthcare industry.
Throwback Episode featuring Dominic Price, Atlassian

In this high-impact throwback episode, we revisit our conversation with Dominic Price—Work Futurist at Atlassian and one of the leading voices on the future of work, team culture, and tech-human collaboration. With over a decade at Atlassian, Dom brings a bold, people-first lens to modern leadership—unpacking what it takes to build thriving teams in a world shaped by disruption, agility, and rapid technological change.

In this episode, we cover:
1:54 What is a Work Futurist
4:28 Creating and Sharing Playbooks at Atlassian
9:38 The Return to Office Debate
19:35 Why Productivity is a Flawed Metric
24:05 The Problem with Waterfall Approaches to Agile
28:33 Responsible AI and Technology Implementation
40:53 Scaling Culture and Distributed Teamwork
49:17 Learning From Mistakes

If you're rethinking your approach to collaboration, leadership, or scaling culture in complex environments—this one's worth another listen.

Thank you for listening to Agile Ideas! If you enjoyed this episode, please share it with someone who might benefit from our discussions, rate us on your preferred podcast platform, and follow us on social media for updates and more insightful content. Let's spread the #AgileIdeas together! We'd like to hear any feedback: www.agilemanagementoffice.com/contact

Don't miss out on exclusive access to special events, checklists, and blogs that are not available everywhere. Subscribe to our newsletter now at www.agilemanagementoffice.com/subscribe. You can also find us on most social media channels by searching 'Agile Ideas'. Follow me, your host, on LinkedIn - go to Fatimah Abbouchi - www.linkedin.com/in/fatimahabbouchi/

For all things Agile Ideas and to stay connected, visit our website below. It's your one-stop destination for all our episodes, blogs, and more. We hope you found today's episode enlightening. Until next time, keep innovating and exploring new Agile Ideas! Learn more about podcast host Fatimah Abbouchi...
How do you get your organization trained up to use AI tools? Richard talks to Stephanie Donahue about her work implementing AI tools at Avanade and with Avanade's customers. Stephanie discusses how many workers are bringing their own AI tools, such as ChatGPT, to work, and the risks they represent to the organization. Having an approved set of tools helps people work in the right direction, but they will still need some education. The challenge lies in the rapidly shifting landscape and the lack of certifications. However, you'll have some individuals eager to utilize these tools, often on the younger side, and they can help build your practice. The opportunities are tremendous!

Links:
ChatGPT Enterprise
Learning M365 Copilot
Secure and Govern Microsoft 365 Copilot
Microsoft Purview
Responsible AI at Microsoft
Microsoft Viva Engage

Recorded July 18, 2025
Join us as we sit down with Christina Stathopoulos, founder of Dare to Data and former Google and Waze data strategist, to discuss the unique challenges and opportunities for women in data science and AI. In this episode, you'll learn how data bias and AI algorithms can impact women and minority groups, why diversity in tech teams is crucial, and how inclusive design can lead to better, fairer technology. Christina shares her personal journey as a woman in data, offers actionable advice for overcoming imposter syndrome, and highlights the importance of education and allyship in building a more inclusive future for data and AI.

Panelists:
Christina Stathopoulos, Founder of Dare to Data - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
Dare to Data
Diversity at Alteryx
Invisible Women
Unmasking AI

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here! This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.
In this special episode, we speak with Daphne Li, CEO of Common Sense Privacy, alongside leaders from Prodigy Education, AI for Equity, MagicSchool AI, and ClassDojo—recipients of the Privacy Seal. Together, we explore how the edtech sector is tackling one of its biggest challenges: earning trust through responsible AI and data privacy practices.
Diya Wynn, Responsible AI Lead at Amazon Web Services, takes us on a remarkable journey from her childhood in the South Bronx to becoming a technology leader championing fairness in artificial intelligence. Her story begins with a pivotal moment at age eight when, after receiving a basic computer as an academic achievement award, she declared she wanted to be a computer engineer—a path that would shape her entire professional life.

What makes Diya's perspective so valuable is how she demystifies AI for families. Rather than presenting artificial intelligence as some futuristic concept, she helps us recognize how it's already woven into our daily lives through search engines, streaming recommendations, and customer service interactions. This familiarity makes AI more approachable for both parents and children navigating today's digital landscape.

For children curious about future careers, Diya offers reassuring guidance. As AI continues changing the job landscape, she emphasizes developing timeless human capabilities—critical thinking, problem-solving, emotional intelligence, and effective communication—that will remain valuable regardless of technological evolution. Her message inspires young listeners to approach technology with curiosity rather than fear.

Resources Mentioned in the Episode:
- Amber invited kids (through their parents) to share stories or be guests. Email: contact@aidigitales.com
- Diya described her role leading Responsible AI initiatives at Amazon Web Services (AWS). AWS Responsible AI page: https://aws.amazon.com/ai/responsible-ai/
- Code.org – Free coding lessons and AI-related activities. https://code.org
- PartyRock by AWS – A fun, no-code way to create generative AI apps. https://partyrock.aws
- Google AI courses for beginners (referenced as free learning resources). https://ai.google/education/
- Microsoft Learn (free coding & AI training modules). https://learn.microsoft.com/training/
- Data Science Camp (DMV area): https://datasciencecamp.org

Help us become the #1 podcast for AI for Kids. Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform: Apple Podcasts, Amazon Music, Spotify, YouTube, or other. Like our content? Subscribe, or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
Is AI destined to replace us, or can it unlock unprecedented human potential? Helen and Dave Edwards join Lukas Egger to explore the emotional, cognitive, and cultural shifts that AI is ushering in. They challenge the narrow focus on productivity, urging us to consider AI's broader impact on our lives and organizations. Discover how AI can be a powerful force for innovation, creativity, and meaning-making, but only if we prioritize human dignity and cultivate symbiotic relationships between humans and machines. This conversation is a must-listen for anyone seeking to navigate the AI revolution with purpose and vision.
In this episode of Legal Leaders Insights, Giulio Coraggio, Head of Intellectual Property & Technology at DLA Piper Italy, interviews Emerald De Leeuw-Goggin, Global Head of AI Governance & Privacy at Logitech. We dive into her career journey from founding Eurocomply to leading AI governance and privacy at one of the world's most innovative consumer electronics companies. Emerald reveals the pivotal moments that shaped her path and shares practical insights for navigating the rapidly evolving world of AI compliance, privacy, and regulation.

What you'll learn in this episode:
- How to integrate AI governance, privacy, and intellectual property in consumer electronics
- The challenges of deploying AI responsibly while ensuring compliance with the EU AI Act and privacy regulations
- The future impact of AI laws on consumer technology and business strategy
- How to close the funding gap for female entrepreneurs and build a more inclusive tech ecosystem

Whether you're a lawyer, entrepreneur, or business leader, this conversation will give you a front-row seat to the future of AI, compliance, and innovation.
Have you ever stopped to think about which biases your algorithm may carry and how that impacts your analyses? In this episode, we talk with Andressa Freires, founder of diversiData and Data Science Specialist, about how the perspectives of AI and model developers can seep into the content those technologies create. We also discuss how a lack of diversity can affect tools that are widely used around the world, and the consequences of that dynamic.

Remember, you can find all the Data Hackers community podcasts on Spotify, iTunes, Google Podcasts, Castbox, and many other platforms.

Our Data Hackers panel:
Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart
Monique Femme — Head of Community Management at Data Hackers

References:
https://mitsloanreview.com.br/quebrando-correntes-e-liderando-com-proposito/
https://linktr.ee/diversidata
https://www.amazon.com/Unmasking-AI-Mission-Protect-Machines/dp/0593241835
https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815
Welcome to The Inner Game of Change, the podcast where we explore the unseen forces that shape how we lead, adapt, and thrive in the face of change and transformation. Each episode is a chance to learn from thinkers, doers, and everyday leaders about what really makes change work — and what keeps it human.

My guest is Rebecca Bultsma — an AI ethics researcher, power user, and someone who lives in that space between awe and dread of what AI can do. Rebecca has built her career helping leaders cut through the hype, face the risks, and still find practical, human-centred ways to use this technology without losing their soul or their job.

In this episode, we explore the hype and the harm, the messy middle of adoption, and the accountability gaps that every business and every leader needs to face. And at the heart of it all, we talk about what it means to stay radically human in a world that is increasingly shaped by algorithms. I am grateful to have Rebecca chatting with me today.

About Rebecca (in her words)
The honest truth? I'm an AI Ethics researcher who uses AI all day. Yes, I see the irony. Yes, I'm navigating this contradiction in public. Daily. I help leaders who are somewhere between "AI will save us" and "AI will end us" find their actual footing. No BS, no fear-mongering, just practical strategies for using AI without losing your soul (or your job).

What I actually do: Translate tech panic into action plans. I take 20 years of making complex things human-friendly (comms/PR veteran) and mix it with an MSc in AI Ethics from Edinburgh. The result? I can explain why AI is incredible AND terrifying in the same breath - and help you navigate both.

The work:
Keynotes that don't put you to sleep (50+ delivered, people actually stay awake)
Workshops where we actually DO things (not just talk about them)
Executive sessions for when you need to admit you don't get it (safe space, I promise)

Currently obsessing over: AI governance that doesn't kill innovation, helping teachers not fear GenAI, and explaining to boards why "AI strategy" isn't optional anymore.

Contacts
Rebecca's profile: linkedin.com/in/rebecca-bultsma
Website: rebeccabultsma.com

Ali Juma
@The Inner Game of Change podcast
Follow me on LinkedIn
We're excited to have Adi Ganesan, a PhD researcher at Stony Brook University, the University of Pennsylvania, and Vanderbilt, on the show. We talk about how large language models (LLMs) are being tested and used in psychology, citing examples from mental health research. Fun fact: Adi was Sid's research partner during his Ph.D. program.

Discussion highlights:
- Language models struggle with certain aspects of therapy, including being over-eager to solve problems rather than building understanding
- Current models are poor at detecting psychomotor symptoms from text alone but are oversensitive to suicidality markers
- Cognitive reframing assistance represents a promising application where LLMs can help identify thought traps
- Proper evaluation frameworks must include privacy, security, effectiveness, and appropriate engagement levels
- Theory of mind remains a significant challenge for LLMs in therapeutic contexts; example: the Sally-Anne Test
- Responsible implementation requires staged evaluation before patient-facing deployment

Resources
To learn more about Adi's research and the topics discussed in this episode, check out the following:
- Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation | npj Mental Health Research
- Therapist behaviors paper: [2401.00820] A Computational Framework for Behavioral Assessment of LLM Therapists
- Cognitive reframing paper: Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction - ACL Anthology
- Faux pas paper: Testing theory of mind in large language models and humans | Nature Human Behaviour
- READI: Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework
- GPT-4's schema of depression: Explaining GPT-4's Schema of Depression Using Machine Behavior Analysis
- Adi's profile: Adithya V Ganesan - Google Scholar

What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
In this podcast episode, we look back at the present from the year 2035 together with futurist Morell Westermann and Edgar: we discuss which developments in artificial intelligence and robotics have become reality, and whether concepts like Responsible AI could shape our thinking and actions. In this episode, human and machine meet in dialogue. We don't just talk about AI, we talk WITH the AI about opportunities to actively shape the future. Tune in to discover how a look back from 2035 opens new perspectives on our present.
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series, and Mike Johnson, CISO, Rivian. Joining them is Jennifer Swann, CISO, Bloomberg Industry Group.

In this episode:
- Vulnerability management vs. configuration control
- Open source security and supply chain trust
- Building security leadership presence
- AI governance and enterprise risk

Huge thanks to our sponsor, Vanta. Vanta's Trust Management Platform automates key areas of your GRC program—including compliance, internal and third-party risk, and customer trust—and streamlines the way you gather and manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get started today at Vanta.com/CISO.
As AI capabilities evolve, financial institutions face pressure to balance innovation with security, trust, and regulatory compliance. From fraud prevention to customer experience, deterministic AI applications continue to form the backbone of financial services—even as new technologies like generative and agentic AI emerge. In this episode, JoAnn Stonier, Data and AI Fellow at Mastercard, joins us to share how Mastercard is navigating these dynamics. She explains how AI-driven analytics reduce false positives in fraud detection, why “agent-ish” AI marks an important transition toward more autonomous systems, and how responsible governance ensures privacy and security remain at the forefront. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
Design As is back! Starting September 9th, Design As will release episodes weekly, with guests joining host Lee Moreau to speculate on the future of design through a range of different perspectives. This season you'll hear discussions about Responsible AI from technologists, educators, designers, and industry leaders, plus bonus episodes recorded at the Shapeshift Summit hosted by the Institute of Design in May 2025 in Chicago. Follow Design Observer on Instagram to keep up and see even more Design As content. For more information about this season and a full transcription of the trailer, visit us on our website.
Advancing Digital Transformation To Pioneer Global Health Solutions.

In this first episode of Narratives of Purpose's special series from the 2025 HIMSS European Health Conference, host Claire Murigande speaks with Hal Wolf, the President and CEO of HIMSS. HIMSS (Healthcare Information and Management Systems Society) is a non-profit organization with a strong commitment to advancing global health through technology, supporting the transformation of the health ecosystem and fostering health equity.

In this interview, Hal emphasizes the necessity for a significant leap forward in our approach to healthcare, particularly in the context of artificial intelligence, whose transformative potential can propel us towards more effective care delivery. Be sure to visit our podcast website for the full episode transcript.

LINKS:
Article covering HIMSS Europe 2025: AI capacity building in healthcare
LinkedIn posts covering HIMSS Europe 2025: The future of the healthcare workforce | Responsible AI in health | Cybersecurity | The European Health Data Space | Women in Health IT | Women's Health in focus

Learn more about HIMSS activities and events at himss.org
Follow HIMSS on their social media channels: LinkedIn | Facebook | Instagram
Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse. Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn's Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn's seminal Safety by Design paper, bookmark the Research Center to stay updated, and support Thorn's critical work by donating here.

Related Resources:
Thorn's Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report

A transcript of this episode is here.
What are the hidden dangers lurking beneath the surface of vibe coded apps and hyped-up CEO promises? And what is Influence Ops?

I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky.

We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term.

Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized, globally scaled Influence Ops to manipulate public opinion and democracy itself.

Find Disesdi Susanna Cox:
Substack: https://disesdi.substack.com/
Socials (LinkedIn, X, etc.): @Disesdi

KEY MOMENTS:
00:26 - Who is Disesdi Susanna Cox?
03:52 - What are Red, Blue, and Purple Teams in Security?
07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash
12:32 - How GenAI Security is Different (and Worse) than Classical ML
14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App
18:34 - The Unethical Problem with "Vibe Coding"
24:32 - "Vibe Companies": The Gaslighting from CEOs About AI
30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing
33:13 - Deconstructing the "Woke AI" Panic
44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops
52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn
Governing AI – Welcome to AI Control Tower

Episode Overview
Governance accelerates AI success. Control enables innovation. In this premiere episode, we explore how ServiceNow governs AI at scale through AI Control Tower, a groundbreaking solution that transforms how enterprises manage their artificial intelligence landscape. Host Bobby Brill brings together three visionary leaders. They reveal the hidden AI sprawl within enterprises. They expose the risks. And they illuminate a path forward - one where governance and innovation dance in perfect harmony, where control mechanisms become the very foundation of sustainable AI transformation.

The Conversation
AI is everywhere. Sometimes visible, often hidden, always expanding. Our experts navigate the complex terrain of enterprise AI governance, from shadow AI implementations that operate beyond IT oversight to the regulatory frameworks demanding immediate attention. Ravi Krishnamurthy, Vice President of Product Management for AI Platform and Responsible AI, unveils AI Control Tower—a solution that brings clarity to chaos. Sampada Chavan, Senior Principal Product Manager, demonstrates how regulatory compliance becomes achievable through systematic governance. Peter Weigt, Sr. Director of Inbound Product Management for Responsible AI, challenges conventional thinking about the innovation paradox and reveals governance as the strategic enabler organizations desperately need. Discovery meets responsibility. Compliance meets creativity. Control meets capability.

Expert Panel
Ravi Krishnamurthy | Vice President of Product Management, AI Platform and Responsible AI
Sampada Chavan | Senior Principal Product Manager, AI Control Tower
Peter Weigt | Sr. Director, Inbound Product Management, Responsible AI
Host: Bobby Brill

00:00 Introduction to Governing AI at Scale
00:31 Meet the Experts
00:54 Hidden AI in Your Enterprise
02:56 Discovery and Responsible AI Practices
11:28 The Innovation Paradox
14:59 AI Control Tower and Governance
19:42 Conclusion and Final Thoughts

Links to AICT deep-dive blogs:
https://www.servicenow.com/community/now-assist-articles/part-0-our-journey-to-the-ai-control-tower-nbsp/ta-p/3280295
https://www.servicenow.com/community/now-assist-articles/part-1-the-gathering-storm-ai-s-rising-tension-nbsp/ta-p/3280309
https://www.servicenow.com/community/now-assist-articles/part-2-the-architecture-of-control-building-the-ai-control-tower/ta-p/3344257

See omnystudio.com/listener for privacy information.
In episode 250 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Marianne Mazaud, Co-Founder of AI ON US, an International Executive Summit Focused on Responsible Artificial Intelligence, co-created with Thomas Lozopone. They explore the powerful relationship between AI, privacy, and trust, emphasizing how leaders can take actionable steps to create inclusive and ethically grounded AI systems.

Marianne shares insights from her extensive experience in creative performance marketing and brand protection, including how generative AI technologies have created both opportunities and new risks. She stresses the importance of privacy and inclusion in AI governance, especially in high-risk sectors like healthcare and education.

The conversation moves to public trust in AI. Marianne references a study revealing widespread distrust in AI systems due to cybersecurity concerns, algorithmic bias, and lack of transparency. She highlights the need to involve more diverse voices, including individuals with disabilities and children, in the development of emerging technologies. Marianne and Debbie also examine the role of data privacy in consumer trust, citing a PricewaterhouseCoopers report showing that 83% of consumers believe data protection is essential to building trust with businesses.

They compare AI regulatory landscapes across the European Union and the United States. Marianne outlines how the EU AI Act places joint responsibility on AI developers and providers, which can introduce compliance complexities, especially for small businesses. She explains how these regulations can be difficult to implement retroactively and may impact innovation when not considered early in the development process.

Marianne closes by introducing the AI On Us initiative and the International Summit on Responsible AI for Executives. These efforts are designed to support leaders navigating AI governance through immersive workshops, best practices, and applied exercises. She also describes the Arborus Charter, a commitment to gender equality and inclusion in AI that has been adopted by 150 companies globally.

They discuss the erosion of public trust in AI and the contributing role of biased algorithms, black-box decision-making, and regulatory fragmentation across regions. Marianne describes the uneven distribution of protections for vulnerable populations, such as children and persons with disabilities, and the failure of many AI systems to account for culturally or biologically diverse user bases. She emphasizes that privacy harms are not only about data collection but also about downstream effects and misuse, especially in sectors like healthcare, hiring, and public policy.

Debbie and Marianne contrast the emerging regulatory models in the United States and the European Union, noting that the U.S. often lacks forward-looking obligations for AI developers, whereas the EU imposes preemptive risk requirements. Despite these differences, both agree that building AI systems that are trustworthy, explainable, and fair must become a global imperative. Marianne closes by describing how AI on Us was founded to help global executives take practical, values-driven steps toward responsible AI.
Through events, tools, and shared ethical commitments, the initiative encourages leaders to treat AI responsibility as a competitive advantage, not just a compliance obligation.

#AIandPrivacy #ResponsibleAI #Governance #SyntheticContent #TrustworthyAI #InclusiveTech #AlgorithmicAccountability #PrivacyHarms
Welcome back to The Power Lounge, a space dedicated to meaningful conversations with industry leaders. In today's episode, "Empower Your Team with Responsible AI," host Amy Vaughan, Together Digital's Chief Empowerment Officer, explores a critical challenge for digital teams: adopting AI responsibly without compromising ethical standards.

Joining Amy is Nikki Ferrell, Associate Director of Online Enrollment and Marketing Communications at Miami University. Nikki has been instrumental in launching an AI steering committee to manage the swift integration of generative AI in higher education. Together, they examine the potential risks of unmanaged AI use, the importance of establishing clear policies, and how continuous learning and experimentation can cultivate ethical and innovative teams.

Whether you're a team leader, a business owner, or simply interested in the complexities of AI, this episode offers a practical framework for implementing technology that prioritizes people, purpose, and ethics. Gain actionable insights and hear real-world experiences right here on The Power Lounge.

Chapters:
00:00 - Introduction
01:24 - AI's Impact: Unprepared Marketing Practices
05:08 - Creating AI Steering Committees
09:32 - Normalize Open AI Use at Work
14:42 - Adopting AI for Organizational Success
16:30 - Take Initiative to Lead
21:00 - Cautious Marketing on Mother's Day
25:25 - AI in Education: Gen Z & Alpha Hesitations
29:19 - "AI as Amplifying Tool"
30:55 - AI's Impact on Cognitive Skills
36:31 - AI Augments, Not Replaces, Workforce
38:30 - "Embracing Tech Amidst Red Tape"
41:45 - "Responsible AI Adoption Insights"
44:19 - AI Use Case Library Development
48:03 - Embracing AI for Strategic Future
51:01 - Exploring AI for Everyday Tasks
54:58 - AI-Assisted Strategy Development
58:51 - Subscribe for Updates & Community
59:45 - Outro

Quotes:
"Empowerment begins when we stop being afraid of new technology and start building community around it." - Amy Vaughan
"You don't need a title to lead the way with AI—start small, learn together, and let your curiosity spark real change." - Nikki Ferrell

Key Takeaways:
- Start Small, Stay Grounded in Research
- Policies Aren't Optional—They Empower
- Openness Over Going Underground
- You Don't Need a Title to Lead
- Align with Mission and Values
- Build a Culture of Experimentation
- Transparency Builds Trust (and Avoids Backlash)
- AI Augments, Not Replaces
- Meet People Where They Are
- The Future is Collaborative

Connect with Nikki Ferrell:
LinkedIn: https://www.linkedin.com/in/nferrell/
Website: https://miamioh.edu/

Connect with the host, Amy Vaughan:
LinkedIn: http://linkedin.com/in/amypvaughan
Podcast: Power Lounge Podcast - Together Digital

Learn more about Together Digital and consider joining the movement by visiting Home - Together Digital
What does it take to build intelligent systems that are not only AI-powered but also secure, scalable, and grounded in real-world needs? In this episode of Tech Talks Daily, I speak with Srinivas Chippagiri, a senior technology leader and author of Building Intelligent Systems with AI and Cloud Technologies. With over a decade of experience spanning Wipro, GE Healthcare, Siemens, and now Tableau at Salesforce, Srinivas offers a practical view into how AI and cloud infrastructure are evolving together. We explore how AI is changing cloud-native development through predictive maintenance, automated DevOps pipelines, and developer co-pilots. But this is not just about technology. Srinivas highlights why responsible AI needs to be part of every system design, sharing examples from his own research into anomaly detection, fuzzy logic, and explainable models that support trust in regulated industries. The conversation also covers the rise of hybrid and edge computing, the real challenges of data fragmentation and compute costs, and how teams are adapting with new skills like prompt engineering and model observability. Srinivas gives a thoughtful view on what ethical AI deployment looks like in practice, from bias audits to AI governance boards. For those looking to break into this space, his advice is refreshingly clear. Start with small, end-to-end projects. Learn by doing. Contribute to open-source communities. And stay curious. Whether you're scaling AI systems, building a career in cloud tech, or just trying to keep pace with fast-moving trends, this episode offers a grounded and insightful guide to where things are heading next. Srinivas's book is available on Amazon under Building Intelligent Systems with AI and Cloud Technologies, and you can connect with him on LinkedIn to continue the conversation.
GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it.
Generative AI and the Future of the Digital Commons
David Sacks on X: "A BEST CASE SCENARIO FOR AI? The Doomer narratives were wrong. Predicated on a "rapid take-off" to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence. Instead, we" / X
A taxonomy of hallucinations (see table 2)
Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' for Enterprise
Medicare will test using AI to help decide whether patients get coverage — which could delay or deny care, critics warn
Podcasting's 'Serial' Era Ends as Video Takes Over
Sara Kehaulani Goo named President of the Creator Network
What Happened When Mark Zuckerberg Moved In Next Door
Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments
Two-mile suspension bridge
Will Giz allow the Skee-ballers to make this their next outing?

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Tulsee Doshi

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
spaceship.com/twit
Melissa.com/twit
As AI becomes deeply embedded in every industry, building AI systems that are secure, responsible, and privacy-centric is more crucial than ever. But where do you begin? At the strategy level? Design? Or implementation? How do organizations tackle the challenges of AI risks, data governance, and compliance while keeping pace with innovation?

Join us for an insightful conversation with Punit Bhatia and Santosh Kaveti, CEO of ProArch, as we explore the evolving landscape of responsible AI, key foundational steps, and practical approaches to secure AI deployment. If you're looking to understand how to build AI systems that are not only innovative but also secure and trustworthy, this episode is for you!

KEY CONVERSATION POINTS
00:01:58 Responsible AI
00:04:30 AI Strategy
00:11:43 Role of Standards and Approach
00:15:35 Good Practices of Data Governance
00:19:55 AI Talent
00:23:10 ProArch's Role with Customers
00:25:00 Contact Information for Santosh

ABOUT GUEST
Santosh Kaveti is CEO & Founder at ProArch. With over 18 years of experience as a technologist, entrepreneur, investor, and advisor, Santosh leads ProArch, a purpose-driven enterprise that accelerates value and increases resilience for its clients with consulting and technology services, enabled by cloud, guided by data, fueled by apps, and secured by design. His vision and leadership have propelled ProArch to become a dominant force in key industry verticals, such as Energy, Healthcare & Life Sciences, and Manufacturing, where he leverages his expertise in manufacturing process improvement, mentoring, and consulting.

Topics covered include:
- Operationalizing AI: From Strategy to Execution
- Navigating AI Risks: Ensuring Security and Compliance
- Prioritizing AI Initiatives: Aligning with Business Goals
- Attracting and Retaining Top AI Talent
- Integrating AI into Core Business Functions
- The Data Foundation: Governance, Quality, and Culture in AI

Santosh's journey is marked by resilience, ambition, and self-awareness, as he has learned from his successes and failures and continuously evolved his skills and perspective. He has traveled across 23 countries, gaining insights into the global diversity and interconnectedness of human experiences. He is passionate about blending technology with a human-centric approach and making a meaningful societal impact through his support for initiatives that uplift underprivileged children, assist disadvantaged families, and promote social awareness. Santosh's ethos extends to his investments in and mentorship of promising startups, as well as his role as the Chairman of the Board at Enhops and iV4, two ProArch companies.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts, working independently and with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books “Be Ready for GDPR” (rated the best GDPR book), “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events, and he is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts.
He has developed a philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/santoshkaveti/, https://www.proarch.com/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The TheoSym founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
Follow Alexandra on LinkedIn and X! Follow us on Instagram and on LinkedIn!

Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the base for the futuristic concepts built in line with the studio's mission of solving urban, social and environmental problems through intelligent designs.

Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or you can simply tell your friends about the show!

Don't forget to join us next week for another episode. Thank you for listening!
If there's one key takeaway from Season 4 of The Tea on Cybersecurity, it's that cybersecurity is a shared responsibility. With this in mind, host Jara Rowe wraps up the season by sharing valuable insights that everyone can use. She reflects on the most impactful lessons about compliance, AI, and penetration testing.

Key takeaways:
- The importance of vCISOs and cyber engineers
- How to approach penetration testing and PTaaS
- Why transparency and training are essential for AI safety

Episode highlights:
(00:00) Today's topic: Key insights from this season
(01:16) The role of vCISOs and cyber engineers
(02:47) Responsible AI use
(03:51) Penetration testing and PTaaS for small teams

Connect with the host:
Jara Rowe's LinkedIn - @jararowe

Connect with Trava:
Website - www.travasecurity.com
Blog - www.travasecurity.com/learn-with-trava/blog
LinkedIn - @travasecurity
YouTube - @travasecurity
Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; and skeptical delight and making the case for multi-lingual, multi-cultural AI.

Hiwot Tesfaye is a Technical Advisor in Microsoft's Office of Responsible AI and a Loomis Council Member at the Stimson Center, where she helped launch the Global Perspectives: Responsible AI Fellowship.

Related Resources:
#35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye

A transcript of this episode is here.
In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use.

Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.

Topics Discussed:

EdTech Thought: Chris debates the "Tech or No Tech" question in modern classrooms

EdTech Recommendations:
- https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
- https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
- https://whataicandotoday.com/ - "We've analysed 16362 AI Tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83054 tasks of what AI can do today."

Why classrooms need a responsible AI policy

A five-part framework to build your AI classroom policy:
1. Define What AI Is (and Isn't)
2. Clarify When and How AI Can Be Used
3. Promote Transparency and Attribution
4. Include Privacy and Tool Approval Guidelines
5. Make It Collaborative and Flexible

The importance of modeling digital citizenship and AI literacy

Free editable AI policy template by Chris for grades K–12

Mentions:
- Mike Brilla – The Inspired Teacher podcast
- Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book
"If you're going to be running a very elite research institution, you have to have the best people. To have the best people, you have to trust them and empower them. You can't hire a world expert in some area and then tell them what to do. They know more than you do. They're smarter than you are in their area. So you've got to trust your people. One of our really foundational commitments to our people is: we trust you. We're going to work to empower you. Go do the thing that you need to do. If somebody in the labs wants to spend 5, 10, 15 years working on something they think is really important, they're empowered to do that." - Doug Burger Fresh out of the studio, Doug Burger, Technical Fellow and Corporate Vice President at Microsoft Research, joins us to explore Microsoft's bold expansion into Southeast Asia with the recent launch of the Microsoft Research Asia lab in Singapore. From there, Doug shares his accidental journey from academia to leading global research operations, reflecting on how Microsoft Research's open collaboration model empowers over thousands of researchers worldwide to tackle humanity's biggest challenges. Following on, he highlights the recent breakthroughs from Microsoft Research for example, the quantum computing breakthrough with topological qubits, the evolution from lines of code to natural language programming, and how AI is accelerating innovation across multiple scaling dimensions beyond traditional data limits. Addressing the intersection of three computing paradigms—logic, probability, and quantum—he emphasizes that geographic diversity in research labs enables Microsoft to build AI that works for everyone, not just one region. Closing the conversation, Doug shares his vision of what great looks like for Microsoft Research with researchers driven by purpose and passion to create breakthroughs that advance both science and society. 
Episode Highlights:
[00:00] Quote of the Day by Doug Burger
[01:08] Doug Burger's journey from academia to Microsoft Research
[02:24] Career advice: Always seek challenges; move when feeling restless or comfortable
[03:07] Launch of Microsoft Research Asia in Singapore: Tapping local talent and culture for inclusive AI development
[04:13] Singapore lab focuses on foundational AI, embodied AI, and healthcare applications
[06:19] AI detecting seizures in children and assessing Parkinson's motor function
[08:24] Embedding Southeast Asian societal norms and values into foundational AI research
[10:26] Microsoft Research's open collaboration model
[12:42] Generative AI's rapid pace accelerating technological innovation and research tools
[14:36] AI revolutionizing computer architecture by creating completely new interfaces
[16:24] The open- versus closed-source AI model debate and Microsoft's platform approach
[18:08] Reasoning models enabling formal verification and correctness guarantees in AI
[19:35] Multiple scaling dimensions in AI beyond traditional data scaling laws
[21:01] Project Catapult and Brainwave: Building configurable hardware acceleration platforms
[23:29] Microsoft's 17-year quantum computing journey and the topological qubits breakthrough
[26:26] Balancing blue-sky foundational research with application-driven initiatives at scale
[29:16] Three computing paradigms: logic, probability (AI), and quantum superposition
[32:26] Microsoft Research's exploration-to-exploitation playbook for breakthrough discoveries
[35:26] Research leadership secret: Curiosity across fields enables unexpected connections
[37:11] Hidden mathematical structures in the Transformer architecture of LLMs
[40:04] Microsoft Research's vision: Becoming the Bell Labs of the AI era
[42:22] Steering AI models for mental health and critical thinking conversations

Profile: Doug Burger, Technical Fellow and Corporate Vice President, Microsoft Research
LinkedIn: https://www.linkedin.com/in/dcburger/
Microsoft Research Profile: https://www.microsoft.com/en-us/research/people/dburger/

Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio formats.

Here are the links to watch or listen to our podcast:
Analyse Asia Main Site: https://analyse.asia
Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl
Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245
Analyse Asia YouTube: https://www.youtube.com/@AnalyseAsia
Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/
Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia
Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup
Subscribe to the Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
Justin DiPietro, Co-Founder & Chief Strategy Officer of Glia, shares how they are leveraging AI to enhance the customer experience in the highly regulated world of financial institutions.

Topics Include:
- Glia provides voice, digital, and AI services for customer-facing and internal operations
- Built on "channel-less architecture," unlike traditional contact centers that added channels sequentially
- One interaction can move seamlessly between channels (voice, chat, SMS, social)
- AI applies across all channels simultaneously rather than per individual channel
- 700 customers, primarily banks and credit unions; 370 employees; headquartered in New York
- Targets 3,500 banks and credit unions across the United States market
- Focuses exclusively on financial services and other regulated industries
- AI for regulated industries requires a different approach than non-regulated businesses
- Traditional contact centers had a trade-off between cost and quality of service
- AI enables higher quality while simultaneously decreasing costs for contact centers
- Number one reason people call banks: "What's my balance?" (20% of calls)
- Financial services require 100% accuracy, not 99.999%, due to trust requirements
- Uses AWS exclusively for security, reliability, and future-oriented technology access
- Real-time system requires triple-hot redundancy; seconds matter for live calls
- Works with the Bedrock team; customers certify Bedrock rather than individual features
- Showed examples of competitors' AI giving illegal million-dollar loans at 0%
- "Responsible AI" separates probabilistic understanding from deterministic responses to customers
- Uses three model types: client models, network models, and protective models
- Traditional NLP had 50% accuracy; their LLM approach achieves 100% understanding
- Policy is "use Nova unless" they can't, primarily for speed benefits

Participants:
Justin DiPietro – Co-Founder & Chief Strategy Officer, Glia

Further Links:
Glia Website
Glia AWS Marketplace

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
In this thought leadership session, ITSPmagazine co-founders Sean Martin and Marco Ciappelli moderate a dynamic conversation with five industry leaders offering their take on what will dominate the show floor and side-stage chatter at Black Hat USA 2025.

Leslie Kesselring, Founder of Kesselring Communications, surfaces how media coverage is shifting in real time: no longer driven solely by talk submissions but now heavily influenced by breaking news, regulation, and public-private sector dynamics. From government briefings to cyberweapon disclosures, the pressure is on to cover what matters, not just what's scheduled.

Daniel Cuthbert, member of the Black Hat Review Board and Global Head of Security Research at Banco Santander, pushes back on the hype. He notes that while tech moves fast, security research often revisits decades-old bugs. His sharp observation? "The same bugs from the '90s are still showing up, sometimes discovered by researchers younger than the vulnerabilities themselves."

Michael Parisi, Chief Growth Officer at Steel Patriot Partners, shifts the conversation to operational risk. He raises concern over the Model Context Protocol (MCP) and how AI agents can rewrite enterprise processes without visibility or traceability, which is especially alarming in environments lacking kill switches or proper controls.

Richard Stiennon, Chief Research Analyst at IT-Harvest, offers market-level insights, forecasting AI agent saturation with over 20 vendors already present in the expo hall. While excited by real advancements, he warns of funding velocity outpacing substance and cautions against the cycle of overinvestment in vaporware.

Rupesh Chokshi, SVP & GM at Akamai Technologies, brings the product and customer lens, framing the security conversation around how AI use cases are rolling out fast while security coverage is still catching up. From OT to LLMs, securing both AI and with AI is a top concern.

This episode is not just about placing bets on buzzwords.
It's about uncovering what's real, what's noise, and what still needs fixing, no matter how long we've been talking about it.

___________

Guests:

Leslie Kesselring, Founder at Cyber PR Firm Kesselring Communications | On LinkedIn: https://www.linkedin.com/in/lesliekesselring/
“This year, it's the news cycle—not the sessions—that's driving what media cover at Black Hat.”

Daniel Cuthbert, Black Hat Training Review Board and Global Head of Security Research for Banco Santander | On LinkedIn: https://www.linkedin.com/in/daniel-cuthbert0x/
“Why are we still finding bugs older than the people presenting the research?”

Richard Stiennon, Chief Research Analyst at IT-Harvest | On LinkedIn: https://www.linkedin.com/in/stiennon/
“The urge to consolidate tools is driven by procurement—not by what defenders actually need.”

Michael Parisi, Chief Growth Officer at Steel Patriot Partners | On LinkedIn: https://www.linkedin.com/in/michael-parisi-4009b2261/
“Responsible AI use isn't a policy—it's something we have to actually implement.”

Rupesh Chokshi, SVP & General Manager at Akamai Technologies | On LinkedIn: https://www.linkedin.com/in/rupeshchokshi/
“The business side is racing to deploy AI—but security still hasn't caught up.”

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

___________

Episode Sponsors:
ThreatLocker: https://itspm.ag/threatlocker-r974
BlackCloak: https://itspm.ag/itspbcweb
Akamai: https://itspm.ag/akamailbwc
DropzoneAI: https://itspm.ag/dropzoneai-641
Stellar Cyber: https://itspm.ag/stellar-9dj3

___________

Resources:
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Multimodal interfaces. Real-time personalization. Data privacy. Content ownership. Responsible AI. In this episode, Eve Sangenito of global consultancy Perficient offers a grounded, enterprise lens on the evolving demands of AI-powered customer experience—and what leaders (and the partners who support them) need to understand right now. Eve and Sarah explore how generative AI is reshaping customer expectations, guiding tech investments, and redefining experience delivery at scale. For anyone driving digital transformation, building AI strategy, or modernizing enterprise CX, this conversation is a timely look at what's shifting—and what's ahead.
In schools with limited resources, large class sizes, and wide differences in student ability, individualized learning has become a necessity. Artificial intelligence offers powerful tools to help meet those needs, especially in underserved communities. But the way we introduce those tools matters.

This week, Matt Kirchner talks with Sam Whitaker, Director of Social Impact at StudyFetch, about how AI can support literacy, comprehension, and real learning outcomes when used with purpose. Sam shares his experience bringing AI education to a rural school in Uganda, where nearly every student had already used AI without formal guidance. The results of a two-hour project surprised everyone and revealed just how much potential exists when students are given the right tools.

The conversation covers AI as a literacy tool, how to design platforms that encourage learning rather than shortcutting, and why student-facing AI should preserve creativity, curiosity, and joy. Sam also explains how responsible use of AI can reduce educational inequality rather than reinforce it. This is a hopeful, practical look at how education can evolve, if we build with intention.

Listen to learn:
- Surprising lessons from working with students at a rural Ugandan school using artificial intelligence
- What different MIT studies suggest about the impacts of AI use on memory and productivity
- How AI can help U.S. literacy rates, and what far-reaching implications that will have
- What China's AI education policy for six-year-olds might signal about the global race for responsible, guided AI use

3 Big Takeaways:
1. Responsible AI use must be taught early to prevent misuse and promote real learning. Sam compares AI to handing over a car without driver's ed: powerful but dangerous without structure. When AI is used to do the thinking for students, it stifles creativity and long-term retention instead of developing it.
2. AI can help close educational gaps in schools that lack the resources for individualized learning. In many underserved districts, large class sizes make one-on-one instruction nearly impossible. AI tools can adapt to students' needs in real time, offering personalized learning that would otherwise be out of reach.
3. AI can play a key role in addressing the U.S. literacy crisis. Sam points out that 70% of U.S. inmates read at a fourth-grade level or below, and 85% of juvenile offenders can't read. Adaptive AI tools are now being developed to assess, support, and gradually improve literacy for students who have been left behind.

Resources in this Episode:
To learn about StudyFetch, visit: www.studyfetch.com

Other resources:
- MIT study "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence"
- MIT study "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task"
- Learn more about the Ugandan schools mentioned: African Rural University (ARU) and Uganda Rural Development and Training Programme (URDT)

We want to hear from you! Send us a text.

Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
Prepare for game-changing AI insights! Join Noelle Russell, CEO of the AI Leadership Institute and author of Scaling Responsible AI: From Enthusiasm to Execution. Noelle, an AI pioneer, shares her journey from the early Alexa team with Jeff Bezos, where her unique perspective shaped successful mindfulness apps. We'll explore her "I Love AI" community, which has taught over 3.4 million people. Unpack responsible, profitable AI, from the "baby tiger" analogy for AI development and organizational execution to critical discussions around data bias and the cognitive cost of AI over-reliance.

Key Moments:
Journey into AI: From Jeff Bezos to Alexa (03:13): Noelle describes how she "stumbled into AI" after receiving an email from Jeff Bezos inviting her to join a new team at Amazon, later revealed to be the early Alexa team. She highlights that while she lacked inherent AI skills, her "purpose and passion" fueled her journey.
"I Love AI" Community & Learning (11:02): After leaving Amazon and experiencing a personal transition, Noelle created the "I Love AI" community. This free, neurodiverse space offers a safe environment for people, especially those laid off or transitioning careers, to learn AI without feeling alone, fundamentally changing their life trajectories.
The "Baby Tiger" Analogy (17:21): Noelle introduces her "baby tiger" analogy for early AI model development. She explains that in the "peak of enthusiasm" (baby tiger mode), people get excited about novel AI models but often fail to ask critical questions about scale, data needs, long-term care, or what happens if the model isn't wanted anymore.
Model Selection & Explainability (32:01): Noelle stresses the importance of a clear rubric for model selection and evaluation, especially given rapid changes. She points to Stanford's HELM project (Holistic Evaluation of Language Models) as an open-source leaderboard that evaluates models on "toxicity" beyond just accuracy.
Avoiding Data Bias (40:18): Noelle warns against prioritizing model selection before understanding the problem and analyzing the data landscape, as this often leads to biased outcomes and the "hammer-and-nail" problem.
Cognitive Cost of AI Over-Reliance (44:43): Referencing recent industry research, Noelle warns about the potential "atrophy" of human creativity due to over-reliance on AI.

Key Quotes:
"Show don't tell... It's more about understanding what your review board does and how they're thinking and what their backgrounds are... And then being very thoughtful about your approach." - Noelle Russell
"When we use AI as an aid rather than as writing the whole thing or writing the title, when we use it as an aid, like, can you make this title better for me? Then our brain actually is growing. The creative synapses are firing away." - Noelle Russell
"Most organizations, most leaders... they're picking their model before they've even figured out what the problem will be... it's kind of like, I have a really cool hammer, everything's a nail, right?" - Noelle Russell

Mentions:
"I Love AI" Community
Scaling Responsible AI: From Enthusiasm to Execution - Noelle Russell
"Your Brain on ChatGPT" - MIT Media Lab
Power to Truth: AI Narratives, Public Trust, and the New Tech Empire - Stanford
Meta-learning, Social Cognition and Consciousness in Brains and Machines
HELM - A Reproducible and Transparent Framework for Evaluating Foundation Models

Guest Bio: Noelle Russell is a multi-award-winning speaker, author, and AI executive who specializes in transforming businesses through strategic AI adoption.
She is a revenue growth and cost optimization expert and a 4x Microsoft Responsible AI MVP. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, and is a consistent champion for data and AI literacy. She is the founder of the "I ❤️ AI" Community, which teaches responsible AI to everyone, and of the AI Leadership Institute, where she empowers business owners to grow and scale with AI. In the last year, she has received the AI and Cyber Leadership Award from DCALive and been named the #1 Thought Leader in Agentic AI and a Top 10 Global Thought Leader in Generative AI by Thinkers360.

Hear more from Cindi Howson here. Sponsored by ThoughtSpot.
Dietmar Offenhuber reflects on synthetic data's break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact. Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis. Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction. Related Resources Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390 Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/ Reservoirs of Venice (project): https://res-venice.github.io/ Website: https://offenhuber.net/ A transcript of this episode is here.
Today's guest is Miranda Jones, SVP of Data & AI Strategy at Emprise Bank. Miranda returns to discuss the evolving reality of responsible AI in the financial services sector. As generative and agentic systems mature, Jones emphasizes the importance of creating safe, low-risk environments where employees can experiment, learn prompt engineering, and develop a critical understanding of model limitations. She explores why domain-specific models outperform generalized foundational models in banking—where context, compliance, and communication style are essential to trust and performance. The episode also examines the strategic value of maintaining a deliberate pace in adopting agentic AI, ensuring human oversight and alignment with regulatory expectations. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!
Tony chats with Lokesh Ballenahalli, Founder, and Sunil Shivappa, COO, of Enkefalos Technology, a research-first global AI company focused on building AI solutions for the insurance industry. They combine deep research in LLMs, responsible AI, and domain expertise to develop what they call the AI operating system for insurance. It has three core layers: Insurance GPT, agentic AI applications, and monitoring of that AI.

Lokesh Ballenahalli: https://www.linkedin.com/in/lokesh-ballenahalli/
Sunil Shivappa: https://www.linkedin.com/in/sunil-m-shivappa-86273519a/
Enkefalos Technology: https://www.enkefalos.com/
Video Version: https://youtu.be/u3_xWZyPDEg
Multi-agentic AI is rewriting the future of work... but are we racing ahead without checking for warning signs?

Microsoft's new agent systems can split up work, make choices, and act on their own. The possibilities? Massive. But it's not without risks, which is why you NEED to listen to Sarah Bird. She's the Chief Product Officer of Responsible AI at Microsoft and is constantly building out safer agentic AI.

So what's really at stake when AIs start making decisions together? And how do you actually stay in control? We're pulling back the curtain on the 3 critical risks of multi-agentic AI and unveiling the playbook to navigate them safely.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Responsible AI: Evolution and Challenges
- Agentic AI's Ethical Implications
- Multi-Agentic AI Responsibility Shift
- Microsoft's AI Governance Strategies
- Testing Multi-Agentic Risks and Patterns
- Agentic AI: Future Workforce Skills
- Observability in Multi-Agentic Systems
- Three Risk Categories in AI Implementation

Timestamps:
00:00 Evolving Challenges in Responsible AI
05:50 Agent Technology: Benefits and Risks
09:27 Complex System Governance and Observability
12:26 AI Monitoring and Human Intervention
15:14 Essential Testing for Trust Building
19:43 Securing AI Agents with Entra
22:06 Exploring Human-AI Interface Innovation
26:06 AI Workforce Integration Challenges
28:22 AI's Transformative Impact on Jobs

Keywords: Agentic AI, multi-agentic AI, responsible AI, generative AI, Microsoft Build conference, AI governance, AI ethics, AI systems, AI risk, AI mitigation, AI tools, human in the loop, Foundry observability, AI testing, system security, AI monitoring, user intent, AI capability, prompt injection, Copilot, AI orchestration, AI deployment, system governance, Entra Agent ID, AI education, AI upskilling, AI workforce integration, systemic risk, AI misuse, AI malfunctions, AI systemic risk, AI-powered solutions, AI development, AI innovation, AI technology, AI security measures.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.)

Ready for ROI on GenAI? Go to youreverydayai.com/partner