Hello San Francisco - we've arrived for Microsoft Ignite 2025! The #CloudRealities podcast team landed in San Francisco this week to bring you the best updates right from the heart of the event. Join us as we connect AI at scale, cloud modernization, and secure innovation—empowering organizations to become AI-first. Plus, we'll keep you updated on all the latest news and juicy gossip. Dave, Esmee, and Rob wrap up their Ignite 2025 series with Yina Arenas, CVP of Microsoft Foundry, to discuss why Foundry is the go-to choice for enterprises and how it champions responsible development and innovation.

TLDR
00:40 – Introduction to Yina Arenas
01:14 – How the team is doing, keynote highlights, and insights from the Expo floor
02:50 – Deep dive with Yina on the evolution of Foundry
29:24 – Favourite IT-themed movie, human interaction, and our society
31:56 – Personal (and slightly juicy) reflections on the week
37:30 – Team reflections on Ignite 2025, including an executive summary per guest and appreciation for Dennis Hansen
50:54 – The team's favorite IT-themed movies
59:30 – Personal favorite restaurant

Guest
Yina Arenas: https://www.linkedin.com/in/yinaa/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast.
Bob Pulver, host of the Elevate Your AIQ podcast and a 25-year enterprise tech and innovation veteran, joins us this week to unpack the urgent need to move past "AI" as a buzzword and define what "Responsible AI" truly means for organizations. He shares his insights on why we are all responsible for AI, how to balance playing "defense" (risk mitigation) and "offense" (innovation), and why we must never outsource our critical thinking and human agency to these new tools.

[0:00] Introduction – Welcome, Bob! Today's Topic: Defining Responsible AI and Responsible Innovation
[12:25] What Does "Responsible AI" Mean? Why elements (like fairness in decision-making, data provenance, and privacy) must be built in "by design," not bolted on later. In an era where everyone is a "builder," we are all responsible for the tools we use and create.
[25:48] The Two Sides of Responsible Innovation – The "responsibility" side involves mitigating risk, ensuring fairness, and staying human-centric—it's like playing defense. The "innovation" side involves driving growth, entering new markets, and reinvesting efficiency gains—it's like playing offense.
[41:58] Why don't we use AI to give us a 4-day work week? The critical need for leaders to separate their personal biases from data-driven facts. AI's role in recent layoffs.
[50:27] Closing – Thanks for listening!

Quick Quote
"We're all responsible for Responsible AI, whatever your role is. You're either using it or abusing it . . . or you're building it or you're testing it."
Mental Toughness Mastery Podcast with Sheryl Kline, M.A. CHPC
http://www.sherylkline.com/blog

In the latest Fearless Female Leadership interview, I had the honor of talking with Sarah Lloyd Favaro, Senior Solutions Director, Office of Responsible AI and Governance at HCLTech, about one of the most urgent and misunderstood leadership topics today: how leaders can mitigate AI bias for women.

Sarah's career has always lived at the intersection of technology and learning. Long before generative AI swept into the mainstream, she was exploring how tech could enhance human capability (not replace it). But with the rapid rise of AI tools, Sarah doubled down on understanding how these systems work, why bias appears, and how leaders can prepare their organizations for a future where AI is woven into every workflow.

What makes Sarah's perspective so powerful is her blended expertise: she understands both the practical magic of AI and the very real risks. She believes strongly that if organizations benefit from AI's productivity and innovation, they must also ensure equitable, responsible, human-centered usage.

She emphasizes the critical role leaders play in upskilling their workforce… especially women, who are statistically underrepresented in AI fields. According to Sarah, equitable access to education and tools is non-negotiable if companies want to avoid widening gender and societal gaps.

Sarah also demystifies what many call the AI "black box." She explains that becoming confident with AI doesn't require being an engineer. Instead, it requires learning how to communicate with AI systems, think critically about outputs, and understand where bias may creep in.

Her message is both empowering and practical: AI is here to stay. And with the right awareness, skills, and strategies, women and leaders can shape a future where AI is an equalizer (not a divider).
In this episode of the HR Leaders Podcast, we sit down with Michiel van Duin, Chief People Technology, Data and Insights Officer at Novartis, to discuss how the company is building a human-centered AI ecosystem that connects people, data, and technology.

Michiel explains how Novartis brings together HR, IT, and corporate strategy to align AI innovation with the company's long-term workforce and business goals. He shares how the team built an AI governance framework and a dedicated AI and innovation function inside HR, ensuring responsible use of AI while maintaining trust and transparency.

From defining when AI should step in and when a "human-in-the-loop" is essential, to upskilling employees and creating the first "Ask Novartis" AI assistant, Michiel shows how Novartis is making AI practical, ethical, and human.
Ireland's foremost digital marketing event, 3XE Digital, returns this November 26th with a bold new focus on the transformative power of Artificial Intelligence. 3XE AI will take place on Wednesday, November 26th at The Alex Hotel, Dublin, bringing together hundreds of marketers, social media professionals and business leaders to explore how AI is reshaping marketing strategy, creativity and performance. Delegates from top Irish brands including Chadwicks, Kepak, Chartered Accountants Ireland, Sage, The Travel Department, Finlay Motor Group, Hardware Association, and many more have already booked to attend this dynamic one-day conference designed to inspire, educate and empower. The event will be co-chaired by Anthony Quigley, Co-Founder of the Digital Marketing Institute, and Sinéad Walsh of Content Plan. Attendees will hear from leading voices in AI and digital marketing, discovering how to harness new technologies to deliver smarter, more efficient, and measurable campaigns.

Key Highlights: Expert speakers from Google, OpenAI, Content Plan, Women in AI, AI Certified, The Corporate Governance Institute, and more will share their wealth of knowledge on how clever use of AI can significantly improve digital marketing and social media strategies and campaigns, continue to change how we do business, and massively increase sales.

Topics include:
• Winning with AI in Business with Christina Barbosa-Gress, Google
• AI-Powered Operations for Irish SMEs with Denis Jastrzebski, Content Plan
• Education for Unlocking AI's Potential with Ian Dodson, AiCertified
• Practical and Responsible AI with Boris Gersic, Corporate Governance Institute
• The Compliance Edge in the AI Era with Colin Cosgrove, Movizmo Coaching Solutions
• Unlocking AI's True Potential in Business with Naomh McElhatton, Irish Ambassador for Women in AI

Adrian Hopkins, Founder, 3XE Digital said: "Reviving the 3XE Digital conference series felt timely, and AI presented the perfect opportunity. Artificial Intelligence is reshaping the entire marketing landscape - enhancing performance, improving efficiency and offering unprecedented creative possibilities. We're excited to bring this crucial conversation to the forefront once again."

The 3XE AI Conference, organised in partnership with Content Plan, is proudly supported by Friday Agency, GS1 Ireland, and AI Certified. All details, including the full speaker lineup, conference agenda and online bookings, are available at https://3xe.ie. Early bookings remain open at 3xe.ie - including group discounts for teams.

More about Irish Tech News
Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
A stuffed animal that answers back. A kind voice that "understands." A tutor that lives in a fictional town. AI characters are everywhere, and they're changing how kids learn, play, and bond with media. We sat down with Dr. Sonia Tiwari, children's media researcher and former game character designer, to unpack how to welcome these tools into kids' lives without losing what matters most.

Sonia breaks down what truly makes an AI character: a personality, a backstory, and the new twist of two-way interactivity. From chatbots and smart speakers to social robots and virtual influencers, we trace how each format affects attention, trust, and learning. Then we get practical. We talk through how to spot manipulative backstories ("I'm your best friend" is a red flag), when open-ended chat goes wrong, and why short, purposeful sessions keep curiosity high and dependence low.

For caregivers wary of AI, Sonia offers a powerful reframe: opting out cedes the space to designs that won't put kids first. Early, honest AI literacy, taught like other life skills, protects children from deepfakes, overfamiliar bots, and data oversharing.

If you care about safe, joyful learning with technology, this conversation gives you a clear checklist and a calm path forward. Subscribe for more parent-friendly, screen-light AI guidance, share this with someone who needs it, and leave a review to help more families find the show.

Resources:
Flora AI – the visual AI tool mentioned as a favorite gadget
Dr. Sonia Tiwari's research article – "Designing ethical AI characters for children's early learning experiences" in AI, Brain and Child
Dr. Sonia Tiwari on LinkedIn
Buddy.ai – the AI character English tutor referenced in the episode
Snorble – the AI bedtime companion mentioned in the episode

Support the show: Help us become the #1 podcast for AI for Kids. Support our Kickstarter: https://www.kickstarter.com/projects/aidigicards/the-abcs-of-ai-activity-deck-for-kids
Buy our debut book "AI… Meets… AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Apple Podcasts, Amazon Music, Spotify, YouTube, Other

Like our content? patreon.com/AiDig...
From founding Africa's largest AI community to leading AI Expo Africa and the South African AI Association, Nick is connecting innovators, investors, and governments to shape the continent's AI-powered future. Discover how Africa is fast becoming the next frontier for global AI innovation and responsible tech leadership.

00:09 – About Dr Nick Bradshaw
Nick is Founder of AI Expo Africa and also Chair & Founder of the SA AI Association (SAAIA), focusing on the deployment of Responsible AI in South Africa.
Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

Support the show: BelemLeaders–Your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!
In this special episode tied to Accountantsdag 2025, themed "Reality Check," Vitamine A once again dives into the impact of artificial intelligence on the accounting profession. Three guests, three perspectives, and one big question: what does AI mean for the profession, the organization, and the person behind the accountant?

Mona de Boer (PwC, Responsible AI) describes how AI has become an everyday reality and why organizations must now decide which values they want to uphold. She discusses the significance of the EU AI Act and the rise of AI assurance as a new domain within trust in technology, and stresses that the accountant is not losing ground but gaining importance.

Nart Wielaard takes the audience through the concept of the Zero Person Company, an experimental organization run by agents instead of people. The experiment shows that AI cannot copy a human, but that processes can be designed in a fundamentally different way. The accountant plays a role as coach, supervisor, and quality guardian of AI-driven processes.

With Marjan Heemskerk the focus shifts to the daily practice of entrepreneurs. She sees AI taking over basic questions, but above all creating room for an accountant who interprets, thinks along, and provides context. Soft skills become crucial. The challenge for firms is to deploy AI responsibly, bring employees along, and at the same time resist the temptation of shortcuts.

The episode ends with a reality check that is both technological and human. AI changes a great deal, but the foundation of the accounting profession remains intact: trust, independence, and the ability to interpret reality.

Vitamine A has covered AI before. Esther Kox, Hakan Koçak, and Nart Wielaard will also speak at Accountantsdag 2025, on November 19, 2025.

Accountantsdag 2025: http://www.accountantsdag.nl
Vitamine A #63 | AI als assistent, niet als autoriteit... In gesprek met Esther Kox
Vitamine A #62 | AI op kantoor: Twijfelen of toepassen? Met Hakan Koçak
Vitamine A #43 | Betrouwbare AI en verantwoording. Hoe doe je dat? Met Mona de Boer (PwC)
Vitamine A #34 | Wat betekent AI voor accountants die op zoek zijn naar waarheid?
Clarence Ford spoke to David Monaghan, Vice President of Africa Tech Festival, about the 2025 edition of the Africa Tech Festival.

Views and News with Clarence Ford is the mid-morning show on CapeTalk. This 3-hour programme shares and reflects a broad array of perspectives. It is inspirational, passionate and positive. Host Clarence Ford's gentle curiosity and dapper demeanour leave listeners feeling motivated and empowered. Known for his love of jazz and golf, Clarrie covers a range of themes including relationships, heritage and philosophy. Popular segments include Barbs' Wire at 9:30am (Mon-Thurs) and The Naked Scientist at 9:30 on Fridays.

Thank you for listening to a podcast from Views & News with Clarence Ford. Listen live on Primedia+ weekdays between 09:00 and 12:00 (SA Time) to Views and News with Clarence Ford, broadcast on CapeTalk: https://buff.ly/NnFM3Nk
For more from the show go to https://buff.ly/erjiQj2 or find all the catch-up podcasts here: https://buff.ly/BdpaXRn
Subscribe to the CapeTalk Daily and Weekly Newsletters: https://buff.ly/sbvVZD5
Follow us on social media:
CapeTalk on Facebook: https://www.facebook.com/CapeTalk
CapeTalk on TikTok: https://www.tiktok.com/@capetalk
CapeTalk on Instagram: https://www.instagram.com/
CapeTalk on X: https://x.com/CapeTalk
CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567
See omnystudio.com/listener for privacy information.
Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone. Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

Related Resources
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6
A transcript of this episode is here.
Dr. Jeremy Roschelle and Dr. Pati Ruiz from Digital Promise join the podcast to discuss their learning sciences research into AI's role in education. They share details about an innovative project using AI to improve student reading literacy and explore frameworks for developing AI literacy and responsible use policies in schools.

Practitioner Toolkit, from Digital Promise, provides resources for collaborative learning that are flexible, adaptable, and rooted in real teaching experience
Challenge Map, from Digital Promise
U-GAIN Reading, program from Digital Promise seeking to amplify new knowledge about how to use GenAI to create content that matches each student's interests and strengths, enables dialogue about the meaning of content, and adapts to a student's progress and needs
AI Literacy, framework from Digital Promise to understand, evaluate, and use emerging technology
SceneCraft, program from EngageAI Institute with AI-powered, narrative-driven learning experiences, engaging students through storytelling, creativity, and critical thinking
As they face conflicting messages about AI, some advice for educators on how to use it responsibly, opinion blog from Jeremy Roschelle
Teacher Ready Evaluation Tool, helps standardize the way ed tech decision-makers evaluate edtech products
Evaluating Tech Solutions, ATLIS is an official partner with ISTE to expand the presence of independent school vendors and technology solutions in the EdTech Index

If you are interested in engaging in research with Digital Promise, or just have a great research idea, share a message on LinkedIn: Jeremy | Pati

More Digital Promise articles:
GenAI in Education: When to Use It, When to Skip It, and How to Decide – Digital Promise
Hearing from Students: How Learners Experience AI in Education – Digital Promise
Meet the Educators Helping U-GAIN Reading Explore How GenAI Can Improve Literacy – Digital Promise
Guest Post: 3 Guiding Principles for Responsible AI in EdTech – Digital Promise
In this final episode of Season 3 of Tech It to the Limit, hosts Sarah Harper and Elliott Wilson go global and get grounded with a very special guest. After sharing travel tales from Germany and the HLTH conference, Sarah and Elliott debut their new game, "Trust-O-Meter," rating real-world health tech scandals and solutions on a scale from "hospital stairwell cell signal" to "grandma's green bean casserole."

Then, they sit down with Dr. David Rhew, Global Chief Medical Officer at Microsoft, for a wide-ranging, surprisingly personal conversation on everything from his pivot from academia to industry (a VA grant pushed him out) to the future of oculomics, voice biomarkers, and responsible AI. Dr. Rhew breaks down the three layers of bias, explains why implementation is everything, and doesn't shy away from the hard truth about AI and the future of the healthcare workforce. It's a deep, funny, and profoundly human conversation to close out the season.

The episode wraps with Wise Nugs and a final health tech haiku, leaving listeners hopeful and ready for Season 4.

Key Takeaways
Trust needs humans in the loop: AI earns credibility when it supports clinical workflows, not replaces them.
Bias hides in plain sight: Data, model design, and deployment all carry bias. Responsible AI means addressing all three.
Implementation eats innovation for breakfast: Technology does not change healthcare; adoption and usability do.
Your eyes and voice are the new vital signs: Oculomics and voice biomarkers are turning everyday signals into early detection tools.
Equity must be built in, not bolted on: "Neutral AI" does not exist. Fairness and transparency have to be engineered from the start.
Automation is not the enemy; stagnation is: AI will replace tasks, not purpose. The key is reskilling and redefining human work.

In this episode:
[00:00:13] Welcome to the season 3 finale
[00:01:19] Host travel log
[00:05:24] Game debut: Trust-o-meter
[00:22:01] Interview: Dr. David Rhew
[00:23:34] Dad jokes and Korean BBQ regrets
[00:25:27] From white coat to cloud
[00:30:52] Bridging the hype-reality gap
[00:34:50] Oculomics: The 2-minute eye scan
[00:38:02] The DNA of bias
[00:45:27] The TRAIN consortium
[00:48:45] Cloud consolidation and data stewardship
[00:58:29] Call to action: Operationalizing trust
[01:05:32] Spicy nugs: Key takeaways
[01:14:09] Health tech haiku and sign-off

Resources:
Tech It To The Limit Podcast: Website | Apple Podcast
Dr. David Rhew: LinkedIn - https://www.linkedin.com/in/david-rhew-m-d-1832764/
Sarah Harper: LinkedIn - https://www.linkedin.com/in/sarahbethharper
Elliott Wilson: LinkedIn - https://www.linkedin.com/in/matthewelliottwilson
In this episode of the Shift AI Podcast, Will Jung, Chief Technology Officer at nCino, joins host Boaz Ashkenazy to explore how artificial intelligence is revolutionizing the traditionally conservative banking and financial services sector. Jung brings a distinctive perspective from his extensive experience helping financial institutions transition from viewing technology as a cost center to embracing it as a strategic innovation driver, particularly in the highly regulated world of banking.

From fraud prevention using AI agents that actively lure scammers to context engineering that personalizes banking experiences, Jung offers compelling insights into how banks are deploying cutting-edge technology while maintaining trust and regulatory compliance. The conversation examines the delicate balance between rapid technological advancement and responsible innovation, the future of personalized banking relationships, and why staying human remains the most critical factor in an increasingly automated world. If you're interested in understanding how one of the most regulated industries is navigating the AI revolution while serving underbanked populations and protecting customer data, this episode delivers essential perspectives from a technology leader at the forefront of financial innovation.

Chapters:
[02:00] Will's Background in Banking Technology
[03:00] nCino's Mission in the FinTech Space
[04:30] Banks Embracing Technology as Innovation Driver
[06:50] Fighting Fraud with Advanced AI Technology
[09:00] Building Technical and Non-Technical Team Culture
[11:50] Context Engineering in Banking
[14:25] Privacy and Personalization Trade-offs
[19:20] The Future Customer Experience in Banking
[21:30] Societal Implications of AI Technology
[25:50] Cryptocurrency and Banking Technology
[27:50] Two Words for the Future: Stay Human
[30:00] The Human Element in Automated Banking Decisions

Connect with Will Jung
LinkedIn: https://www.linkedin.com/in/will-jung/?originalSubdomain=au

Connect with Boaz Ashkenazy
LinkedIn: https://linkedin.com/in/boazashkenazy
Email: info@shiftai.fm
Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements.

Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation

Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure.

Guests: Andrea LaFountain - Director, AI Legal; Ken Miller - Senior Director, Product Legal; Navdeep Gill - Staff Senior Product Manager, Responsible AI
Host: Bobby Brill

Chapters:
00:00 Introduction to AI and Regulations
00:45 Meet the Experts
01:52 Overview of Key AI Regulations
03:03 Compliance Strategies for AI Regulations
07:33 ServiceNow's AI Control Tower
14:02 Challenges and Risks in AI Governance
16:04 Future of AI Regulations
18:34 Conclusion and Final Thoughts

See omnystudio.com/listener for privacy information.
A digital journey to the ocean depths, a virtual reality exploration of Notre-Dame Cathedral, and an international conference on art and technology are among the events announced today for Ireland's arts & technology festival, Beta.

Title image: Foolish Flame, by artists Peter Power and Leon Butler. The immersive experience exploring climate change will be open to members of the public as part of Beta Festival 2025.

Supported by The Digital Hub, the event will be held at various locations throughout Dublin from November 7th to 23rd. This year's theme explores concepts of water and fluidity, and speaking at the programme announcement the festival's curator Aisling Murray invited members of the public to dive right in: "Beta Festival is the only festival of its kind in Ireland that merges art and technology. We are bringing together some of the leading minds from both fields to present cutting-edge ideas to the public and together imagine new possibilities. From immersive exhibitions, to quantum-inspired poetry performances and virtual reality experiences, I invite everyone to come and find inspiration in our carefully curated programme of exhibitions, workshops, and international guests."

Beta Festival Highlights
A centerpiece of the festival is a landmark exhibition titled "Undercurrent: As Below, So Above," which will be hosted at Dublin Port. The exhibition will feature interactive artworks by artist Kat Austen, who confronts the vital problem of marine microplastic pollution, alongside an immersive experience created by Lauren Moffat that takes visitors on a journey to the ocean floor, and new multimedia works by Siobhan McDonald.

As part of the festival, the Digital Hub, in partnership with Project Arts Centre, will present the Irish premiere of "Foolish Flame." Created by artists Peter Power and Leon Butler, this immersive installation addresses themes of climate change and cultural trauma. Combining the old with the new, the work draws inspiration from traditional Sean-nós dance archives, reinterpreted by choreographer Robyn Byrne, and features music composed by Peter Power, performed by Uilleann Piper Muireann Ní Shé.

This year's festival is filled with public workshops and technology demonstrations. The Virtual Reality Notre-Dame Experience, created through 5,000 hours of historian-guided graphic work, will allow participants to explore the cathedral like never before, including areas that are not open to the public. A related panel discussion entitled "AI, Design & Cultural Heritage" will explore how contemporary designers are using AI as both a creative and interpretive tool for historic architecture.

For the festival's opening weekend, Beta will host the Irish premiere of "HeartBeat, Son cœur a trouvé sa cadence dans le silence des rencontres", the Venice Immersive Biennale experience by French artist Bonnie Lisbon. The installation offers a non-verbal encounter between two strangers, sharing their pulse and exploring emotional relationships.

Local Artists Network
The Local Artists Network strand of the programme is dedicated to new work by some of Ireland's most exciting artists working with new technology. Pallas Projects will present an exhibition by Caroline Mac Cathmhaoil, artist Istvan Lazslo will present an augmented reality work reconstructing the removal of Queen Victoria's statue from Leinster House, and a large-scale installation by Aoife Dunne will be presented within the historic Riddel's warehouse.
Conference
The Beta Festival's annual conference gathers artists, researchers, policymakers, and tech experts to discuss how art connects with technology and the environment. Among the conference highlights, Dr. Oonagh Murphy will deliver a keynote address titled "Responsible AI in the Cultural Sector," focusing on ethical practices, responsible technology use, and public programmes that prioritise people and the environment. A performative lecture from Jose Luis de Vicente will investigate ...
How to Safely and Strategically Adopt AI in Your Organization: Expert Insights from Lexi Reese, CEO of Lanai

Artificial intelligence is reshaping the modern workplace faster than any technology before it. But as companies rush to integrate AI, many leaders struggle with how to adopt it responsibly—balancing innovation, security, and ethics. In this episode of The Thoughtful Entrepreneur, host Josh Elledge interviews Lexi Reese, Co-Founder and CEO of Lanai, an AI-native observability and security platform. Lexi shares practical insights on how organizations can safely manage, monitor, and scale AI adoption without compromising data integrity or trust.

Leading AI Adoption Responsibly
Lexi explains that the most successful companies treat AI not just as a set of tools, but as part of their workforce—a powerful digital team member that requires oversight, structure, and accountability. She emphasizes that AI must be "hired" into an organization with defined roles, clear expectations, and measurable outcomes. Just as leaders track employee performance, they must also monitor how AI performs, adapts, and impacts real-world results.

Visibility, Lexi notes, is essential for responsible AI use. Many organizations don't know which departments are using AI, how data is being handled, or where security risks exist. Lanai's technology helps leaders map and monitor AI usage across their companies—identifying risks, preventing data leaks, and ensuring compliance with privacy laws. This proactive approach transforms uncertainty into insight, allowing innovation to flourish safely.

Beyond technology, Lexi encourages leaders to consider the human element of AI integration. By prioritizing education, ethical standards, and collaboration between business and compliance teams, organizations can create a culture of trust and accountability. Responsible AI adoption isn't about slowing progress—it's about making innovation sustainable, secure, and beneficial for everyone.

About Lexi Reese
Lexi Reese is the Co-Founder and CEO of Lanai, an AI-native observability and security platform helping organizations safely adopt and manage AI. With a background that spans leadership roles at Google, Gusto, and public service, Lexi is known for her expertise in building ethical technology systems that empower teams and protect businesses.

About Lanai
Lanai is an AI observability and security platform designed to help organizations monitor, govern, and scale AI adoption responsibly. Built for visibility and control, Lanai enables companies to detect risks, enforce compliance, and ensure ethical AI use across all departments. Learn more at lanai.com.

Links Mentioned in This Episode
Lexi Reese on LinkedIn
Lanai Website

Key Episode Highlights
Why organizations must treat AI like a workforce, not just a tool.
The importance of visibility and observability in AI adoption.
Common AI risks—from data exposure to compliance violations—and how to prevent them.
How Lanai helps companies balance innovation with ethical and secure AI use.
Actionable steps for leaders to define, measure, and improve AI's role in their operations.

Conclusion
Lexi Reese's insights remind us that AI's potential is only as powerful as the systems and ethics guiding it. By combining strategic visibility, thoughtful oversight, and a culture of accountability, leaders can ensure AI strengthens—rather than compromises—their...
In this episode of Talking Sleep, host Dr. Seema Khosla welcomes members of the AASM Artificial Intelligence in Sleep Medicine Committee—Dr. Margarita Oks, Dr. Subaila Zia, Dr. Ramesh Sachdeva, and Matt Anastasi—to discuss their recently published position statement on the responsible use of AI in sleep medicine practices. Artificial intelligence is rapidly transforming healthcare, from AI-assisted sleep study scoring to clinical documentation tools and insurance claim processing. Yet AI is not a monolith—the technology encompasses various types with different capabilities, risks, and regulatory considerations. Matt Anastasi breaks down the different forms of AI clinicians encounter in practice, while the panel explains what "responsible use" actually means in practical terms. The updated position statement, notably shorter and more accessible than previous versions, addresses four major pillars: data privacy, fairness and transparency, infrastructure requirements, and medical-legal considerations. The discussion explores critical questions facing sleep medicine practitioners: How do we understand and trust the AI systems we use? What happens when insurance payors deploy AI to deny claims—should we fight AI-generated denials with AI-generated appeals? Do patients need to be informed when AI is used in their care, and how specific must those disclosures be? The conversation delves into liability concerns that keep practitioners awake at night: If your employer implements AI and it makes an error, who bears responsibility? What about ignoring AI prompts—does that create liability? Dr. Sachdeva explains the concept of vicarious responsibility and how it applies to AI implementation. The panel also addresses less obvious impacts, such as AI-driven resume filtering that may affect hiring practices. Practical implementation guidance is provided through discussion of governance checklists, equity considerations in AI deployment, and the limitations of FDA clearance for AI-assisted sleep study scoring. The experts introduce AASM Link and discuss how practitioners can evaluate AI tools beyond marketing claims, ensuring systems are trained on diverse, representative data sets. The episode tackles a fundamental question: Is AI use inevitable in sleep medicine, or can practitioners opt out? The panel offers realistic perspectives on integrating AI responsibly while maintaining clinical judgment and patient-centered care. Whether you're already using AI tools, considering implementation, or resistant to adoption, this episode provides essential guidance on navigating the AI transformation in sleep medicine while upholding professional and ethical standards. Join us for this timely discussion about balancing innovation with responsibility in the AI era of sleep medicine.
We're in Los Angeles at Adobe MAX 2025 to break down the announcements that will change how creators work, including Adobe's game-changing partnership with YouTube. We're joined by a legendary lineup of guests to discuss the future of creativity. Mark Rober reveals his $55 million secret project for the first time ever, Cleo Abram (Huge If True) shares her POV on editorial freedom and advancements in tech, and Adobe's GM of Creators, Mike Polner, explains the new AI tools that will save you hours of work.

What you'll learn:
-- Mark Rober's strategy for building a 100-person company.
-- The AI audio tool that creates studio-quality sound anywhere.
-- How to edit YouTube Shorts inside the new Premiere Mobile app.
-- Why creative freedom is more important than ever for creators.

If you want to stay ahead in the creator economy, subscribe and hit the bell so you don't miss our next episode!

00:00 Live From Adobe MAX!
01:01 Adobe's ChatGPT Integration
01:45 The New Adobe x YouTube Partnership
04:09 YouTube's New TV Experience
07:48 Welcome Mark Rober!
08:40 Is AI Cheating for Creators?
12:25 Building the Mark Rober Business
16:51 Mark Rober's $55M Secret Project
23:53 Welcome Cleo Abram!
26:12 Why I Left Vox
31:20 AI Tools Lower The Barrier
37:24 Welcome Adobe's Mike Polner!
39:31 Adobe's Top 3 New Tools
44:27 What is "Responsible AI"?
52:06 Upload: Steven Bartlett's Big Raise

Creator Upload is your creator economy podcast, hosted by Lauren Schnipper and Joshua Cohen.
Follow Lauren: https://www.linkedin.com/in/schnipper/
Follow Josh: https://www.linkedin.com/in/joshuajcohen/
Original music by London Bridge: https://www.instagram.com/londonbridgemusic/
Edited and produced by Adam Conner: https://www.linkedin.com/in/adamonbrand
Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, Director of the Center for Responsible AI, and a member of the Visualization and Data Analytics Research Center at New York University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and a Senior Member of the Association for Computing Machinery (ACM). Julia's goal is to make "Responsible AI" synonymous with "AI". She works towards this goal by engaging in academic research, education and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia's research interests include AI ethics and legal compliance, and data management and AI systems. Julia is engaged in technology policy and regulation in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.

Links:
https://engineering.nyu.edu/faculty/julia-stoyanovich
https://airesponsibly.net/nyaiexchange_2025/

Hosted on Acast. See acast.com/privacy for more information.
Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible. Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with engineering teams. He also explores the idea of "epistemic overreach," where RAI groups have been tasked with an impossibly broad mandate they lack the product-market fit to fulfill. Drawing on his experience in the highly regulated employment sector at Indeed, he details the rigorous, science-based approach his team took to defining and measuring bias, emphasizing the need to move beyond simple heuristics and partner with legal and product teams before analysis even begins.

Trey Causey is a data scientist who most recently served as the Head of Responsible AI for Indeed. His background is in computational sociology, where he used natural language processing to answer social questions.

Transcript: Responsible AI Is Dying. Long Live Responsible AI
The Causal Gap: Truly Responsible AI Needs to Understand the Consequences

Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.

In this episode, we discuss:
- Zhijing's new work on the "causal scientist"
- What's missing in responsible AI
- Why ethics matter for agentic systems
- Is causality a necessary element of moral reasoning?

Video version available on YouTube: https://youtu.be/Frb6eTW2ywk
Recorded on Aug 18, 2025 in Tübingen, Germany.

About The Guest
Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS best paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto.

Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Responsible AI adoption is as much about governance and evaluation as technology. Lightweight, context-specific frameworks make it possible for even resource-limited health systems to implement AI safely. Discover how generative AI, paired with real-world evidence, can help fill gaps in traditional research, increase health equity and help clinicians make more informed decisions.
Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication
A transcript of this episode is here.
HOT OFF THE PRESSES: In this special episode of In AI We Trust?, EqualAI President and CEO Miriam Vogel is joined by her two co-authors of Governing the Machine: How to navigate the risks of AI and unlock its true potential, Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest Group, and Ray Eitel-Porter, Accenture Luminary and Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge, to launch their new book released TODAY (October 28, 2025). Miriam, Paul, and Ray share their motivation for writing the book, some of the big takeaways on AI governance, why it is for companies and consumers alike, and what they hope readers will learn from their book. We hope that you enjoy this episode, and please be sure to purchase a copy of Governing the Machine at the link above! And share your feedback at contact@equalai.org!
Co-hosts Mark Thompson and Steve Little explore how Google's Nano Banana photo restoration tool will revolutionize image restoration by integrating with Adobe Photoshop. This move will greatly reduce unintended changes to historical photos when editing them with AI.

Next, they unpack OpenAI's move to make ChatGPT Projects available to free-tier users, making research organization more accessible for genealogists.

This week's Tip of the Week provides essential guidance on the responsible use of AI when editing historical photos with tools like Nano Banana, ensuring transparency and trust in historical photographs.

In RapidFire, they cover OpenAI's new Sora 2 AI-video social media platform, Claude's new ability to create and edit Microsoft Office files, memory features in Claude Projects, advancements in local language models, and how OpenAI's massive infrastructure deals are changing the AI landscape.

Timestamps:
In the News:
02:43 Adobe improves historical photo restoration by adding Nano Banana to Photoshop
09:34 ChatGPT Projects are Now Free
Tip of the Week:
13:36 Citations for AI-Restored Images Build Trust in AI-Modified Photos
RapidFire:
21:24 Sora 2 Goes Social
27:23 Claude Adds Microsoft Office Creation and Editing
34:26 Memory Features Come to Claude Projects
38:32 Apple and Amazon both create Local Language Model tools
44:47 OpenAI's Big Data Centre Deal with Oracle

Resource Links
OpenAI announces free access to ChatGPT Projects: https://help.openai.com/en/articles/6825453-chatgpt-release-notes
Engadget: OpenAI Rolls Out ChatGPT Projects to Free Users: https://www.engadget.com/ai/openai-rolls-out-chatgpt-projects-to-free-users-215027802.html
Forbes: OpenAI Makes ChatGPT Projects Free: https://www.forbes.com/sites/quickerbettertech/2025/09/14/small-business-technology-roundup-microsoft-copilot-does-not-improve-productivity-and-openai-makes-chatgpt-project-free/
Responsible AI Photo Restoration: https://makingfamilyhistory.com/responsible-ai-photo-restoration/
Claude now has memory, but only for certain users: https://mashable.com/article/anthropic-claude-ai-now-has-memory-for-some-users
New Apple Intelligence features are available today: https://www.apple.com/newsroom/2025/09/new-apple-intelligence-features-are-available-today/
Introducing Amazon Lens Live: https://www.aboutamazon.com/news/retail/search-image-amazon-lens-live-shopping-rufus
Amazon Lens Live Can Scan and Pull Up Matches: https://www.pcmag.com/news/spot-an-item-you-wish-to-buy-amazon-lens-live-can-scan-and-pull-up-matches
A Joint Statement from OpenAI and Microsoft About Their Changing Partnership: https://openai.com/index/joint-statement-from-openai-and-microsoft/
The Verge: OpenAI and Oracle Pen $300 Billion Compute Deal: https://www.theverge.com/ai-artificial-intelligence/776170/oracle-openai-300-billion-contract-project-stargate
Reuters: OpenAI and Oracle Sign $300 Billion Computing Deal: https://www.reuters.com/technology/openai-oracle-sign-300-billion-computing-deal-wsj-reports-2025-09-10/

Tags
Artificial Intelligence, Genealogy, Family History, Photo Restoration, AI Tools, OpenAI, Google, Adobe Photoshop, ChatGPT Projects, Nano Banana, Image Editing, AI Citations, Sora 2, Video Generation, Claude, Microsoft Office, Apple Intelligence, Amazon Lens, Oracle, Cloud Computing, Local Language Models, AI Infrastructure, Responsible AI, Historical Photos
In this episode, host Bidemi Ologunde spoke with Shannon Noonan, CEO/Founder of HiNoon Consulting, and US Global Ambassador - Global Council for Responsible AI. The conversation addressed how to turn "checkbox" programs into real business value, right-sized controls, third-party risk, AI guardrails, and data habits that help teams move faster—while strengthening security, compliance, and privacy.
This week on Taking The Pulse, Heather and Lauren record at the NCLifeSci 2025 Annual Meeting with Dr. Justin Collier, Chief Technology Officer for Healthcare at Lenovo North America. A practicing physician turned tech leader, Dr. Collier shares how AI is transforming the health care industry, from medical imaging and ambient documentation to administrative workflows and clinical efficiency. We explore the importance of governance, education, and ethical deployment of AI, and how health systems can start small to build trust and drive measurable results. Tune in for an insightful discussion on the future of healthcare!
The Artificial Intelligence Collaboration Centre (AICC) has launched Northern Ireland's first Responsible AI Hub - a groundbreaking online resource designed to help businesses, policymakers and individuals understand, adopt and apply Artificial Intelligence (AI) responsibly. Developed by the AICC - a collaborative initiative led by Ulster University in partnership with Queen's University Belfast - and spearheaded by Tadhg Hickey, Head of AI and Digital Ethics Policy, the Hub is built on one simple principle: responsible AI is everyone's responsibility. Whether you're completely new to AI or already developing and deploying AI solutions, the Hub provides practical, accessible tools and guidance to help users 'be good with AI'. Supported by Invest Northern Ireland and the Department for the Economy, the Responsible AI Hub brings together clear guidance, ethical frameworks, and practical governance tools, all designed to make responsible AI accessible to everyone. By helping organisations integrate good governance from the outset, the Hub enables faster, safer innovation and reduces the risk of costly retrofits or regulatory breaches later on. From business leaders and policymakers to developers, researchers and the general public, the Hub offers step-by-step support to help people understand what responsible AI means and how to put it into practice. Among the resources are a Data Fact Sheet Developer, Harm Assessments, an Idea Testing Tool, an AI Policy Builder, and a suite of Project Governance Tools, all created by AICC's in-house team of applied researchers and data scientists. These tools are already being embedded across SME collaborations to promote responsible and transparent AI development in Northern Ireland. Tadhg Hickey, Head of AI and Digital Ethics Policy at AICC, said: "We built the Responsible AI Hub because AI shouldn't feel out of reach. Whether you're curious about what responsible AI means or designing complex AI solutions, this Hub gives you the confidence, language and tools to make good choices. Responsible AI isn't just for data scientists - it's for everyone. The more people who understand and apply these principles, the more we can build trust and unlock AI's potential for good." As AI continues to transform industries and daily life, the Responsible AI Hub aims to make ethics and accountability part of Northern Ireland's innovation DNA, ensuring technology serves people - not the other way around. David Crozier CBE, Director of the AICC, added: "The Responsible AI Hub is about building a culture where innovation and integrity go hand in hand. It empowers businesses, individuals, and communities to be confident and capable with AI, strengthening Northern Ireland's position as a global leader in trusted, human-centred innovation. This Hub will help local businesses adopt AI not only quickly, but responsibly and productively." Anne Beggs, Chief Commercial Officer at Invest Northern Ireland, said: "The development of AICC's Responsible AI Hub directly supports our business strategy, which prioritises accelerating innovation and fostering collaboration as part of our role to support City and Growth Deals project delivery. It will help Northern Ireland's businesses and innovators embrace AI in ways that are not only productive and competitive, but also safe, inclusive and ethical. By equipping organisations with the tools to innovate with integrity, we are laying the foundations for a world-class, responsible digital economy." 
Since its establishment, the AICC has rapidly become the driving force behind responsible AI adoption in Northern Ireland. In just over a year, it has assembled a team of 19 experts across Belfast and Derry~Londonderry, engaged more than 100 SMEs through its flagship Transformer Programme, supported 260 postgraduate scholars and delivered AI training to over 360 professionals. With its remit now extended to 2029, the AICC is set to expand its impact - accelerating innovation, strengt...
What happens when AI stops making mistakes… and starts misleading you? This discussion dives into one of the most important — and least understood — frontiers in artificial intelligence: AI deception. We explore how AI systems evolve from simple hallucinations (unintended errors) to deceptive behaviors — where models selectively distort truth to achieve goals or please human feedback loops. We unpack the coding incentives, enterprise risks, and governance challenges that make this issue critical for every executive leading AI transformation.

Key Moments:
00:00 What is AI Deception and Why It Matters
3:43 Emergent Behaviors: From Hallucinations to Alignment to Deception
4:40 Defining AI Deception
6:15 Does AI Have a Moral Compass?
7:20 Why AI Lies: Incentives to “Be Helpful” and Avoid Retraining
15:12 Is Deception Built into LLMs? (And Can It Ever Be Solved?)
18:00 Non-Human Intelligence Patterns: Hallucinations or Something Else?
19:37 Enterprise Impact: What Business Leaders Need to Know
27:00 Measuring Model Reliability: Can We Quantify AI Quality?
34:00 Final Thoughts: The Future of Trustworthy AI

Mentions:
Scientists at OpenAI and Apollo Research showed in a paper that AI models lie and deceive: https://www.youtube.com/shorts/XuxVSPwW8I8
TIME: New Tests Reveal AI's Capacity for Deception
OpenAI: Detecting and reducing scheming in AI models
StartupHub: OpenAI and Apollo Research Reveal AI Models Are Learning to Deceive: New Detection Methods Show Promise
Marcus Weller
Hugging Face

Watch next: https://www.youtube.com/watch?v=plwN5XvlKMg&t=1s

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures. Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI and Governance report; ransomware 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; proactive AI/agentic security policies and incident response plans.

Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable, a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

Related Resources
Article: The Silent Breach: Why Agentic AI Demands New Oversight
State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
The LLM Top 10: https://genai.owasp.org/llm-top-10/
A transcript of this episode is here.
Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI, and how companies can leverage AI in the right way. It's time to learn how to tame the tiger! Please note that the views expressed may not necessarily be those of NTT DATA.

Links:
Noelle Russell
Scaling Responsible AI
AI Leadership Institute
Learn more about Launch by NTT DATA

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
What does it take to build a billion-dollar company with fewer than 100 people, all while placing customer obsession and responsible AI at its core? In this episode of Predictable B2B Success, host Vinay Koshy speaks with Phillip Swan, Chief Product and Go-to-Market Officer of the AI Solution Group, to unlock the secrets behind blending innovative tech, ethical AI, and truly frictionless customer experiences. Phillip shares his journey from co-founding PI Partners to merging with AI Solution Group, revealing untold stories about how he and his team leverage AI to drive unprecedented operational momentum and organizational growth. From identifying “migraine-level” pain points to eliminating data leaks caused by shadow AI, Phillip's insights challenge conventional thinking and tackle the big questions: Can AI really build trust and customer advocacy? How do you systemize culture and alignment across traditional business silos? And what is “pre-awareness”, the surprising stage most companies ignore in the buyer's journey? Packed with real-world examples, bold perspectives, and practical frameworks for change, this episode will get you rethinking your approach to product, leadership, and revenue growth. If you're ready to turn customer-centricity from a buzzword into your breakthrough strategy, don't miss this conversation!

Some areas we explore in this episode include:
Responsible and Safe AI – Ethics, guardrails, and compliance in AI development.
Shadow AI Risks – Dangers of ungoverned AI and protecting company data.
Customer Obsession – Making customer outcomes a core organizational focus.
Revenue Momentum – Using AI and alignment to drive sustained business growth.
Breaking Down Silos – Connecting all business functions for better collaboration and KPIs.
Pre-awareness in the Buyer Journey – Building trust and influence before customers identify their needs.
Change Management & Culture – CEO-driven culture and effective organizational change strategies.
AI Agents & Agentic Systems – Defining and building true autonomous AI agents.
Customer-driven Product Development – Co-creating solutions with customers based on real pain points.
Scaling Customer Experience – Turning every touchpoint, including support and legal, into a customer experience advantage.
And much, much more...
100m sprinter Bebe Jackson, 19, won a bronze medal on her debut at the IPC World Para Athletics Championships in Delhi, India, last week. Bebe was born with congenital talipes equinovarus, widely known as club foot, and when she's not competing for Britain, she works nights caring for children with complex disabilities. She tells Anita Rani how she does it.

In Sally Wainwright's new BBC drama Riot Women, a group of women in mid-life escape the pressures of caring for parents and kids - and the menopause - by forming a rock band. Rosalie Craig stars as the incredible singer that brings them together. Anita Rani talks to Sally and actor Rosalie about the power of female friendship.

Nuala McGovern talks to the French philosopher Manon Garcia. Manon watched the court proceedings of the Pelicot case in France, in which Dominique Pelicot and 46 other men were found guilty of the rape of Dominique's wife Gisèle. In her book Living with Men, she examines French and other societies in light of the case and questions what more needs to be done.

When you think about music from 500 years ago, you might picture monks chanting, or the voices of choirboys, but what's been largely forgotten over the course of history is that some of the most striking music during this time was being written and sung by nuns, hidden away in convents across Europe. Nuala speaks to Laurie Stras, Director of Musica Secreta, an all-female renaissance ensemble.

Elon Musk's Artificial Intelligence company xAI recently introduced two sexually explicit chatbots. He's a high-profile presence in a growing field where developers are banking on users interacting and forming intimate relationships with the AI chatbots. Nuala McGovern speaks to journalist Amelia Gentleman, who has just returned from an adult industry conference in Prague, where she saw a sharp rise in new websites offering an increasingly realistic selection of AI girlfriends, and Gina Neff, Professor of Responsible AI at the Queen Mary University of London, who tells us what this means for women.

EastEnders actor Kellie Bright took part in a Woman's Hour special last year which asked whether the SEND system is working for children with special educational needs and disabilities. Tonight Kellie presents a special one-hour BBC Panorama. Drawing on her own experience as the mother of an autistic son, she investigates how parents navigate the complex system to secure the right help at school. Kellie joins Nuala McGovern to talk about what she found.

Presenter: Anita Rani
Producer: Simon Richardson
October 10, 2025: A new era of Responsible Intelligence is emerging. Governments are considering human-quota laws to keep people in the loop. Kroger is rolling out a values-based AI assistant that redefines trust and transparency. And legal experts warn that AI bias in HR could soon become a courtroom reality. In today's Future-Ready Today, Jacob Morgan explores how these stories signal the end of reckless automation and the rise of accountable leadership. He shares how the future of work will be shaped not by faster machines, but by wiser humans—and offers one simple “1%-a-Day” challenge to help you lead responsibly in the age of AI.
In episode 187 of the Disruption Now Podcast, we sit down with Benjamin Ko, the CEO of Kaleidoscope Innovation, a firm leading the way in human-centered design and engineering — especially in healthcare. From developing wearable technologies for spinal cord injury patients to crafting surgical tools built around human ergonomics, Ben and his team are proving that empathy is a competitive advantage in the age of AI.

We dive into the central question: If AI can optimize everything, where do we still matter? Ben argues that empathy isn't just a soft skill — it's a design superpower. He discusses how Kaleidoscope's cross-functional teams of designers, engineers, and researchers bridge the gap between physical and digital worlds, why 95% of AI projects fail due to lack of human context, and how clarity of thought and ethical design can shape a better, more responsible tech future.

If you're a founder, product designer, healthcare innovator, engineer, or policymaker interested in building smarter systems with deeper purpose — this episode is for you.
Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.

You'll learn:
→ What “Agentic AI” and the “Internet of Agents” actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them
→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune-500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum Alert

About Today's Guest:
Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.

Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing
06:30 Why “freedom to fail” is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet

Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What does it take to lead digital transformation when fear, culture, and AI disruption collide?
Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today's post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they're predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a system's purpose and qualifications. Getting practical about testing for AI fairness, he distinguishes blunt outcome checks from better metrics, and highlights counterfactual tools that reveal whether a feature actually drives decisions. With regulations uncertain, he urges companies to treat ethics as navigation, not mere compliance: make and explain principled choices (including how you mitigate bias in your models), accept that everything you do is controversial, and communicate trade-offs honestly to customers, investors, and regulators. In the end, Leben argues, we all must become ethicists to address the issues AI raises... whether we want to or not.

Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business, Carnegie Mellon University, where he teaches courses such as “Ethics of Emerging Technologies,” “Fairness in Business,” and “Ethics & AI.” Leben is the author of Ethics for Robots (Routledge, 2018) and AI Fairness (MIT Press, 2025). He founded the consulting group Ethical Algorithms, through which he advises governments and corporations on how to build fair, socially responsible frameworks for AI and autonomous systems.

Transcript
AI Fairness: Designing Equal Opportunity Algorithms (MIT Press, 2025)
Ethics for Robots: How to Design a Moral Algorithm (Routledge, 2018)
The Ethical Challenges of AI Agents (blog post, 2025)
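A minimal sketch can make the counterfactual idea concrete. Assuming a toy logistic-regression lender (the data, features, and model below are invented for illustration, not Leben's own tooling), the probe flips a protected feature while holding everything else fixed and measures how often the decision changes:

```python
# Minimal counterfactual probe: flip one feature, hold the rest fixed,
# and count how often the model's decision changes. A high flip rate
# suggests the feature itself is driving outcomes.
# Hypothetical example: the model and data are stand-ins, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy lending data: column 0 is a protected attribute (0/1),
# columns 1-2 are task-relevant features (e.g., income, debt ratio).
X = rng.normal(size=(1000, 3))
X[:, 0] = rng.integers(0, 2, size=1000)
y = (X[:, 1] - X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000)) > 0

model = LogisticRegression().fit(X, y)

# Counterfactual test: flip the protected attribute for every applicant.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]

original = model.predict(X)
counterfactual = model.predict(X_flipped)

flip_rate = np.mean(original != counterfactual)
print(f"Decisions changed by flipping the protected feature: {flip_rate:.1%}")
```

Unlike a blunt outcome check (comparing approval rates across groups), a nonzero flip rate here directly shows the feature causing decisions to change, which is the distinction Leben draws.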
In this episode of Careers and the Business of Law, David Cowen sits down with Nathan Reichardt, PwC's Lead Managed Services Director and AI Champion, for a conversation that bridges technology and humanity. They unpack why “observability” isn't just a technical concept; it's the foundation of trust in an age of autonomous agents. From building glass-box systems that make AI accountable to recognizing the invisible pressures on professionals, this discussion explores what it really takes to lead responsibly in the era of AI.

Key Topics Covered:
Agents aren't magic, you must observe them. Why oversight is essential as AI agents act and learn autonomously.
From black box to glass box. Transparency, explainability, and compliance as non-negotiable design principles.
Responsible AI in practice. What observability really means for governance, risk, and trust.
The rise of new roles. Why “AI Observer” and “Observability Lead” may soon become critical titles inside legal and business ops.
The human dimension. How leaders can apply observability to people, spotting stress, isolation, and burnout before it's too late.
From pilot to practice. PwC's approach to scaling agentic AI safely through iteration, measurement, and feedback.
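The glass-box idea lends itself to a small illustration. The following is a hypothetical sketch (the ObservableAgent class, its tools, and the log format are invented assumptions, not PwC's system): every step an agent takes is emitted as a structured event, so a human reviewer or an "AI Observer" role can audit the full decision trail.

```python
# Illustrative "glass box" pattern: every agent step is emitted as a
# structured log event so its behavior can be audited after the fact.
# Hypothetical sketch: the agent and its tools are invented for illustration.
import json
import time
from dataclasses import dataclass, field

@dataclass
class ObservableAgent:
    name: str
    trail: list = field(default_factory=list)  # full audit trail of actions

    def act(self, tool: str, inputs: dict, rationale: str) -> None:
        event = {
            "ts": time.time(),
            "agent": self.name,
            "tool": tool,
            "inputs": inputs,
            "rationale": rationale,  # why the agent chose this step
        }
        self.trail.append(event)
        print(json.dumps(event))  # in practice: ship to a log pipeline

agent = ObservableAgent("contract-reviewer")
agent.act("search_clauses", {"query": "termination"}, "locate relevant clauses")
agent.act("flag_risk", {"clause_id": "7.2"}, "clause conflicts with policy")
```

The design point is that the rationale travels with the action: governance and risk teams can query why a step was taken, not just that it happened.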
What happens when a robot colors the sun green? That playful mistake helps us unlock a bigger idea: AI needs you. Not as a spectator, but as the guide who brings context, empathy, and fairness.

In this episode, Y stands for “You + AI.” We explore how people and AI work together in the real world, where speed and pattern spotting meet human judgment and care. We explain the idea of “human in the loop,” a simple way to make sure people stay in charge of goals, guardrails, and final decisions.

You'll hear how AI helps doctors flag issues in X-rays while physicians decide treatment, supports teachers by grading routine work while educators respond to emotions and needs, and boosts artists by creating quick sketches while humans bring meaning and message. Along the way, we talk about bias, brittle rules, and why unchecked automation can lead to unfair results. The solution isn't magic code, it's a culture of curiosity, feedback, and review.

We also share a fun family activity called “Who's in the Loop?” that helps kids practice spotting bad rules and adding nuance. Try saying “All fruit is round” and see how bananas save the day. Then talk together about where people and AI work as partners, when humans should have the final say, and which choices are safe to automate.

Join us as we celebrate kids' questions, creativity, and courage, the real drivers of responsible AI. If we want smarter tools that serve people, your voice matters most. Subscribe, share this episode, and leave a review to help more families explore AI with curiosity and care.

Resources:
Sign up for the AiDigiCards waitlist
Follow our Kickstarter
Big Emotions: Kids Listen Mashups About Feelings

Support the show. Help us become the #1 podcast for AI for Kids. Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform: Apple Podcasts, Amazon Music, Spotify, YouTube, or other. Like our content? Subscribe, or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
Most artists get left behind in tech—but not Kaila Love.

Once a homeless teen sleeping in her car, Kaila became a UC Berkeley grad, hip-hop artist, and founder of KailaLove.ai—a pioneering AI education company blending music, automation, and empowerment. Known as The AI Homegirl, she teaches creatives how to protect their IP, grow fanbases with AI, and own their digital destiny.

Timestamps:
00:00 – From Homeless to Berkeley
05:00 – Sync Deals & Music Wins
10:00 – Building with AI
17:00 – Bootleg Brain & IP
25:00 – Responsible Tech
32:00 – Future Vision
35:45 – Final Message
The best time to regulate AI was yesterday, and the next best time is now. There is a clear and urgent need for responsible AI development that implements reasonable guidelines to mitigate harms and foster innovation, yet the conversation in DC and capitals around the world remains muddled. NYU's Dr. Julia Stoyanovich joins David Rothkopf to explore the role of collective action in AI development and why responsible AI is the responsibility of each of us. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI.

Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.

Transcript
AI Governance in the Agentic Era
Implementing Responsible AI in the Generative Age - Study Between HCLTech and MIT
Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems. Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule, not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs, not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work.

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources
Medium: https://medium.com/@maximilian.vogel
A transcript of this episode is here.
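Human-in-the-loop validation of the kind Vogel describes is often implemented as a confidence gate. This is a minimal sketch under invented assumptions (classify() and the threshold are placeholders, not BIG PICTURE's actual design): the agent acts autonomously only when its confidence clears a threshold, and routes everything else to a person.

```python
# Minimal human-in-the-loop gate: the system only acts autonomously when
# its confidence clears a threshold; everything else goes to a person.
# Hypothetical sketch: classify() and the threshold are placeholders.
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.9  # tuned per use case, not a universal value

def classify(document: str) -> Tuple[str, float]:
    """Stand-in for a model call returning (label, confidence)."""
    return ("invoice", 0.72)

def route(document: str, review_queue: List[tuple]) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # safe to automate
    review_queue.append((document, label, confidence))  # escalate to a human
    return "pending-human-review"

queue: List[tuple] = []
print(route("scanned_page_001", queue))  # -> pending-human-review
print(queue)  # [('scanned_page_001', 'invoice', 0.72)]
```

This reflects the "100% of anything is not the goal" stance: rather than chasing perfect accuracy, the workflow is engineered so low-confidence cases land with the human support team by design.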
Scott heads to Microsoft's campus for the VS Code Insider Summit to sit down with Dr. Sarah Bird and explore what “Responsible AI” really means for developers. From protecting user privacy to keeping humans in the loop, they dig into how everyday coders can play a role in shaping AI's future.

Show Notes
00:00 Welcome to Syntax!
01:27 Brought to you by Sentry.io.
03:13 The path to machine learning.
04:44 How do you get to ‘Responsible AI'?
06:43 Is there such a thing as ‘Responsible AI'?
07:34 Does the average developer have a part to play?
09:12 How can AI tools protect inexperienced users?
11:55 Let's talk about user and company privacy.
13:57 Are local tools and services becoming more viable?
15:06 Are people right to be skeptical?
16:58 The software developer role is fundamentally changing.
17:43 Human in the loop.
19:37 The career path to Responsible AI.
21:21 Sick Picks.

Sick Picks
Sarah: Japanese pottery

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads
This week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:

“Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together.”

In this episode, Larry and I explore the “roots of philosophical orientation” and “trunk of theoretical structure” as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable, and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?

It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.

Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…

✨ Show Links
• Hire me for speaking or consulting
• Explore the interactive knowledge garden grown from over 250 episodes
• Explore the Humans On The Loop dialogue and essay archives
• Browse the books we discuss on the show at Bookshop.org
• Dig into nine years of mind-expanding podcasts

✨ Additional Resources
“Growing A World In Love” — Larry Muhlstein at Hurry Up, We're Dreaming
“The Future Is Both True & False” — Michael Garfield on Medium
“Sacred Data” — Michael Garfield at Hurry Up, We're Dreaming
“The Right To Destroy” — Lior Strahilevitz at Chicago Unbound
“Decentralized Society: Finding Web3's Soul” — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN

✨ Mentions
Karl Schroeder's “Degrees of Freedom”
Joshua DiCaglio's Scale Theory
Geoffrey West's Scale
Hannah Arendt
Ken Wilber
Doug Rushkoff's Survival of the Richest
Manda Scott's Any Human Power
Torey Hayden
Chaim Gingold's Building SimCity
James P. Carse's Finite & Infinite Games
John C. Wright's The Golden Oecumene
Eckhart Tolle's The Power of Now

✨ Related Episodes

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe