What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy? In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines.

Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead.

We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution. The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed.

There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward.

This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future?

Useful Links: Connect With Diya Wynn | AWS Responsible AI

Tech Talks Daily is sponsored by Denodo
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence.

Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter
In this episode, host Sandy Vance sits down with Hadas Bitran, Partner General Manager of Health AI at Microsoft Health & Life Sciences, for a deep dive into the rapidly evolving world of healthcare agents. Together, they explore how agentic technologies are being used across clinical settings, where they're creating value, and why tailoring these tools to the specific needs of users and audiences is essential for safety and effectiveness. Well-designed healthcare agents can reinforce responsible AI practices (like transparency, accountability, and patient safety) while also helping organizations evaluate emerging solutions with greater clarity and confidence.

In this episode, they talk about:
• How agents are used in healthcare, and key use cases
• The risks if a healthcare agent is not tailored to the needs of users and audiences
• How healthcare agents support responsible AI practices, such as safety, transparency, and accountability, in clinical settings
• How healthcare organizations should evaluate healthcare agent solutions
• Bridging the gaps in access, equity, and health literacy; empowering underserved populations and democratizing expertise
• The impact of AI on medical professionals and healthcare staff, and how they should prepare for the change

A Little About Hadas:
Hadas Bitran is Partner General Manager, Health AI, at Microsoft Health & Life Sciences. Hadas and her multi-disciplinary R&D organization build AI technologies for health & life sciences, focusing on Generative AI-based services, Agentic AI, and healthcare-adapted safeguards. They shipped multiple products and cloud services for the healthcare industry, which were adopted by thousands of customers worldwide. In addition to her work at Microsoft, Hadas previously served as a Board Member at SNOMED International, a not-for-profit organization that drives clinical terminology worldwide. Before Microsoft, Hadas held senior leadership positions managing R&D and Product groups in tech corporations and in start-up companies. Hadas has a B.Sc. in Computer Science from Tel Aviv University and an MBA from the Kellogg School of Management, Northwestern University in Chicago.
Artificial intelligence is rapidly transforming the pharmaceutical and life sciences sector — but innovation in this field comes with some of the highest regulatory, ethical, and governance expectations.

In this episode of Legal Leaders Insights from Diritto al Digitale, Giulio Coraggio of DLA Piper speaks with Oliver Patel, Head of Enterprise AI Governance at AstraZeneca, about how AI governance is implemented in practice within a global pharmaceutical company.

The conversation covers:
• What enterprise AI governance looks like in the life sciences sector
• How to balance AI innovation with privacy, intellectual property, and compliance
• The concrete implications of the EU AI Act for pharmaceutical companies
• Practical governance approaches to enable responsible and scalable AI

This episode is particularly relevant for legal professionals, compliance teams, in-house counsel, data leaders, and executives working in highly regulated industries. Diritto al Digitale is the podcast where law, technology, and digital regulation intersect with real business challenges.
We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here's the truth: AI isn't magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground.

In this episode of the Don't Panic, It's Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

Why Responsible AI Begins with Data
Amy opens the conversation with a simple but powerful observation: “No longer is it okay to just have okay data.” This sets the stage for understanding that AI's potential is only as strong as the data that feeds it. Responsible AI isn't just about implementing the latest algorithms; it's about embedding ethical and governance principles into every stage of AI development, starting with data quality. Kevin and Amy emphasise that organisations must look at data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

Defining Responsible AI and Data Governance
Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI. It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.

Prioritise Data for Responsible AI
The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:
• Invest in data quality and governance — it is the foundation of all AI initiatives.
• Embed ethical and legal principles in every stage of AI development.
• Enable collaboration across teams to ensure transparency, accountability, and usability.
• Start small, prove value, and scale — responsible AI is built step by step.

Amy Horowitz's insight resonates beyond the tech team: “Everyone's ready for AI — except their data.” It's a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them. For more insights, visit Informatica.

Takeaways
• AI is only as good as its data inputs.
• Data quality has become the number one obstacle to AI success.
• Organisations must start small and find use cases for data governance.
• Hallucinations in AI models highlight the need for vigilant
Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; and staying positive and finding the thing you can do.

Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

Related Resources
Taps Run Dry Initiative (Website)
Data Center Advocacy Toolkit (Website)
Eat Your Frog (Substack)
AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

A transcript of this episode is here.
In this episode of My EdTech Life, Jeff Riley breaks down the mission behind Day of AI and the work of MIT RAISE to help schools, districts, families, and students understand artificial intelligence safely, ethically, and with purpose. Jeff brings 32 years of experience as a teacher, counselor, principal, superintendent, and former Massachusetts Commissioner of Education. His transition to MIT RAISE reveals why AI literacy, student safety, and clear policy matter more than ever.

Timestamps
00:00 Welcome & Sponsor Shoutouts
01:45 Jeff Riley's Background in Education
04:00 Why MIT RAISE and Day of AI
06:00 The Challenge: AI Policy, Safety & Equity
08:30 How AI Can Transform Teaching & Learning
10:30 Differentiation, Accessibility & Student Support
12:30 Helping Teachers Feel Confident Using AI
15:00 Leading AI Adoption at the District Level
18:00 What AI Literacy Should Mean for Students
20:00 Teaching Healthy Skepticism & Bias Awareness
23:00 Student Voice in AI Policy
26:00 Parent Awareness & Common Sense Media Toolkit
29:00 Responsible AI for America's Youth
31:00 America's Youth AI Festival & Student Leadership
34:30 National Vision for AI in Education
37:00 Closing Thoughts + 3 Signature Questions
41:00 Stay Techie

Resources Mentioned
Day of AI Curriculum: https://dayofai.org
MIT RAISE: https://raise.mit.edu

Sponsors
In this powerful episode of Change Leadership Conversations, Yvonne Ruke Akpoveta sits down with one of Canada's foremost experts on the intersection of AI and healthcare, Dr. Muhammad Mamdani. With over 600 published studies and leadership roles across Unity Health Toronto, Ontario Health, and the University of Toronto, Dr. Mamdani brings real-world insight into how AI can be responsibly developed and deployed to improve outcomes in life-and-death scenarios.

We explore:
• The practical realities of applying AI
• How AI is disrupting education, critical thinking, and the world
• What “responsible AI” really looks like, and why it's urgent
• How to manage AI hallucinations in critical contexts like healthcare, and beyond
• Building trust and engaging frontline stakeholders for adoption and co-creation

Whether you're a change leader, innovator, or just curious about the impact of AI — this conversation will spark ideas and deepen your understanding of the change we're all navigating.

Guest Bio:
Dr. Muhammad Mamdani is one of Canada's leading voices on AI in healthcare. He serves as Clinical Lead for AI at Ontario Health, VP of Data Science at Unity Health Toronto, and Director of T-CAIREM at the University of Toronto. Dr. Mamdani's work bridges advanced analytics with real-world clinical decision-making. He's a Faculty Affiliate at the Vector Institute, an Affiliate Scientist at ICES, and was recognized as one of Canada's Top 40 Under 40. His team received the national Solventum Health Care Innovation Team Award.

Resources & Links:
Connect with Dr. Muhammad Mamdani on LinkedIn
Connect with Yvonne Ruke Akpoveta on LinkedIn
Learn more about the Change Leadership Training

Brought to You By:
The Change Leadership – Your go-to ecosystem for future-ready change leadership training, resources, and the annual Change Leadership Conference. Learn more at TheChangeLeadership.com

Subscribe & Review
If you enjoyed this episode, don't forget to rate, subscribe, and leave a review. It helps others discover the show, and we appreciate your support!
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.

Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials - Yash's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
AI has exploded across the legal industry, but for many lawyers it still feels overwhelming, risky, or simply “not for them.” Today's guest has made it his mission to change that. Joining us is Robert Eder, an intellectual property lawyer, legaltech educator, and one of Europe's leading voices on AI prompting for lawyers. Robert designs legal automation solutions and teaches lawyers around the world how to use AI safely, effectively, and creatively. Robert has trained hundreds of lawyers across Europe and is one of the clearest voices on how to use AI responsibly, safely and with real legal precision.

Here are a few standout takeaways:
• Lawyers aren't bad at prompting; they're undersold. Their analytical mindset actually gives them an advantage.
• Most people still treat AI like Google. Adding structure through XML tags, roles and answer-levelling changes everything.
• The first AI skill every lawyer should learn isn't drafting; it's controlling output. Structure before substance.
• Hallucinations aren't a deal-breaker. Responsible AI frameworks give you quality control, not guesswork.
• You don't need 70% of AI tools on the market. With the right prompting, one model + the right workflow beats shiny software every time.
• Legal prompting is not the same as general prompting. Law has edge cases, nuance, and risk; your prompts must reflect that.

Two general points to reflect on:
• Lawyers don't need to become engineers. They need to become better communicators with machines.
• If you don't understand prompting, you'll always think AI is unreliable — when in reality, it's only as clear as the instructions you give it.

It's practical, hands-on and genuinely career-shifting. AI isn't replacing lawyers. Lawyers who understand AI are replacing the ones who don't.
This is AI x Multilateralism, a mini-series on The Next Page, where experts help us unpack the many ideas and issues at the nexus of AI and international cooperation. AI has the dual potential to transform our world for the better, while also deepening serious inequalities. In this episode we speak to Dr. Rachel Adams, Founder and CEO of the Global Center on AI Governance and author of The New Empire of AI: The Future of Global Inequality. She shares why Africa-led and Majority World-led research and policy are essential for equitable AI governance that's grounded in the realities of people everywhere.

She reflects on:
• why the work of the Center's flagship Global Index on Responsible AI and its African Observatory on Responsible AI are bringing much-needed research and evidence to ensure AI governance is fair and inclusive
• her thoughts on the UN General Assembly's 2025 resolutions to establish an International Scientific Panel on AI and a Global Dialogue on AI Governance, urging true inclusion of diverse voices, indigenous perspectives, and public input
• why we need to treat AI infrastructure as an AI Global Commons
• and the power of local-language AI and public literacy in ensuring we harness the most transformative aspects of AI for our world.

Resources mentioned:
The Global Center on AI Governance
The Center's Global Index on Responsible AI
The Center's African Observatory on Responsible AI, and its research series Africa and the Big Debates on AI

Production:
Guest: Dr. Rachel Adams
Host, production and editing: Natalie Alexander Julien
Recorded & produced at the Commons, United Nations Library & Archives Geneva

Podcast Music credits:
Sequence: https://uppbeat.io/track/img/sequence
Music from Uppbeat (free for Creators!): https://uppbeat.io/t/img/sequence
License code: 6ZFT9GJWASPTQZL0

#AI #Multilateralism #UN #Africa #AIGovernance
Josh is joined by education leader Jeffrey C. Riley, the Co-founder and Executive Director of Day of AI—the MIT-born nonprofit spearheading the Responsible AI for America's Youth campaign. Riley, the former Massachusetts Commissioner of Elementary and Secondary Education and former Superintendent/Receiver of the Lawrence Public Schools, shares his no-nonsense perspective. Through the work of the […]
In this inaugural episode of AI at Work, Greg Demers, an employment partner, is joined by Meg Bisk, head of the employment practice, and John Milani, an employment associate, to explore how employers across industries can harness artificial intelligence responsibly while mitigating legal and operational risk.

They discuss the most common pitfalls of employee AI use, including inadvertent disclosure of confidential information, model “hallucinations,” and source opacity, all of which undermine auditability and accuracy. Turning to HR functions, they examine emerging regulatory frameworks and compliance expectations, such as bias auditing requirements for automated employment decision tools and accessibility obligations, alongside practical steps for vetting vendors, embedding human oversight, and enforcing contractual protections. Listeners will come away with pragmatic strategies to update policies, document decisions, and foster a transparent culture of accountability that will position organizations to leverage AI in ways that are smarter, not riskier.

Stay tuned for future episodes where we will explore the use of AI in human resources, privacy implications and cybersecurity issues, and AI in executive compensation & employee benefits, among other topics.
Today on the podcast I am chatting to Josephine Hatch, who is an Innovation Director with over 20 years of experience in foresight, cultural strategy, and brand innovation. Now, you might not totally know what any of that means, but basically, we are talking about trend forecasting! One of the things that really struck me during our chat is that, as creatives and small business owners, many of us do this instinctively without having the formal language for it. This conversation gave me such a good framework for being more strategic about looking at culture and making plans for my business and honestly, Jo's perspective gave me such a boost regarding the value of human creativity.

Key Takeaways
Foresight vs. Fads: While "trends" are often associated with fast fashion or fleeting fads, foresight is about spotting signals and understanding the macro forces that impact human behaviour.
Human Truths Remain: Technology and context change, but fundamental human truths—like the need for connection or joy—stay the same. Successful brands understand how to tap into these enduring feelings.
The AI Counter-Movement: As generative AI adoption grows, there is a strong counter-trend towards the "human." People are increasingly valuing imperfections, analog hobbies, and genuine human curation.
Look Outside Your Bubble: Real innovation rarely comes from looking at your direct competitors. Instead, look to other industries, art, and culture for inspiration to disrupt your own category.

Episode Highlights
02:51 – Jo explains her background and how an Alexander McQueen runway show sparked her interest in how fashion mirrors society.
06:49 – We discuss why "trend" has become a dirty word and the difference between short-term fads and long-term foresight.
12:56 – Jo shares incredible free resources and tools that small businesses can use to spot cultural shifts.
20:23 – A fascinating look at AI, including why the "human touch" is becoming a premium and the rise of analog hobbies.
33:17 – Simple habits you can adopt to become more culturally curious, including how to document the things that inspire you.

About the Guest
Josephine Hatch is an Innovation Director at The Otherly, an innovation and brand agency that works with global brands and small businesses to help them defend their space and grow with intent. She has spent 20 years working at the intersection of trend forecasting, cultural strategy, and innovation.
Website: The Otherly
LinkedIn: Josephine Hatch

Mentioned in this episode
The Otherly https://theotherly.com/
Andres Colmenares, Responsible AI expert and IAM festival co-founder
Link to a Google Drive of trend reports https://bit.ly/2025trending via Global Cultural Strategist Amy Daroukakis. Note that a new set of trend reports will come out around December 2025
Free platform for trends, updated daily https://www.trendhunter.com/
Dezeen, The Dieline and Lovely Package (both good for packaging), Campaignlive
https://secondhome.io/culture/
SJ from The Akin's substack is a great read for what's happening in culture https://theakin.substack.com/
Emma Jane Palin's Our Curated Abode https://www.ourcuratedabode.com/ and Instagram https://www.instagram.com/ourcuratedabode/#

I would love to hear what you think of this episode, so please do let me know on Instagram where I'm @lizmmosley or @buildingyourbrandpodcast and I hope you enjoy the episode!

This episode was written and recorded by me and produced by Lucy Lucraft lucylucraft.co.uk

If you enjoyed this episode please leave a 5* rating and review!
Today's guest is Lauren Tulloch, Vice President and Managing Director at CCC (Copyright Clearance Center). CCC provides collective copyright licensing services for corporate and academic users of copyrighted materials, and, as one can imagine, the advent of AI has exposed a large number of businesses to copyright risks they've never considered before. Today, Lauren joins us to discuss where copyright exposure arises in financial services, from the growth of AI development to more commonplace employee use. With well over a decade at the company, Lauren dives into the urgent need for proactive copyright strategies in financial services, ensuring firms avoid litigation, regulatory scrutiny, and reputational damage, all while maximizing the value of AI. This episode is sponsored by CCC. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
The rapid evolution of generative AI has led to increased adoption but also raises significant compliance challenges. ZS Principal Michael Shaw joins the podcast to discuss the importance of responsible and ethical AI adoption in the biopharma industry, particularly as it relates to compliance, risk management and improving patient outcomes.

Highlights include:
• The importance of developing a comprehensive framework for responsible AI, focusing on principles like fairness, safety, transparency and accountability
• Why effective AI governance requires cross-functional collaboration and continuous trade-off assessment
• How leveraging AI to enhance workflows can drive efficiency and effectiveness but must be implemented thoughtfully with the right controls in place
Most people run from government bureaucracy. Pavan Parikh ran toward it—and decided to rewrite the system from the inside. He believes public service should move like a startup: fast, transparent, and built around people, not process. But when tradition, power, and red tape pushed back, he didn't fold—he went to the statehouse to fight for reform. So how do you disrupt a 200-year-old system that was never built for speed or equity?

In Episode 188 of the Disruption Now Podcast, Pavan breaks down how he's modernizing Hamilton County's court systems, digitizing paper-heavy workflows, and using AI and automation to reduce barriers to justice rather than create new ones. Whether you work in government, policy, law, or tech, you'll see how startup tools and mindsets can create real impact, not just buzzwords.

Pavan Parikh is the elected Hamilton County Clerk of Courts in Ohio, focused on increasing access to justice, improving customer service, and modernizing one of the county's most important institutions. In this episode, we talk about what happens when a startup mindset collides with decades-old court processes, why culture eats technology for breakfast, and how AI can help everyday people navigate civil cases, evictions, and protection orders more effectively.

You'll also hear Pavan's personal journey—from planning a career in medicine, to 9/11 shifting him toward law and public service, to ultimately leading one of the most prominent offices in Hamilton County. We get into fear of AI, job-loss anxiety within government, and how he's reframing AI as a teammate that frees staff for higher-value work rather than replacing them.

If you've ever looked at the justice system and thought "there has to be a better way," this deep dive into startup thinking for government will show you what that better way can look like—and what it takes to build it from the inside.

What you'll learn in this episode:
• How startup thinking for government can reduce friction and errors in court processes
• Why Pavan is obsessed with access to justice and end-user experience for everyday residents
• How Hamilton County is digitizing records, streamlining evictions, and modernizing civil protection order filing
• Where AI and automation can safely support court staff and help-center attorneys
• Why change management is the real challenge—not the technology
• How local government can be a faster "lab" for responsible AI than federal agencies
• What it really looks like to design systems around people, not paperwork

Chapters:
00:00 Why the government needs startup thinking
03:15 Pavan's path from medicine to law and 9/11's impact
10:45 Modernizing Hamilton County courts and killing paper workflows
22:10 AI, access to justice, and reimagining the Help Center
35:30 Careers, values, and becoming a disruptor in public service

Quick Q&A (for searchers):
Q: What does "startup thinking for government" mean in this episode?
A: Treating residents as end users, iterating on systems, and using tech and AI to automate low-value tasks so staff can focus on service and justice outcomes.
Q: How is Hamilton County using technology to improve access to justice?
A: By digitizing records, expanding the Help Center, improving online access to cases, limiting or removing outdated eviction records, and building easier online processes for civil protection orders.
Q: Will AI replace court jobs?
A: Pavan argues AI should handle repetitive questions and data lookups so humans can spend more time problem-solving, doing quality control, and helping people with complex issues.

Connect with
Pavan Parikh (verified/public handles):
Website: PavanParikh.com
X (Twitter): @KeepPavanClerk
Facebook: Pavan Parikh for Clerk of Courts / @KeepPavanClerk
Instagram: @KeepPavanClerk
Office channel: Hamilton County Clerk of Courts – @HamCoClerk on YouTube

Disruption Now resources:
Subscribe to YouTube for more conversations at the intersection of AI, policy, government, and impact.
Join the newsletter for weekly trends in AI and emerging tech for people who want to change systems, not just complain about them: bit.ly/newsletterDN

#StartupThinking #GovTech #AccessToJustice

Disruption Now: Disrupting the status quo, making emerging tech human-centric and accessible to all.
Website: https://disruptionnow.com/podcast
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com

Music credit:
Embrace - Evgeny Bardyuzha
Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

Related Resources
The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)

A transcript of this episode is here.
Security and privacy leaders are under pressure to sign off on AI, manage data risk, and answer regulators' questions while the rules are still taking shape and the data keeps moving. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Trevor Hughes, President & CEO of the IAPP, to unpack how decades of privacy practice can anchor AI governance, why the shift from consent to data stewardship changes the game, and what it really means to "know your AI" by knowing your data. Together, they break down how CISOs, privacy leaders, and risk teams can work from a shared playbook to assess AI risk, apply practical controls to data, and get ahead of emerging regulation without stalling progress.

In this episode, you'll learn:
• Why privacy teams already have methods that can be adapted to oversee AI systems
• Why boards and executives want simple, defensible stories about risk from AI use
• Why the strongest programs integrate privacy, security, and ethics into a single strategy

Things to listen for:
(00:00) Meet Trevor Hughes
(01:39) The IAPP's mission and global privacy community
(03:45) What AI governance means for security leaders
(05:56) Responsible AI and real-world risk tradeoffs
(08:47) Aligning privacy, security, and AI programs
(15:20) Early lessons from emerging AI regulations
(18:57) Know your AI by knowing your data
(22:13) Rethinking consent and data stewardship
(28:05) Vendor responsibility for AI and data risk
(31:26) Closing thoughts and how to find the IAPP
In the final Coffee Nº5 episode of the year, Lara Schmoisman breaks down the marketing ecosystem of 2026—an environment defined by AI clarity, human-led storytelling, micro-experts, privacy-first data practices, and integrated teams. This episode explains what it truly takes to operate, grow, and connect in a world where everything is interconnected.

We'll talk about:
• The 2026 ecosystem: why everything in marketing is now interconnected—and why working in silos will break your growth.
• Generative Engine Optimization (GEO): clarity as the new currency for AI.
• AI agents as shoppers: what it means when software researches, compares, and negotiates for consumers.
• Responsible AI: why governance, rules, and human oversight will define how brands use technology.
• Content in 2026: real storytelling, crafted value, SEO-backed captions, and the end of shallow posting.
• The rise of micro-experts: why niche credibility beats mass follower counts.
• Privacy & first-party data: what owning your customer information really means.

Subscribe to Lara's newsletter.

Also, follow our host Lara Schmoisman on social media:
Instagram: @laraschmoisman
Facebook: @LaraSchmoisman
In this insightful episode, host Stephen Ibaraki sits down with Christopher Dorrow, a Global AI Strategist, to explore his fascinating career journey through innovation, design thinking, and leadership in Artificial Intelligence.

Christopher shares pivotal moments from his childhood, his experiences in entrepreneurship and creativity, and recounts how challenges propelled his adaptability and sparked innovation throughout his career — from his early days at Accenture and SAP to transformative work with Finastra and Dubai Future Foundation. Discover how Christopher led groundbreaking projects like AI use-cases for government, contributed to the Dubai Future Foundation Global 50 Report, and now works on responsible AI frameworks for children and AI strategy in education with Capgemini.

From designing capability-building programs in Kenyan slums to pioneering digital transformation in global fintech, Christopher's story is a testament to creative leadership, ambition, and global impact. The conversation also dives into the future of AI, the importance of trust and ethics, and the social responsibility tech leaders must champion.

If you're passionate about tech innovation, AI strategy, global leadership, or social impact, this episode is packed with lessons, inspiration, and actionable insights.
Hello San Francisco - we've arrived for Microsoft Ignite 2025! The #CloudRealities podcast team has landed in San Francisco this week, bringing you the best updates right from the heart of the event. Join us as we connect AI at scale, cloud modernization, and secure innovation—empowering organizations to become AI-first. Plus, we'll keep you updated on all the latest news and juicy gossip.

Dave, Esmee, and Rob wrap up their Ignite 2025 series with Yina Arenas, CVP of Microsoft Foundry, to discuss why Foundry is the go-to choice for enterprises and how it champions responsible development and innovation.

TLDR
00:40 – Introduction to Yina Arenas
01:14 – How the team is doing, keynote highlights, and insights from the Expo floor
02:50 – Deep dive with Yina on the evolution of Cloud Foundry
29:24 – Favourite IT-themed movie, human interaction, and our society
31:56 – Personal (and slightly juicy) reflections on the week
37:30 – Team reflections on Ignite 2025, including an executive summary per guest and appreciation for Dennis Hansen
50:54 – The team's favorite IT-themed movies
59:30 – Personal favorite restaurant

Guest
Yina Arenas: https://www.linkedin.com/in/yinaa/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast.
Bob Pulver, host of the Elevate Your AIQ podcast and a 25-year enterprise tech and innovation veteran, joins us this week to unpack the urgent need to move past "AI" as a buzzword and define what "Responsible AI" truly means for organizations. He shares his insights on why we are all responsible for AI, how to balance playing "defense" (risk mitigation) and "offense" (innovation), and why we must never outsource our critical thinking and human agency to these new tools.

[0:00] Introduction
Welcome, Bob! Today's Topic: Defining Responsible AI and Responsible Innovation

[12:25] What Does “Responsible AI” Mean?
Why elements (like fairness in decision-making, data provenance, and privacy) must be built-in "by design," not bolted on later.
In an era where everyone is a "builder," we are all responsible for the tools we use and create.

[25:48] The Two Sides of Responsible Innovation
The "responsibility" side involves mitigating risk, ensuring fairness, and staying human-centric—it's like playing defense.
The "innovation" side involves driving growth, entering new markets, and reinvesting efficiency gains—it's like playing offense.

[41:58] Why don't we use AI to give us a 4-day work week?
The critical need for leaders to separate their personal biases from data-driven facts.
AI's role in recent layoffs.

[50:27] Closing
Thanks for listening!

Quick Quote
“We're all responsible for Responsible AI, whatever your role is. You're either using it or abusing it . . . or you're building it or you're testing it.”
Mental Toughness Mastery Podcast with Sheryl Kline, M.A. CHPC
http://www.sherylkline.com/blog

In the latest Fearless Female Leadership interview, I had the honor of talking with Sarah Lloyd Favaro, Senior Solutions Director, Office of Responsible AI and Governance at HCLTech, about one of the most urgent and misunderstood leadership topics today: how leaders can mitigate AI bias for women.

Sarah's career has always lived at the intersection of technology and learning. Long before generative AI swept into the mainstream, she was exploring how tech could enhance human capability (not replace it). But with the rapid rise of AI tools, Sarah doubled down on understanding how these systems work, why bias appears, and how leaders can prepare their organizations for a future where AI is woven into every workflow.

What makes Sarah's perspective so powerful is her blended expertise: she understands both the practical magic of AI and the very real risks. She believes strongly that if organizations benefit from AI's productivity and innovation, they must also ensure equitable, responsible, human-centered usage.

She emphasizes the critical role leaders play in upskilling their workforce… especially women, who are statistically underrepresented in AI fields. According to Sarah, equitable access to education and tools is non-negotiable if companies want to avoid widening gender and societal gaps.

Sarah also demystifies what many call the AI “black box.” She explains that becoming confident with AI doesn't require being an engineer. Instead, it requires learning how to communicate with AI systems, think critically about outputs, and understand where bias may creep in.

Her message is both empowering and practical: AI is here to stay. And with the right awareness, skills, and strategies, women and leaders can shape a future where AI is an equalizer (not a divider).
In this episode of the HR Leaders Podcast, we sit down with Michiel van Duin, Chief People Technology, Data and Insights Officer at Novartis, to discuss how the company is building a human-centered AI ecosystem that connects people, data, and technology.

Michiel explains how Novartis brings together HR, IT, and corporate strategy to align AI innovation with the company's long-term workforce and business goals. He shares how the team built an AI governance framework and a dedicated AI and innovation function inside HR, ensuring responsible use of AI while maintaining trust and transparency.

From defining when AI should step in and when a “human-in-the-loop” is essential, to upskilling employees and creating the first “Ask Novartis” AI assistant, Michiel shows how Novartis is making AI practical, ethical, and human.
Ireland's foremost digital marketing event, 3XE Digital, returns this November 26th with a bold new focus on the transformative power of Artificial Intelligence. 3XE AI will take place on Wednesday, November 26th at The Alex Hotel, Dublin, bringing together hundreds of marketers, social media professionals and business leaders to explore how AI is reshaping marketing strategy, creativity and performance. Delegates from top Irish brands including Chadwicks, Kepak, Chartered Accountants Ireland, Sage, The Travel Department, Finlay Motor Group, Hardware Association, and many more have already booked to attend this dynamic one-day conference designed to inspire, educate and empower.

The event will be co-chaired by Anthony Quigley, Co-Founder of the Digital Marketing Institute, and Sinéad Walsh of Content Plan. Attendees will hear from leading voices in AI and digital marketing, discovering how to harness new technologies to deliver smarter, more efficient, and measurable campaigns.

Key Highlights: Expert speakers from Google, OpenAI, Content Plan, Women in AI, AI Certified, The Corporate Governance Institute, and more will share their wealth of knowledge on how clever use of AI can significantly improve digital marketing and social media strategies and campaigns, continue to change how we do business, and massively increase sales.

Topics include:
• Winning with AI in Business with Christina Barbosa-Gress, Google
• AI-Powered Operations for Irish SMEs with Denis Jastrzebski, Content Plan
• Education for Unlocking AI's Potential with Ian Dodson, AiCertified
• Practical and Responsible AI with Boris Gersic, Corporate Governance Institute
• The Compliance Edge in the AI Era with Colin Cosgrove, Movizmo Coaching Solutions
• Unlocking AI's True Potential in Business with Naomh McElhatton, Irish Ambassador for Women in AI

Adrian Hopkins, Founder, 3XE Digital said, "Reviving the 3XE Digital conference series felt timely, and AI presented the perfect opportunity. Artificial Intelligence is reshaping the entire marketing landscape - enhancing performance, improving efficiency and offering unprecedented creative possibilities. We're excited to bring this crucial conversation to the forefront once again."

The 3XE AI Conference, organised in partnership with Content Plan, is proudly supported by Friday Agency, GS1 Ireland, and AI Certified. All details, including the full speaker lineup, conference agenda and online bookings, are available at https://3xe.ie. Early bookings remain open at 3xe.ie - including group discounts for teams.

See more stories here.

More about Irish Tech News

Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news

If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
A stuffed animal that answers back. A kind voice that “understands.” A tutor that lives in a fictional town. AI characters are everywhere, and they're changing how kids learn, play, and bond with media. We sat down with Dr. Sonia Tiwari, children's media researcher and former game character designer, to unpack how to welcome these tools into kids' lives without losing what matters most.

Sonia breaks down what truly makes an AI character: a personality, a backstory, and the new twist of two-way interactivity. From chatbots and smart speakers to social robots and virtual influencers, we trace how each format affects attention, trust, and learning. Then we get practical. We talk through how to spot manipulative backstories (“I'm your best friend” is a red flag), when open-ended chat goes wrong, and why short, purposeful sessions keep curiosity high and dependence low.

For caregivers wary of AI, Sonia offers a powerful reframe: opting out cedes the space to designs that won't put kids first. Early, honest AI literacy, taught like other life skills, protects children from deepfakes, overfamiliar bots, and data oversharing.

If you care about safe, joyful learning with technology, this conversation gives you a clear checklist and a calm path forward. Subscribe for more parent-friendly, screen-light AI guidance, share this with someone who needs it, and leave a review to help more families find the show.

Resources:
Flora AI – the visual AI tool Sonia mentioned as her favorite gadget
Dr. Sonia Tiwari's research article – “Designing ethical AI characters for children's early learning experiences” in AI, Brain and Child
Dr. Sonia Tiwari on LinkedIn
Buddy.ai – the AI character English tutor referenced in the episode
Snorble – the AI bedtime companion mentioned in the episode

Help us become the #1 podcast for AI for Kids. Support our Kickstarter: https://www.kickstarter.com/projects/aidigicards/the-abcs-of-ai-activity-deck-for-kids

Buy our debut book “AI… Meets… AI”

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Books on Amazon or Free AI Worksheets

Listen, rate, and subscribe! Apple Podcasts | Amazon Music | Spotify | YouTube | Other

Like our content? patreon.com/AiDig...
From founding Africa's largest AI community to leading AI Expo Africa and the South African AI Association, Nick is connecting innovators, investors, and governments to shape the continent's AI-powered future. Discover how Africa is fast becoming the next frontier for global AI innovation and responsible tech leadership.

00:09 - About Dr Nick Bradshaw
Nick is Founder of AI Expo Africa and also Chair & Founder of the SA AI Association (SAAIA), focusing on the deployment of Responsible AI in South Africa.
Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

BelemLeaders – Your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!
In this special episode tied to Accountantsdag 2025, with its theme Reality Check, Vitamine A once again dives into the impact of artificial intelligence on the accounting profession. Three guests, three perspectives, and one big question: what does AI mean for the profession, the organization, and the person behind the accountant?

Mona de Boer (PwC, Responsible AI) explains how AI has become an everyday reality and why organizations must now decide which values they will uphold. She discusses the significance of the EU AI Act and the rise of AI assurance as a new domain within trust in technology, and stresses that the accountant is not losing ground but gaining in importance.

Nart Wielaard takes the audience through the concept of the Zero Person Company, an experimental organization that runs on agents instead of people. The experiment shows that AI cannot copy a human being, but that processes can be designed in a fundamentally different way. The accountant plays a role there as coach, supervisor, and quality guardian of AI-driven processes.

With Marjan Heemskerk, the focus shifts to the day-to-day practice of entrepreneurs. She sees AI taking over basic questions, but above all creating room for an accountant who interprets, thinks along, and provides context. Soft skills become crucial. The challenge for firms is to deploy AI responsibly, bring employees along in that change, and at the same time avoid the temptation of shortcuts.

The episode ends with a reality check that is as much human as it is technological. AI changes a great deal, but the foundation of the accounting profession remains intact: trust, independence, and the ability to interpret reality.

Vitamine A has covered AI before. Esther Kox, Hakan Koçak, and Nart Wielaard will also speak at Accountantsdag 2025, on November 19, 2025.

Accountantsdag 2025: http://www.accountantsdag.nl
Vitamine A #63 | AI als assistent, niet als autoriteit... In gesprek met Esther Kox
Vitamine A #62 | AI op kantoor: Twijfelen of toepassen? Met Hakan Koçak
Vitamine A #43 | Betrouwbare AI en verantwoording. Hoe doe je dat? Met Mona de Boer (PwC)
Vitamine A #34 | Wat betekent AI voor accountants die op zoek zijn naar waarheid?
Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.

Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

Related Resources
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

A transcript of this episode is here.
Dr. Jeremy Roschelle and Dr. Pati Ruiz from Digital Promise join the podcast to discuss their learning sciences research into AI's role in education. They share details about an innovative project using AI to improve student reading literacy and explore frameworks for developing AI literacy and responsible use policies in schools.

Practitioner Toolkit, from Digital Promise, provides resources for collaborative learning that are flexible, adaptable, and rooted in real teaching experience
Challenge Map, from Digital Promise
U-GAIN Reading, program from Digital Promise seeking to amplify new knowledge about how to use GenAI to create content that matches each student's interests and strengths, enables dialogue about the meaning of content, and adapts to a student's progress and needs
AI Literacy, framework from Digital Promise to understand, evaluate, and use emerging technology
SceneCraft, program from EngageAI Institute with AI-powered, narrative-driven learning experiences, engaging students through storytelling, creativity, and critical thinking
As they face conflicting messages about AI, some advice for educators on how to use it responsibly, opinion blog from Jeremy Roschelle
Teacher Ready Evaluation Tool, helps standardize the way ed tech decision-makers evaluate edtech products
Evaluating Tech Solutions, ATLIS is an official partner with ISTE to expand the presence of independent school vendors and technology solutions in the Edtech Index

If you are interested in engaging in research with Digital Promise, or just have a great research idea, share a message on LinkedIn: Jeremy | Pati

More Digital Promise articles:
GenAI in Education: When to Use It, When to Skip It, and How to Decide – Digital Promise
Hearing from Students: How Learners Experience AI in Education – Digital Promise
Meet the Educators Helping U-GAIN Reading Explore How GenAI Can Improve Literacy – Digital Promise
Guest Post: 3 Guiding Principles for Responsible AI in EdTech – Digital Promise
In this final episode of Season 3 of Tech It to the Limit, hosts Sarah Harper and Elliott Wilson go global and get grounded with a very special guest. After sharing travel tales from Germany and the HLTH conference, Sarah and Elliott debut their new game, "Trust-O-Meter," rating real-world health tech scandals and solutions on a scale from "hospital stairwell cell signal" to "grandma's green bean casserole."
Then, they sit down with Dr. David Rhew, Global Chief Medical Officer at Microsoft, for a wide-ranging, surprisingly personal conversation on everything from his pivot from academia to industry (a VA grant pushed him out) to the future of oculomics, voice biomarkers, and responsible AI. Dr. Rhew breaks down the three layers of bias, explains why implementation is everything, and doesn't shy away from the hard truth about AI and the future of the healthcare workforce. It's a deep, funny, and profoundly human conversation to close out the season.
The episode wraps with Wise Nugs and a final health tech haiku, leaving listeners hopeful and ready for Season 4.
Key Takeaways
Trust needs humans in the loop: AI earns credibility when it supports clinical workflows, not replaces them.
Bias hides in plain sight: Data, model design, and deployment all carry bias. Responsible AI means addressing all three.
Implementation eats innovation for breakfast: Technology does not change healthcare; adoption and usability do.
Your eyes and voice are the new vital signs: Oculomics and voice biomarkers are turning everyday signals into early detection tools.
Equity must be built in, not bolted on: "Neutral AI" does not exist. Fairness and transparency have to be engineered from the start.
Automation is not the enemy; stagnation is: AI will replace tasks, not purpose. The key is reskilling and redefining human work.
In this episode:
[00:00:13] Welcome to the season 3 finale
[00:01:19] Host travel log
[00:05:24] Game debut: Trust-o-meter
[00:22:01] Interview: Dr. David Rhew
[00:23:34] Dad jokes and Korean BBQ regrets
[00:25:27] From white coat to cloud
[00:30:52] Bridging the hype-reality gap
[00:34:50] Oculomics: The 2-minute eye scan
[00:38:02] The DMA of bias
[00:45:27] The TRAIN consortium
[00:48:45] Cloud consolidation and data stewardship
[00:58:29] Call to action: Operationalizing trust
[01:05:32] Spicy nugs: Key takeaways
[01:14:09] Health tech haiku and sign-off
Resources:
Tech It To The Limit Podcast
Website | Apple Podcast
Dr. David Rhew
LinkedIn - https://www.linkedin.com/in/david-rhew-m-d-1832764/
Sarah Harper
LinkedIn - https://www.linkedin.com/in/sarahbethharper
Elliott Wilson
LinkedIn - https://www.linkedin.com/in/matthewelliottwilson
In this episode of the Shift AI Podcast, Will Jung, Chief Technology Officer at nCino, joins host Boaz Ashkenazy to explore how artificial intelligence is revolutionizing the traditionally conservative banking and financial services sector. Jung brings a distinctive perspective from his extensive experience helping financial institutions transition from viewing technology as a cost center to embracing it as a strategic innovation driver, particularly in the highly regulated world of banking.
From fraud prevention using AI agents that actively lure scammers to context engineering that personalizes banking experiences, Jung offers compelling insights into how banks are deploying cutting-edge technology while maintaining trust and regulatory compliance. The conversation examines the delicate balance between rapid technological advancement and responsible innovation, the future of personalized banking relationships, and why staying human remains the most critical factor in an increasingly automated world. If you're interested in understanding how one of the most regulated industries is navigating the AI revolution while serving underbanked populations and protecting customer data, this episode delivers essential perspectives from a technology leader at the forefront of financial innovation.
Chapters:
[02:00] Will's Background in Banking Technology
[03:00] nCino's Mission in the FinTech Space
[04:30] Banks Embracing Technology as Innovation Driver
[06:50] Fighting Fraud with Advanced AI Technology
[09:00] Building Technical and Non-Technical Team Culture
[11:50] Context Engineering in Banking
[14:25] Privacy and Personalization Trade-offs
[19:20] The Future Customer Experience in Banking
[21:30] Societal Implications of AI Technology
[25:50] Cryptocurrency and Banking Technology
[27:50] Two Words for the Future: Stay Human
[30:00] The Human Element in Automated Banking Decisions
Connect with Will Jung
LinkedIn: https://www.linkedin.com/in/will-jung/?originalSubdomain=au
Connect with Boaz Ashkenazy
LinkedIn: https://linkedin.com/in/boazashkenazy
Email: info@shiftai.fm
Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements.
Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation
Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure.
Guests - Andrea LaFountain, Director, AI Legal; Ken Miller, Senior Director, Product Legal; Navdeep Gill, Staff Senior Product Manager, Responsible AI
Host - Bobby Brill
Chapters:
00:00 Introduction to AI and Regulations
00:45 Meet the Experts
01:52 Overview of Key AI Regulations
03:03 Compliance Strategies for AI Regulations
07:33 ServiceNow's AI Control Tower
14:02 Challenges and Risks in AI Governance
16:04 Future of AI Regulations
18:34 Conclusion and Final Thoughts
See omnystudio.com/listener for privacy information.
How to Safely and Strategically Adopt AI in Your Organization: Expert Insights from Lexi Reese, CEO of Lanai
Artificial intelligence is reshaping the modern workplace faster than any technology before it. But as companies rush to integrate AI, many leaders struggle with how to adopt it responsibly—balancing innovation, security, and ethics. In this episode of The Thoughtful Entrepreneur, host Josh Elledge interviews Lexi Reese, Co-Founder and CEO of Lanai, an AI-native observability and security platform. Lexi shares practical insights on how organizations can safely manage, monitor, and scale AI adoption without compromising data integrity or trust.
Leading AI Adoption Responsibly
Lexi explains that the most successful companies treat AI not just as a set of tools, but as part of their workforce—a powerful digital team member that requires oversight, structure, and accountability. She emphasizes that AI must be "hired" into an organization with defined roles, clear expectations, and measurable outcomes. Just as leaders track employee performance, they must also monitor how AI performs, adapts, and impacts real-world results.
Visibility, Lexi notes, is essential for responsible AI use. Many organizations don't know which departments are using AI, how data is being handled, or where security risks exist. Lanai's technology helps leaders map and monitor AI usage across their companies—identifying risks, preventing data leaks, and ensuring compliance with privacy laws. This proactive approach transforms uncertainty into insight, allowing innovation to flourish safely.
Beyond technology, Lexi encourages leaders to consider the human element of AI integration. By prioritizing education, ethical standards, and collaboration between business and compliance teams, organizations can create a culture of trust and accountability. Responsible AI adoption isn't about slowing progress—it's about making innovation sustainable, secure, and beneficial for everyone.
About Lexi Reese
Lexi Reese is the Co-Founder and CEO of Lanai, an AI-native observability and security platform helping organizations safely adopt and manage AI. With a background that spans leadership roles at Google, Gusto, and public service, Lexi is known for her expertise in building ethical technology systems that empower teams and protect businesses.
About Lanai
Lanai is an AI observability and security platform designed to help organizations monitor, govern, and scale AI adoption responsibly. Built for visibility and control, Lanai enables companies to detect risks, enforce compliance, and ensure ethical AI use across all departments. Learn more at lanai.com.
Links Mentioned in This Episode
Lexi Reese on LinkedIn
Lanai Website
Key Episode Highlights
Why organizations must treat AI like a workforce, not just a tool.
The importance of visibility and observability in AI adoption.
Common AI risks—from data exposure to compliance violations—and how to prevent them.
How Lanai helps companies balance innovation with ethical and secure AI use.
Actionable steps for leaders to define, measure, and improve AI's role in their operations.
Conclusion
Lexi Reese's insights remind us that AI's potential is only as powerful as the systems and ethics guiding it. By combining strategic visibility, thoughtful oversight, and a culture of accountability, leaders can ensure AI strengthens—rather than compromises—their...
In this episode of Talking Sleep, host Dr. Seema Khosla welcomes members of the AASM Artificial Intelligence in Sleep Medicine Committee—Dr. Margarita Oks, Dr. Subaila Zia, Dr. Ramesh Sachdeva, and Matt Anastasi—to discuss their recently published position statement on the responsible use of AI in sleep medicine practices. Artificial intelligence is rapidly transforming healthcare, from AI-assisted sleep study scoring to clinical documentation tools and insurance claim processing. Yet AI is not a monolith—the technology encompasses various types with different capabilities, risks, and regulatory considerations. Matt Anastasi breaks down the different forms of AI clinicians encounter in practice, while the panel explains what "responsible use" actually means in practical terms. The updated position statement, notably shorter and more accessible than previous versions, addresses four major pillars: data privacy, fairness and transparency, infrastructure requirements, and medical-legal considerations. The discussion explores critical questions facing sleep medicine practitioners: How do we understand and trust the AI systems we use? What happens when insurance payors deploy AI to deny claims—should we fight AI-generated denials with AI-generated appeals? Do patients need to be informed when AI is used in their care, and how specific must those disclosures be? The conversation delves into liability concerns that keep practitioners awake at night: If your employer implements AI and it makes an error, who bears responsibility? What about ignoring AI prompts—does that create liability? Dr. Sachdeva explains the concept of vicarious responsibility and how it applies to AI implementation. The panel also addresses less obvious impacts, such as AI-driven resume filtering that may affect hiring practices. Practical implementation guidance is provided through discussion of governance checklists, equity considerations in AI deployment, and the limitations of FDA clearance for AI-assisted sleep study scoring. The experts introduce AASM Link and discuss how practitioners can evaluate AI tools beyond marketing claims, ensuring systems are trained on diverse, representative data sets. The episode tackles a fundamental question: Is AI use inevitable in sleep medicine, or can practitioners opt out? The panel offers realistic perspectives on integrating AI responsibly while maintaining clinical judgment and patient-centered care. Whether you're already using AI tools, considering implementation, or resistant to adoption, this episode provides essential guidance on navigating the AI transformation in sleep medicine while upholding professional and ethical standards. Join us for this timely discussion about balancing innovation with responsibility in the AI era of sleep medicine.
We're in Los Angeles at Adobe MAX 2025 to break down the announcements that will change how creators work, including Adobe's game-changing partnership with YouTube. We're joined by a legendary lineup of guests to discuss the future of creativity. Mark Rober reveals his $55 million secret project for the first time ever, Cleo Abram (Huge If True) shares her POV on editorial freedom and advancements in tech, and Adobe's GM of Creators, Mike Polner, explains the new AI tools that will save you hours of work.
What you'll learn:
-- Mark Rober's strategy for building a 100-person company.
-- The AI audio tool that creates studio-quality sound anywhere.
-- How to edit YouTube Shorts inside the new Premiere Mobile app.
-- Why creative freedom is more important than ever for creators.
If you want to stay ahead in the creator economy, subscribe and hit the bell so you don't miss our next episode!
00:00 Live From Adobe MAX!
01:01 Adobe's ChatGPT Integration
01:45 The New Adobe x YouTube Partnership
04:09 YouTube's New TV Experience
07:48 Welcome Mark Rober!
08:40 Is AI Cheating for Creators?
12:25 Building the Mark Rober Business
16:51 Mark Rober's $55M Secret Project
23:53 Welcome Cleo Abram!
26:12 Why I Left Vox
31:20 AI Tools Lower The Barrier
37:24 Welcome Adobe's Mike Polner!
39:31 Adobe's Top 3 New Tools
44:27 What is "Responsible AI"?
52:06 Upload: Steven Bartlett's Big Raise
Creator Upload is your creator economy podcast, hosted by Lauren Schnipper and Joshua Cohen.
Follow Lauren: https://www.linkedin.com/in/schnipper/
Follow Josh: https://www.linkedin.com/in/joshuajcohen/
Original music by London Bridge: https://www.instagram.com/londonbridgemusic/
Edited and produced by Adam Conner: https://www.linkedin.com/in/adamonbrand
Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, Director of the Center for Responsible AI, and member of the Visualization and Data Analytics Research Center at New York University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and a Senior Member of the Association for Computing Machinery (ACM). Julia's goal is to make "Responsible AI" synonymous with "AI". She works towards this goal by engaging in academic research, education and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia's research interests include AI ethics and legal compliance, and data management and AI systems. Julia is engaged in technology policy and regulation in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.
Links:
https://engineering.nyu.edu/faculty/julia-stoyanovich
https://airesponsibly.net/nyaiexchange_2025/
Hosted on Acast. See acast.com/privacy for more information.
Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible. Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with engineering teams. He also explores the idea of "epistemic overreach," where RAI groups have been tasked with an impossibly broad mandate they lack the product-market fit to fulfill. Drawing on his experience in the highly regulated employment sector at Indeed, he details the rigorous, science-based approach his team took to defining and measuring bias, emphasizing the need to move beyond simple heuristics and partner with legal and product teams before analysis even begins. Trey Causey is a data scientist who most recently served as the Head of Responsible AI for Indeed. His background is in computational sociology, where he used natural language processing to answer social questions.
Transcript
Responsible AI Is Dying. Long Live Responsible AI
The Causal Gap: Truly Responsible AI Needs to Understand the Consequences
Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality?
In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.
In this episode, we discuss:
- Zhijing's new work on the "causal scientist"
- What's missing in responsible AI
- Why ethics matter for agentic systems
- Is causality a necessary element of moral reasoning?
------------------------------------------------------------------------------------------------------
Video version available on YouTube: https://youtu.be/Frb6eTW2ywk
Recorded on Aug 18, 2025 in Tübingen, Germany.
------------------------------------------------------------------------------------------------------
About The Guest
Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS best paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto.
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.
Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.
Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.
Related Resources
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication
A transcript of this episode is here.
HOT OFF THE PRESSES: In this special episode of In AI We Trust?, EqualAI President and CEO Miriam Vogel is joined by her two co-authors of Governing the Machine: How to navigate the risks of AI and unlock its true potential, Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest Group, and Ray Eitel-Porter, Accenture Luminary and Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge, to launch their new book released TODAY (October 28, 2025). Miriam, Paul, and Ray share their motivation for writing the book, some of the big takeaways on AI governance, why it is for companies and consumers alike, and what they hope readers will learn from their book. We hope that you enjoy this episode, and please be sure to purchase a copy of Governing the Machine at the link above! And share your feedback at contact@equalai.org!
Co-hosts Mark Thompson and Steve Little explore how Google's Nano Banana photo restoration tool will revolutionize image restoration by integrating with Adobe Photoshop. This move will greatly reduce unintended changes to historical photos when editing them with AI.
Next, they unpack OpenAI's move to make ChatGPT Projects available to free-tier users, making research organization more accessible for genealogists.
This week's Tip of the Week provides essential guidance on the responsible use of AI when editing historical photos with tools like Nano Banana, ensuring transparency and trust in historical photographs.
In RapidFire, they cover OpenAI's new Sora 2 AI-video social media platform, Claude's new ability to create and edit Microsoft Office files, memory features in Claude Projects, advancements in local language models, and how OpenAI's massive infrastructure deals are changing the AI landscape.
Timestamps:
In the News:
02:43 Adobe improves historical photo restoration by adding Nano Banana to Photoshop
09:34 ChatGPT Projects are Now Free
Tip of the Week:
13:36 Citations for AI-Restored Images Build Trust in AI-Modified Photos
RapidFire:
21:24 Sora 2 Goes Social
27:23 Claude Adds Microsoft Office Creation and Editing
34:26 Memory Features Come to Claude Projects
38:32 Apple and Amazon both create Local Language Model tools
44:47 OpenAI's Big Data Centre Deal with Oracle
Resource Links
OpenAI announces free access to ChatGPT Projects
https://help.openai.com/en/articles/6825453-chatgpt-release-notes
Engadget: OpenAI Rolls Out ChatGPT Projects to Free Users
https://www.engadget.com/ai/openai-rolls-out-chatgpt-projects-to-free-users-215027802.html
Forbes: OpenAI Makes ChatGPT Projects Free
https://www.forbes.com/sites/quickerbettertech/2025/09/14/small-business-technology-roundup-microsoft-copilot-does-not-improve-productivity-and-openai-makes-chatgpt-project-free/
Responsible AI Photo Restoration
https://makingfamilyhistory.com/responsible-ai-photo-restoration/
Claude now has memory, but only for certain users
https://mashable.com/article/anthropic-claude-ai-now-has-memory-for-some-users
New Apple Intelligence features are available today
https://www.apple.com/newsroom/2025/09/new-apple-intelligence-features-are-available-today/
Introducing Amazon Lens Live
https://www.aboutamazon.com/news/retail/search-image-amazon-lens-live-shopping-rufus
Amazon Lens Live Can Scan and Pull Up Matches
https://www.pcmag.com/news/spot-an-item-you-wish-to-buy-amazon-lens-live-can-scan-and-pull-up-matches
A Joint Statement from OpenAI and Microsoft About Their Changing Partnership
https://openai.com/index/joint-statement-from-openai-and-microsoft/
The Verge: OpenAI and Oracle Pen $300 Billion Compute Deal
https://www.theverge.com/ai-artificial-intelligence/776170/oracle-openai-300-billion-contract-project-stargate
Reuters: OpenAI and Oracle Sign $300 Billion Computing Deal
https://www.reuters.com/technology/openai-oracle-sign-300-billion-computing-deal-wsj-reports-2025-09-10/?utm_source=chatgpt.com
Tags
Artificial Intelligence, Genealogy, Family History, Photo Restoration, AI Tools, OpenAI, Google, Adobe Photoshop, ChatGPT Projects, Nano Banana, Image Editing, AI Citations, Sora 2, Video Generation, Claude, Microsoft Office, Apple Intelligence, Amazon Lens, Oracle, Cloud Computing, Local Language Models, AI Infrastructure, Responsible AI, Historical Photos
In this episode, host Bidemi Ologunde spoke with Shannon Noonan, CEO/Founder of HiNoon Consulting, and US Global Ambassador - Global Council for Responsible AI. The conversation addressed how to turn "checkbox" programs into real business value, right-sized controls, third-party risk, AI guardrails, and data habits that help teams move faster—while strengthening security, compliance, and privacy.
Noelle Russell compares AI to a baby tiger: it's super cute when it's small, but it can quickly grow into something huge and dangerous. As the CEO and founder of the AI Leadership Institute and as an early developer on Amazon Alexa, Noelle has a deep understanding of scaling and selling AI. This week Noelle joins Tammy to discuss why she's so passionate about teaching individuals and organizations about AI and how companies can leverage AI in the right way. It's time to learn how to tame the tiger! Please note that the views expressed may not necessarily be those of NTT DATA.
Links:
Noelle Russell
Scaling Responsible AI
AI Leadership Institute
Learn more about Launch by NTT DATA
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The best time to regulate AI was yesterday, and the next best time is now. There is a clear and urgent need for responsible AI development that implements reasonable guidelines to mitigate harms and foster innovation, yet the conversation in DC and capitals around the world remains muddled. NYU's Dr. Julia Stoyanovich joins David Rothkopf to explore the role of collective action in AI development and why responsible AI is the responsibility of each of us. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices