Podcasts about responsible AI

  • 667 podcasts
  • 1,180 episodes
  • 36m average duration
  • 1 new episode daily
  • Latest: Jan 9, 2026

POPULARITY (chart: 2019–2026)


Latest podcast episodes about responsible AI

Breakfast Leadership
Deep Dive: When AI Becomes More Than a Tool — How AI Predictions from PwC Signal a New Era for Work, Culture, and Leadership

Jan 9, 2026 · 14:57


Introduction
In this Deep Dive episode, we unpack PwC's latest AI Business Predictions — a roadmap offering insight into how companies can harness artificial intelligence not just for efficiency, but as a strategic lever to reshape operations, workforce, and long-term growth. We explore why “AI adoption” is now about more than technology: it's about vision, leadership, and rethinking what work and human potential look like in a rapidly shifting landscape.

Key Insights from PwC
• AI success is as much about vision as about adoption. According to PwC, what separates companies that succeed with AI from those that merely dabble is leadership clarity and strategic alignment. Firms that view AI as central to their business model — rather than as an add-on — are more likely to reap measurable gains.
• AI agents can meaningfully expand capacity, even double workforce impact. One bold prediction: with AI agents and automation, a smaller human team can produce work at a scale that might resemble having a much larger workforce — without proportionally increasing staff size. For private firms especially, this means you can “leapfrog” traditional growth limitations.
• From pilots to scale: real ROI is emerging, but it requires discipline. While many organizations experimented with AI in 2023–2024, PwC argues that 2025 and 2026 are about turning experiments into engines of growth. The companies that succeed are those that pick strategic high-impact areas, double down, and avoid spreading efforts too thin.
• Workforce composition will shift, with the rise of the “AI-generalist.” As AI agents take over more routine, data-heavy, or repetitive tasks, human roles will trend toward design, oversight, strategy, and creative judgment. The “AI-generalist” — someone who can bridge human judgment, organizational culture, and AI tools — will become increasingly valuable.
• Responsible AI, governance, and sustainability are non-negotiables. PwC insists that success with AI isn't just about technology rollout; it's also about embedding ethical governance, sustainability, and data integrity. Organizations that treat AI as a core piece of long-term strategy — not a flashy add-on — will be the ones that unlock lasting value.

What This Means for Leaders, Culture & Burnout (Especially for Humans, Not Just AI)
• Opportunity to reimagine roles: more meaning, less drudgery. As AI takes over repetitive, transactional work, human roles can shift toward creativity, strategy, mentorship, emotional intelligence, and leadership. That aligns with our mission around workplace culture and “Burnout-Proof” leadership: this could reduce burnout if implemented thoughtfully.
• Culture becomes the strategic differentiator. As more companies adopt similar AI tools, organizational vision, values, psychological safety, and human connection may become the real competitive edge. Leaders who “get culture right” will be ahead — not because of tech, but because of people.
• Upskilling, transparency, and trust are essential. With AI in the mix, employees need clarity, training, and trust. Mismanaged adoption could lead to fear, resistance, or misalignment. Leaders must shepherd not just technology, but human transition.
• AI-driven efficiency must be balanced with empathy and human-centered leadership. The automation and “workforce multiplier” potential is seductive — but if leaders lose sight of human needs, purpose, and wellbeing, there's a risk of burnout, disengagement, or erosion of cultural integrity.
• For small and private companies: a chance to leapfrog giants, but only with clarity and discipline. Smaller firms often lack the resources of large enterprises, but according to PwC, those constraints may shrink when AI is used strategically. For mission-driven companies, this creates an opportunity to scale impact — provided leadership stays grounded in purpose and values.

Why This Topic Matters for the Breakfast Leadership Network & Our Audience
Given our work in leadership development, burnout prevention, workplace culture, and coaching, PwC's predictions offer a crucial lens. Ignoring AI is no longer an option for organizations. The question isn't “Will we use AI?” but “How will we use AI — and who do we become in the process?” For founders, people-leaders, and HR strategists, this is a call to be intentional: to lead with vision, grounded in human values, and to design workplaces that thrive in the AI era — not suffer in it.

Questions for Reflection
• What parts of your organization's workflow could be transformed by AI — and what human strengths should those tools free up rather than replace?
• How might embracing AI shift your organizational culture and the expectations for leaders?
• What ethical, psychological, or human-impact considerations must you address before “going all in” on AI?
• As a leader, how will you ensure the “AI-generalists” — employees blending tech fluency with empathy, creativity, and human judgment — are cultivated and supported?
• How do you prevent burnout and disconnection while dramatically increasing capacity and output via AI?

Learn more at https://BreakfastLeadership.com/blog
Research: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html

ITSPmagazine | Technology. Cybersecurity. Society
CES 2026: Why NVIDIA's Jensen Huang Won IEEE Medal of Honor | A Conversation with Mary Ellen Randall, IEEE's 2026 President and CEO | Redefining Society and Technology with Marco Ciappelli

Jan 8, 2026 · 24:46


Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917 — over a century of recognizing people who didn't just advance technology, but advanced humanity through technology. That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society — 500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly. The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance — painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI. But the Medal of Honor citation focuses on something broader — the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep pace. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online — work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members — the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now — virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow. Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns. Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Product Talk
Cognizant Senior Director on Building Responsible AI Gateways for Healthcare at Scale

Jan 7, 2026 · 38:22


How do you scale generative AI in healthcare without sacrificing trust, transparency, or governance? In this podcast hosted by Mphasis Vice President of Products Chenny Solaiyappan, Cognizant Senior Director Elliot Papadakis shares how Cognizant is building and operationalizing an AI gateway that sits at the center of its responsible AI strategy. Elliot discusses embedding generative AI into payer workflows, designing human-in-the-loop guardrails, and using AI orchestration to unlock productivity gains across complex healthcare systems while keeping accountability and patient impact front and center.

Twins Talk it Up Podcast
Episode 302: 2025 Insights & What to Expect in 2026

Jan 7, 2026 · 42:44


In this special episode, we reflect on insights from 2025 and what leaders should prepare for in 2026, from the realities behind last year's AI hype to the need to emphasize execution, data quality, culture, and leadership. We also touch on experiences gained from producing over 400 podcast episodes, and we look ahead to 2026 as the focus shifts to responsible and agentic AI moving from theory into practice, and to AI integration across business functions. We even lay out some of our commitments for the year ahead — tying AI to measurable business outcomes, investing in learning frameworks, and taking more calculated risks — while reinforcing a core belief: technology adoption without mindset and culture change will not stick.

Key Highlights
• AI in 2025 was all the rage, yet execution and data quality were the true differentiators.
• Action beats indecision, and sustainability beats speed.
• Growth comes from focus, not endless expansion.
• Culture and mindset must lead technology adoption.
• 2026 will emphasize responsible AI, agentic workflows, and outcome-driven benchmarks.
• Continued commitment to provide resources, including our AI simulation, courses, and books.
• Be intentional, stay informed, and be bold as you build for the future.

Become a sponsor, Patreon member, and pick up our latest book, Identically Opposite: Find Your Voice and SPEAK.

Timestamps:
Leverage AI 6:02
Identically Opposite SPEAK framework 9:12
Responsible AI 23:22

Artificial Intelligence and You
290 - Guest: Jeff Riley, Former Commissioner of Education, part 1

Jan 5, 2026 · 28:35


This and all episodes at: https://aiandyou.net/.

What's going on with getting AI education into America's classrooms? We're going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization, Day of AI, started by MIT's Responsible AI for Social Empowerment and Education (RAISE) initiative. They are mounting a campaign called Responsible AI for America's Youth, which is now running across all 50 states and will hold an event called America's Youth AI Festival in July 2026 in Boston.

Jeff holds master's degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises. We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more! All this plus our usual look at today's AI headlines.

Transcript and URLs referenced at HumanCusp Blog.

Outgrow's Marketer of the Month
EPISODE 244 - When Data Behaves, AI Performs: Philips' Gen AI/Responsible AI & UX Lead Rakesh Doddamane on enterprise readiness

Dec 29, 2025 · 20:08


Rakesh Doddamane is a seasoned technology leader with over 25 years of experience specializing in Generative AI, UX Design, and Digital Transformation. Currently serving as Leader of Gen AI & UX at Philips, he has established the Generative AI Centre of Excellence and spearheaded AI governance frameworks across global organizations.

On The Menu:
• Value-driven approach to scaling generative AI solutions
• Strategic AI investments across Philips' business functions
• Cloud infrastructure governance and cost optimization frameworks
• Gen AI Ninja Certification: three-tier upskilling program
• Customer insights leveraging AI for product innovation
• Future of autonomous agents and orchestration governance
• Navigating EU AI Act compliance in regulated industries

Defence Connect Podcast
Robotics and artificial intelligence in the military domain, with Lieutenant Colonel Dr Adam J Hepworth

Dec 25, 2025 · 35:30


In this episode of the Defence Connect Podcast, host Robert Dougherty is joined by the director of the Australian Army's Robotic and Autonomous Systems Implementation and Coordination Office, Lieutenant Colonel Dr Adam J Hepworth, as they discuss emerging artificial intelligence and robotics implications for the Australian Army.

LTCOL Hepworth leads the advancement of emerging technology, including robotics, autonomous systems, AI and autonomy, for the Australian Army. He holds a Bachelor of Science in mathematics from the University of NSW, a Master of Logistics and Supply Chain Management from the University of South Australia, a graduate diploma in scientific computation and a Master of Science in operations research from the United States Naval Postgraduate School, and a Doctor of Philosophy in computer science from the University of NSW. He is a visiting fellow at the University of NSW and an expert member of the Global Commission on Responsible AI in the Military Domain.

The pair discuss a range of topics, including:
• An outline of LTCOL Hepworth's responsibilities as director of the Australian Army's Robotic and Autonomous Systems Implementation and Coordination Office.
• His invitation to join the Expert Advisory Group for the Global Commission on Responsible AI in the Military Domain, on behalf of the Dutch Minister of Foreign Affairs.
• A general overview of responsible and irresponsible military AI, as well as the benefits of military AI use and the challenges arising from it that Australia needs to be aware of.
• Short and long-term recommendations for governance and regulation of artificial intelligence in the military domain.
• Work on responsible artificial intelligence in the military domain being completed in Australia.
• The importance of keeping a human in the loop for AI-based decision making, and the evolution of new military technology into the future.

Enjoy the podcast,
The Defence Connect team

Pondering AI
Perspectives and Predictions: Looking Back at 2025 and Forward to 2026

Dec 24, 2025 · 24:14


A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not so glad tidings (ok, predictions) for AI in 2026.

In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.

Host Note: I desperately wanted to use the word "prognostication" in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.

A transcript of this episode is here.

THE MIND FULL MEDIC PODCAST
The Art and Science of Bringing Joy and Humanity back to Medicine with Dr Graham Walker MD

Dec 24, 2025 · 70:21


In Season 6, Episode 6, I am delighted to welcome Graham Walker, MD to the podcast. Dr Walker is an emergency physician by clinical background who practices in San Francisco, having completed his medical school training at Stanford University School of Medicine. He also majored in social policy as an undergraduate and has had a lifelong interest in software development and digital technology. Today he combines these interests and skills in his clinical and non-clinical roles.

Graham is the co-director of Advanced Development at The Permanente Medical Group (TPMG), which delivers care for Kaiser Permanente's 4.6 million members in Northern California. At TPMG, Graham works on practical AI development, commercialization, partnerships, and technology transformation under the Chief Innovation Officer. As a clinical informaticist, he also leads emergency and urgent care strategy for KP's electronic medical record. With an appetite and aptitude for tech innovation and entrepreneurship, Dr Walker created and founded MDCalc as a resident: a free online clinical decision support tool many clinicians, including myself, have been using for years now; indeed, MDCalc turned 20 this month. More recently, Graham co-founded the platform and community Offcall, with a mission to bring joy back to medicine and facilitate contract transparency and financial literacy for US medical practitioners. These two online resources are dedicated to helping physicians globally. In his free time Graham writes about the intersection of AI, technology, and medicine, and created The Physicians' Charter for Responsible AI, a practical guide to implementing safe, accurate, and fair AI in healthcare settings.

In this conversation I have an opportunity to explore Dr Walker's perspective at the intersection of AI, technology and medicine, and to revisit some emerging themes from Season 6 in relation to leading change and innovation in complex, siloed healthcare systems. We discuss the challenges of communicating change efforts in this context, and Graham shares his practical pearls for elevating clinical voices and translating information across disparate stakeholder groups. With a unique leadership role in his own organisation in this space, I was keen to explore his perspectives on creating opportunities for intrapreneurship. Dr Walker is a globally respected clinical voice on AI in healthcare, and we discuss the perils and promise of AI, the Physicians' Charter for responsible use of AI in medicine, and his Offcall survey and pending report on physicians' views on the technology's application and implementation in medicine (now published and linked below). Finally, it is Graham's mission to bring joy and humanity back to medicine, and to use technology responsibly to augment the clinician experience and skillset, enabling safer and higher quality care, that is the major draw for me. Thank you, Dr Walker, for your work, which has global relevance and reach.

Links / References:
https://drgrahamwalker.com
https://www.offcall.com

The Mind Full Medic Podcast is proudly sponsored by the MBA NSW-ACT. Find out more about the charitable organisation supporting doctors and their families, and/or donate today, at www.mbansw.org.au

Disclaimer: The content in this podcast is not intended to constitute or be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of your doctor or other qualified health care professional. Moreover, views expressed here are our own and do not necessarily reflect those of our employers or other official organisations.

IBS Intelligence Podcasts
EP946: From Insight to Impact: How AI Is Reshaping Financial Services

Dec 24, 2025 · 8:06


Jeeja Gopinath, Managing Director, Zentis AI & Techvantage.ai

Artificial Intelligence is moving beyond experimentation to become a core driver of transformation across banking, financial services, and FinTech. But as institutions grapple with legacy systems, regulatory scrutiny, and the need for responsible innovation, translating AI vision into real-world impact remains a challenge.

In this episode, Vriti Gothi speaks with Jeeja Gopinath, Managing Director at Zentis AI and Techvantage.ai, about where AI is delivering tangible value today, how financial institutions can move from strategy to execution, and the role of governance in scaling AI responsibly. They explore real-world enterprise use cases, the balance between speed and compliance, and the emerging AI trends shaping the next wave of FinTech innovation.

Marketing Jam
Imposter Syndrome, Authentic Content & Using AI Responsibly in Marketing

Dec 23, 2025 · 26:29


Recorded live at Calgary's SocialWest 2025, this episode of the Marketing News Canada podcast features guest host Laila Hobbs, Co-Founder of Social Launch Labs, in conversation with Hiba Amin, Co-Founder of Creative Little Planet.

Hiba shares a candid look at navigating imposter syndrome throughout her marketing career, from being the sole marketer during a startup downturn to finding confidence through community, conversation, and lived experience. She also discusses the evolution of content creation, why authenticity resonates more than polished perfection, and how marketers can build meaningful connections with their audiences.

The conversation dives into the responsible use of AI in content marketing, including where it can support creative work, where it falls short, and why strong foundational ideas must come before scale. Packed with thoughtful insights and real-world perspective, this episode is a must-listen for marketers navigating growth, creativity, and confidence in a rapidly changing industry.

Data Culture Podcast
From Data to AI Culture – Winning the Head, the Heart and the Herd – with Stefanie Babka, Merck

Dec 22, 2025 · 36:20


"I always say, you can't learn how to swim if you don't jump into the water. And it's so important for people to be able to jump into the water and to really test it out."

The Tech Blog Writer Podcast
3527: How AWS Is Building Trust Into Responsible AI Adoption

Dec 21, 2025 · 27:01


What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy?

In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines.

Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead.

We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution. The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed.

There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward.

This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future?

Useful Links:
Connect With Diya Wynn
AWS Responsible AI

Tech Talks Daily is sponsored by Denodo

The Road to Accountable AI
Alexandru Voica: Responsible AI Video

Dec 18, 2025 · 38:23


Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework — consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence.

Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach — like criminalizing non-consensual deepfake pornography — as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers — the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter

HLTH Matters
AI @ HLTH: Responsible AI in the Clinic: Insights from Microsoft's Hadas Bitran

Dec 18, 2025 · 25:39


In this episode, host Sandy Vance sits down with Hadas Bitran, Partner General Manager of Health AI at Microsoft Health & Life Sciences, for a deep dive into the rapidly evolving world of healthcare agents. Together, they explore how agentic technologies are being used across clinical settings, where they're creating value, and why tailoring these tools to the specific needs of users and audiences is essential for safety and effectiveness. Well-designed healthcare agents can reinforce responsible AI practices (like transparency, accountability, and patient safety) while also helping organizations evaluate emerging solutions with greater clarity and confidence.

In this episode, they talk about:
• How agents are used in healthcare, and key use cases
• The risks if a healthcare agent is not tailored to the needs of users and audiences
• How healthcare agents support responsible AI practices, such as safety, transparency, and accountability, in clinical settings
• How healthcare organizations should evaluate healthcare agent solutions
• Bridging the gaps in access, equity, and health literacy; empowering underserved populations and democratizing expertise
• The impact of AI on medical professionals and healthcare staff, and how they should prepare for the change

A Little About Hadas:
Hadas Bitran is Partner General Manager, Health AI, at Microsoft Health & Life Sciences. Hadas and her multi-disciplinary R&D organization build AI technologies for health & life sciences, focusing on Generative AI-based services, Agentic AI, and healthcare-adapted safeguards. They shipped multiple products and cloud services for the healthcare industry, which were adopted by thousands of customers worldwide.

In addition to her work at Microsoft, Hadas previously served as a Board Member at SNOMED International, a not-for-profit organization that drives clinical terminology worldwide. Before Microsoft, Hadas held senior leadership positions managing R&D and Product groups in tech corporations and in start-up companies. Hadas has a B.Sc. in Computer Science from Tel Aviv University and an MBA from the Kellogg School of Management, Northwestern University in Chicago.

Data Breach Today Podcast
Preparing Healthcare Workers for Secure, Responsible AI Use

Dec 17, 2025


Info Risk Today Podcast
Preparing Healthcare Workers for Secure, Responsible AI Use

Dec 17, 2025


Data Transforming Business
Responsible AI Starts with Responsible Data: Building Trust at Scale

Dec 11, 2025 · 26:00


We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here's the truth: AI isn't magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground.

In this episode of the Don't Panic, It's Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

Why Responsible AI Begins with Data
Amy opens the conversation with a simple but powerful observation: "No longer is it okay to just have okay data." This sets the stage for understanding that AI's potential is only as strong as the data that feeds it. Responsible AI isn't just about implementing the latest algorithms; it's about embedding ethical and governance principles into every stage of AI development, starting with data quality. Kevin and Amy emphasise that organisations must look at data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

Defining Responsible AI and Data Governance
Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI. It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.

Prioritise Data for Responsible AI
The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:
• Invest in data quality and governance — it is the foundation of all AI initiatives.
• Embed ethical and legal principles in every stage of AI development.
• Enable collaboration across teams to ensure transparency, accountability, and usability.
• Start small, prove value, and scale — responsible AI is built step by step.

Amy Horowitz's insight resonates beyond the tech team: "Everyone's ready for AI — except their data." It's a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them. For more insights, visit Informatica.

Takeaways
• AI is only as good as its data inputs.
• Data quality has become the number one obstacle to AI success.
• Organisations must start small and find use cases for data governance.
• Hallucinations in AI models highlight the need for vigilant

Pondering AI
An Environmental Grounding with Masheika Allgood

Dec 10, 2025 · 57:17


Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; if the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; and staying positive and finding the thing you can do.

Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

Related Resources
Taps Run Dry Initiative (Website)
Data Center Advocacy Toolkit (Website)
Eat Your Frog (Substack)
AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

A transcript of this episode is here.

My EdTech Life
The Mission Behind Day of AI ft. Jeff Riley | My EdTech Life 347

Dec 10, 2025 · 43:47 · Transcription available


In this episode of My EdTech Life, Jeff Riley breaks down the mission behind Day of AI and the work of MIT RAISE to help schools, districts, families, and students understand artificial intelligence safely, ethically, and with purpose.

Jeff brings 32 years of experience as a teacher, counselor, principal, superintendent, and former Massachusetts Commissioner of Education. His transition to MIT RAISE reveals why AI literacy, student safety, and clear policy matter more than ever.

Timestamps
00:00 Welcome & Sponsor Shoutouts
01:45 Jeff Riley's Background in Education
04:00 Why MIT RAISE and Day of AI
06:00 The Challenge: AI Policy, Safety & Equity
08:30 How AI Can Transform Teaching & Learning
10:30 Differentiation, Accessibility & Student Support
12:30 Helping Teachers Feel Confident Using AI
15:00 Leading AI Adoption at the District Level
18:00 What AI Literacy Should Mean for Students
20:00 Teaching Healthy Skepticism & Bias Awareness
23:00 Student Voice in AI Policy
26:00 Parent Awareness & Common Sense Media Toolkit
29:00 Responsible AI for America's Youth
31:00 America's Youth AI Festival & Student Leadership
34:30 National Vision for AI in Education
37:00 Closing Thoughts + 3 Signature Questions
41:00 Stay Techie

Resources Mentioned
Day of AI Curriculum: https://dayofai.org
MIT RAISE: https://raise.mit.edu

Sponsors

Cloud Security Podcast
How to secure your AI Agents: A CISO's Journey

Dec 9, 2025 · 54:52


Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.

Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials: Yash's LinkedIn
Podcast Twitter: @CloudSecPod

If you want to watch videos of this live streamed episode and past episodes, check out our other Cloud Security social channels:
• Cloud Security Podcast - YouTube
• Cloud Security Newsletter

If you are interested in AI cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food

Legally Speaking Podcast - Powered by Kissoon Carr
Prompt. Learn. Transform: How AI Is Rewiring the Way Lawyers Work

Dec 9, 2025 · 29:53


AI has exploded across the legal industry, but for many lawyers it still feels overwhelming, risky, or simply "not for them." Today's guest has made it his mission to change that. Joining us is Robert Eder, an intellectual property lawyer, legaltech educator, and one of Europe's leading voices on AI prompting for lawyers. Robert designs legal automation solutions and teaches lawyers around the world how to use AI safely, effectively, and creatively. He has trained hundreds of lawyers across Europe and is one of the clearest voices on how to use AI responsibly, safely, and with real legal precision.

Here are a few standout takeaways:
• Lawyers aren't bad at prompting; they're undersold. Their analytical mindset actually gives them an advantage.
• Most people still treat AI like Google. Adding structure through XML tags, roles, and answer-levelling changes everything.
• The first AI skill every lawyer should learn isn't drafting; it's controlling output. Structure before substance.
• Hallucinations aren't a deal-breaker. Responsible AI frameworks give you quality control, not guesswork.
• You don't need 70% of AI tools on the market. With the right prompting, one model plus the right workflow beats shiny software every time.
• Legal prompting is not the same as general prompting. Law has edge cases, nuance, and risk; your prompts must reflect that.

Two general points to reflect on:
• Lawyers don't need to become engineers. They need to become better communicators with machines.
• If you don't understand prompting, you'll always think AI is unreliable — when in reality, it's only as clear as the instructions you give it.

It's practical, hands-on, and genuinely career-shifting. AI isn't replacing lawyers. Lawyers who understand AI are replacing the ones who don't.

The Next Page
AI x Multilateralism: AI Empire or Global Commons? Why Inclusive Governance Matters, with Dr. Rachel Adams

Dec 5, 2025 · 34:25 · Transcription available


This is AI x Multilateralism, a mini-series on The Next Page, where experts help us unpack the many ideas and issues at the nexus of AI and international cooperation.

AI has the dual potential to transform our world for the better, while also deepening serious inequalities. In this episode we speak to Dr. Rachel Adams, Founder and CEO of the Global Center on AI Governance and author of The New Empire of AI: The Future of Global Inequality. She shares why Africa-led and Majority World-led research and policy are essential for equitable AI governance that's grounded in the realities of people everywhere.

She reflects on:
• why the Center's flagship Global Index on Responsible AI and its African Observatory on Responsible AI are bringing much-needed research and evidence to ensure AI governance is fair and inclusive
• her thoughts on the UN General Assembly's 2025 resolutions to establish an International Scientific Panel on AI and a Global Dialogue on AI Governance, urging true inclusion of diverse voices, indigenous perspectives, and public input
• why we need to treat AI infrastructure as an AI Global Commons
• the power of local-language AI and public literacy in ensuring we harness the most transformative aspects of AI for our world

Resources mentioned:
• The Global Center on AI Governance
• The Center's Global Index on Responsible AI
• The Center's African Observatory on Responsible AI, and its research series Africa and the Big Debates on AI

Production:
Guest: Dr. Rachel Adams
Host, production and editing: Natalie Alexander Julien
Recorded & produced at the Commons, United Nations Library & Archives Geneva

Podcast music credits: Sequence: https://uppbeat.io/track/img/sequence; music from Uppbeat (free for Creators!): https://uppbeat.io/t/img/sequence; license code: 6ZFT9GJWASPTQZL0

#AI #Multilateralism #UN #Africa #AIGovernance

#BCSTech Podcast
The AI Revolution: Time to Surf the Wave, Not Hold Back the Tide.

Dec 4, 2025 · 53:16


Josh is joined by education leader Jeffrey C. Riley, the Co-founder and Executive Director of Day of AI—the MIT-born nonprofit spearheading the Responsible AI for America's Youth campaign. Riley, the former Massachusetts Commissioner of Elementary and Secondary Education and former Superintendent/Receiver of the Lawrence Public Schools, shares his no-nonsense perspective. Through the work of the […]

Ropes & Gray Podcasts
AI at Work – Smarter, Not Riskier: Employer Strategies for Responsible AI Use

Dec 3, 2025 · 15:40


In this inaugural episode of AI at Work, Greg Demers, an employment partner, is joined by Meg Bisk, head of the employment practice, and John Milani, an employment associate, to explore how employers across industries can harness artificial intelligence responsibly while mitigating legal and operational risk.

They discuss the most common pitfalls of employee AI use, including inadvertent disclosure of confidential information, model “hallucinations,” and source opacity, all of which undermine auditability and accuracy. Turning to HR functions, they examine emerging regulatory frameworks and compliance expectations, such as bias auditing requirements for automated employment decision tools and accessibility obligations, alongside practical steps for vetting vendors, embedding human oversight, and enforcing contractual protections. Listeners will come away with pragmatic strategies to update policies, document decisions, and foster a transparent culture of accountability that will position organizations to leverage AI use that is smarter, not riskier.

Stay tuned for future episodes, where we will explore the use of AI in human resources, privacy implications and cybersecurity issues, and AI in executive compensation and employee benefits, among other topics.

Building your Brand
Foresight vs. Fads: Building a Brand That Lasts with Josephine Hatch

Dec 3, 2025 · 36:47


Today on the podcast I am chatting to Josephine Hatch, who is an Innovation Director with over 20 years of experience in foresight, cultural strategy, and brand innovation. Now, you might not totally know what any of that means, but basically, we are talking about trend forecasting! One of the things that really struck me during our chat is that, as creatives and small business owners, many of us do this instinctively without having the formal language for it. This conversation gave me such a good framework for being more strategic about looking at culture and making plans for my business, and honestly, Jo's perspective gave me such a boost regarding the value of human creativity.

Key Takeaways
• Foresight vs. Fads: While "trends" are often associated with fast fashion or fleeting fads, foresight is about spotting signals and understanding the macro forces that impact human behaviour.
• Human Truths Remain: Technology and context change, but fundamental human truths—like the need for connection or joy—stay the same. Successful brands understand how to tap into these enduring feelings.
• The AI Counter-Movement: As generative AI adoption grows, there is a strong counter-trend towards the "human." People are increasingly valuing imperfections, analog hobbies, and genuine human curation.
• Look Outside Your Bubble: Real innovation rarely comes from looking at your direct competitors. Instead, look to other industries, art, and culture for inspiration to disrupt your own category.

Episode Highlights
02:51 – Jo explains her background and how an Alexander McQueen runway show sparked her interest in how fashion mirrors society.
06:49 – We discuss why "trend" has become a dirty word and the difference between short-term fads and long-term foresight.
12:56 – Jo shares incredible free resources and tools that small businesses can use to spot cultural shifts.
20:23 – A fascinating look at AI, including why the "human touch" is becoming a premium and the rise of analog hobbies.
33:17 – Simple habits you can adopt to become more culturally curious, including how to document the things that inspire you.

About the Guest
Josephine Hatch is an Innovation Director at The Otherly, an innovation and brand agency that works with global brands and small businesses to help them defend their space and grow with intent. She has spent 20 years working at the intersection of trend forecasting, cultural strategy, and innovation.
Website: The Otherly
LinkedIn: Josephine Hatch

Mentioned in this episode
• The Otherly: https://theotherly.com/
• Andres Colmenares, responsible AI expert and IAM festival co-founder
• Google Drive of trend reports: https://bit.ly/2025trending (via global cultural strategist Amy Daroukakis; a new set of trend reports will come out around December 2025)
• Free platform for trends, updated daily: https://www.trendhunter.com/
• Dezeen, The Dieline and Lovely Package (both good for packaging), Campaignlive
• https://secondhome.io/culture/
• SJ from The Akin's Substack is a great read for what's happening in culture: https://theakin.substack.com/
• Emma Jane Palin's Our Curated Abode: https://www.ourcuratedabode.com/ and Instagram: https://www.instagram.com/ourcuratedabode/

I would love to hear what you think of this episode, so please do let me know on Instagram where I'm @lizmmosley or @buildingyourbrandpodcast, and I hope you enjoy the episode!

This episode was written and recorded by me and produced by Lucy Lucraft (lucylucraft.co.uk). If you enjoyed this episode please leave a 5* rating and review!

Artificial Intelligence in Industry with Daniel Faggella
Copyright Risk in Financial Services and the Rise of Responsible AI – with Lauren Tulloch of CCC

Dec 1, 2025 · 14:08


Today's guest is Lauren Tulloch, Vice President and Managing Director at CCC (Copyright Clearance Center). CCC provides collective copyright licensing services for corporate and academic users of copyrighted materials, and, as one can imagine, the advent of AI has exposed a large number of businesses to copyright risks they've never considered before. Today, Lauren joins us to discuss where copyright exposure arises in financial services, from the growth of AI development to more commonplace employee use. With well over a decade at the company, Lauren dives into the urgent need for proactive copyright strategies in financial services, ensuring firms avoid litigation, regulatory scrutiny, and reputational damage, all while maximizing the value of AI. This episode is sponsored by CCC. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.

Transforming Biopharma
‘Move slow to go fast': Keeping pace with responsible AI with Michael Shaw

Dec 1, 2025 · 50:35


The rapid evolution of generative AI has led to increased adoption but also raises significant compliance challenges. ZS Principal Michael Shaw joins the podcast to discuss the importance of responsible and ethical AI adoption in the biopharma industry, particularly as it relates to compliance, risk management and improving patient outcomes. Highlights include:
• The importance of developing a comprehensive framework for responsible AI, focusing on principles like fairness, safety, transparency and accountability
• Why effective AI governance requires cross-functional collaboration and continuous trade-off assessment
• How leveraging AI to enhance workflows can drive efficiency and effectiveness but must be implemented thoughtfully with the right controls in place

Disruption Now
Disruption Now Episode 188 | Upgrading Government Tech: Startup Thinking for Public Service

Nov 27, 2025 · 46:12


Most people run from government bureaucracy. Pavan Parikh ran toward it—and decided to rewrite the system from the inside. He believes public service should move like a startup: fast, transparent, and built around people, not process. But when tradition, power, and red tape pushed back, he didn't fold—he went to the statehouse to fight for reform. So how do you disrupt a 200-year-old system that was never built for speed or equity?

In Episode 188 of the Disruption Now Podcast, Pavan breaks down how he's modernizing Hamilton County's court systems, digitizing paper-heavy workflows, and using AI and automation to reduce barriers to justice rather than create new ones. Whether you work in government, policy, law, or tech, you'll see how startup tools and mindsets can create real impact, not just buzzwords.

Pavan Parikh is the elected Hamilton County Clerk of Courts in Ohio, focused on increasing access to justice, improving customer service, and modernizing one of the county's most important institutions. In this episode, we talk about what happens when a startup mindset collides with decades-old court processes, why culture eats technology for breakfast, and how AI can help everyday people navigate civil cases, evictions, and protection orders more effectively.

You'll also hear Pavan's personal journey—from planning a career in medicine, to 9/11 shifting him toward law and public service, to ultimately leading one of the most prominent offices in Hamilton County. We get into fear of AI, job-loss anxiety within government, and how he's reframing AI as a teammate that frees staff for higher-value work rather than replacing them.

If you've ever looked at the justice system and thought "there has to be a better way," this deep dive into startup thinking for government will show you what that better way can look like—and what it takes to build it from the inside.

What you'll learn in this episode:
• How startup thinking for government can reduce friction and errors in court processes
• Why Pavan is obsessed with access to justice and the end-user experience for everyday residents
• How Hamilton County is digitizing records, streamlining evictions, and modernizing civil protection order filing
• Where AI and automation can safely support court staff and help-center attorneys
• Why change management is the real challenge—not the technology
• How local government can be a faster "lab" for responsible AI than federal agencies
• What it really looks like to design systems around people, not paperwork

Chapters:
00:00 Why the government needs startup thinking
03:15 Pavan's path from medicine to law and 9/11's impact
10:45 Modernizing Hamilton County courts and killing paper workflows
22:10 AI, access to justice, and reimagining the Help Center
35:30 Careers, values, and becoming a disruptor in public service

Quick Q&A (for searchers):
Q: What does "startup thinking for government" mean in this episode?
A: Treating residents as end users, iterating on systems, and using tech and AI to automate low-value tasks so staff can focus on service and justice outcomes.
Q: How is Hamilton County using technology to improve access to justice?
A: By digitizing records, expanding the Help Center, improving online access to cases, limiting or removing outdated eviction records, and building easier online processes for civil protection orders.
Q: Will AI replace court jobs?
A: Pavan argues AI should handle repetitive questions and data lookups so humans can spend more time problem-solving, doing quality control, and helping people with complex issues.

Connect with Pavan Parikh (verified/public handles):
Website: PavanParikh.com
X (Twitter): @KeepPavanClerk
Facebook: Pavan Parikh for Clerk of Courts / @KeepPavanClerk
Instagram: @KeepPavanClerk
Office channel: Hamilton County Clerk of Courts – @HamCoClerk on YouTube

Disruption Now resources:
Subscribe to YouTube for more conversations at the intersection of AI, policy, government, and impact.
Join the newsletter for weekly trends in AI and emerging tech for people who want to change systems, not just complain about them: bit.ly/newsletterDN

#StartupThinking #GovTech #AccessToJustice

Disruption Now: Disrupting the status quo, making emerging tech human-centric and accessible to all.
Website: https://disruptionnow.com/podcast
Apply to get on the podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com

Music credit: Embrace - Evgeny Bardyuzha

Pondering AI
Your Digital Twin Is Not You with Kati Walcott

Pondering AI

Play Episode Listen Later Nov 26, 2025 53:17


Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the three styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author, and leader focused on digital representation, rights, and citizenship in the Digital Data Economy.

Related Resources:
The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn article)
A transcript of this episode is here.

BigIDeas On The Go
Privacy Professionals on the Front Lines of AI Risk

BigIDeas On The Go

Play Episode Listen Later Nov 26, 2025 32:11


Security and privacy leaders are under pressure to sign off on AI, manage data risk, and answer regulators' questions while the rules are still taking shape and the data keeps moving. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Trevor Hughes, President & CEO of the IAPP, to unpack how decades of privacy practice can anchor AI governance, why the shift from consent to data stewardship changes the game, and what it really means to "know your AI" by knowing your data. Together, they break down how CISOs, privacy leaders, and risk teams can work from a shared playbook to assess AI risk, apply practical controls to data, and get ahead of emerging regulation without stalling progress.

In this episode, you'll learn:
Why privacy teams already have methods that can be adapted to oversee AI systems
Boards and executives want simple, defensible stories about risk from AI use
The strongest programs integrate privacy, security, and ethics into a single strategy

Things to listen for:
(00:00) Meet Trevor Hughes
(01:39) The IAPP's mission and global privacy community
(03:45) What AI governance means for security leaders
(05:56) Responsible AI and real-world risk tradeoffs
(08:47) Aligning privacy, security, and AI programs
(15:20) Early lessons from emerging AI regulations
(18:57) Know your AI by knowing your data
(22:13) Rethinking consent and data stewardship
(28:05) Vendor responsibility for AI and data risk
(31:26) Closing thoughts and how to find the IAPP

Coffee Nº5
2026 Is Calling: What Marketers Need to Know with Lara Schmoisman

Coffee Nº5

Play Episode Listen Later Nov 25, 2025 12:25


Send us a text

In the final Coffee Nº5 episode of the year, Lara Schmoisman breaks down the marketing ecosystem of 2026—an environment defined by AI clarity, human-led storytelling, micro-experts, privacy-first data practices, and integrated teams. This episode explains what it truly takes to operate, grow, and connect in a world where everything is interconnected.

We'll talk about:
The 2026 ecosystem: why everything in marketing is now interconnected—and why working in silos will break your growth.
Generative Engine Optimization (GEO): clarity as the new currency for AI.
AI agents as shoppers: what it means when software researches, compares, and negotiates for consumers.
Responsible AI: why governance, rules, and human oversight will define how brands use technology.
Content in 2026: real storytelling, crafted value, SEO-backed captions, and the end of shallow posting.
The rise of micro-experts: why niche credibility beats mass follower counts.
Privacy & first-party data: what owning your customer information really means.

Subscribe to Lara's newsletter.

Also, follow our host Lara Schmoisman on social media:
Instagram: @laraschmoisman
Facebook: @LaraSchmoisman

Support the show

The Brand Called You
Christopher Dorrow: Global AI Strategist on Innovation, Education, and Responsible AI | TBCY Podcast

The Brand Called You

Play Episode Listen Later Nov 24, 2025 53:30


In this insightful episode, host Stephen Ibaraki sits down with Christopher Dorrow, a Global AI Strategist, to explore his fascinating career journey through innovation, design thinking, and leadership in Artificial Intelligence.

Christopher shares pivotal moments from his childhood, his experiences in entrepreneurship and creativity, and recounts how challenges propelled his adaptability and sparked innovation throughout his career—from his early days at Accenture and SAP to transformative work with Finastra and the Dubai Future Foundation. Discover how Christopher led groundbreaking projects like AI use cases for government, contributed to the Dubai Future Foundation Global 50 Report, and now works on responsible AI frameworks for children and AI strategy in education with Capgemini.

From designing capability-building programs in Kenyan slums to pioneering digital transformation in global fintech, Christopher's story is a testament to creative leadership, ambition, and global impact. The conversation also dives into the future of AI, the importance of trust and ethics, and the social responsibility tech leaders must champion.

If you're passionate about tech innovation, AI strategy, global leadership, or social impact, this episode is packed with lessons, inspiration, and actionable insights.

Cloud Realities
CRLIVE52 Microsoft Ignite 2025: Scaling responsible AI agents with Yina Arenas from Microsoft – Plus Team Ignite 2025 Reflections

Cloud Realities

Play Episode Listen Later Nov 21, 2025 63:57


Hello San Francisco - we've arrived for Microsoft Ignite 2025! The #CloudRealities podcast team has landed in San Francisco this week, and we're bringing you the best updates right from the heart of the event. Join us as we explore AI at scale, cloud modernization, and secure innovation—empowering organizations to become AI-first. Plus, we'll keep you updated on all the latest news and juicy gossip.

Dave, Esmee, and Rob wrap up their Ignite 2025 series with Yina Arenas, CVP of Microsoft Foundry, to discuss why Foundry is the go-to choice for enterprises and how it champions responsible development and innovation.

TLDR:
00:40 – Introduction to Yina Arenas
01:14 – How the team is doing, keynote highlights, and insights from the Expo floor
02:50 – Deep dive with Yina on the evolution of Cloud Foundry
29:24 – Favourite IT-themed movie, human interaction, and our society
31:56 – Personal (and slightly juicy) reflections on the week
37:30 – Team reflections on Ignite 2025, including an executive summary per guest and appreciation for Dennis Hansen
50:54 – The team's favorite IT-themed movies
59:30 – Personal favorite restaurant

Guest
Yina Arenas: https://www.linkedin.com/in/yinaa/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast

HR Data Labs podcast
Bob Pulver - Maintaining Personal Agency Through AI Integration

HR Data Labs podcast

Play Episode Listen Later Nov 20, 2025 53:07


Bob Pulver, host of the Elevate Your AIQ podcast and a 25-year enterprise tech and innovation veteran, joins us this week to unpack the urgent need to move past "AI" as a buzzword and define what "Responsible AI" truly means for organizations. He shares his insights on why we are all responsible for AI, how to balance playing "defense" (risk mitigation) and "offense" (innovation), and why we must never outsource our critical thinking and human agency to these new tools.

[0:00] Introduction
Welcome, Bob!
Today's Topic: Defining Responsible AI and Responsible Innovation

[12:25] What Does "Responsible AI" Mean?
Why elements (like fairness in decision-making, data provenance, and privacy) must be built in "by design," not bolted on later.
In an era where everyone is a "builder," we are all responsible for the tools we use and create.

[25:48] The Two Sides of Responsible Innovation
The "responsibility" side involves mitigating risk, ensuring fairness, and staying human-centric—it's like playing defense.
The "innovation" side involves driving growth, entering new markets, and reinvesting efficiency gains—it's like playing offense.

[41:58] Why don't we use AI to give us a 4-day work week?
The critical need for leaders to separate their personal biases from data-driven facts.
AI's role in recent layoffs.

[50:27] Closing
Thanks for listening!

Quick Quote
"We're all responsible for Responsible AI, whatever your role is. You're either using it or abusing it . . . or you're building it or you're testing it."

HR Leaders
How to Build a Responsible AI Ecosystem

HR Leaders

Play Episode Listen Later Nov 19, 2025 15:52


In this episode of the HR Leaders Podcast, we sit down with Michiel van Duin, Chief People Technology, Data and Insights Officer at Novartis, to discuss how the company is building a human-centered AI ecosystem that connects people, data, and technology.

Michiel explains how Novartis brings together HR, IT, and corporate strategy to align AI innovation with the company's long-term workforce and business goals. He shares how the team built an AI governance framework and a dedicated AI and innovation function inside HR, ensuring responsible use of AI while maintaining trust and transparency.

From defining when AI should step in and when a "human-in-the-loop" is essential, to upskilling employees and creating the first "Ask Novartis" AI assistant, Michiel shows how Novartis is making AI practical, ethical, and human.

AI for Kids
How Parents Can Guide Kids Through Talking Toys And Chatbots (Middle School+)

AI for Kids

Play Episode Listen Later Nov 18, 2025 36:10 Transcription Available


Send us a text

A stuffed animal that answers back. A kind voice that "understands." A tutor that lives in a fictional town. AI characters are everywhere, and they're changing how kids learn, play, and bond with media. We sat down with Dr. Sonia Tiwari, children's media researcher and former game character designer, to unpack how to welcome these tools into kids' lives without losing what matters most.

Sonia breaks down what truly makes an AI character: a personality, a backstory, and the new twist of two-way interactivity. From chatbots and smart speakers to social robots and virtual influencers, we trace how each format affects attention, trust, and learning. Then we get practical. We talk through how to spot manipulative backstories ("I'm your best friend" is a red flag), when open-ended chat goes wrong, and why short, purposeful sessions keep curiosity high and dependence low.

For caregivers wary of AI, Sonia offers a powerful reframe: opting out cedes the space to designs that won't put kids first. Early, honest AI literacy, taught like other life skills, protects children from deepfakes, overfamiliar bots, and data oversharing.

If you care about safe, joyful learning with technology, this conversation gives you a clear checklist and a calm path forward. Subscribe for more parent-friendly, screen-light AI guidance, share this with someone who needs it, and leave a review to help more families find the show.

Resources:
Flora AI – the visual AI tool Sonia mentioned as her favorite gadget
Dr. Sonia Tiwari's research article – "Designing ethical AI characters for children's early learning experiences" in AI, Brain and Child
Dr. Sonia Tiwari on LinkedIn
Buddy.ai – the AI character English tutor referenced in the episode
Snorble – the AI bedtime companion mentioned in the episode

Support the show

Help us become the #1 podcast for AI for Kids. Support our Kickstarter: https://www.kickstarter.com/projects/aidigicards/the-abcs-of-ai-activity-deck-for-kids
Buy our debut book "AI… Meets… AI"

Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow us: Instagram, YouTube
Books on Amazon or Free AI Worksheets

Listen, rate, and subscribe!
Apple Podcasts
Amazon Music
Spotify
YouTube
Other

Like our content? patreon.com/AiDig...

Conversations For Leaders & Teams
E89. Responsible AI for the Modern Leader & Coach w/Colin Cosgrove

Conversations For Leaders & Teams

Play Episode Listen Later Nov 15, 2025 34:36


Send us a text

Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

Support the show

BelemLeaders–Your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!

Pondering AI
No Community Left Behind with Paula Helm

Pondering AI

Play Episode Listen Later Nov 12, 2025 52:06


Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.

Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies, and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

Related Resources:
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6
A transcript of this episode is here.

Talking Technology with ATLIS
The Learning Science of AI in Education with Dr. Jeremy Roschelle and Dr. Pati Ruiz

Talking Technology with ATLIS

Play Episode Listen Later Nov 11, 2025 51:50 Transcription Available


Dr. Jeremy Roschelle and Dr. Pati Ruiz from Digital Promise join the podcast to discuss their learning sciences research into AI's role in education. They share details about an innovative project using AI to improve student reading literacy and explore frameworks for developing AI literacy and responsible use policies in schools.

Practitioner Toolkit, from Digital Promise, provides resources for collaborative learning that are flexible, adaptable, and rooted in real teaching experience
Challenge Map, from Digital Promise
U-GAIN Reading, program from Digital Promise seeking to amplify new knowledge about how to use GenAI to create content that matches each student's interests and strengths, enables dialogue about the meaning of content, and adapts to a student's progress and needs
AI Literacy, framework from Digital Promise to understand, evaluate, and use emerging technology
SceneCraft, program from EngageAI Institute with AI-powered, narrative-driven learning experiences, engaging students through storytelling, creativity, and critical thinking
As they face conflicting messages about AI, some advice for educators on how to use it responsibly, opinion blog from Jeremy Roschelle
Teacher Ready Evaluation Tool, helps standardize the way ed tech decision-makers evaluate edtech products
Evaluating Tech Solutions: ATLIS is an official partner with ISTE to expand the presence of independent school vendors and technology solutions in the Edtech Index

If you are interested in engaging in research with Digital Promise, or just have a great research idea, share a message on LinkedIn: Jeremy | Pati

More Digital Promise articles:
GenAI in Education: When to Use It, When to Skip It, and How to Decide – Digital Promise
Hearing from Students: How Learners Experience AI in Education – Digital Promise
Meet the Educators Helping U-GAIN Reading Explore How GenAI Can Improve Literacy – Digital Promise
Guest Post: 3 Guiding Principles for Responsible AI in EdTech – Digital Promise

The Thoughtful Entrepreneur
2304 - Understanding the Importance of AI Visibility and Control in Modern Business with Lanai's Lexi Reese

The Thoughtful Entrepreneur

Play Episode Listen Later Oct 31, 2025 19:31


How to Safely and Strategically Adopt AI in Your Organization: Expert Insights from Lexi Reese, CEO of Lanai

Artificial intelligence is reshaping the modern workplace faster than any technology before it. But as companies rush to integrate AI, many leaders struggle with how to adopt it responsibly—balancing innovation, security, and ethics. In this episode of The Thoughtful Entrepreneur, host Josh Elledge interviews Lexi Reese, Co-Founder and CEO of Lanai, an AI-native observability and security platform. Lexi shares practical insights on how organizations can safely manage, monitor, and scale AI adoption without compromising data integrity or trust.

Leading AI Adoption Responsibly
Lexi explains that the most successful companies treat AI not just as a set of tools, but as part of their workforce—a powerful digital team member that requires oversight, structure, and accountability. She emphasizes that AI must be "hired" into an organization with defined roles, clear expectations, and measurable outcomes. Just as leaders track employee performance, they must also monitor how AI performs, adapts, and impacts real-world results.

Visibility, Lexi notes, is essential for responsible AI use. Many organizations don't know which departments are using AI, how data is being handled, or where security risks exist. Lanai's technology helps leaders map and monitor AI usage across their companies—identifying risks, preventing data leaks, and ensuring compliance with privacy laws. This proactive approach transforms uncertainty into insight, allowing innovation to flourish safely.

Beyond technology, Lexi encourages leaders to consider the human element of AI integration. By prioritizing education, ethical standards, and collaboration between business and compliance teams, organizations can create a culture of trust and accountability. Responsible AI adoption isn't about slowing progress—it's about making innovation sustainable, secure, and beneficial for everyone.

About Lexi Reese
Lexi Reese is the Co-Founder and CEO of Lanai, an AI-native observability and security platform helping organizations safely adopt and manage AI. With a background that spans leadership roles at Google, Gusto, and public service, Lexi is known for her expertise in building ethical technology systems that empower teams and protect businesses.

About Lanai
Lanai is an AI observability and security platform designed to help organizations monitor, govern, and scale AI adoption responsibly. Built for visibility and control, Lanai enables companies to detect risks, enforce compliance, and ensure ethical AI use across all departments. Learn more at lanai.com.

Links Mentioned in This Episode
Lexi Reese on LinkedIn
Lanai Website

Key Episode Highlights
Why organizations must treat AI like a workforce, not just a tool.
The importance of visibility and observability in AI adoption.
Common AI risks—from data exposure to compliance violations—and how to prevent them.
How Lanai helps companies balance innovation with ethical and secure AI use.
Actionable steps for leaders to define, measure, and improve AI's role in their operations.

Conclusion
Lexi Reese's insights remind us that AI's potential is only as powerful as the systems and ethics guiding it. By combining strategic visibility, thoughtful oversight, and a culture of accountability, leaders can ensure AI strengthens—rather than compromises—their...

The Bid Picture - Cybersecurity & Intelligence Analysis

Send Bidemi a Text Message!

In this episode, host Bidemi Ologunde spoke with Shannon Noonan, CEO/Founder of HiNoon Consulting and US Global Ambassador at the Global Council for Responsible AI. The conversation addressed how to turn "checkbox" programs into real business value, right-sized controls, third-party risk, AI guardrails, and data habits that help teams move faster—while strengthening security, compliance, and privacy.

Support the show