Podcasts about AI ethics

  • 819 PODCASTS
  • 1,458 EPISODES
  • 45m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Dec 17, 2025

POPULARITY (2017–2024)


Best podcasts about AI ethics


Latest podcast episodes about AI ethics

Career Unicorns - Spark Your Joy
The Power of Enough: Choosing More Mommy Over More Money, Investing In Yourself, and Being A Leader In AI Ethics With Irene Liu (Ep. 199)

Dec 17, 2025 · 48:13


What happens when a high-powered executive, responsible for scaling multi-billion dollar companies, is asked by her 10-year-old: "What does that money actually mean to us?"

In this deeply insightful episode, we sit down with Irene Liu, founder of Hypergrowth GC and former Chief Financial and Legal Officer at Hopin. Irene shares her journey from the Department of Justice to the front lines of the AI revolution, where she now advises the California Senate on AI safety. We explore the "Politics of the C-Suite," the necessity of high EQ in leadership, and why Irene decided to step out of the "survival mode" of corporate life to define what "enough" looks like for her family.

In this episode, we dive deep into:
• Resilience born from crisis: how working in finance in Manhattan during 9/11 shaped Irene's mental fortitude.
• Navigating layoffs with humanity: whether you are the one being let go, the one left with survivor's guilt, or the executive making the difficult calls.
• The art of the pivot: effective strategies for transitioning from public service and government roles into the private sector.
• The AI frontier: a sobering look at the "Empire of AI," the global race for innovation, and the urgent need for safeguards to protect children and vulnerable populations.
• The path to the C-Suite: the two key qualities you need to transition from "just a lawyer" to a business leader.
• "More Mommy" vs. "More Money": how to evaluate career choices through the lens of family values and the "seasons of life."
• Owning your growth: why you shouldn't let your employer drive your career, and the importance of self-investment and building a genuine community.

Connect with us:
Learn more about our guest, Irene Liu, on LinkedIn at https://www.linkedin.com/in/ireneliu1/.
Follow our host, Samorn Selim, on LinkedIn at https://www.linkedin.com/in/samornselim/.
Get a copy of Samorn's book, Career Unicorns™ 90-Day 5-Minute Gratitude Journal: An Easy & Proven Way To Cultivate Mindfulness, Beat Burnout & Find Career Joy, at https://tinyurl.com/49xdxrz8.
Ready for a career change? Schedule a free 30-minute build your dream career consult by sending a message at www.careerunicorns.com.

Disclaimer: Irene would like our listeners to know that her views expressed in this podcast are her own and do not represent those of any referenced organizations.

AwesomeCast: Tech and Gadget Talk
2025 Predictions on AI, 3D Printers and more! | AwesomeCast 762

Dec 17, 2025 · 60:49


In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Labs 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.

SparX by Mukesh Bansal
The Future of AI: Ethics, Safety & the Rise of Intelligence

Dec 13, 2025 · 55:01


In this episode of SparX, Mukesh sits down with Debjani Ghosh, leader of the Frontier Tech Hub within NITI Aayog, for a critical discussion. They dive deep into India's technological future, the existential role of AI in national growth, and the dramatic changes impacting careers and geopolitics. Debjani, who brings a unique perspective from 21 years at Intel and leadership at NASSCOM, discusses her experience driving change from within the government and why technology is now the "axis of power" globally.

Books & Writers · The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Dec 12, 2025 · 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Books & Writers · The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Dec 12, 2025 · 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Dec 12, 2025 · 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Dec 12, 2025 · 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

The Creative Process in 10 minutes or less · Arts, Culture & Society
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Dec 12, 2025 · 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Edgy Ideas
101: The Future of Coaching: AI, Ethics, and Belonging

Dec 10, 2025 · 37:27


Show Notes

In this episode Simon speaks with Tatiana Bachkirova, a leading scholar in coaching psychology. They explore how AI is impacting the field of coaching and what it means to remain human in a world increasingly driven by algorithms. The discussion moves fluidly between neuroscience, pseudo-science, identity, belonging, and ethics, reflecting on the tensions between performance culture and authentic human development. They discuss how coaching must expand beyond individual self-optimization toward supporting meaningful, value-based projects and understanding the broader social and organisational contexts in which people live and work. AI underscores the need for ethical grounding in coaching. Ultimately, the episode reclaims coaching as a moral and relational practice, reminding listeners that the future of coaching depends not on technology, but on how we choose to stay human within it.

Key Reflections
• AI is often a solution in search of a problem, revealing more about our anxieties than our needs.
• Coaching must evolve with the changing world, engaging complexity rather than retreating to technique.
• The focus should be on meaningful, value-driven projects that connect personal purpose with collective good.
• AI coaching risks eroding depth, ethics, and relational presence if not grounded in human awareness.
• Critical thinking anchors coaching in understanding rather than compliance, enabling ethical discernment.
• The relational quality defines coaching effectiveness - authentic dialogue remains its living core.
• Coaching should move from performance and self-optimization to reflection, purpose, and contribution.
• Human connection and ethical practice sustain trust, belonging, and relevance in the digital age.
• The future of coaching lies in integrating technology without losing our humanity.

Keywords
Coaching psychology, AI in coaching, organisational coaching, identity, belonging, neuroscience, critical thinking, human coaching, coaching ethics, coaching research

Brief Bio
Tatiana Bachkirova is Professor of Coaching Psychology in the International Centre for Coaching and Mentoring Studies at Oxford Brookes University, UK. She supervises doctoral students as an academic, and human coaches as a practitioner. She is a leading scholar in Coaching Psychology and in recent years has been exploring themes such as the role of AI in coaching, the deeper purpose of organisational coaching, what leaders seek to learn at work, and critical perspectives on the neuroscience of coaching. In her over 80 research articles in leading journals, book chapters and books, and in her many speaking engagements, she addresses the most challenging issues of coaching as a service to individuals, organisations and wider societies.

The DEI Discussions - Powered by Harrington Starr
Digital Colonialism, AI & Inclusion: Why Ethical Tech Can't Wait | John Archbold, a professional overthinker for AI ethics, governance and the intersection of technology operations

Dec 9, 2025 · 21:07


On this episode of FinTech's DEI Discussions, Nadia sits down with John Archbold, a professional overthinker for AI ethics, governance and the intersection of technology operations. This conversation dives into digital colonialism, the unseen labour behind AI, hiring bias, and how algorithms are quietly removing humanity from decision-making. John challenges leaders to rethink responsible leadership, explains why employee networks must influence AI governance, and asks us all to consider how technology impacts marginalised communities in real time, not in theory.

FinTech's DEI Discussions is powered by Harrington Starr, global leaders in Financial Technology Recruitment. For more episodes or recruitment advice, please visit our website: www.harringtonstarr.com

Tech, Innovation & Society - The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Dec 2, 2025 · 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

RRC Now
Ep. 3 - The Future of Real Estate: AI, Ethics, & Education - Pt. 1

Dec 2, 2025 · 30:26


In this two-part conversation, CRS Designee Tim Kinzie brings decades of real estate wisdom to Real Estate Real Talk. Together, we unpack how AI is reshaping the industry—without replacing the relationships that keep it human. Kinzie dives into ethical must-knows, the importance of transparency when using AI and why protecting client data has never been more critical. He also shares forward-looking insights on the future of real estate education and how emerging tech like blockchain could transform the transaction process. Whether you're excited about innovation or cautious about change, this series shows how agents can stay ahead and stay true to what matters: trust, expertise and connection.

The Pastor's Heart with Dominic Steele
AI Ethics and Preaching: Plagiarism, Spiritual Formation & Pastoral Voice - with Stephen Driscoll

Dec 2, 2025 · 31:26 · Transcription available


What are the dangers when pastors let AI assist… or sometimes author? How do we think well about plagiarism, spiritual formation and the loss of our pastoral voice? And are there positive, God-honouring ways to use these tools?

Stephen Driscoll works in Campus Ministry in Canberra. He's the author of 'Made in Our Image: God, artificial intelligence and you.' Stephen argues that writing is thinking, and when we automate the writing we risk automating away the deep thinking and wrestling with God's word that forms the preacher's heart. We talk dangers, temptations, reputation, the Holy Spirit, and the kinds of careful, ethical uses of AI that still require the pastor to be the author. Stephen helps us preach faithfully, and to use AI to assist in that in an ethical way, in a rapidly changing world.

Also see:
The traumatic implications of artificial intelligence.
What morality to teach artificial intelligence?

The Church Co
http://www.thechurchco.com is a website and app platform built specifically for churches.

Advertise on The Pastor's Heart
To advertise on The Pastor's Heart go to thepastorsheart.net/sponsor

Support the show

Tech, Innovation & Society - The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Nov 27, 2025 · 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Double Tap Canada
Be My Eyes and AI: Balancing Tech and Human Connection

Nov 26, 2025 · 57:24


Explore how Be My Eyes is redefining accessibility with AI and human connection. CEO Mike Buckley discusses their Apple App Store Finalist nomination, the ethics of AI in assistive technology, and the challenges of awareness and global reach.

This episode is supported by Pneuma Solutions, creators of accessible tools like Remote Incident Manager and Scribe. Get $20 off with code dt20 at https://pneumasolutions.com/ and enter to win a free subscription at doubletaponair.com/subscribe!

In this episode of Double Tap, Steven Scott and Shaun Preece chat with Be My Eyes CEO Mike Buckley. The conversation begins with the app's recognition as an Apple App Store Cultural Impact finalist, celebrating its global influence on the blind and low vision community. The discussion evolves into an honest exploration of AI's role in accessibility, including Be My AI, human volunteers, and the emotional dimensions of social connection. Mike shares insights into:
• The balance between AI utility and human kindness.
• Overcoming the trepidation blind users feel before calling a volunteer.
• Ethical dilemmas around AI companionship, mental health, and responsible guardrails.
• Future possibilities for niche AI models designed for blind users.

Like, comment, and subscribe for more conversations on tech and accessibility.
Share your thoughts: feedback@doubletaponair.com
Leave us a voicemail: 1-877-803-4567
Send a voice or video message via WhatsApp: +1-613-481-0144

Relevant Links
Be My Eyes: https://www.bemyeyes.com
Find Double Tap online: YouTube, Double Tap Website

Follow on:
YouTube: https://www.doubletaponair.com/youtube
X (formerly Twitter): https://www.doubletaponair.com/x
Instagram: https://www.doubletaponair.com/instagram
TikTok: https://www.doubletaponair.com/tiktok
Threads: https://www.doubletaponair.com/threads
Facebook: https://www.doubletaponair.com/facebook
LinkedIn: https://www.doubletaponair.com/linkedin

Subscribe to the Podcast:
Apple: https://www.doubletaponair.com/apple
Spotify: https://www.doubletaponair.com/spotify
RSS: https://www.doubletaponair.com/podcast
iHeartRadio: https://www.doubletaponair.com/iheart

About Double Tap
Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited.

"Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Health Ranger Report
Brighteon Broadcast News, Nov 23, 2025 - OPT OUT of the western medical system, and you'll be healthier, wealthier and happier

Nov 23, 2025 · 116:39


- Updates on AI Tools and Book Generator (0:10)
- Health Advice and Lifestyle Habits (1:42)
- Critique of Conventional Doctors (6:50)
- The Rise of AI in Healthcare (10:05)
- Better Than a Doctor AI Feature (17:24)
- Health Ranger's AI and Robotics Projects (36:07)
- Philosophical Discussion on AI and Human Rights (1:10:58)
- The Future of AI and Human Interaction (1:17:53)
- The Role of AI in Survival Scenarios (1:18:57)
- The Potential for AI in Enhancing Human Life (1:19:13)
- Personal Experience with AI and Health Data (1:19:32)
- AI in Diagnostics and Natural Solutions (1:22:17)
- Critique of Google and AI Ethics (1:25:00)
- Impact of AI on Human Relationships and Society (1:30:24)
- Debate on Consciousness and AI (1:35:54)
- Historical and Scientific Perspectives on Consciousness (1:50:21)
- Practical Applications and Future of AI (1:53:17)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com

Tea With GenZ
Engineering the Future: Dr. Aqeel Ahmed on AI, Ethics, and Gen Z

Nov 22, 2025 · 38:19


In this episode of Tea With Gen Z, we sit down with Dr. Aqeel Taher, a long-serving AUS faculty member. We discuss the strengths and challenges of Gen Z engineering students, touching on ethics, learning styles, and the evolving academic landscape. The conversation concludes with Dr. Aqeel's perspective on AI, its ethical use, and his advice for the next generation of engineers.

The Future of Everything presented by Stanford Engineering

Gabriel Weintraub studies how digital markets evolve. In that regard, he says platforms like Amazon, Uber, and Airbnb have already disrupted multiple verticals through their use of data and digital technologies. Now, they face both the opportunity and the challenge of leveraging AI to further transform markets, while doing so in a responsible and accountable way. Weintraub is also applying these insights to ease friction and accelerate results in government procurement and regulation. Ultimately, we must fall in love with solving the problem, not with the technology itself, Weintraub tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Gabriel Weintraub

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction – Russ Altman introduces guest Gabriel Weintraub, a professor of operations, information, and technology at Stanford University.
(00:03:00) School Lunches to Digital Platforms – How designing markets in Chile led Gabriel to study digital marketplaces.
(00:03:57) What Makes a Good Market – Outlining the core principles that constitute a well-functioning market.
(00:05:29) Opportunities and Challenges Online – The challenges associated with the vast data visibility of digital markets.
(00:06:56) AI and the Future of Search – How AI and LLMs could revolutionize digital platforms.
(00:08:15) Rise of Vertical Marketplaces – The new specialized markets that curate supply and ensure quality.
(00:10:23) Winners and Losers in Market Shifts – How technology is reshaping industries from real estate to travel.
(00:12:38) Government Procurement in Chile – Applying market design and AI tools to Chile's procurement system.
(00:15:00) Leadership and Adoption – The role of leadership in modernizing government systems.
(00:18:59) AI in Government and Regulation – Using AI to help governments streamline complex bureaucratic systems.
(00:21:45) Streamlining Construction Permits – Piloting AI tools to speed up municipal construction-permit approvals.
(00:23:20) Building an AI Strategy – Creating an AI strategy that aligns with business or policy goals.
(00:25:26) Workforce and Experimentation – Training employees to experiment with LLMs and explore productivity gains.
(00:27:36) Humans and AI Collaboration – The importance of designing AI systems to augment human work, not replace it.
(00:28:26) Future in a Minute – Rapid-fire Q&A: AI's impact, passion and resilience, and soccer dreams.
(00:30:39) Conclusion

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

My Good Woman
106 | AI Ethics and Security with Elizabeth Goede (Part 2)

Nov 18, 2025 · 18:31 · Transcription available


Are you feeding your AI tools private info you'd never hand to a stranger? If you're dropping sensitive data into ChatGPT, Canva, or Notion without blinking, this episode is your wake-up call. In Part 2 of our eye-opening conversation with AI ethics strategist Elizabeth Goede, we delve into the practical aspects of AI use and how to safeguard your business, clients, and future. This one isn't about fear. It's about founder-level responsibility and smart decision-making in a world where the tools are evolving faster than most policies.

Grab your ticket to the AI in Action Conference — March 19–20, 2026 in Grand Rapids, MI. You'll get two days of hands-on AI application with 12 done-with-you business tools. This isn't theory. It's transformation.

In This Episode, You'll Learn:
• Why founders must have an AI policy (yes, even solopreneurs)
• The #1 AI tool Elizabeth would never trust with sensitive data
• How to vet the tools you already use (based on their founders, not just features)
• What "locking down your data" actually looks like
• A surprising leadership insight AI will reveal about your team

Resources & Links:
• AI in Action Conference – Registration
• Follow Elizabeth Goede on socials (LinkedIn, Instagram)
• Related episode: Episode 104 | AI Ethics and Security (Part 1) with Elizabeth Goede

Want to increase revenue and impact? Listen to "She's That Founder" for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.

Pondering AI
No Community Left Behind with Paula Helm

Nov 12, 2025 · 52:06


Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone. Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI Ethics.

Related Resources
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

A transcript of this episode is here.

My Good Woman
104 | AI Ethics and Security with Elizabeth Goede (Part 1)

Nov 12, 2025 · 21:12 · Transcription available


Is your AI use exposing your business to risks you can't see coming? It's not just about saving time — it's about protecting your clients, your content, and your credibility.

In this episode, Dawn Andrews sits down with AI strategist Elizabeth Goede to unpack the real (and often ignored) risks of using AI in business. From ChatGPT to Claude, learn what founders must know about security, data privacy, and ethical use — without getting lost in the tech.

"You wouldn't post your financials on Instagram. So why are you pasting them into AI tools without checking where they're going?"

Listen in and get equipped to lead smart, safe, and scalable with AI — no fear-mongering, just facts with a side of sass.

Want to stop talking about AI and actually use it safely and strategically? Join us at the AI in Action Conference, happening March 19–20, 2026 in Grand Rapids, Michigan. Get hands-on with 12 action-packed micro workshops designed to help you apply AI in real time to boost your business, protect your data, and ditch the digital grunt work. Register now.

What You'll Learn:
• How even small service businesses are vulnerable to AI misuse
• The one rule for deciding what data is safe to input into AI tools
• Why AI models like ChatGPT, Claude, and Copilot aren't created equal
• The hidden risks of giving tools access to your drive, emails, or client docs
• What every founder should ask before signing any AI-related agreement

Resources & Links:
• AI in Action Conference – Registration
• Follow Elizabeth Goede on socials (LinkedIn, Instagram)
• Related episode: Episode 93 | The Dirty Secret About AI No Female Executive Wants To Admit—And Why It's Hurting You - This episode dives into the real reason female founders hesitate with AI — and the hidden risks of staying on the sidelines. Includes smart insights on the security tradeoffs when you don't understand where your data is going or how to control it.

Want to increase revenue and impact? Listen to "She's That Founder" for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.

Hub Culture presents: The Chronicle Discussions
Episode 113: AI Ethics Are Awesome with Dr. Mona Hamdy, Harvard, FII9 Conversations, Part 2

Nov 7, 2025 · 47:23


AI ethics expert Mona Hamdy joins Stan Stalnaker for a beguiling look at the future of AI and how to get it right, recorded at FII9 in Riyadh, Saudi Arabia. Part 2 of a 5-part series.

ResearchPod
Fuzzy Logic and the Human Side of Artificial Intelligence

Nov 7, 2025 · 52:17 · Transcription available


Artificial intelligence often struggles with the ambiguity, nuance, and shifting context that define human reasoning. Fuzzy logic offers an alternative by modelling meaning in degrees rather than absolutes.

In this roundtable episode, ResearchPod speaks with Professors Edy Portmann, Irina Perfilieva, Vilem Novak, Cristina Puente, and José María Alonso about how fuzzy systems capture perception, language, social cues, and uncertainty. Their insights contribute to the upcoming FMsquare Foundation booklet on fuzzy logic, exploring the role of uncertainty-aware reasoning in the future of AI.

You can read the previous booklet from this series here: Fuzzy Design-Science Research
You can listen to previous fuzzy podcasts here: fmsquare.org

The Road to Accountable AI
Ravit Dotan: Rethinking AI Ethics

Nov 6, 2025 · 33:55


Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work. Ravit discusses a recent shift in her orientation from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode," where users passively order finished outputs, which undermines skills and critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows.

Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley, has been named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch.

Transcript
My New Path in AI Ethics (October 2025)
The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award)
Responsible AI Maturity Framework

Mornings with Carmen
AI genetic revolution and AI ethics - Austin Gravley | Veteran's Day, Thanksgiving, and telling of God's Glory - Kathy Branzell

Nov 6, 2025 · 48:47


Austin Gravley of Digital Babylon and the What Would Jesus Tech podcast talks about how the Chinese Communist Party is looking at using AI to enhance the genetic "quality" of their children, among other uses. What are the ethical guidelines? What are acceptable and unacceptable uses?

The National Day of Prayer Taskforce's Kathy Branzell (who is a "military brat") talks about the importance of supporting and praying for our veterans and current military members. She also talks about giving thanks and "telling of His glory among the nations, His wonderful deeds among all the peoples."

Faith Radio podcasts are made possible by your support. Give now: Click here

No Password Required
No Password Required Podcast Episode 65 — Steve Orrin

Nov 4, 2025 · 44:51


Keywords
cybersecurity, technology, AI, IoT, Intel, startups, security culture, talent development, career advice

Summary
In this episode of No Password Required, host Jack Clabby and Kayleigh Melton engage with Steve Orrin, the federal CTO at Intel, discussing the evolving landscape of cybersecurity, the importance of diverse teams, and the intersection of technology and security. Steve shares insights from his extensive career, including his experiences in the startup scene, the significance of AI and IoT, and the critical blind spots in cybersecurity practices. The conversation also touches on nurturing talent in technology and offers valuable advice for young professionals entering the field.

Takeaways
• IoT is now referred to as the Edge in technology.
• Diverse teams bring unique perspectives and solutions.
• Experience in cybersecurity is crucial for effective team building.
• The startup scene in the 90s was vibrant and innovative.
• Understanding both biology and technology can lead to unique career paths.
• AI and IoT are integral to modern cybersecurity solutions.
• Organizations often overlook the importance of security in early project stages.
• Nurturing talent involves giving them interesting projects and autonomy.
• Young professionals should understand the hacker mentality to succeed in cybersecurity.
• Customer feedback is essential for developing effective security solutions.

Titles
• The Edge of Cybersecurity: Insights from Steve Orrin
• Navigating the Intersection of Technology and Security

Sound bites
• "IoT is officially called the Edge."
• "We're making mainframe sexy again."
• "Surround yourself with people smarter than you."

Chapters
00:00 Introduction to Cybersecurity and the Edge
01:48 Steve Orrin's Role at Intel
04:51 The Evolution of Security Technology
09:07 The Startup Scene in the 90s
13:00 The Intersection of Biology and Technology
15:52 The Importance of AI and IoT
20:30 Blind Spots in Cybersecurity
25:38 Nurturing Talent in Technology
28:57 Advice for Young Cybersecurity Professionals
32:10 Lifestyle Polygraph: Fun Questions with Steve

Tech Hive: The Tech Leaders Podcast
#121: “People, product, process, platform” - Cindy Turner, Chief Product Officer at Worldpay

Oct 31, 2025 · 54:13


Join us this week for The Tech Leaders Podcast, where Gareth sits down with Cindy Turner, Chief Product Officer at Worldpay. Cindy talks about Worldpay's current priorities, the trade-offs between structure and agility, and how stablecoins can help make international payments quicker and easier. On this episode, Cindy and Gareth discuss the benefits of pushing ownership down to your team, launching Apple Pay back in 2014, and why we might all have our own personal shopping agent in our pockets sooner than you think…

Timestamps:
Good Leadership and Early Career (1:51)
Corporates, Start-Ups and Scale (12:24)
Worldpay Priorities (16:16)
International Payments and Stablecoin (22:50)
Agentic Commerce (28:56)
AI Ethics and Governance (37:17)
Advice for 21-year-old Cindy (45:31)

https://www.bedigitaluk.com/

This Week in Google (MP3)
IM 843: Immortal Beloved, You've Arrived - AI's Emotional Intelligence Paradox

Oct 30, 2025 · 182:04


Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age.

• Virtual Try On Free Online - AI Clothes Changer | i-TryOn
• Oreo-maker Mondelez to use new generative AI tool to slash marketing costs
• OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno
• Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic
• Microsoft's Mico heightens the risks of parasocial LLM relationships
• Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto
• A Definition of AGI
• OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot
• This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says
• Nvidia Becomes World's First $5 Trillion Company - Slashdot
• Paris Hilton Has Been Training Her AI for Years
• How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself?
• Tesla's "Mad Max" mode is now under federal scrutiny
• Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age
• Alphabet earnings
• Meta earnings
• You Have No Idea How Screwed OpenAI Actually Is
• Elon Musk's Grokipedia Pushes Far-Right Talking Points
• AOL to be sold to Bending Spoons for roughly $1.5B
• Casio's Fluffy AI Robot Squeaked Its Way Into My Heart
• For the sake of the show, I will suffer this torture -jj
• Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry
• A silly little photoshoot with your friends
• Bugonia
• Celebrating 25 years of Google Ads
• The Data Is In: The Washington Post Can't Replace Its "TikTok Guy"
• Peak screen?

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Dr. Alan Cowen

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: zscaler.com/security, zapier.com/machines, agntcy.org, ventionteams.com/twit

Causal Bandits Podcast
The Causal Gap: Truly Responsible AI Needs to Understand the Consequences | Zhijing Jin S2E7

Causal Bandits Podcast

Play Episode Listen Later Oct 30, 2025 63:17


The Causal Gap: Truly Responsible AI Needs to Understand the Consequences. Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.

In this episode, we discuss:
- Zhijing's new work on the "causal scientist"
- What's missing in responsible AI
- Why ethics matter for agentic systems
- Is causality a necessary element of moral reasoning?

Video version available on YouTube: https://youtu.be/Frb6eTW2ywk
Recorded on Aug 18, 2025 in Tübingen, Germany.

About The Guest: Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS best paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto.

Support the show
Causal Bandits Podcast: Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4

The Tech Humanist Show
Designing for Dignity: Privacy, AI Ethics, and Democracy with Dr. Carissa Véliz

The Tech Humanist Show

Play Episode Listen Later Oct 30, 2025 33:40


What's at stake when our personal data becomes a tool of power? In this episode, Dr. Carissa Véliz explores how privacy shapes democracy, technology, and our day-to-day lives—asking what ethical guardrails are needed as AI and digital surveillance expand.
Topics Covered:
The primal importance of privacy
Personal data as a toxic asset
Privacy as a collective […]

Pondering AI
What AI Values with Jordan Loewen-Colón

Pondering AI

Play Episode Listen Later Oct 29, 2025 51:41


Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources:
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication
A transcript of this episode is here.

The 92 Report
150. Steve Petersen, ​​From Improv to Philosophy of AI

The 92 Report

Play Episode Listen Later Oct 27, 2025 61:47


Show Notes: Steve recounts his senior year at Harvard, and how he was torn between pursuing acting and philosophy. He graduated with a dual degree in philosophy and math but also found time to act in theater and participated in 20 shows.

A Love of Theater and a Move to London
Steve explains why the lack of a theater major at Harvard allowed him to explore acting more than a university with a theater major would have. He touches on his parents' concerns about his career prospects if he pursued acting, and his decision to apply to both acting and philosophy graduate schools. Steve discusses his rejection from all graduate schools and why he decided to move to London with friends Evan Cohn and Brad Rouse. He talks about his experience in London.

Europe on $20 a Day
Steve details his backpacking trip through Europe on a $20-a-day budget, staying with friends from Harvard and high school. He mentions a job opportunity in Japan through the Japanese Ministry of Education and describes his three-year stint there, working as a native English speaker for the ministry and being immersed in Japanese culture. He shares his experiences of living in the countryside and reflects on the impact of living in a different culture, learning some Japanese, and making Japanese friends. He discusses the personal growth and self-reflection that came from his time in Japan, including his first steps off the "achiever track."

On to Philosophy Graduate School
When Steve returned to the U.S., he decided to apply to philosophy graduate schools again, this time with more success. He enrolled at the University of Michigan. However, he was miserable during grad school, which led him to seek therapy. Steve credits therapy with helping him make better choices in life. He discusses the competitive and prestigious nature of the Michigan philosophy department and the challenges of finishing his dissertation. He touches on the narrow and competitive aspects of pursuing a career in philosophy and shares his experience of finishing his dissertation and the support he received from a good co-thesis advisor.

Kalamazoo College and Improv
Steve describes his postdoc experience at Kalamazoo College, where he continued his improv hobby and formed his own improv group. He mentions a mockumentary-style improv movie called Comic Evangelists that premiered at the AFI Film Festival. Steve then moved to Niagara University in Buffalo and reflects on the challenges of adjusting to a non-research job. He discusses his continued therapy in Buffalo and the struggle with both societal and his own expectations of professional status; however, with the help of a friend, he came to the realization that he had "made it" in his current circumstances. Steve describes his acting career in Buffalo, including roles in Shakespeare in the Park and collaborating with a classmate, Ian Lithgow.

A Speciality in Philosophy of Science
Steve shares his personal life, including meeting his wife in 2009 and starting a family. He explains his specialty in philosophy of science, focusing on the math and precise questions in analytic philosophy. He discusses his early interest in AI and computational epistemology, including the ethics of AI and the superintelligence worry. Steve describes his involvement in a group that discusses the moral status of digital minds and AI alignment.

Aligning AI with Human Interests
Steve reflects on the challenges of aligning AI with human interests and the potential existential risks of advanced AI. He shares his concerns about the future of AI and the potential for AI to have moral status. He touches on the superintelligence concern and the challenges of aligning AI with human goals. Steve mentions the work of Eliezer Yudkowsky and the importance of governance and alignment in AI development. He reflects on the broader implications of AI for humanity and the need for careful consideration of long-term risks.

Harvard Reflections
Steve mentions Math 45 and how it kicked his butt, and his core classes included jazz, an acting class, and clown improv with Jay Nichols.

Timestamps:
01:43: Dilemma Between Acting and Philosophy
03:44: Rejection and Move to London
07:09: Life in Japan and Cultural Insights
12:19: Return to Academia and Grad School Challenges
20:09: Therapy and Personal Growth
22:06: Transition to Buffalo and Philosophy Career
26:54: Philosophy of Science and AI Ethics
33:20: Future Concerns and AI Predictions
55:17: Reflections on Career and Personal Growth

Links:
Steve's Website: https://stevepetersen.net/
On AI superintelligence: If Anyone Builds it, Everyone Dies; Superintelligence; The Alignment Problem
Some places to donate: The Long-Term Future Fund; Open Philanthropy
On improv: Impro; Upright Citizens Brigade Comedy Improvisation Manual

Featured Non-profit:
The featured non-profit of this week's episode is brought to you by Rich Buery, who reports: "Hi, I'm Rich Buery, class of 1992. The featured nonprofit of this episode of The 92 Report is iMentor. iMentor is a powerful youth mentoring organization that connects volunteers with high school students and prepares them on the path to and through college. Mentors stay with the students through the last two years of high school and on the beginning of their college journey. I helped found iMentor over 25 years ago and served as its founding executive director, and I am proud that over the last two decades I've remained on the board of directors. It's truly a great organization. They need donors and they need volunteers. You can learn more about their work at www.imentor.org. That's www dot i-m-e-n-t-o-r dot org, and now here is Will Bachman with this week's episode." To learn more about their work, visit: www.imentor.org.

Voices for Excellence
The Seatbelt of Strategy: How AI Ethics Keeps Innovation Human

Voices for Excellence

Play Episode Listen Later Oct 27, 2025 52:39 Transcription Available


In this thought-provoking episode of Voices for Excellence, Dr. Michael Conner sits down with Rebecca Bultsma — international AI ethics researcher, Chief Innovation Officer, and co-host of AmpED to 11 — to explore what it really means to lead in the age of artificial intelligence.

From Kendrick Lamar lyrics to ethical paradoxes, this conversation moves from AI strategy and governance to the human heartbeat of education. Rebecca unpacks her belief that "ethics is the seatbelt of AI strategy"—a grounding metaphor for how innovation must move fast, but never without responsibility.

Together, Dr. Conner and Rebecca dive into:
How ethics and strategy intersect in real-world decision-making, from boardrooms to classrooms.
The VIBE Framework (Visible, Intentional, Beneficial, Earned) — Rebecca's approach to ensuring AI use in schools stays transparent and trustworthy.
The coming wave of frontier models like GPT-5, Claude 4.1, and Grok 4 — and what they mean for leaders, teachers, and students.
How AI agents could transform personalized learning and potentially repurpose traditional schools into community-based learning hubs.
Why the next decade demands permission-to-fail leadership — cultures that value experimentation, iteration, and vulnerability over perfection.
And how the learners of tomorrow — Generation Alpha and soon Generation Beta — will thrive by staying curious, unimpressed, and unapologetically human.

Rebecca challenges listeners to imagine a future where education isn't confined to classrooms, grades, or standardized tests, but exists as a network of personalized experiences, guided by ethical innovation and human connection. This is more than a conversation about technology — it's a blueprint for human-centered transformation in the age of AI.

The Light in Every Thing
Antichrist - Episode 7 in the series "Facing Evil"

The Light in Every Thing

Play Episode Listen Later Oct 26, 2025 64:05


As billionaire Peter Thiel takes his Antichrist lecture tour around the world, Patrick and Jonah return to Scripture to ask what John actually meant by "Antichrist." Drawing from Revelation, the letters of John, and Vladimir Soloviev's haunting story The Antichrist, they explore how this archetype appears whenever the self refuses to bow, refuses to be wounded, refuses to love through sacrifice. Against the world's fascination with power and control stands the Lamb—wounded yet overflowing with life.

Support the show

The Light in Every Thing is a podcast of The Seminary of The Christian Community in North America. Learn more about the Seminary and its offerings at our website. This podcast is supported by our growing Patreon community. To learn more, go to www.patreon.com/ccseminary. Thanks to Elliott Chamberlin who composed our theme music, "Seeking Together," and the legacy of our original show-notes and patreon producer, Camilla Lake.

The Healthier Tech Podcast
The Right to Refuse AI

The Healthier Tech Podcast

Play Episode Listen Later Oct 23, 2025 6:02


Have you ever chatted with customer support, only to realize halfway through that it wasn't a person? That split-second drop in your stomach — when the replies feel right but not real — might be the defining digital wellness moment of our time. In this episode, we dive into a provocative question that could shape the future of human connection: Should we all have the right to refuse AI? As artificial intelligence quietly takes over everything from hiring to healthcare, we explore what happens when convenience starts replacing consent — and when “I want to talk to a person” becomes a radical demand. Here's what you'll hear: Real-world examples of AI in daily life that you probably didn't realize you've already consented to. How AI is changing the way we experience empathy, trust, and human connection. The emerging idea of a Right to Refuse AI — what it could mean for your health, your data, and your dignity. Why automation might make “human interaction” the next luxury product. The ethical, emotional, and psychological costs of letting AI speak for us. If you care about digital balance, human-centered technology, and wellness in the age of automation, this conversation will challenge how you think about your relationship with machines. The right to privacy. The right to repair. The right to disconnect. Is it time we added one more — the right to refuse AI? Listen now and discover why the most human act in the digital age might be as simple as saying, I want a person. Stay connected, stay curious, and subscribe to The Healthier Tech Podcast for more conversations at the crossroads of technology and wellbeing. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.

What Gets Measured
Nobody Cares About AI Ethics

What Gets Measured

Play Episode Listen Later Oct 21, 2025 38:02


Copywriter-turned–AI ethics educator Felicity Wild breaks down what marketing teams risk when they automate without oversight—and how ethics can drive better, smarter decisions. SHOWPAGE: www.ninjacat.io/blog/wgm-podcast-nobody-cares-about-ai-ethics  © 2025, NinjaCat 

Problem Solved: The IISE Podcast
Trailer | Navigating AI's Next Frontier

Problem Solved: The IISE Podcast

Play Episode Listen Later Oct 21, 2025 1:00


Here's the problem: AI is evolving faster than most organizations can keep up — and the risks of falling behind are real. In this episode, futurist and researcher Mike Courtney, CEO of Aperio Insights, joins IISE's David Brandt to explore how industrial and systems engineers can lead through the AI revolution. From balancing innovation with ethics to building systems that keep "humans in the loop," this conversation reveals how to harness AI's power without losing our human advantage. Full episode available October 28.

StarTalk Radio
Deepfakes and the War on Truth with Bogdan Botezatu

StarTalk Radio

Play Episode Listen Later Oct 17, 2025 63:53


Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalk
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/
Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week.
Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Coffey & Code
Navigating Digital Well-Being in the Age of AI

Coffey & Code

Play Episode Listen Later Oct 17, 2025 47:42


Author and experience designer Caitlin Krause joins Coffey & Code to unpack digital wellbeing beyond buzzwords: agency over algorithms, the "presence pyramid," culture cornerstones (dignity, freedom, invention, agency), and practical ways to design for authentic connection in an age of AI, agents, and XR. The conversation spans data ownership, interoperable "internet of agents" ideas like Project NANDA, the loneliness epidemic, and responsible product choices that reduce harm and increase belonging.

Episode At A Glance:
Define "digital wellbeing" without the hype: aligning intention and attention; context over one-size-fits-all rules
Agency over algorithms: opting into platforms and practices that honor user choice, not just engagement metrics
Presence Pyramid & somatic awareness: embodied practices that translate across 2D, XR, and spatial environments
Culture Cornerstones: dignity → freedom → invention → agency as a repeatable loop for teams and communities
From silos to interoperability: why open protocols for AI agents matter (e.g., Project NANDA)
Designing for belonging: move beyond performative social to ambient, low-stakes co-presence that reduces loneliness
Safety first: name harms clearly; pair AI with human support paths and mood check-ins after use

Resources Mentioned:
Digital Well-being (book) by Caitlin Krause; also: Designing Wonder, Mindful by Design
Presence Pyramid (framework)
Project NANDA: Networked AI Agents and Decentralized Architecture
Stanford HAI; MIT Media Lab; AR in Action
Research/voices referenced: Esther Perel, Sherry Turkle, Brené Brown, Fei-Fei Li, Ramesh Raskar, David Eagleman
On anthropomorphism of AI (NPR segment)
988 Suicide & Crisis Lifeline (US)

EPISODE CREDITS: Produced and edited by Ashley Coffey. Cover art designed by Ashley Coffey. Headshot by Brandlink Media. Introduction music composed and produced by Ashley Coffey.

LINKS: Follow Coffey & Code on Instagram, Facebook, Linkedin, and YouTube for the latest emerging tech updates! Subscribe to the Coffey & Code Podcast wherever you get your podcasts to be notified when new episodes go live.

© 2025 Coffey & Code Podcast. All rights reserved. The content of this podcast, including but not limited to text, graphics, audio, and images, is the property of Ashley Coffey and may not be reproduced, redistributed, or used in any manner without the express written consent of the owner. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Future of Work With Jacob Morgan
Walmart Partners with ChatGPT, GE Bets on Humans, and the Rise of AI Scapegoating

The Future of Work With Jacob Morgan

Play Episode Listen Later Oct 15, 2025 16:27


October 15, 2025: AI is no longer just automating work — it's reorganizing it. In today's episode of Future Ready Today, Jacob Morgan explores five major stories reshaping leadership and HR:

Metaverse Marketing
Sora 2, AI Ethics, Nintendo Research, Apple Vision Pro, Support Ends for Windows 10 with Lee Kebler and Adam Davis McGee

Metaverse Marketing

Play Episode Listen Later Oct 15, 2025 47:36


In this episode of TechMagic, hosts Lee Kebler and Adam Davis McGee explore the evolving intersection of AI, creativity, and ethics. Cathy is away this week and will rejoin the show next week. Meanwhile, Lee and Adam delve into OpenAI's Sora 2 and its implications for digital rights, content authenticity, and ethical innovation. The hosts examine Nintendo's research on gaming's cognitive benefits, Apple Vision Pro's NBA partnership, and the Windows 10 end-of-support scenario. They also discuss AI's energy consumption and emerging global regulations on intellectual property. Perfect for tech enthusiasts, creators, and industry professionals, this episode provides balanced insights into the opportunities and responsibilities that accompany today's rapidly evolving digital landscape.

Come for the tech, stay for the magic!

Adam Davis-McGee Bio
Adam Davis-McGee is a dynamic Creative Director and Producer specializing in immersive storytelling across XR and traditional media. As Senior Producer at Journey, he led the virtual studio, pioneering cutting-edge virtual experiences. He developed a Web3 playbook for Yum! Brands, integrating blockchain and NFT strategies. At Condé Nast, Adam produced engaging video content for Wired and Ars Technica, amplifying digital storytelling. His groundbreaking XR journalism project, In Protest: Grassroots Stories from the Frontlines (Oculus/Meta), captured historic moments in VR. Passionate about pushing creative boundaries, Adam thrives on crafting innovative narratives that captivate audiences worldwide.
Adam Davis-McGee on LinkedIn

Key Discussion Topics:
00:00 Intro: Welcome to Tech Magic with Lee Kebler and ADM
04:07 Exploring Artist Reactions to AI: Surprising Enthusiasm in LA
08:03 Sora 2: Ethical Concerns and Digital Rights
23:55 AI Content Bias: OpenAI's Power Consumption Story
33:20 Roblox's New Parent Council: Better Late Than Never
38:35 Nintendo Debunks Gaming Myths: Benefits for Attention Span
42:47 Apple Vision Pro: NBA License and VR History
46:41 Windows 10 Support Ending: What Users Need to Know
50:30 Recommendations and Closing Thoughts

Hosted on Acast. See acast.com/privacy for more information.

Windowsill Chats
Creative Current Events: Analog Tools, AI Ethics, and a Vintage Catalog Revival

Windowsill Chats

Play Episode Listen Later Oct 10, 2025 39:07


In this episode of Creative Current Events, Margo and Abby dive into a whirlwind of fascinating stories and fresh perspectives from the worlds of creativity, tech, and everyday life. They chat about the accidental invention of the snow globe and the surprising rise of art fairs hosted in U-Haul trucks — celebrating human resourcefulness and the scrappy side of creativity. They also dig into AI and authenticity — from lawsuits against media companies accused of data theft, to AI-generated actors in Hollywood, and the ethical gray areas of algorithm-driven platforms like Spotify. Together, Margo and Abby unpack how these developments are reshaping creative industries and what it means to stay human in a data-driven world. Whether you're a maker, dreamer, or just looking for a new lens on today's creative headlines, this episode proves that inspiration is everywhere — sometimes in the most unexpected places.

Articles Mentioned:
AI Lawsuits: Japanese Media Giants vs. Perplexity
AI Actor Sparks Outrage in Hollywood
Cities & Memory: Global Sound Mapping Project
The Sphere: Wizard of Oz Experience
Magnopus: Storytelling Through Immersive Tech
Banana Republic's Vintage Catalog Revival
Carhartt x Bethany Yellowtail Collaboration
Coach's Coffee Shops Connect with Gen Z
Ugmonk: Intentional Design Meets the Analog To-Do List

Connect with Abby:
https://www.abbyjcampbell.com/
https://www.instagram.com/ajcampkc/
https://www.pinterest.com/ajcampbell/

Connect with Margo:
www.windowsillchats.com
www.instagram.com/windowsillchats
www.patreon.com/inthewindowsill
https://www.yourtantaustudio.com/thefoundry

The Future of Work With Jacob Morgan
The AI Reckoning — Human Quotas, Ethical Bots, and Legal Risk in HR

The Future of Work With Jacob Morgan

Play Episode Listen Later Oct 10, 2025 11:29


October 10, 2025: A new era of Responsible Intelligence is emerging. Governments are considering human-quota laws to keep people in the loop. Kroger is rolling out a values-based AI assistant that redefines trust and transparency. And legal experts warn that AI bias in HR could soon become a courtroom reality. In today's Future-Ready Today, Jacob Morgan explores how these stories signal the end of reckless automation and the rise of accountable leadership. He shares how the future of work will be shaped not by faster machines, but by wiser humans—and offers one simple “1%-a-Day” challenge to help you lead responsibly in the age of AI.

This Week in Google (MP3)
IM 840: Pudding Forks - Industrial Bubble or Tech Boom?

This Week in Google (MP3)

Play Episode Listen Later Oct 9, 2025 159:53


From lawmakers cracking down on loud ads to Deloitte caught peddling AI-fabricated reports, this episode explores how tech's greatest promises and worst follies are colliding right now.

No more loud commercials: Governor Newsom signs SB 576 | Governor of California
ChatGPT Now Has 800 Million Weekly Active Users - Slashdot
OpenAI will let developers build apps that work inside ChatGPT
Senate Dem Report Finds Almost 100 Million Jobs Could Be Lost To AI - Slashdot
Jony Ive's secretive AI hardware reportedly hit three problems
Deloitte to refund Australian government after AI hallucinations found in report
Anthropic and Deloitte Partner to Build AI Solutions for Regulated Industries
America is now one big bet on AI
The flawed Silicon Valley consensus on AI
Data centers responsible for 92% of GDP growth in the first half of this year
Martin Peers: The AI Profit Fantasy
A Debate About A.I. Plays Out on the Subway Walls
Insurers hesitate at multibillion-dollar claims faced by OpenAI, Anthropic in AI lawsuits
Slop factory worries about slop: MrBeast says AI could threaten creators' livelihoods, calling it 'scary times' for the industry
CAN LARGE LANGUAGE MODELS DEVELOP GAMBLING ADDICTION?
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Have we passed peak social media?
As Elon Musk Preps Tesla's Optimus for Prime Time, Big Hurdles Remain
OpenAI signs huge chip deal with AMD, and AMD stock soars
Google CodeMender
Introducing the Gemini 2.5 Computer Use model
Young People Are Falling in Love With Old Technology
Our friend Glenn

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines agntcy.org fieldofgreens.com Promo Code "IM" pantheon.io

Dropping Bombs
Get Rich in the NEW Era of AI (DO THIS NOW)

Dropping Bombs

Play Episode Listen Later Sep 18, 2025 77:13


LightSpeed VT: https://www.lightspeedvt.com/ Dropping Bombs Podcast: https://www.droppingbombs.com/ What if a 16-year-old yogurt scooper could turn into a billionaire exit master by 31?