Apple is kicking off a 3‑day product launch event with at least five new devices rumoured, including the iPhone 17 E, new iPads, and MacBooks with M4 and M5 chips. Steven Scott and Shaun Preece break down what this means for blind and visually impaired users, plus the latest in AI, accessibility, and tech news from Sonos, Mobile World Congress, and beyond.

This lively episode of Double Tap dives into Apple's 3‑day event running 2–4 March, with speculation around an affordable MacBook, the iPhone 17 E, and next‑gen iPads. Steven and Shaun share personal experiences with old and new Apple gear, including an 18‑year‑old MacBook still running Snow Leopard and performing surprisingly well with VoiceOver. The discussion expands to AI ethics, including Be My Eyes' new partnership with Meta to enhance inclusive AI training, and OpenAI's controversial deal with the US Department of Defence. The hosts also explore the flood of AI‑generated disinformation on X during recent global events. Other highlights include accessibility updates for Twitch and WhatsApp via new scripts and add‑ons, the rise of modular laptops at Mobile World Congress, and a nostalgic detour into CD ripping, 3D printing for blind users, and the enduring value of accessible tech tools.

Call to Action: Support accessible tech conversations!
How has the idea of ethics been affected by the rise of AI? This week, Technology Now is exploring the ideas of ethical and responsible AI. We examine how integrated into society AI has become, we ask how we co-exist with AI, and we look into how regular people, organisations, and governments are having to respond to the increasing adoption of AI. Kay Firth-Butterfield, CEO of Good Tech Advisory LLC and the world's first Chief AI Ethics Officer, tells us more.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Sam Jarrell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations. This episode is available in both video and audio formats.

About Kay: https://kayfirthbutterfield.com

Sources:
https://www.bbc.co.uk/news/business-66807456
https://www.bbc.co.uk/news/world-us-canada-65735769
https://www.bbc.co.uk/news/articles/cq808px90wxo
https://www.npr.org/2025/05/07/g-s1-64640/ai-impact-statement-murder-victim
https://www.academia.edu/123541578/The_Clinical_Chemist
In this week's episode, Leslie Heaney sits down with Vasant Dhar—professor at NYU Stern School of Business and the Center for Data Science at New York University, founder of SCT Capital, and author of Thinking with Machines: The Brave New World of AI. Together, they explore how artificial intelligence evolved, why language prediction changed everything, and what it means now that machines can think alongside humans. The conversation examines the growing divide between those who use AI to sharpen judgment and those who rely on it to think for them, as well as the broader implications for work, education, power, and responsibility. This is a grounded, honest conversation about the power of AI—and how we choose to live with it.

Hosted on Ausha. See ausha.co/privacy-policy for more information.
This week on To The Point Cybersecurity Podcast, hosts Rachael Lyon and Jonathan Knepher sit down with Erica Shoemate—international bestselling author, tech policy leader, and advocate for maternal health—to tackle the hot-button issues surrounding AI and cybersecurity. From headline-making resignations at leading AI companies to the complexities of human-centered design, ethics, and regulation, Erica Shoemate brings her unique insight from years in national security and big tech. Join us as we dig into concerns about innovation versus profit, the pitfalls of monetizing user data, and the ever-evolving guardrails around generative AI. We'll explore whether current regulations are enough to protect consumers—especially children—and what tech leaders should be thinking about as AI reshapes the landscape. Don't miss this thought-provoking conversation that's equal parts cautionary tale and call to action for more transparent, ethical leadership in technology. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e373
Support the show via Patreon

Welcome to season 5! In this fun chat with Mark Brown, we immediately get sidetracked with Star Trek, a topic we have been discussing for over 30 years. We explore the podcasts we hosted long before it was cool. Mark speaks about peer support and how it plays a crucial role in mental health recovery. And most importantly, we tackle the ethics of AI and imagine what the future might look like with AI in our lives. My discussions with Mark have always been deep and complex. I hope you enjoy. Mark Brown is a Senior Research Fellow at Summer Foundation Ltd and a member of PauseAI Australia.

Here is the video we referenced: Bobby Flynn from Australian Idol interviewed on Idol Sparks

Please come join us on our socials, where we are very much present. We want you to share your stories and opinions. Join our public and private pages to start the discussion.
Public Facebook page: https://www.facebook.com/latetothepartypodcast
Private Facebook group: https://www.facebook.com/groups/1168470233702726
Email: latetothepartyasd@gmail.com
Instagram: https://www.instagram.com/latetothepartyasd/
Website: https://latetotheparty.buzzsprout.com

Support the show
How did we go from digital computers to AI seemingly everywhere? Neil deGrasse Tyson, Chuck Nice, & Gary O'Reilly dive into the mechanics of thinking, how AI got its start, and what deep learning really means with cognitive and computer scientist, Nobel Laureate, and one of the architects of AI, Geoffrey Hinton.

Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, I'm joined by Nasser Jones, founder of the nonprofit Bending the AI Curve, for a powerful conversation about equitable innovation and what AI ethics looks like in practice for education and beyond. You'll also hear how bias, access, policy decisions, and tool overload shape who benefits from AI—and how schools and communities can take a more proactive, inclusive approach. If you want to help students and educators engage with AI thoughtfully, responsibly, and with equity at the center, this episode has you covered! Show notes: https://classtechtips.com/2026/02/20/ai-ethics-bonus/ Sponsored by Jotform: http://jotform.com/education/ Follow Nasser Jones on social: https://www.linkedin.com/in/nasserkjones/ Follow Monica on Instagram: https://www.instagram.com/classtechtips/ Take your pick of free EdTech resources: https://classtechtips.com/free-stuff-favorites/
How do we balance free speech, platform accountability, and democratic integrity when technology moves faster than policy? In this episode, Katie Harbath, the "election whisperer to the tech industry," joins Corey Nathan to discuss the impossible trade-offs facing social media platforms, the evolving landscape of AI and misinformation, and what it means to "panic responsibly" in an era of rapid technological change. Katie spent a decade at Facebook as a policy director managing elections globally, navigating crises from Cambridge Analytica to the 2020 election. Now, as CEO of Anchor Change and Chief Global Affairs Officer at Duco, she helps organizations understand how the internet shapes democracy. The conversation explores how to use AI ethically in creative work, the challenges of content moderation at scale, why community notes might be better than fact-checking, and how individuals can reclaim agency over their information diets. Katie also shares her personal evolution on free speech, the difference between distribution and moderation, and why the next four years will require all of us to find new ways to ground ourselves.

Calls to Action
✅ If this conversation resonates, consider sharing it with someone who believes connection across difference still matters.
✅ Subscribe to Corey's Substack: coreysnathan.substack.com
✅ Leave a review on Apple Podcasts, Spotify, or wherever you listen: ratethispodcast.com/goodfaithpolitics
✅ Subscribe to Talkin' Politics & Religion Without Killin' Each Other on your favorite podcast platform.
✅ Watch the full conversation and subscribe on YouTube: youtube.com/@politicsandreligion

Key Takeaways
Panic Responsibly: Don't be paralyzed by fear of AI or technological change. Take agency over how you use these tools while considering ethical guardrails.
Impossible Trade-offs: Platform decisions involve choices between imperfect options with unknowable long-term consequences (see: Cambridge Analytica stemming from 2010's Open Graph).
AI Ethics in Practice: Katie uses AI to organize thoughts, identify themes, spot repetitive phrases, and show line edits, but keeps human input and output central to the creative process.
Free Speech Evolution: Even tech policy experts are evolving their views. Katie has moved toward greater support for free speech while recognizing the importance of context and consequences.
Distribution vs. Moderation: The key question isn't just what stays on platforms, but what gets amplified by algorithms. Distribution decisions matter as much as content decisions.
Community Notes > Fact-Checking: Collaborative, crowdsourced context may be more effective and less politically fraught than centralized fact-checking operations.
You Have Agency: Individuals control which platforms they use, what content they engage with, and what news sources they consume. These choices train algorithms and shape experiences.
Election Infrastructure Improved: Despite continued challenges, election officials have made significant strides since 2020 in security, preparedness, and collaboration with tech platforms.
Social Media: Mixed Bag: Platforms have given voice to candidates and causes that would otherwise struggle for attention, but have also created new challenges for democracy.
Information Audit: Katie recommends doing an annual "news audit" to ensure your media consumption aligns with your values and includes diverse perspectives across the political spectrum.

About Our Guest
Katie Harbath is an award-winning global leader at the intersection of technology, policy, and elections. She spent a decade at Facebook as a Public Policy Director, where she built and led the teams that managed elections globally, navigating some of the platform's most challenging moments. Today, Katie is the CEO of Anchor Change, a technology consulting firm, and Chief Global Affairs Officer at Duco. Described as the "election whisperer to the tech industry," she helps organizations navigate the complex intersections of technology, democracy, and policy. Katie is writing a book about her experiences in tech policy and is a sought-after voice on issues of platform governance, content moderation, AI ethics, and the future of democracy in the digital age. She is known for her pragmatic approach to impossible trade-offs and her catchphrase "panic responsibly" when it comes to emerging technologies.

Links and Resources
Katie Harbath's Work:
Substack: anchorchange.substack.com
Anchor Change: anchorchange.com
Duco Experts: ducoexperts.com
Katie's AI Ethics and Disclosure Statement: anchorchange.substack.com/p/ethics-and-transparency-statement

Connect on Social Media
Corey is @coreysnathan on all the socials... Substack | LinkedIn | Facebook | Instagram | Twitter | Threads | Bluesky | TikTok

Thanks to our Sponsors and Partners
Thanks to Pew Research Center for making today's conversation possible. Gratitude as well to Village Square for coming alongside us in this work and helping foster better civic dialogue.
Pew Research Center: pewresearch.org
The Village Square: villagesquare.us
Meza Wealth Management: mezawealth.com

Proud members of The Democracy Group. Clarity, charity, and conviction can live in the same room.
This week’s Hen Report explores the complex intersection of animal rights with technology, research, and activism. Hosts Jasmin Singer and Mariann Sullivan dive into the ethical implications of AI on animal welfare, highlighting how AI systems reflect human biases against animals while potentially offering both setbacks and opportunities for animal advocacy. The episode also covers groundbreaking developments in primate research, ongoing…
00:00 Introduction to Boys Club Live
00:44 The viral Vogue clip
03:46 Market Talk
07:13 Shoutout to Octant
11:29 AI Etiquette and Social Contracts
15:19 Gigi Claudid: Training our AI agent
20:49 Norwegian Athlete's Emotional Confession
23:34 Unpacking Relationship Drama
24:44 Messy Olympics: Scandals in Sports
25:32 Partner Shoutout: Anchorage Digital
27:27 Podcast Recommendation: The Rest is History
29:40 Interview with Tatum Hunter: Internet Culture Insights
30:06 Deepfakes and AI Ethics
38:43 Personal Surveillance and Trust Issues
48:52 TikTok's Mental Health Rabbit Hole
52:16 Shill Minute: Best Cookie in Crown Heights
53:08 Introduction to Octant: Innovating Funding Models
54:52 Funding Ethereum: Grants and Sustainability
56:50 Octant V2: Revolutionizing Community Funding
58:43 Sustainable Growth and the Future of Ethereum
01:05:56 The Intersection of Venture Capital and Sustainable Funding
01:11:25 Guest Nick Devor of Barrons on Prediction Markets
01:12:50 Gambling and Insider Trading in Prediction Markets
01:23:01 CFTC Challenges and the Future of Regulation
01:26:11 Free Groceries: A Marketing Strategy
01:29:50 Conclusion and Final Thoughts
We examine the heart of the MBA experience — the curriculum itself — at Georgetown University's McDonough School of Business. Georgetown McDonough recently announced a redesigned MBA curriculum with a strong emphasis on AI, ethical leadership, global perspective, and helping students build career momentum earlier in the program. To unpack these changes, host Graham Richmond welcomes special guest Dr. Sudipta Dasmohapatra, Professor of the Practice (Marketing and Business Analytics) and Senior Associate Dean of MBA Programs at McDonough.
Meredith's Husband shares recent AI developments, including a piece of advice from Sam Altman, ChatGPT's personalization updates, and concerns about safety issues at OpenAI. He reviews alternative AI models, including Google Gemini 3.0's impressive benchmarks and Amazon's new releases, and explains why Claude remains his preferred choice due to Anthropic's transparency and ethical approach to AI development.

Timestamps:
[0:00] Introduction
[0:42] Sam Altman's quote on AI and jobs
[1:06] ChatGPT's new customization features
[2:15] Safety concerns and lawsuits against ChatGPT
[4:50] Google Gemini 3.0 benchmarks and performance
[6:32] Amazon's entry into AI models
[7:16] Chinese AI models and international competition
[7:44] Claude and Anthropic's transparent approach
[8:14] Anthropic's AI safety testing example
[10:12] Claude's real-time project updates
[11:24] Choosing AI tools based on ethics

CONTACT
Leave Feedback or Request Topics: https://forms.gle/bqxbwDWBySoiUYxL7
A candid after-show conversation in which George Jack, Patrick Wraight, and instructor Heather Blevins explore how AI is transforming insurance claims and why human oversight and ethical guardrails still matter, … The post Navigating AI, Ethics, and Accountability | IJA Aftershow: Heather Blevins appeared first on Insurance Journal TV.
Welcome to Exponential View, the show where I explore how exponential technologies such as AI are reshaping our future. I've been studying AI and exponential technologies at the frontier for over ten years. Each week, I share some of my analysis or speak with an expert guest to shed light on a particular topic. To keep up with the Exponential transition, subscribe to this channel or to my newsletter: https://www.exponentialview.co/

A week before OpenClaw exploded, I recorded a prescient conversation with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. We talked about what happens when AI starts to seem conscious – even if it isn't. Today, you get to hear our conversation. Mustafa has been sounding the alarm about what he calls “seemingly conscious AI” and the risk of collective AI psychosis for a long time. We discussed his idea of the “fourth class of being” – neither human, tool, nor nature – that AI is becoming, and all it brings with it.

Skip to the best bits:
(03:38) Why consciousness means the ability to suffer
(06:52) "Your empathy circuits are being hacked"
(07:23) Consciousness as the basis of rights
(10:47) A fourth class of being
(13:41) Why market forces push toward seemingly conscious AI
(20:56) What AI should never be allowed to say
(25:06) The proliferation problem with open-source chatbots
(29:09) Why we need well-paid civil servants
(30:17) Where should we draw the line with AI?
(37:48) The counterintuitive case for going faster
(42:00) The vibe coding dopamine hit
(47:09) Social intelligence as the next AI frontier
(48:50) The case for humanist super intelligence

Where to find Mustafa:
- X (Twitter): https://x.com/mustafasuleyman
- LinkedIn: https://www.linkedin.com/in/mustafa-suleyman/
- Personal Website: https://mustafa-suleyman.ai/

Where to find me:
- Substack: https://www.exponentialview.co/
- Website: https://www.azeemazhar.com/
- LinkedIn: https://www.linkedin.com/in/azhar
- Twitter/X: https://x.com/azeem

Produced by supermix.io and EPIIPLUS1 Ltd. Production and research: Chantal Smith and Marija Gavrilov. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
With every major advancement in science or technology, there is bound to be pushback. AI is no exception. This season is for toy and game creators navigating AI in the toy industry. In 10 episodes, we'll explore how to use AI while maintaining your taste, trusting your judgement, protecting your IP, and actually saving time.

This first episode is the foundation for the coming weeks. Before you dive any deeper into AI, let's get real about the risks of operating it with little to no guardrails.

In this episode, we'll talk about:
digital amnesia
the environmental impact of AI
the hidden ways AI is already showing up inside your tool stack
AI use cases that dull your brain and personality

Season 6 begins now.

- - - -

About My NEW Podcast Art:
The podcast art for Season 6 of Making It In The Toy Industry features product illustrations of toys and games I helped guide in Toy Creators Academy and TCA Accelerator. Tap the brand name below to check them out!
Playcor by Courtney Smithee
9 to 5 Warriors by Brandon Braswell
Catoms by Kieche O'Connell
The Lunch Room by EAP Toys and Games founder, Chrissy Fagerholt

Send The Toy Coach Fan Mail!
Support the show
The Last Touch: Why AI Will Never Be an Artist

I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.

Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.

His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.

That final touch is everything.

Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.

AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.

This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.

We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.

That's not a being we can relate to. And that's the point.

Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.

The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, and experience. AI starts from zero. Every time.

Andrea put it simply: AI contextualization today is close to zero.

I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.

Stay curious. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company.
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The University of Notre Dame received a $50 million grant from the Lilly Endowment to work toward developing a faith-based approach to AI ethics. That grant landed in the university's Institute for Ethics and the Common Good, which is spearheading this work. My guest today is the institute's Director of Research and External Engagement—my good friend Adam Kronk.

Our discussion is about establishing the kind of practice-based formation for promoting human flourishing in the AI age. It is about education, faith communities, and public engagement. It is about becoming ever more intentional about knowing what our ends are and judging our means accordingly. It is about setting the right conditions for responsible and creative agency.

This is the first part of a two-part discussion with Adam, with the second focusing even more intently on issues related to education, under the looming promise of tacos.

Follow-up resources:
Notre Dame's Institute for Ethics and the Common Good
"The Next Wave of Artificial Intelligence and Our Humanity, with Stephanie DePrez," podcast episode via Church Life Today
"What is Man that AI is Mindful of Him," by Jeffrey Bishop, journal article via Church Life Journal

Church Life Today is a partnership between the McGrath Institute for Church Life at the University of Notre Dame and OSV Podcasts from Our Sunday Visitor. Discover more ways to live, learn, and love your Catholic faith at osvpodcasts.com. Sharing stories, starting conversations.
Hour 3 moves from Washington chaos to cutting-edge tech and sports controversy. Gary and Shannon track the latest political firestorms, speculate on who won’t survive Trump’s second cabinet, wrestle with the ethics of AI scraping human knowledge, and debate whether public meltdowns are fair game in the age of constant cameras.• #SwampWatch: Democrats call for Kristi Noem’s firing, shutdown fears loom, and political tensions spike.• Who Falls First?: Prediction markets weigh in on which Trump cabinet member could be first out.• AI Crossroads: A new border shooting collides with a debate over Claude, Anthropic, and the morality of training AI on human work.• Privacy vs Reality: Coco Gauff’s private frustration goes public, should athletes always expect the cameras?See omnystudio.com/listener for privacy information.
Send us a textIn this episode, PRSA CEO Matthew Marcial joins host Jason Mudd to discuss the ethical use of AI in PR and key insights for communicators.Tune in to learn more!Meet our guest:Our episode guest is Matthew Marcial, CEO of the Public Relations Society of America. He leads PRSA's strategic priorities, focusing on advancing the profession and guiding communicators through emerging challenges, including the ethical use of artificial intelligence.Five things you'll learn from this episode:1. The biggest ethical risks with generative AI in PR2. The “Promise and Pitfalls” principles every PR team should adopt 3. How smart PR teams are using AI without crossing ethical lines4. PRSA's role in helping professionals navigate the fast-changing AI landscape5. Tips for rising PR pros who want to lead the profession forwardQuotables“As a leader, you really need to be able to set clear expectations with your team around what the role of AI is and what it is for your organization.” — Matthew Marcial“Being comfortable with that, sharing, and training across your teams is really going to help leverage that (AI) insight and expertise.” — Matthew Marcial“I think that as a communicator, putting out anything that compromises your reputation is going to be a risk.” — Matthew Marcial“We are taking a bolder voice on issues that impact our members, the industry, and the profession.” — Matthew Marcial“The best way to learn is through trial and error.” — Jason MuddIf you enjoyed this episode, please take a moment to share it with a colleague or friend. You may also support us through Buy Me a Coffee or by leaving us a quick podcast review.More about Matthew MarcialMatthew Marcial, CAE, CMP, is the CEO of the Public Relations Society of America, the nation's leading organization for public relations and communications professionals. 
Appointed in March 2025, he leads PRSA's strategic priorities, focusing on advancing the profession, supporting member growth, and navigating emerging challenges, such as the ethical use of artificial intelligence. With more than 20 years of association leadership experience, Matthew is a frequent speaker on ethical leadership and professional development and has recently led sessions across PRSA's regional districts on the organization's AI Ethics Guide for PR professionals.
Guest's contact info and resources:
Matthew Marcial on LinkedIn
PRSA website
PRSA's Promise and Pitfalls: Ethical AI Guide
PRSA's DEI Toolkit
PRSA's Membership | Promo Code for Listeners: PRPROD25
Support the show
On Top of PR is produced by Axia Public Relations, named by Forbes as one of America's Best PR Agencies. Axia is an expert PR firm for national brands. On Top of PR is sponsored by ReviewMaxer, the platform for monitoring, improving, and promoting online customer reviews.
We’re exploring the spiritual implications of Artificial Intelligence with pastor and author Reverend Nathan Webb. Nathan is the founding pastor of Checkpoint Church, a digital-first church aimed at connecting with individuals who identify as nerds, geeks, and gamers. Through the conversation, they explore the intersection of artificial intelligence (AI) and spirituality, discussing how AI can … Continue reading "[172] AI, ethics and the spiritual journey"
As the Vatican seeks to harness social media to spread its message, others are warning that artificial intelligence poses a huge challenge to all religion. Could AI even be a rival to faith, projecting itself as a source of wisdom that's neither human nor divine?
Professor BETH SINGLER of the University of Zurich is the author of the new book, Religion and Artificial Intelligence.
GUEST: Professor Beth Singler - Assistant Professor in Digital Religions at the University of Zurich
Are AI images fooling you? They're everywhere. Perhaps you saw all those cute "candy cane" body suit photos and thought, "That looks fun." Or maybe you posted one yourself! In this thought-provoking episode, Heather Creekmore unpacks the rise of AI-generated photos and their profound impact on how we see ourselves—and each other. What started years ago as a debate over Photoshop has now exploded into a world where anyone can create altered, "flawless" images of themselves in a matter of seconds. But the effects go far beyond just looking different in pictures. These doctored images are changing our brains, our body image, and even our spiritual health. Heather shares what happened when she created a bunch of AI photos of herself, including her hilarious results.
What You'll Hear
The Evolution from Photoshop to AI: Heather Creekmore reminisces about early discussions on Photoshop and magazine covers—and how AI has made "perfect" images accessible to everyone, not just celebrities and models.
Personal Experiment with AI Headshots: Hear about Heather's own journey using an AI headshot generator, the surprising (and sometimes hilarious) results, and the unsettling emotional triggers that come with seeing an altered version of yourself.
The Science Behind How Images Affect Us: Learn how the brain processes images, why filtered photos are so convincing (even when we know they're fake), and how repeated exposure to "perfect" bodies rewires our brains to set unrealistic standards.
Real Dangers: Snapchat Dysmorphia and Beyond: Explore the rise in people seeking cosmetic procedures to look like their filtered selfies, and understand why AI-generated "ideal images" up the stakes for comparison, perfectionism, and dissatisfaction.
Spiritual Implications: Heather dives deep into the spiritual cost of chasing AI perfection, discussing body image idolatry, why you were purposefully designed by a loving Creator, and the difference between being designed vs. manufactured.
Practical Tips to Beat Comparison: Walk away with actionable advice, from mindful scrolling to curating your social media feed, setting screen time limits, and turning to prayer when you're tempted by those idealized images.
Memorable Quotes
"Now you can actually have an image of yourself to worship."
"Our brains know these images are fake, but our hearts still hurt as if they're real."
"You're not a red Solo cup. You're not manufactured. You're uniquely designed."
"Are you worshipping a perfect image, or are you worshipping a perfect God?"
Helpful Links
40-Day Body Image Journey: Feeling stuck in comparison and body obsession? Join Heather Creekmore's quarterly 40-day journey for Christian women at improvebodyimage.com (look for the "40 Day Journey" tab).
Related Resources: See the photos! Find this episode on YouTube or visit the blog, here. Listen to more episodes on faith and body image. Find the 40-Day Body Image Workbook * (Amazon affiliate link. A tiny portion of your purchase goes to support this ministry.)
Final Thoughts
If you've ever scrolled through Instagram and felt "less than," or if you're curious about how AI might be affecting your mental—and spiritual—health, this episode is for you. Heather Creekmore reminds us that our value isn't found in a perfectly curated image, but in the unique design given to us by God. Be sure to subscribe so you never miss an episode. If this conversation resonated with you, share it with a friend or leave a review. Thanks for listening! Remember: Stop comparing and start living. Follow Heather Creekmore on Instagram and YouTube for more encouragement on faith, body image, and comparison-free living. Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Arjita Sethi is a serial entrepreneur, physical therapist, certified yoga teacher, Ayurveda practitioner, and meditation expert, recognized as a leading voice at the intersection of AI and wellbeing. She is the founder of Shaanti, an AI-powered wellness platform creating personalized rituals rooted in Ayurveda, and New Founder School, which equips entrepreneurs with practical strategies to launch and grow sustainably. Arjita sits on the advisory board of the NASDAQ Entrepreneurial Center, teaches entrepreneurship at San Francisco State University, and has impacted hundreds of thousands of people across 40 countries through her businesses, teaching, and advisory work. A TEDx speaker, angel investor, and advocate for women in technology, she brings her philosophy of life-synced success into her work as a partner and mother.
CES 2026 Just Showed Us the Future. It's More Practical Than You Think.
CES has always been part crystal ball, part carnival. But something shifted this year.
I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.
Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.
The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.
Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.
Then there's energy—the quiet enabler hiding behind the AI headlines.
Korea Hydro & Nuclear Power demonstrated small modular reactors: next-generation nuclear that could cleanly power cities with minimal waste.
A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.
Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.
Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a 2030s defining technology, but we're in the back half of the 2020s now. The runway is shorter than we thought.
His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.
Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.
CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle.
That's a future worth watching.
This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage. Subscribe to stay informed as technology and humanity continue to intersect.
Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
Newsletter: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.
Follow Olivia on LinkedIn and Substack! Check out her website.
Follow us on Instagram and on X!
Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the basis for the futuristic concepts built in line with the studio's mission of solving urban, social, and environmental problems through intelligent design.
Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or simply share the show with your friends!
Don't forget to join us next week for another episode. Thank you for listening!
Why Every AI Project Needs an Ethics Committee
In this snippet, Matthew Blakemore, CEO at AI Caramba!, stresses why AI ethics committees shouldn't be optional. He shares that every project he's worked on included one, because it's extremely difficult for companies to ethically assess their own products in isolation. Independent voices matter. By collaborating with external experts, including the University of Bath, and bringing in outside perspectives, Matthew believes companies can build stronger, more trustworthy AI systems. His advice is clear:
In this episode, Tomasz Hollanek argues that design is central to AI ethics. We discuss what role designers should play in AI ethics, the significance of AI literacy, and the responsibility of journalists in reporting on AI technologies.Edited by: Meibel Dabodabo
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor and Jaeden dive into the ethical dilemmas surrounding AI models like Grok, discussing the implications of content moderation, free speech, and the responsibilities of tech giants. They explore the recent controversies and the impact of paywalls on content accountability, offering a nuanced perspective on the balance between innovation and regulation.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.
IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology.
That distinction matters more than ever.
I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.
IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly.
The methods have evolved. The mission hasn't.
"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.
What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."
The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI.
But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.
That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep up. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.
"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."
What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.
The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.
I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.
We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow.
Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns.
Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.
Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc. Cristina has an extensive background in communications, business, and practical philosophy. Cristina introduces her '10+1 Commandments', a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.
Key highlights:
Philosophy in Everyday Life
Ancient Wisdom and Modern Application
The 10+1 Commandments Explained
Applying the Commandments in Business
Governance and Ethical AI
Resources:
Cristina DiGiacomo on LinkedIn
Website: 10+1
Innovation in Compliance was recently ranked the 4th podcast in Risk Management by 1,000,000 Podcasts.
I spotted a LinkedIn post the other day—obviously AI-generated—with dozens of enthusiastic comments underneath. Every single one also written by AI. Bots responding to bots, a whole conversation with zero humans involved. It was both hilarious and deeply sad. This got me thinking about the dead internet theory and our role as founders in either contributing to it or pushing back against it. Today I'm exploring how we can build AI tools that augment human connection rather than replace it entirely—using AI as the means, not the end.
This episode of The Bootstrapped Founder is sponsored by Paddle.com
The blog post: https://thebootstrappedfounder.com/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines/
The podcast episode: https://tbf.fm/episodes/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines
Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid
You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com
Here are a few tools I use.
Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
We hope you're enjoying the holiday season with family, friends, and loved ones. We'll be releasing new episodes again in the new year – in the meantime, today, we're re-running a fascinating episode on the future of AI coaching. The past few years have seen an incredible boom in AI, and one of our colleagues, James Landay, a professor in Computer Science, thinks that when it comes to AI and education, things are just getting started. He's particularly excited about the potential for AI to serve as a coach or tutor. We hope you'll take another listen to this conversation and come away with some optimism for the potential AI has to help make us smarter and healthier.
Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links:
Stanford Profile: James Landay
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Chapters:
(00:00:00) Introduction: Russ Altman introduces guest James Landay, a professor of Computer Science at Stanford University.
(00:01:44) Evolving AI Applications: How large language models can replicate personal coaching experiences.
(00:06:24) Role of Health Experts in AI: Integrating insights from medical professionals into AI coaching systems.
(00:10:01) Personalization in AI Coaching: How AI coaches can adapt personalities and avatars to cater to user preferences.
(00:12:30) Group Dynamics in AI Coaching: Pros and cons of adding social features and group support to AI coaching systems.
(00:13:48) Ambient Awareness in Technology: Ambient awareness and how it enhances user engagement without active attention.
(00:17:24) Using AI in Elementary Education: Narrative-driven tutoring systems to inspire kids' learning and creativity.
(00:22:39) Encouraging Student Writing with AI: Using LLMs to motivate students to write through personalized feedback.
(00:23:32) Scaling AI Educational Tools: The ACORN project and creating dynamic, scalable learning experiences.
(00:27:38) Human-Centered AI: The concept of human-centered AI and its focus on designing for society.
(00:30:13) Conclusion
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of the Product Experience Podcast, we speak with Kasia Chmielinski, co-founder of The Data Nutrition Project, who discusses their work on responsible AI and data quality. Kasia highlights the importance of balancing innovation with ethical considerations in product management, the challenges of working within large organizations like the UN, and the need for transparency in data usage.
Featured Links: Follow Kasia on LinkedIn | The Data Nutrition Project | 'What we learned at Pendomonium and #mtpcon 2024 Raleigh: Day 2' feature by Louron Pratt
Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products, leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.
Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.
ChatGPT ads are coming, y'all.
On this episode of The Association Podcast, we welcome repeat guest Jeff De Cagna, AIMP, FRSA, FASAE, and Executive Advisor at Foresight First LLC, for a deep dive into the challenges and considerations involving AI and association boards. We discuss the Future of Association Boards (FAB) Report, which De Cagna curated and edited, touching on the importance of creating a better future for association boards. Jeff stresses the need for ethical reflection in adopting AI, the concept of stewardship over traditional leadership, and fostering humanity within organizational purposes. The conversation also covers practical approaches for boards, board readiness, and actions association leaders can take to effectively navigate the evolving landscape.FAB Report
What happens when a high-powered executive, responsible for scaling multi-billion dollar companies, is asked by her 10-year-old: "What does that money actually mean to us?" In this deeply insightful episode, we sit down with Irene Liu, founder of Hypergrowth GC and former Chief Financial and Legal Officer at Hopin. Irene shares her journey from the Department of Justice to the front lines of the AI revolution, where she now advises the California Senate on AI safety. We explore the "Politics of the C-Suite," the necessity of high EQ in leadership, and why Irene decided to step out of the "survival mode" of corporate life to define what "enough" looks like for her family. In this episode, we dive deep into: Resilience born from crisis: how working in finance in Manhattan during 9/11 shaped Irene's mental fortitude. Navigating layoffs with humanity: whether you are the one being let go, the one left with survivor's guilt, or the executive making the difficult calls. The art of the pivot: effective strategies for transitioning from public service and government roles into the private sector. The AI frontier: a sobering look at the "Empire of AI," the global race for innovation, and the urgent need for safeguards to protect children and vulnerable populations. The path to the C-Suite: the two key qualities you need to transition from "just a lawyer" to a business leader. "More Mommy" vs. "More Money": how to evaluate career choices through the lens of family values and the "seasons of life." Owning your growth: Why you shouldn't let your employer drive your career, and the importance of self-investment and building a genuine community. Connect with us: Learn more about our guest, Irene Liu, on LinkedIn at https://www.linkedin.com/in/ireneliu1/. Follow our host, Samorn Selim, on LinkedIn at https://www.linkedin.com/in/samornselim/. 
Get a copy of Samorn's book, Career Unicorns™ 90-Day 5-Minute Gratitude Journal: An Easy & Proven Way To Cultivate Mindfulness, Beat Burnout & Find Career Joy, at https://tinyurl.com/49xdxrz8. Ready for a career change? Schedule a free 30-minute build your dream career consult by sending a message at www.careerunicorns.com. Disclaimer: Irene would like our listeners to know that her views expressed in this podcast are her own and do not represent those of any referenced organizations.
In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Labs 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.
As we move towards 2026, we are in a massive "upgrade moment" that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically as a partnership to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.
Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As the Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives. His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction.
The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.
Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast
“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc. The bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”
The Creative Process in 10 minutes or less · Arts, Culture & Society
Show Notes
In this episode Simon speaks with Tatiana Bachkirova, a leading scholar in coaching psychology. They explore how AI is impacting the field of coaching and what it means to remain human in a world increasingly driven by algorithms. The discussion moves fluidly between neuroscience, pseudo-science, identity, belonging, and ethics, reflecting on the tensions between performance culture and authentic human development. They discuss how coaching must expand beyond individual self-optimization toward supporting meaningful, value-based projects and understanding the broader social and organisational contexts in which people live and work. AI underscores the need for ethical grounding in coaching. Ultimately, the episode reclaims coaching as a moral and relational practice, reminding listeners that the future of coaching depends not on technology, but on how we choose to stay human within it.
Key Reflections
- AI is often a solution in search of a problem, revealing more about our anxieties than our needs.
- Coaching must evolve with the changing world, engaging complexity rather than retreating to technique.
- The focus should be on meaningful, value-driven projects that connect personal purpose with collective good.
- AI coaching risks eroding depth, ethics, and relational presence if not grounded in human awareness.
- Critical thinking anchors coaching in understanding rather than compliance, enabling ethical discernment.
- The relational quality defines coaching effectiveness: authentic dialogue remains its living core.
- Coaching should move from performance and self-optimization to reflection, purpose, and contribution.
- Human connection and ethical practice sustain trust, belonging, and relevance in the digital age.
- The future of coaching lies in integrating technology without losing our humanity.
Keywords
Coaching psychology, AI in coaching, organisational coaching, identity, belonging, neuroscience, critical thinking, human coaching, coaching ethics, coaching research
Brief Bio
Tatiana Bachkirova is Professor of Coaching Psychology in the International Centre for Coaching and Mentoring Studies at Oxford Brookes University, UK. She supervises doctoral students as an academic and human coaches as a practitioner. She is a leading scholar in Coaching Psychology and in recent years has been exploring themes such as the role of AI in coaching, the deeper purpose of organisational coaching, what leaders seek to learn at work, and critical perspectives on the neuroscience of coaching. In more than 80 research articles in leading journals, book chapters, and books, and in her many speaking engagements, she addresses the most challenging issues of coaching as a service to individuals, organisations, and wider societies.
- Updates on AI Tools and Book Generator (0:10)
- Health Advice and Lifestyle Habits (1:42)
- Critique of Conventional Doctors (6:50)
- The Rise of AI in Healthcare (10:05)
- Better Than a Doctor AI Feature (17:24)
- Health Ranger's AI and Robotics Projects (36:07)
- Philosophical Discussion on AI and Human Rights (1:10:58)
- The Future of AI and Human Interaction (1:17:53)
- The Role of AI in Survival Scenarios (1:18:57)
- The Potential for AI in Enhancing Human Life (1:19:13)
- Personal Experience with AI and Health Data (1:19:32)
- AI in Diagnostics and Natural Solutions (1:22:17)
- Critique of Google and AI Ethics (1:25:00)
- Impact of AI on Human Relationships and Society (1:30:24)
- Debate on Consciousness and AI (1:35:54)
- Historical and Scientific Perspectives on Consciousness (1:50:21)
- Practical Applications and Future of AI (1:53:17)
For more updates, visit: http://www.brighteon.com/channel/hrreport
NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals, and vastly increased scientific transparency.
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalk
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/
Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week.
Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.