Podcasts about AI ethics

  • 830 podcasts
  • 1,482 episodes
  • 45m average duration
  • 5 new episodes weekly
  • Latest episode: Feb 5, 2026

POPULARITY (chart: 2019–2026)


Best podcasts about AI ethics


Latest podcast episodes about AI ethics

Azeem Azhar's Exponential View
Mustafa Suleyman — AI is hacking our empathy circuits

Azeem Azhar's Exponential View

Play Episode Listen Later Feb 5, 2026 50:16


Welcome to Exponential View, the show where I explore how exponential technologies such as AI are reshaping our future. I've been studying AI and exponential technologies at the frontier for over ten years. Each week, I share some of my analysis or speak with an expert guest to make sense of a particular topic. To keep up with the exponential transition, subscribe to this channel or to my newsletter: https://www.exponentialview.co/

A week before OpenClaw exploded, I recorded a prescient conversation with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. We talked about what happens when AI starts to seem conscious – even if it isn't. Today, you get to hear our conversation.

Mustafa has been sounding the alarm about what he calls “seemingly conscious AI” and the risk of collective AI psychosis for a long time. We discussed his idea of a “fourth class of being” – neither human, tool, nor nature – that AI is becoming, and all it brings with it.

Skip to the best bits:
(03:38) Why consciousness means the ability to suffer
(06:52) "Your empathy circuits are being hacked"
(07:23) Consciousness as the basis of rights
(10:47) A fourth class of being
(13:41) Why market forces push toward seemingly conscious AI
(20:56) What AI should never be allowed to say
(25:06) The proliferation problem with open-source chatbots
(29:09) Why we need well-paid civil servants
(30:17) Where should we draw the line with AI?
(37:48) The counterintuitive case for going faster
(42:00) The vibe coding dopamine hit
(47:09) Social intelligence as the next AI frontier
(48:50) The case for humanist superintelligence

Where to find Mustafa:
• X (Twitter): https://x.com/mustafasuleyman
• LinkedIn: https://www.linkedin.com/in/mustafa-suleyman/
• Personal website: https://mustafa-suleyman.ai/

Where to find me:
• Substack: https://www.exponentialview.co/
• Website: https://www.azeemazhar.com/
• LinkedIn: https://www.linkedin.com/in/azhar
• Twitter/X: https://x.com/azeem

Produced by supermix.io and EPIIPLUS1 Ltd. Production and research: Chantal Smith and Marija Gavrilov. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Making It in The Toy Industry
S6E01 | The Hidden Cost of AI In The Toy Industry

Making It in The Toy Industry

Play Episode Listen Later Feb 4, 2026 38:18


With every major advancement in science or technology, there is bound to be pushback. AI is no exception. This season is for toy and game creators navigating AI in the toy industry. In 10 episodes we'll explore how to use AI while maintaining your taste, trusting your judgement, protecting your IP, and actually saving time.

This first episode is the foundation for the coming weeks. Before you dive any deeper into AI, let's get real about the risks of operating it with little to no guardrails.

In this episode, we'll talk about:
• digital amnesia
• the environmental impact of AI
• the hidden ways AI is already showing up inside your tool stack
• AI use cases that dull your brain and personality

Season 6 begins now.

About my new podcast art: The podcast art for Season 6 of Making It In The Toy Industry features product illustrations of toys and games I helped guide in Toy Creators Academy and TCA Accelerator. Tap the brand name below to check them out!
• Playcor by Courtney Smithee
• 9 to 5 Warriors by Brandon Braswell
• Catoms by Kieche O'Connell
• The Lunch Room by EAP Toys and Games founder Chrissy Fagerholt

Send The Toy Coach fan mail! Support the show

ITSPmagazine | Technology. Cybersecurity. Society
AI Art vs Human Creativity — The Real Difference and why AI Cannot Be An Artist | A Conversation with AI Expert Andrea Isoni, PhD, Chief AI Officer, AI speaker | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Feb 2, 2026 30:14


The Last Touch: Why AI Will Never Be an Artist

I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.

Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.

His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.

That final touch is everything.

Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.

AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.

This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.

We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.

That's not a being we can relate to. And that's the point.

Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.

The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, and experience. AI starts from zero. Every time.

Andrea put it simply: AI contextualization today is close to zero.

I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.

Stay curious. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Gary and Shannon
AI Ethics & Privacy Gone Public

Gary and Shannon

Play Episode Listen Later Jan 27, 2026 32:27 Transcription Available


Hour 3 moves from Washington chaos to cutting-edge tech and sports controversy. Gary and Shannon track the latest political firestorms, speculate on who won’t survive Trump’s second cabinet, wrestle with the ethics of AI scraping human knowledge, and debate whether public meltdowns are fair game in the age of constant cameras.

• #SwampWatch: Democrats call for Kristi Noem’s firing, shutdown fears loom, and political tensions spike.
• Who Falls First?: Prediction markets weigh in on which Trump cabinet member could be first out.
• AI Crossroads: A new border shooting collides with a debate over Claude, Anthropic, and the morality of training AI on human work.
• Privacy vs. Reality: Coco Gauff’s private frustration goes public — should athletes always expect the cameras?

See omnystudio.com/listener for privacy information.

On Top of PR
Navigating AI ethics in public relations with PRSA CEO Matthew Marcial

On Top of PR

Play Episode Listen Later Jan 27, 2026 37:01


Send us a text

In this episode, PRSA CEO Matthew Marcial joins host Jason Mudd to discuss the ethical use of AI in PR and key insights for communicators. Tune in to learn more!

Meet our guest: Matthew Marcial is CEO of the Public Relations Society of America. He leads PRSA's strategic priorities, focusing on advancing the profession and guiding communicators through emerging challenges, including the ethical use of artificial intelligence.

Five things you'll learn from this episode:
1. The biggest ethical risks with generative AI in PR
2. The “Promise and Pitfalls” principles every PR team should adopt
3. How smart PR teams are using AI without crossing ethical lines
4. PRSA's role in helping professionals navigate the fast-changing AI landscape
5. Tips for rising PR pros who want to lead the profession forward

Quotables:
“As a leader, you really need to be able to set clear expectations with your team around what the role of AI is and what it is for your organization.” — Matthew Marcial
“Being comfortable with that, sharing, and training across your teams is really going to help leverage that (AI) insight and expertise.” — Matthew Marcial
“I think that as a communicator, putting out anything that compromises your reputation is going to be a risk.” — Matthew Marcial
“We are taking a bolder voice on issues that impact our members, the industry, and the profession.” — Matthew Marcial
“The best way to learn is through trial and error.” — Jason Mudd

If you enjoyed this episode, please take a moment to share it with a colleague or friend. You may also support us through Buy Me a Coffee or by leaving us a quick podcast review.

More about Matthew Marcial: Matthew Marcial, CAE, CMP, is the CEO of the Public Relations Society of America, the nation's leading organization for public relations and communications professionals. Appointed in March 2025, he leads PRSA's strategic priorities, focusing on advancing the profession, supporting member growth, and navigating emerging challenges, such as the ethical use of artificial intelligence. With more than 20 years of association leadership experience, Matthew is a frequent speaker on ethical leadership and professional development and has recently led sessions across PRSA's regional districts on the organization's AI Ethics Guide for PR professionals.

Guest's contact info and resources:
• Matthew Marcial on LinkedIn
• PRSA website
• PRSA's Promise and Pitfalls: Ethical AI Guide
• PRSA's DEI Toolkit
• PRSA membership | Promo code for listeners: PRPROD25

Support the show

On Top of PR is produced by Axia Public Relations, named by Forbes as one of America's Best PR Agencies. Axia is an expert PR firm for national brands. On Top of PR is sponsored by ReviewMaxer, the platform for monitoring, improving, and promoting online customer reviews.

No Parachute
Buddhism and AI: Ethics, Consciousness, and Compassion in Technology.

No Parachute

Play Episode Listen Later Jan 21, 2026 31:47



Compass Podcast: Finding the spirituality in the day-to-day
[172] AI, ethics and the spiritual journey

Compass Podcast: Finding the spirituality in the day-to-day

Play Episode Listen Later Jan 21, 2026 39:33


We’re exploring the spiritual implications of artificial intelligence with pastor and author Reverend Nathan Webb. Nathan is the founding pastor of Checkpoint Church, a digital-first church aimed at connecting with individuals who identify as nerds, geeks, and gamers. Through the conversation, they explore the intersection of artificial intelligence (AI) and spirituality, discussing how AI can …

Compared to Who?
How AI-Generated Photos Impact Body Image, Comparison, and Faith: Understanding the Dangers of Altered Images

Compared to Who?

Play Episode Listen Later Jan 20, 2026 38:27 Transcription Available


Are AI images fooling you? They're everywhere. Perhaps you saw all those cute "candy cane" body suit photos and thought, "That looks fun." Or maybe you posted one yourself! In this thought-provoking episode, Heather Creekmore unpacks the rise of AI-generated photos and their profound impact on how we see ourselves—and each other. What started years ago as a debate over Photoshop has now exploded into a world where anyone can create altered, “flawless” images of themselves in a matter of seconds. But the effects go far beyond just looking different in pictures. These doctored images are changing our brains, our body image, and even our spiritual health. Heather shares what happened when she created a bunch of AI photos of herself, including her hilarious results.

What you’ll hear:
• The evolution from Photoshop to AI: Heather reminisces about early discussions on Photoshop and magazine covers—and how AI has made “perfect” images accessible to everyone, not just celebrities and models.
• A personal experiment with AI headshots: Hear about Heather’s own journey using an AI headshot generator, the surprising (and sometimes hilarious) results, and the unsettling emotional triggers that come with seeing an altered version of yourself.
• The science behind how images affect us: Learn how the brain processes images, why filtered photos are so convincing (even when we know they're fake), and how repeated exposure to “perfect” bodies rewires our brains to set unrealistic standards.
• Real dangers, Snapchat dysmorphia and beyond: Explore the rise in people seeking cosmetic procedures to look like their filtered selfies, and understand why AI-generated “ideal images” up the stakes for comparison, perfectionism, and dissatisfaction.
• Spiritual implications: Heather dives deep into the spiritual cost of chasing AI perfection, discussing body image idolatry, why you were purposefully designed by a loving Creator, and the difference between being designed vs. manufactured.
• Practical tips to beat comparison: Walk away with actionable advice, from mindful scrolling to curating your social media feed, setting screen time limits, and turning to prayer when you're tempted by those idealized images.

Memorable quotes:
“Now you can actually have an image of yourself to worship.”
“Our brains know these images are fake, but our hearts still hurt as if they’re real.”
“You’re not a red Solo cup. You’re not manufactured. You’re uniquely designed.”
"Are you worshipping a perfect image, or are you worshipping a perfect God?"

Helpful links:
• 40-Day Body Image Journey: Feeling stuck in comparison and body obsession? Join Heather Creekmore’s quarterly 40-day journey for Christian women at improvebodyimage.com (look for the “40 Day Journey” tab).
• See the photos! Find this episode on YouTube or visit the blog.
• Listen to more episodes on faith and body image.
• Find the 40-Day Body Image Workbook (Amazon affiliate link; a tiny portion of your purchase goes to support this ministry).

Final thoughts: If you’ve ever scrolled through Instagram and felt “less than,” or if you’re curious about how AI might be affecting your mental—and spiritual—health, this episode is for you. Heather Creekmore reminds us that our value isn’t found in a perfectly curated image, but in the unique design given to us by God. Be sure to subscribe so you never miss an episode. If this conversation resonated with you, share it with a friend or leave a review. Thanks for listening! Remember: Stop comparing and start living.

Follow Heather Creekmore on Instagram and YouTube for more encouragement on faith, body image, and comparison-free living. Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.

Wings Of...Inspired Business
Personalized Ayurvedic Wellness: Serial Entrepreneur Arjita Sethi on Revenue, Resilience and AI Ethics

Wings Of...Inspired Business

Play Episode Listen Later Jan 20, 2026 54:28


Arjita Sethi is a serial entrepreneur, physical therapist, certified yoga teacher, Ayurveda practitioner, and meditation expert, recognized as a leading voice at the intersection of AI and wellbeing. She is the founder of Shaanti, an AI-powered wellness platform creating personalized rituals rooted in Ayurveda, and New Founder School, which equips entrepreneurs with practical strategies to launch and grow sustainably. Arjita sits on the advisory board of the NASDAQ Entrepreneurial Center, teaches entrepreneurship at San Francisco State University, and has impacted hundreds of thousands of people across 40 countries through her businesses, teaching, and advisory work. A TEDx speaker, angel investor, and advocate for women in technology, she brings her philosophy of life-synced success into her work as a partner and mother.

ITSPmagazine | Technology. Cybersecurity. Society
CES 2026 Recap | AI, Robotics, Quantum, And Renewable Energy: The Future Is More Practical Than You Think | A Conversation with CTA Senior Director and Futurist Brian Comiskey | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jan 17, 2026 23:55


CES 2026 Just Showed Us the Future. It's More Practical Than You Think.

CES has always been part crystal ball, part carnival. But something shifted this year.

I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.

Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.

The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.

Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.

Then there's energy—the quiet enabler hiding behind the AI headlines. Korea Hydro Nuclear Power demonstrated small modular reactors: next-generation nuclear that could cleanly power cities with minimal waste. A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.

Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.

Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a defining technology of the 2030s, but we're in the back half of the 2020s now. The runway is shorter than we thought.

His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.

Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.

CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle. That's a future worth watching.

This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage. Subscribe to stay informed as technology and humanity continue to intersect.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Trust Issues
EP 23 - Red teaming AI governance: catching model risk early

Trust Issues

Play Episode Listen Later Jan 14, 2026 34:37


AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.

Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress-test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.

The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.

Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.

What's Wrong With: The Podcast
"In Pursuit of Good Tech" ft. Olivia Gambelin

What's Wrong With: The Podcast

Play Episode Listen Later Jan 14, 2026 58:33


Follow Olivia on LinkedIn and Substack! Check out her website. Follow us on Instagram and on X!

Created by SOUR, this podcast is part of the studio's "Future of X, Y, Z" research, where the collaborative discussion outcomes serve as the basis for the futuristic concepts built in line with the studio's mission of solving urban, social, and environmental problems through intelligent design.

Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review — or simply share the show with your friends!

Don't forget to join us next week for another episode. Thank you for listening!

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

In this episode, Conor and Jaeden dive into the ethical dilemmas surrounding AI models like Grok, discussing the implications of content moderation, free speech, and the responsibilities of tech giants. They explore the recent controversies and the impact of paywalls on content accountability, offering a nuanced perspective on the balance between innovation and regulation.

Get the top 40+ AI models for $20 at AI Box: https://aibox.ai
Conor's AI course: https://www.ai-mindset.ai/courses
Conor's AI newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle community: https://www.skool.com/aihustle

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

ITSPmagazine | Technology. Cybersecurity. Society
CES 2026: Why NVIDIA's Jensen Huang Won IEEE Medal of Honor | A Conversation with Mary Ellen Randall, IEEE's 2026 President and CEO | Redefining Society and Technology with Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jan 8, 2026 24:46


Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology. That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly. The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI. But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep pace. IEEE is developing standards to address bias in AI models. It has created certification programs for ethical AI development. It even has standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow. Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns. Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
My newsletter: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Innovation in Compliance with Tom Fox
10+1 Commandments: A Moral Code for AI Ethics in Business with Cristina DiGiacomo

Innovation in Compliance with Tom Fox

Play Episode Listen Later Jan 6, 2026 19:28


Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc. Cristina has an extensive background in communications, business, and practical philosophy. She introduces her '10+1 Commandments', a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.

Key highlights:
• Philosophy in everyday life
• Ancient wisdom and modern application
• The 10+1 Commandments explained
• Applying the Commandments in business
• Governance and ethical AI

Resources:
• Cristina DiGiacomo on LinkedIn
• Website: 10+1

Innovation in Compliance was recently ranked the 4th podcast in Risk Management by 1,000,000 Podcasts.

The Bootstrapped Founder
429: The Dead Internet Theory: Are We Building Machines That Only Talk to Other Machines?

The Bootstrapped Founder

Play Episode Listen Later Dec 26, 2025 12:37 Transcription Available


I spotted a LinkedIn post the other day—obviously AI-generated—with dozens of enthusiastic comments underneath. Every single one also written by AI. Bots responding to bots, a whole conversation with zero humans involved. It was both hilarious and deeply sad. This got me thinking about the dead internet theory and our role as founders in either contributing to it or pushing back against it. Today I'm exploring how we can build AI tools that augment human connection rather than replace it entirely—using AI as the means, not the end.

This episode of The Bootstrapped Founder is sponsored by Paddle.com

The blog post: https://thebootstrappedfounder.com/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines/
The podcast episode: https://tbf.fm/episodes/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines

Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid

You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw

The Future of Everything presented by Stanford Engineering
Best of: The future of AI coaching

The Future of Everything presented by Stanford Engineering

Play Episode Listen Later Dec 26, 2025 30:55


We hope you're enjoying the holiday season with family, friends, and loved ones. We'll be releasing new episodes again in the new year – in the meantime, today, we're re-running a fascinating episode on the future of AI coaching. The past few years have seen an incredible boom in AI, and one of our colleagues, James Landay, a professor in Computer Science, thinks that when it comes to AI and education, things are just getting started. He's particularly excited about the potential for AI to serve as a coach or tutor. We hope you'll take another listen to this conversation and come away with some optimism for the potential AI has to help make us smarter and healthier.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: James Landay

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction – Russ Altman introduces guest James Landay, a professor of Computer Science at Stanford University.
(00:01:44) Evolving AI Applications – How large language models can replicate personal coaching experiences.
(00:06:24) Role of Health Experts in AI – Integrating insights from medical professionals into AI coaching systems.
(00:10:01) Personalization in AI Coaching – How AI coaches can adapt personalities and avatars to cater to user preferences.
(00:12:30) Group Dynamics in AI Coaching – Pros and cons of adding social features and group support to AI coaching systems.
(00:13:48) Ambient Awareness in Technology – Ambient awareness and how it enhances user engagement without active attention.
(00:17:24) Using AI in Elementary Education – Narrative-driven tutoring systems to inspire kids' learning and creativity.
(00:22:39) Encouraging Student Writing with AI – Using LLMs to motivate students to write through personalized feedback.
(00:23:32) Scaling AI Educational Tools – The ACORN project and creating dynamic, scalable learning experiences.
(00:27:38) Human-Centered AI – The concept of human-centered AI and its focus on designing for society.
(00:30:13) Conclusion

The Product Experience
Rerun: AI ethics advice from former White House technologist - Kasia Chmielinski (Co-Founder, The Data Nutrition Project)

The Product Experience

Play Episode Listen Later Dec 24, 2025 31:37


In this episode of the Product Experience Podcast, we speak with Kasia Chmielinski, co-founder of The Data Nutrition Project, who discusses their work on responsible AI and data quality. Kasia highlights the importance of balancing innovation with ethical considerations in product management, the challenges of working within large organizations like the UN, and the need for transparency in data usage.

Featured Links: Follow Kasia on LinkedIn | The Data Nutrition Project | 'What we learned at Pendomonium and #mtpcon 2024 Raleigh: Day 2' feature by Louron Pratt

Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.

Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.

The Association Podcast
Human-First Approaches: AI, Ethics, and Association Boards with Jeff De Cagna, AIMP, FRSA, FASAE

The Association Podcast

Play Episode Listen Later Dec 23, 2025 55:16


On this episode of The Association Podcast, we welcome repeat guest Jeff De Cagna, AIMP, FRSA, FASAE, Executive Advisor at Foresight First LLC, for a deep dive into the challenges and considerations involving AI and association boards. We discuss the Future of Association Boards (FAB) Report, which De Cagna curated and edited, touching on the importance of creating a better future for association boards. Jeff stresses the need for ethical reflection in adopting AI, the concept of stewardship over traditional leadership, and fostering humanity within organizational purposes. The conversation also covers practical approaches for boards, board readiness, and actions association leaders can take to effectively navigate the evolving landscape.

FAB Report

Career Unicorns - Spark Your Joy
The Power of Enough: Choosing More Mommy Over More Money, Investing In Yourself, and Being A Leader In AI Ethics With Irene Liu (Ep. 199)

Career Unicorns - Spark Your Joy

Play Episode Listen Later Dec 17, 2025 48:13


What happens when a high-powered executive, responsible for scaling multi-billion dollar companies, is asked by her 10-year-old: "What does that money actually mean to us?"

In this deeply insightful episode, we sit down with Irene Liu, founder of Hypergrowth GC and former Chief Financial and Legal Officer at Hopin. Irene shares her journey from the Department of Justice to the front lines of the AI revolution, where she now advises the California Senate on AI safety. We explore the "Politics of the C-Suite," the necessity of high EQ in leadership, and why Irene decided to step out of the "survival mode" of corporate life to define what "enough" looks like for her family.

In this episode, we dive deep into:
Resilience born from crisis: how working in finance in Manhattan during 9/11 shaped Irene's mental fortitude.
Navigating layoffs with humanity: whether you are the one being let go, the one left with survivor's guilt, or the executive making the difficult calls.
The art of the pivot: effective strategies for transitioning from public service and government roles into the private sector.
The AI frontier: a sobering look at the "Empire of AI," the global race for innovation, and the urgent need for safeguards to protect children and vulnerable populations.
The path to the C-suite: the two key qualities you need to transition from "just a lawyer" to a business leader.
"More Mommy" vs. "More Money": how to evaluate career choices through the lens of family values and the "seasons of life."
Owning your growth: why you shouldn't let your employer drive your career, and the importance of self-investment and building a genuine community.

Connect with us:
Learn more about our guest, Irene Liu, on LinkedIn at https://www.linkedin.com/in/ireneliu1/.
Follow our host, Samorn Selim, on LinkedIn at https://www.linkedin.com/in/samornselim/.
Get a copy of Samorn's book, Career Unicorns™ 90-Day 5-Minute Gratitude Journal: An Easy & Proven Way To Cultivate Mindfulness, Beat Burnout & Find Career Joy, at https://tinyurl.com/49xdxrz8.
Ready for a career change? Schedule a free 30-minute build-your-dream-career consult by sending a message at www.careerunicorns.com.

Disclaimer: Irene would like our listeners to know that the views she expresses in this podcast are her own and do not represent those of any referenced organizations.

AwesomeCast: Tech and Gadget Talk
2025 Predictions on AI, 3D Printers and more! | AwesomeCast 762

AwesomeCast: Tech and Gadget Talk

Play Episode Listen Later Dec 17, 2025 60:49


In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Lab 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.

Sorgatron Media Master Feed
AwesomeCast 762: 2025 Predictions on AI, 3D Printers and more!

Sorgatron Media Master Feed

Play Episode Listen Later Dec 17, 2025 60:49


In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Lab 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.

RRC Now
Ep. 4 - The Future of Real Estate: AI, Ethics, & Education - Pt. 2

RRC Now

Play Episode Listen Later Dec 16, 2025 26:31


In this two-part conversation, Tim Kinzie, CRS, brings decades of real estate wisdom to Real Estate Real Talk. He unpacks how AI is reshaping the industry—without replacing the relationships that keep it human. Tim dives into ethical must-knows, the importance of transparency when using AI, and why protecting client data has never been more critical. He also shares forward-looking insights on the future of real estate education and how emerging tech like blockchain could transform the transaction process. Whether you're excited about innovation or cautious about change, this series shows how agents can stay ahead and stay true to what matters: trust, expertise, and connection.

SparX by Mukesh Bansal
The Future of AI: Ethics, Safety & the Rise of Intelligence

SparX by Mukesh Bansal

Play Episode Listen Later Dec 13, 2025 55:01


In this episode of SparX, Mukesh sits down with Debjani Ghosh, leader of the Frontier Tech Hub within NITI Aayog, for a critical discussion. They dive deep into India's technological future, the existential role of AI in national growth, and the dramatic changes impacting careers and geopolitics. Debjani, who brings a unique perspective from 21 years at Intel and leadership at NASSCOM, discusses her experience driving change from within the government and why technology is now the "axis of power" globally.

Books & Writers · The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Books & Writers · The Creative Process

Play Episode Listen Later Dec 12, 2025 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel: new pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines; it's that we're being invited to become a new kind of human, where AI isn't the headline – human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book, The Ethics of Artificial Intelligence: A Philosophical Introduction, is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Books & Writers · The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Books & Writers · The Creative Process

Play Episode Listen Later Dec 12, 2025 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc. The bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage. But we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel: new pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines; it's that we're being invited to become a new kind of human, where AI isn't the headline – human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book, The Ethics of Artificial Intelligence: A Philosophical Introduction, is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Education · The Creative Process

Play Episode Listen Later Dec 12, 2025 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel: new pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines; it's that we're being invited to become a new kind of human, where AI isn't the headline – human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book, The Ethics of Artificial Intelligence: A Philosophical Introduction, is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Education · The Creative Process

Play Episode Listen Later Dec 12, 2025 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc. The bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage. But we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel: new pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines; it's that we're being invited to become a new kind of human, where AI isn't the headline – human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book, The Ethics of Artificial Intelligence: A Philosophical Introduction, is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

The Creative Process in 10 minutes or less · Arts, Culture & Society
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

The Creative Process in 10 minutes or less · Arts, Culture & Society

Play Episode Listen Later Dec 12, 2025 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc. The bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage. But we'll see.”

As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel: new pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines; it's that we're being invited to become a new kind of human, where AI isn't the headline – human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book, The Ethics of Artificial Intelligence: A Philosophical Introduction, is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Edgy Ideas
101: The Future of Coaching: AI, Ethics, and Belonging

Edgy Ideas

Play Episode Listen Later Dec 10, 2025 37:27


Show Notes
In this episode, Simon speaks with Tatiana Bachkirova, a leading scholar in coaching psychology. They explore how AI is impacting the field of coaching and what it means to remain human in a world increasingly driven by algorithms. The discussion moves fluidly between neuroscience, pseudo-science, identity, belonging, and ethics, reflecting on the tensions between performance culture and authentic human development. They discuss how coaching must expand beyond individual self-optimization toward supporting meaningful, value-based projects and understanding the broader social and organisational contexts in which people live and work. AI underscores the need for ethical grounding in coaching. Ultimately, the episode reclaims coaching as a moral and relational practice, reminding listeners that the future of coaching depends not on technology, but on how we choose to stay human within it.

Key Reflections
AI is often a solution in search of a problem, revealing more about our anxieties than our needs.
Coaching must evolve with the changing world, engaging complexity rather than retreating to technique.
The focus should be on meaningful, value-driven projects that connect personal purpose with collective good.
AI coaching risks eroding depth, ethics, and relational presence if not grounded in human awareness.
Critical thinking anchors coaching in understanding rather than compliance, enabling ethical discernment.
The relational quality defines coaching effectiveness - authentic dialogue remains its living core.
Coaching should move from performance and self-optimization to reflection, purpose, and contribution.
Human connection and ethical practice sustain trust, belonging, and relevance in the digital age.
The future of coaching lies in integrating technology without losing our humanity.

Keywords
Coaching psychology, AI in coaching, organisational coaching, identity, belonging, neuroscience, critical thinking, human coaching, coaching ethics, coaching research

Brief Bio
Tatiana Bachkirova is Professor of Coaching Psychology in the International Centre for Coaching and Mentoring Studies at Oxford Brookes University, UK. She supervises doctoral students as an academic and human coaches as a practitioner. She is a leading scholar in coaching psychology and in recent years has been exploring themes such as the role of AI in coaching, the deeper purpose of organisational coaching, what leaders seek to learn at work, and critical perspectives on the neuroscience of coaching. In her more than 80 research articles in leading journals, book chapters, and books, and in her many speaking engagements, she addresses the most challenging issues of coaching as a service to individuals, organisations, and wider societies.

Tech, Innovation & Society - The Creative Process
The AI Wager: Betting on Technology's Future w/ Philosopher & Author SVEN NYHOLM - Highlights

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Dec 2, 2025 16:29


“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.” As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work. Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. 
His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives. His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings. Episode Website: www.creativeprocess.info/pod Instagram: @creativeprocesspodcast

RRC Now
Ep. 3 - The Future of Real Estate: AI, Ethics, & Education - Pt. 1

RRC Now

Play Episode Listen Later Dec 2, 2025 30:26


In this two-part conversation, CRS Designee Tim Kinzie brings decades of real estate wisdom to Real Estate Real Talk. Together, we unpack how AI is reshaping the industry—without replacing the relationships that keep it human. Kinzie dives into ethical must-knows, the importance of transparency when using AI and why protecting client data has never been more critical. He also shares forward-looking insights on the future of real estate education and how emerging tech like blockchain could transform the transaction process. Whether you're excited about innovation or cautious about change, this series shows how agents can stay ahead and stay true to what matters: trust, expertise and connection.

Tech, Innovation & Society - The Creative Process
The Ethics of AI w/ SVEN NYHOLM, Author & Lead Researcher, Munich Centre for Machine Learning

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Nov 27, 2025 62:12


As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society. Generative AI – the technologies that write our emails, draft our reports, and even create art – have become a fixture of daily life, and the philosophical and moral questions they raise are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work. Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. As Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives. His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. 
The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings. Episode Website: www.creativeprocess.info/pod Instagram: @creativeprocesspodcast

Double Tap Canada
Be My Eyes and AI: Balancing Tech and Human Connection

Double Tap Canada

Play Episode Listen Later Nov 26, 2025 57:24


Explore how Be My Eyes is redefining accessibility with AI and human connection. CEO Mike Buckley discusses their Apple App Store Finalist nomination, the ethics of AI in assistive technology, and the challenges of awareness and global reach.This episode is supported by Pneuma Solutions. Creators of accessible tools like Remote Incident Manager and Scribe. Get $20 off with code dt20 at https://pneumasolutions.com/ and enter to win a free subscription at doubletaponair.com/subscribe!In this episode of Double Tap, Steven Scott and Shaun Preece chat with Be My Eyes CEO Mike Buckley. The conversation begins with the app's recognition as an Apple App Store Cultural Impact finalist, celebrating its global influence on the blind and low vision community. The discussion evolves into an honest exploration of AI's role in accessibility, including Be My AI, human volunteers, and the emotional dimensions of social connection. Mike shares insights into: The balance between AI utility and human kindness. Overcoming the trepidation blind users feel before calling a volunteer. Ethical dilemmas around AI companionship, mental health, and responsible guardrails. Future possibilities for niche AI models designed for blind users. 
Like, comment, and subscribe for more conversations on tech and accessibility. Share your thoughts: feedback@doubletaponair.com Leave us a voicemail: 1-877-803-4567 Send a voice or video message via WhatsApp: +1-613-481-0144 Relevant Links Be My Eyes: https://www.bemyeyes.com Find Double Tap online: YouTube, Double Tap Website --- Follow on: YouTube: https://www.doubletaponair.com/youtube X (formerly Twitter): https://www.doubletaponair.com/x Instagram: https://www.doubletaponair.com/instagram TikTok: https://www.doubletaponair.com/tiktok Threads: https://www.doubletaponair.com/threads Facebook: https://www.doubletaponair.com/facebook LinkedIn: https://www.doubletaponair.com/linkedin Subscribe to the Podcast: Apple: https://www.doubletaponair.com/apple Spotify: https://www.doubletaponair.com/spotify RSS: https://www.doubletaponair.com/podcast iHeartRadio: https://www.doubletaponair.com/iheart About Double Tap Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited. "Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Health Ranger Report
Brighteon Broadcast News, Nov 23, 2025 - OPT OUT of the western medical system, and you'll be healthier, wealthier and happier

The Health Ranger Report

Play Episode Listen Later Nov 23, 2025 116:39


- Updates on AI Tools and Book Generator (0:10) - Health Advice and Lifestyle Habits (1:42) - Critique of Conventional Doctors (6:50) - The Rise of AI in Healthcare (10:05) - Better Than a Doctor AI Feature (17:24) - Health Ranger's AI and Robotics Projects (36:07) - Philosophical Discussion on AI and Human Rights (1:10:58) - The Future of AI and Human Interaction (1:17:53) - The Role of AI in Survival Scenarios (1:18:57) - The Potential for AI in Enhancing Human Life (1:19:13) - Personal Experience with AI and Health Data (1:19:32) - AI in Diagnostics and Natural Solutions (1:22:17) - Critique of Google and AI Ethics (1:25:00) - Impact of AI on Human Relationships and Society (1:30:24) - Debate on Consciousness and AI (1:35:54) - Historical and Scientific Perspectives on Consciousness (1:50:21) - Practical Applications and Future of AI (1:53:17) For more updates, visit: http://www.brighteon.com/channel/hrreport  NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com

The Future of Everything presented by Stanford Engineering

Gabriel Weintraub studies how digital markets evolve. In that regard, he says platforms like Amazon, Uber, and Airbnb have already disrupted multiple verticals through their use of data and digital technologies. Now, they face both the opportunity and the challenge of leveraging AI to further transform markets, while doing so in a responsible and accountable way. Weintraub is also applying these insights to ease friction and accelerate results in government procurement and regulation. Ultimately, we must fall in love with solving the problem, not with the technology itself, Weintraub tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.Episode Reference Links:Stanford Profile: Gabriel WeintraubConnect With Us:Episode Transcripts >>> The Future of Everything WebsiteConnect with Russ >>> Threads / Bluesky / MastodonConnect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / FacebookChapters:(00:00:00) IntroductionRuss Altman introduces guest Gabriel Weintraub, a professor of operations, information, and technology at Stanford University.(00:03:00) School Lunches to Digital PlatformsHow designing markets in Chile led Gabriel to study digital marketplaces.(00:03:57) What Makes a Good MarketOutlining the core principles that constitute a well-functioning market.(00:05:29) Opportunities and Challenges OnlineThe challenges associated with the vast data visibility of digital markets.(00:06:56) AI and the Future of SearchHow AI and LLMs could revolutionize digital platforms.(00:08:15) Rise of Vertical MarketplacesThe new specialized markets that curate supply and ensure quality.(00:10:23) Winners and Losers in Market ShiftsHow technology is 
reshaping industries from real estate to travel.(00:12:38) Government Procurement in ChileApplying market design and AI tools to Chile's procurement system.(00:15:00) Leadership and AdoptionThe role of leadership in modernizing government systems.(00:18:59) AI in Government and RegulationUsing AI to help governments streamline complex bureaucratic systems.(00:21:45) Streamlining Construction PermitsPiloting AI tools to speed up municipal construction-permit approvals.(00:23:20) Building an AI StrategyCreating an AI strategy that aligns with business or policy goals.(00:25:26) Workforce and ExperimentationTraining employees to experiment with LLMs and explore productivity gains.(00:27:36) Humans and AI CollaborationThe importance of designing AI systems to augment human work, not replace it.(00:28:26) Future in a MinuteRapid-fire Q&A: AI's impact, passion and resilience, and soccer dreams.(00:30:39) Conclusion Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

My Good Woman
106 | AI Ethics and Security with Elizabeth Goede (Part 2)

My Good Woman

Play Episode Listen Later Nov 18, 2025 18:31 Transcription Available


Are you feeding your AI tools private info you'd never hand to a stranger? If you're dropping sensitive data into ChatGPT, Canva, or Notion without blinking, this episode is your wake-up call. In Part 2 of our eye-opening conversation with AI ethics strategist Elizabeth Goede, we delve into the practical aspects of AI use and how to safeguard your business, clients, and future. This one isn't about fear. It's about founder-level responsibility and smart decision-making in a world where the tools are evolving faster than most policies. Grab your ticket to the AI in Action Conference — March 19–20, 2026 in Grand Rapids, MI. You'll get two days of hands-on AI application with 12 done-with-you business tools. This isn't theory. It's transformation. In This Episode, You'll Learn: Why founders must have an AI policy (yes, even solopreneurs); The #1 AI tool Elizabeth would never trust with sensitive data; How to vet the tools you already use (based on their founders, not just features); What "locking down your data" actually looks like; A surprising leadership insight AI will reveal about your team. Resources & Links: AI in Action Conference – Registration; Follow Elizabeth Goede socials (LinkedIn, Instagram); Related episode: Episode 104 | AI Ethics and Security (Part 1) with Elizabeth Goede. Want to increase revenue and impact? Listen to “She's That Founder” for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.

Pondering AI
No Community Left Behind with Paula Helm

Pondering AI

Play Episode Listen Later Nov 12, 2025 52:06


Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone. Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us. Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics. Related Resources: Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper) https://journals.sagepub.com/doi/full/10.1177/20539517241249447 Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6 A transcript of this episode is here.

My Good Woman
104 | AI Ethics and Security with Elizabeth Goede (Part 1)

My Good Woman

Play Episode Listen Later Nov 12, 2025 21:12 Transcription Available


Is your AI use exposing your business to risks you can't see coming? It's not just about saving time — it's about protecting your clients, your content, and your credibility. In this episode, Dawn Andrews sits down with AI strategist Elizabeth Goede to unpack the real (and often ignored) risks of using AI in business. From ChatGPT to Claude, learn what founders must know about security, data privacy, and ethical use — without getting lost in the tech. “You wouldn't post your financials on Instagram. So why are you pasting them into AI tools without checking where they're going?” Listen in and get equipped to lead smart, safe, and scalable with AI — no fear-mongering, just facts with a side of sass. Want to stop talking about AI and actually use it safely and strategically? Join us at the AI in Action Conference, happening March 19–20, 2026 in Grand Rapids, Michigan. Get hands-on with 12 action-packed micro workshops designed to help you apply AI in real time to boost your business, protect your data, and ditch the digital grunt work. Register now. What You'll Learn: How even small service businesses are vulnerable to AI misuse; The one rule for deciding what data is safe to input into AI tools; Why AI models like ChatGPT, Claude, and Copilot aren't created equal; The hidden risks of giving tools access to your drive, emails, or client docs; What every founder should ask before signing any AI-related agreement. Resources & Links: AI in Action Conference – Registration; Follow Elizabeth Goede socials (LinkedIn, Instagram); Related episode: Episode 93 | The Dirty Secret About AI No Female Executive Wants To Admit—And Why It's Hurting You - This episode dives into the real reason female founders hesitate with AI — and the hidden risks of staying on the sidelines. Includes smart insights on the security tradeoffs when you don't understand where your data is going or how to control it. Want to increase revenue and impact? 
Listen to “She's That Founder” for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.

The Road to Accountable AI
Ravit Dotan: Rethinking AI Ethics

The Road to Accountable AI

Play Episode Listen Later Nov 6, 2025 33:55


Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work. Ravit discusses a recent shift in her orientation from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode" where users passively order finished outputs, which undermines skills and critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows.  Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley and has been named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch. Transcript My New Path in AI Ethics (October 2025) The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award) - Responsible AI Maturity Framework  

Mornings with Carmen
AI genetic revolution and AI ethics - Austin Gravley | Veteran's Day, Thanksgiving, and telling of God's Glory - Kathy Branzell

Mornings with Carmen

Play Episode Listen Later Nov 6, 2025 48:47


Austin Gravley of Digital Babylon and the What Would Jesus Tech podcast talks about how the Chinese Communist Party is looking at using AI to enhance the genetic "quality" of their children, among other uses. What are the ethical guidelines? What are acceptable and unacceptable uses? The National Day of Prayer Taskforce's Kathy Branzell (who is a "military brat") talks about the importance of supporting and praying for our veterans and current military members. She also talks about giving thanks and "telling of His glory among the nations, His wonderful deeds among all the peoples." Faith Radio podcasts are made possible by your support. Give now: Click here

This Week in Google (MP3)
IM 843: Immortal Beloved, You've Arrived - AI's Emotional Intelligence Paradox

This Week in Google (MP3)

Play Episode Listen Later Oct 30, 2025 182:04


Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age. Virtual Try On Free Online - AI Clothes Changer | i-TryOn Oreo-maker Mondelez to use new generative AI tool to slash marketing costs OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic Microsoft's Mico heightens the risks of parasocial LLM relationships Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto A Definition of AGI OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says Nvidia Becomes World's First $5 Trillion Company - Slashdot Paris Hilton Has Been Training Her AI for Years How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself? Tesla's "Mad Max" mode is now under federal scrutiny Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age Alphabet earnings Meta earnings You Have No Idea How Screwed OpenAI Actually Is Elon Musk's Grokipedia Pushes Far-Right Talking Points AOL to be sold to Bending Spoons for roughly $1.5B Casio's Fluffy AI Robot Squeaked Its Way Into My Heart For the sake of the show, I will suffer this torture -jj Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry A silly little photoshoot with your friends Bugonia Celebrating 25 years of Google Ads The Data Is In: The Washington Post Can't Replace Its "TikTok Guy" Peak screen? Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Dr. 
Alan Cowen Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security zapier.com/machines agntcy.org ventionteams.com/twit

All TWiT.tv Shows (MP3)
Intelligent Machines 843: Immortal Beloved, You've Arrived

All TWiT.tv Shows (MP3)

Play Episode Listen Later Oct 30, 2025 182:04


Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age. Virtual Try On Free Online - AI Clothes Changer | i-TryOn Oreo-maker Mondelez to use new generative AI tool to slash marketing costs OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic Microsoft's Mico heightens the risks of parasocial LLM relationships Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto A Definition of AGI OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says Nvidia Becomes World's First $5 Trillion Company - Slashdot Paris Hilton Has Been Training Her AI for Years How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself? Tesla's "Mad Max" mode is now under federal scrutiny Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age Alphabet earnings Meta earnings You Have No Idea How Screwed OpenAI Actually Is Elon Musk's Grokipedia Pushes Far-Right Talking Points AOL to be sold to Bending Spoons for roughly $1.5B Casio's Fluffy AI Robot Squeaked Its Way Into My Heart For the sake of the show, I will suffer this torture -jj Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry A silly little photoshoot with your friends Bugonia Celebrating 25 years of Google Ads The Data Is In: The Washington Post Can't Replace Its "TikTok Guy" Peak screen? Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Dr. 
Alan Cowen Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security zapier.com/machines agntcy.org ventionteams.com/twit

Radio Leo (Audio)
Intelligent Machines 843: Immortal Beloved, You've Arrived

Radio Leo (Audio)

Play Episode Listen Later Oct 30, 2025 182:04


Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age. Virtual Try On Free Online - AI Clothes Changer | i-TryOn Oreo-maker Mondelez to use new generative AI tool to slash marketing costs OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic Microsoft's Mico heightens the risks of parasocial LLM relationships Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto A Definition of AGI OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says Nvidia Becomes World's First $5 Trillion Company - Slashdot Paris Hilton Has Been Training Her AI for Years How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself? Tesla's "Mad Max" mode is now under federal scrutiny Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age Alphabet earnings Meta earnings You Have No Idea How Screwed OpenAI Actually Is Elon Musk's Grokipedia Pushes Far-Right Talking Points AOL to be sold to Bending Spoons for roughly $1.5B Casio's Fluffy AI Robot Squeaked Its Way Into My Heart For the sake of the show, I will suffer this torture -jj Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry A silly little photoshoot with your friends Bugonia Celebrating 25 years of Google Ads The Data Is In: The Washington Post Can't Replace Its "TikTok Guy" Peak screen? Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Dr. 
Alan Cowen Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security zapier.com/machines agntcy.org ventionteams.com/twit

This Week in Google (Video HI)
IM 843: Immortal Beloved, You've Arrived - AI's Emotional Intelligence Paradox

This Week in Google (Video HI)

Play Episode Listen Later Oct 30, 2025 182:04


Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age.

Stories discussed this week:
Virtual Try On Free Online - AI Clothes Changer | i-TryOn
Oreo-maker Mondelez to use new generative AI tool to slash marketing costs
OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno
Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic
Microsoft's Mico heightens the risks of parasocial LLM relationships
Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto
A Definition of AGI
OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot
This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says
Nvidia Becomes World's First $5 Trillion Company - Slashdot
Paris Hilton Has Been Training Her AI for Years
How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself?
Tesla's "Mad Max" mode is now under federal scrutiny
Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age
Alphabet earnings
Meta earnings
You Have No Idea How Screwed OpenAI Actually Is
Elon Musk's Grokipedia Pushes Far-Right Talking Points
AOL to be sold to Bending Spoons for roughly $1.5B
Casio's Fluffy AI Robot Squeaked Its Way Into My Heart (For the sake of the show, I will suffer this torture -jj)
Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry
A silly little photoshoot with your friends
Bugonia
Celebrating 25 years of Google Ads
The Data Is In: The Washington Post Can't Replace Its "TikTok Guy"
Peak screen?

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Dr. Alan Cowen

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: zscaler.com/security, zapier.com/machines, agntcy.org, ventionteams.com/twit

StarTalk Radio
Deepfakes and the War on Truth with Bogdan Botezatu

StarTalk Radio

Play Episode Listen Later Oct 17, 2025 63:53


Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalk

NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/

Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week.

Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dropping Bombs
Get Rich in the NEW Era of AI (DO THIS NOW)

Dropping Bombs

Play Episode Listen Later Sep 18, 2025 77:13


LightSpeed VT: https://www.lightspeedvt.com/ Dropping Bombs Podcast: https://www.droppingbombs.com/ What if a 16-year-old yogurt scooper could turn into a billionaire exit master by 31?