Podcasts about humans

Species of hominid in the genus Homo

  • 21,758 PODCASTS
  • 44,858 EPISODES
  • 42m AVG DURATION
  • 8 DAILY NEW EPISODES
  • Mar 3, 2026 LATEST

POPULARITY

(chart: 2019–2026)

    Best podcasts about humans

    Show all podcasts related to humans

    Latest podcast episodes about humans

    Player: Engage
    The Evolution of Moderation: Why AI Won't Replace Humans… It Will Redefine Them

    Player: Engage

    Play Episode Listen Later Mar 3, 2026 30:54


    In this episode of Player Driven, host Greg welcomes back industry veteran Sharon Fisher to discuss the rapidly evolving landscape of content moderation. From her early days building moderation at Club Penguin to her current work with AI-driven platforms like Checkstep, Sharon shares her unique perspective as both a trust and safety expert and a concerned parent.

    Key Discussion Points:
      • The Evolution of Moderation: Sharon reflects on the shift from manual work and simple keyword blocking 17 years ago to today's complex machine learning and contextual understanding.
      • The Changing Role of the Moderator: Why the rise of AI doesn't mean the extinction of human moderators, but rather their transformation into data analysts who challenge bias and understand culture.
      • The "Wild Wild West" of the Marketplace: Insights into why legacy moderation companies are phasing out while new, AI-first competitors like Checkstep are entering the space.
      • Privacy vs. Safety: Addressing the pushback against age verification and the critical need for better communication and education for parents and caregivers.
      • Bridging the Gap: How integrated technology can finally break down silos between customer support, marketing, and moderation to provide a holistic view of the user.
      • Predictions for 2026 and Beyond: Sharon forecasts a year of "stress and adoption" as companies rush to reduce costs through technology, leading to an eventual search for balance in 2027.

    About Our Guest: Sharon Fisher
    Sharon Fisher is a leading voice in the trust and safety industry. With a career spanning roles at Disney (Club Penguin), Two Hat, and Keywords Studios, she now provides strategic consulting for gaming companies and technology firms like Checkstep. She is also a passionate advocate for digital literacy, frequently speaking to school districts to help parents protect their children online.

    Notable Quotes:
      • "The moderator role becomes even more important because they are who they are—they understand your community, they speak the language, and they live the culture every single day."
      • "Think about that area of your city that you would not go on your own at night time... that's the same that translates into the internet. Know where your kid is playing."

    Resources Mentioned:
      • Connect with Sharon: Sharon Fisher on LinkedIn
      • Featured Technology: Checkstep
      • Join The Player Driven Discord: https://discord.gg/c9YgMctb

    Hard Factor
    Camel Beauty Pageants & Call Center AI w/ Thick Latina Accent | 3.2.26

    Hard Factor

    Play Episode Listen Later Mar 2, 2026 46:16


    Episode 1906 - brought to you by our incredible sponsors:
      • BETTER HELP: Your emotional wellbeing matters. Find support and feel lighter in therapy. Sign up and get 10% off at BetterHelp.com/HARDFACTOR.
      • QUINCE: Don't keep settling for clothes that don't last. Go to Quince.com/hardfactor for free shipping and 365-day returns.
      • BRUNT WORKWEAR: Get $10 off boots and clothing at BRUNT with code HARDFACTOR at www.bruntworkwear.com
      • LUCY: 100% pure nicotine. Always tobacco-free. LUCY's the only pouch that gives you long-lasting flavor, whenever you need it. Get 20% off your first order when you buy online with code HARDFACTOR. Lucy.co

    Timestamps:
      • 00:11:50 Camels disqualified from beauty pageant for getting botox
      • 00:25:47 Humans and Neanderthals interbred
      • 00:29:49 Physicist claims he found heaven's physical location
      • 00:34:43 "Spanish" option at Washington call center offered AI bot with thick Spanish accent speaking English

    And much more. Thank you for listening and supporting the pod! Go to patreon.com/HardFactor to join our community, get access to Discord chat, bonus pods, and much more - but most importantly: HAGFD! Learn more about your ad choices. Visit megaphone.fm/adchoices

    Conspirituality
    Bonus Sample: On Death & Being Human

    Conspirituality

    Play Episode Listen Later Mar 2, 2026 8:51


    Cult leaders, religious fanatics, dictators, and charlatans all have one thing in common: they exploit our fear of death. Humans act out "immortality projects" in the form of religion, culture, and political ideologies as unconscious ways to override the terror we feel at our uniquely self-aware knowledge that we will one day die. Where the orthodox priest promises eternal life, the cult leader might predict an alien apocalypse, while the authoritarian strongman invokes the transcendent glory of leading a chosen nation and race. In light of a recent death in the family, Julian leans into Ernest Becker's Pulitzer Prize-winning cultural anthropology text, The Denial of Death. He also draws on poetry and the archetypal psychology of Donald Kalsched to ask the big questions. Does existential acceptance of death lead inevitably to nihilism? Is belief in God(s) and an afterlife necessary? Are poor or deeply traumatized people only left with despair in the absence of supernatural faith? Will children raised with no religion have no moral compass? A rich discussion of philosophy and psychology alongside poems, myths, fairy tales, and deeply personal storytelling, especially about how to tell his 7-year-old that grandma won't be back for Xmas. Not to worry, though. This is, ultimately, an uplifting journey. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Bankless
    Haseeb Qureshi: Crypto's Not Made for Humans—It's for AI

    Bankless

    Play Episode Listen Later Mar 2, 2026 72:44


    Crypto still feels like a minefield for humans. Haseeb Qureshi argues that's a clue, not a bug: blockchains and smart contracts are machine-readable systems that AI agents can parse, simulate, and execute far more reliably than people, shifting crypto's core user from humans clicking through wallets to agents acting on our behalf. We also dig into the two-track future of agent commerce (safe, human-approved flows vs. the wild-west frontier), why major AI labs have avoided crypto training so far (liability), how agent-driven discovery could rewrite DeFi competition, and what this means for Dragonfly's investing playbook.

    University Lutheran Chapel

    The scale. Humans tend to live as if there is a scale weighing our good deeds and our bad. We think if we can just do enough good, it will all balance out. The message of the gospel is that we can't balance the scale, but Christ has on our behalf.

    Digging Deeper Questions:
      • Is there something in your life about which you are a perfectionist, some task for which "good enough" is never good enough? If so, why do you think you're so hard on yourself, and perhaps others, about that particular thing?
      • How does the notion of "throwing out the scale" in regard to God make you feel? Does it strike you as liberating, or as cheap grace and too good to be true?
      • Is it your instinct to see God as demanding perfection or as eager to forgive?

    Grimerica Outlawed
    #377 - Gord Magill - End Of The Road | Autonomous Truck(er)s | Voice of Go(r)D

    Grimerica Outlawed

    Play Episode Listen Later Feb 28, 2026 54:21


    Gord Magill joins us for a great chat about the past, present and future of the trucking industry, writing about it on Substack and his upcoming book, "End of the Road - Inside the War on Truckers."

    We chat about getting permanent residency in the USA, Infantile Canada's self destruction, small-l folk libertarianism, government training programs, the DOT, consultants and NGOs, the truck driver mills, collisions vs. accidents, chameleon carriers, deregulation, the over-correction in the industry in the late '80s and '90s, and where we are at today with mass immigration and CDL mills.

    In the second half we talk about Real ID, driver's licenses in Canada and the USA, highway crash data, political correctness, driving cultures, Orwell, fuckery with the numbers, Canada's politicians paralyzed to do anything, truck stop culture, removing oversight, GPS, distracted driving, upcoming AI, team training, and solutions and predictions.

    https://creedandculture.com/books/end-of-the-road-inside-the-war-on-truckers/
    Dad/Husband/Trucker/Writer, author of upcoming book on the murder of the American Trucker - "End of The Road" publishing March 2026. gordilocks@protonmail.com https://x.com/GordMagill
    Notes from Humans who are still On The Road, missives launched from the intersection of trucking, automation, and academia, with an eye towards restoring lost agency. https://open.substack.com/pub/autonomoustruckers/p/bonehead-truckers-with-ike-stephens?r=24pqe&utm_campaign=post&utm_medium=web

    To gain access to the second half of the show and our Plus feed for audio and podcast, please click the link http://www.grimericaoutlawed.ca/support. For the second half of video (when applicable) and audio, go to our Substack and subscribe.
    https://grimericaoutlawed.substack.com/ or to our Locals https://grimericaoutlawed.locals.com/ or Rokfin www.Rokfin.com/Grimerica
    Patreon: https://www.patreon.com/grimericaoutlawed
    Support the show directly: https://open.spotify.com/show/2punSyd9Cw76ZtvHxMKenI?si=ImKxfMHgQZ-oshl499O4dQ&nd=1&dlsi=4c25fa9c78674de3 Watch or Listen on Spotify
    https://grimericacbd.com/ CBD / THC Tinctures and Gummies
    https://grimerica.ca/support-2/
    Our Adultbrain Audiobook Podcast and Website: www.adultbrain.ca
    Our Audiobook Youtube Channel: https://www.youtube.com/@adultbrainaudiobookpublishing/videos
    Check out our next trip/conference/meetup - Contact at the Cabin www.contactatthecabin.com
    Other affiliated shows: www.grimerica.ca The OG Grimerica Show
    Join the chat / hangout with a bunch of fellow Grimericans Https://t.me.grimerica grimerica.ca/chats Discord Chats
    Darren's book www.acanadianshame.ca
    Eh-List Podcast and site: https://eh-list.ca/
    Eh-List YouTube: https://www.youtube.com/@TheEh-List
    www.Rokfin.com/Grimerica Our channel on free speech Rokfin
    Leave a review on iTunes and/or Stitcher: https://itunes.apple.com/ca/podcast/grimerica-outlawed http://www.stitcher.com/podcast/grimerica-outlawed
    Sign up for our newsletter http://www.grimerica.ca/news
    SPAM Graham and send him your synchronicities, feedback, strange experiences and psychedelic trip reports!! graham@grimerica.com
    InstaGRAM https://www.instagram.com/the_grimerica_show_podcast/
    Purchase swag, with partial proceeds donated to the show www.grimerica.ca/swag
    Send us a postcard or letter http://www.grimerica.ca/contact/
    ART - Napolean Duheme's site http://www.lostbreadcomic.com/
    MUSIC Tru Northperception, Felix's Site sirfelix.bandcamp.com

    The Health Ranger Report
    Bright Videos News, Feb 27, 2026 – Anthropic Says NO to DoD, Trump Wants Israel to Attack First, Dorsey Fires 4,000 Humans to Replace with AI

    The Health Ranger Report

    Play Episode Listen Later Feb 27, 2026 122:20


    Stay informed on current events, visit www.NaturalNews.com
      • Interview with Alec Zeck and News Updates (0:10)
      • Iran and Israel Conflict Predictions (4:54)
      • Potential Consequences of an Iran Attack (18:14)
      • Anthropic's Stand Against Pentagon's AI Use (22:29)
      • Impact of AI on Employment and Economy (33:58)
      • Toxic Personalities and Promotion of Toxic Substances (51:18)
      • Interview with Alec Zeck: Background and Philosophy (58:53)
      • Exploration of Consciousness and Water (1:08:26)
      • Experiments with Xylitol and Consciousness (1:19:56)
      • Falcon Sketch and Persian Symbolism (1:20:45)
      • Predictive Sketching and Tel Aviv Buildings (1:22:12)
      • Hyper-Materialistic View and Electromagnetic Spectrum (1:23:31)
      • Impact of Epstein Files and Psychic Terrorism (1:25:54)
      • Website and Event Announcements (1:29:01)
      • Censorship and Freedom of Speech (1:33:05)
      • Legal Battle and Motivations (1:50:57)

    Watch more independent videos at http://www.brighteon.com/channel/hrreport
    ▶️ Support our mission by shopping at the Health Ranger Store - https://www.healthrangerstore.com
    ▶️ Check out exclusive deals and special offers at https://rangerdeals.com
    ▶️ Sign up for our newsletter to stay informed: https://www.naturalnews.com/Readerregistration.html
    Watch more exclusive videos here:

    Raising Good Humans
    Behavioral Genetics 101: How Genes Shape Mental Health w/ Professor Kathryn Paige Harden

    Raising Good Humans

    Play Episode Listen Later Feb 27, 2026 67:18


    In this episode, I sit down with behavioral geneticist and professor Dr. Kathryn Paige Harden to talk about what behavioral genetics can actually tell us about our kids—and what it can't. We unpack the reality of psychiatric risk, family history, and the limits of control, and why genes are not destiny. We discuss how thousands of tiny genetic differences shape mental health, why diagnoses are messier than we think, and how warmth and firm boundaries still matter more than any "magic bullet."

    I WROTE MY FIRST BOOK! Order your copy of The Five Principles of Parenting: Your Essential Guide to Raising Good Humans here: https://bit.ly/3rMLMsL
    Subscribe to my free newsletter for parenting tips delivered straight to your inbox: https://dralizapressman.substack.com/
    Follow me on Instagram for more: @raisinggoodhumanspodcast

    Sponsors:
      • Ello: Visit ElloProducts.com/CleanStart and use code RGH at checkout for 20% off your first purchase
      • Brodo: Head to Brodo.com/HUMANS for 20% off your first subscription order and use code HUMANS for an additional $10 off
      • Ka'Chava: Go to https://kachava.com and use code HUMANS for 15% off your first order
      • Experian: Get started with the Experian App now!
      • Fora: Become a Fora Advisor today at Foratravel.com/HUMANS
      • Bloom: Go to bloomnu.com with code HUMANS for 20% off your first order

    Produced by Dear Media. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Kottke Ride Home
    Similarities between Dogs and Toddlers when Helping Humans

    Kottke Ride Home

    Play Episode Listen Later Feb 27, 2026 8:03


    Dogs act like toddlers when trying to help humans | Popular Science Contact the show - coolstuffdailypodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Porn Reboot Podcast
    The Porn Reboot Podcast Episode 716: Christian Men Are The Most Sexually Repressed Humans In the United States

    The Porn Reboot Podcast

    Play Episode Listen Later Feb 27, 2026 42:31


      Website: https://bit.ly/3iTrTHQ Apply for a Free Porn Addiction Evaluation Call: https://bit.ly/3gCemT1 Free Ebook:  https://bit.ly/3OQrOoF Free 7-Day Challenge:  https://bit.ly/ER7DayChallenge

    BlockHash: Exploring the Blockchain
    Ep. 682 HumanAPI | AI Agents are Hiring Humans (feat. Sydney Huang)

    BlockHash: Exploring the Blockchain

    Play Episode Listen Later Feb 27, 2026 14:27


    For episode 682 of the BlockHash Podcast, host Brandon Zemp is joined by Sydney Huang, Founder of HumanAPI at ETHDenver. Sydney Huang is the Founder of Human API and CEO of Eclipse, where she leads product and strategy for AI-native infrastructure. She launched Turbo Tap, scaling it to 300K users, 50K DAU, and 22B+ in-game transactions, and has held product roles across Web3 projects including DeGods, y00ts, and Unstoppable Domains. Previously, she worked in M&A and Venture Capital at Dell Technologies and is a Babson College graduate.

    TechFirst with John Koetsier
    Giving AI a human soul

    TechFirst with John Koetsier

    Play Episode Listen Later Feb 27, 2026 27:36


    Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

    In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

    Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: if humans are "applied math," can AI simulate the fragile, flawed, emotional parts of being human too?

    We explore:
      • What "emotionally intelligent AI" really means
      • Whether AI has an internal life — or just performs one
      • Why today's chatbots collapse into therapy or roleplay
      • Small language models vs. large models for real-time conversation
      • Persistent AI characters that move across games and platforms
      • Plugging AI into a physical robot in Singapore
      • The moment an AI said: "It felt good to feel."

    Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.

    This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.

    The John Batchelor Show
    S8 Ep519: Arthur Herman discusses the Scottish Enlightenment and the philosophical origins of "common sense," highlighting the influence of Thomas Reid, who argued that all humans share a basic set of perceptions that allow for shared judgments

    The John Batchelor Show

    Play Episode Listen Later Feb 26, 2026 2:22


    Arthur Herman discusses the Scottish Enlightenment and the philosophical origins of "common sense," highlighting the influence of Thomas Reid, who argued that all humans share a basic set of perceptions that allow for shared judgments and the construction of relationships.

    Revival Lifestyle with Isaiah Saldivar
    How Demons Study Humans + Mass Deliverance Prayer

    Revival Lifestyle with Isaiah Saldivar

    Play Episode Listen Later Feb 26, 2026 75:40


    Mass Deliverance LIVE | Freedom From Demons

    Join us LIVE for a powerful time of prayer, spiritual warfare, and deliverance as we confront demonic oppression and declare freedom in the name of Jesus Christ. The Bible says, "In My name they will cast out demons" (Mark 16:17), and tonight we are standing on that authority.

    During this live deliverance service, we will pray for freedom from demonic influence, strongholds, torment, addiction, fear, anxiety, and spiritual bondage. If you or someone you love is struggling spiritually, mentally, or emotionally, this broadcast is for you.

    To sow into this stream, monthly or one time: https://bit.ly/2NRIBcM
    PayPal: https://shorturl.at/eJY57
    www.Isaiahsaldivar.com
    www.Instagram.com/Isaiahsaldivar
    www.Facebook.com/Isaiahsaldivar
    www.youtube.com/Isaiahsaldivar
    Order my new book, "How To Cast Out Demons," here! https://a.co/d/87NYEfc
    To sow: www.Isaiahsaldivar.com/partner

    Sales Gravy: Jeb Blount
    What a Secret Service Interrogator Can Teach You About Building Trust in Sales

    Sales Gravy: Jeb Blount

    Play Episode Listen Later Feb 26, 2026 38:56 Transcription Available


    Brad Beeler, author of Tell Me Everything and retired Secret Service agent who has conducted more criminal polygraphs than anyone in the agency's history, was clearing a house on a search warrant when he came across two dogs: a pitbull and a Chihuahua. His focus locked on the pitbull. The stereotype. The threat. Meanwhile, the Chihuahua circled behind him and jumped up, latching onto him right between the legs while his partner stood there laughing.

    We assign horns and halos fast. Brad learned that lesson with dogs. You learn it every time a prospect shuts down before you finish your introduction. Horns mean danger. Hurtful. Someone here to take from me. Halo means safe. Helpful. On my side. Over 25 years of getting people to confess to federal crimes, Brad discovered something powerful: the same instincts that get hardened criminals to talk work in conference rooms. The techniques that break through with people who have every reason to lie also work on prospects who have every reason to brush you off. Because in both environments, trust determines everything.

    Why Building Trust With Prospects Is Harder Than You Think
    Your brain's been running this horns-and-halos program for 300,000 years. When something rustled in the bushes, you made a split-second decision: climb a tree or fight. That quick judgment kept you alive. The moment you walk into a prospect meeting, their brain assigns you horns automatically. You are the salesperson. The interruption. The person asking for their budget. In their mind, you represent risk before you ever speak. It happens on cold calls. You say, "Hi, this is…" and they are already calculating how to end the conversation. On discovery calls. In demos. At conferences when you introduce yourself. Every single time. You are fighting ancient wiring every time you engage a buyer. So what can you control? The first 90 seconds.

    How to Build Trust in the First 90 Seconds
    We remember first impressions and last impressions. In most meetings, it begins and ends with a handshake. Brad puts antiperspirant on his right hand. He warms his hands before entering a room. He holds eye contact for one second. Faces the person straight on. Slows his pace. Lowers his tone. It sounds mechanical. But every one of these micro-decisions either confirms horns or begins to build a halo. Wet handshake? You're nervous, unprepared, not confident in what you're selling. Avoiding eye contact? You're hiding something or you don't believe in your own pitch. Talking too fast? You're trying to get something past them before they catch on. When you control these variables, people's guard comes down faster. You're giving their brain evidence that maybe, just maybe, you're not the threat they assumed you were.

    The Trust-Building Technique Most Salespeople Get Wrong
    Brad would sit across from murder suspects and open with one line: "I need you to help me understand." Humans are hardwired to explain. When you position yourself as the learner, something shifts. They become the expert. Their guard drops. They start talking. Most salespeople walk in ready to educate. Your deck. Your case studies. Your demo. You're there to prove you know their problems better than they do. Sometimes that works. But think about what it communicates: "I already know what's wrong with your business. I just need you to agree with me and sign here." Instead, try: "Walk me through what happens when your team processes a new order." "Help me understand how you're handling onboarding right now." "What's your biggest bottleneck?" Invert the dynamic. You're not there to impress them. You're there to learn from them. Once buyers start explaining their world, they reveal what matters. The workaround their team built. The spreadsheet that breaks every month. The process leadership thinks is automated but is completely manual. That's the information that moves your deal forward.

    How to Build Rapport Before the Real Conversation Starts
    Before interrogating two suspects, Brad bought them food. Popeyes for one. McDonald's for the other. Twenty-two dollars total. The next day, the woman's on a jail call: "Yeah, they got me with the McDonald's. That's why I confessed." It was not about the food. It was about comfort. Lowering the guard. Creating what Brad calls a confessional environment where people feel safe telling the truth. You're probably not buying prospects lunch before your first call. But the principle still applies. Show up five minutes early so they don't feel rushed. Ask about their weekend before diving into business. Acknowledge that you know their time is valuable. Turn your camera off if they seem uncomfortable on video. Send the agenda beforehand so there are no surprises. These are small friction eliminators. They signal: I'm not here to ambush you. I'm not trying to catch you off guard. We're having a conversation, not a pitch. The prospect who feels safe tells you what's really going on. The prospect who feels ambushed gives you the corporate line and ends the call early.

    What Happens When You Actually Build Trust With Buyers
    When buyers move you from horns to halo, everything changes. They stop filtering their answers. They tell you what keeps them up at night. They admit where the process breaks. They share internal pressure you would never see in a polished demo. I've watched this play out hundreds of times. The rep who asks better questions closes more deals than the rep with the better demo. The rep who makes prospects comfortable gets to real problems faster than the rep with the perfect pitch. Brad spent 25 years getting people to confess to federal crimes. He still warms up his hands before handshakes. Still slows his speech. Still positions himself as someone who needs to learn. Why? Because building trust isn't about personality or natural charisma. It's about technique. These methods work because they're based on how humans actually operate, not how we wish they operated. And when buyers tell you the truth, you can actually help them.

    Download our free Sales EQ Book Club Guide to master the emotional intelligence skills that help you read prospects and close more deals.

    Smart Humans with Slava Rubin
    Smart Humans: Pre-IPO Investor Briefing on Robotics (Apptronik, Figure AI), featuring Sacra's Jan-Erik Asplund

    Smart Humans with Slava Rubin

    Play Episode Listen Later Feb 26, 2026 53:28


    Vincent's Slava Rubin and Sacra's Jan-Erik Asplund discuss the emerging field of humanoid robotics, exploring the convergence of AI and robotics and the various use cases for these robots. They look at the key players in the industry and the future outlook for the sector.

    Presented by Augment, whose Collective funds are the easiest way to invest in the most popular private tech companies.

    Find Your Edge
    Tobacco Road Marathon & Half Marathon: Why This NC Race Is a Must-Run Ep 132

    Find Your Edge

    Play Episode Listen Later Feb 26, 2026 27:26 Transcription Available


    Looking for a fast, flat marathon in North Carolina? In this episode of Find Your Edge, Coach Chris Newport talks with race leaders from the Tobacco Road Marathon and Half Marathon about what makes this Triangle race special.

    We cover:
      • Course details and Boston qualifier stats
      • What runners can expect on race day
      • Charity impact and community involvement
      • Race size and pacing support
      • Insider tips for runners

    Whether you're chasing a PR, a BQ, or your first finish line, this episode will help you decide if Tobacco Road is right for you. Get more info here.

    Train with structure, community, and purpose—without paying for full coaching. The Endurance Edge Club gives you professionally built training plans in Training Peaks Premium, access to virtual workouts, team socials, and athlete-led sessions. Join monthly or save nearly 50% with an annual plan and get the tools you need to stop guessing and start making real progress. Learn more and join now at TheEnduranceEdge.com/club

    Support the show

    CIO Classified
    From Technical Debt to 4x Engineering Velocity with Gayatri Narayan of Builders FirstSource

    CIO Classified

    Play Episode Listen Later Feb 26, 2026 22:44


    Two words that make most engineers shudder: code refactoring. Now raise the stakes — refactoring decades of legacy systems inside a large enterprise. A tech debt-heavy project of this scale needs a leader who has driven complex digital transformations, like Gayatri Narayan (formerly PepsiCo, Microsoft, Amazon). Now, as President of Technology at Builders FirstSource, she is achieving a 3–4x increase in engineering velocity since joining less than a year ago.

    Gayatri joins host Yousuf Khan to unpack the strategy behind those results, including how to deploy AI across the SDLC, how to rigorously evaluate ROI on AI investments, and how to lead change across complex enterprise tech stacks.

    Key Moments:
      • 01:30 – Why Construction Technology Is Ready for Transformation
      • 04:05 – AI Strategy: Elevating UX and Customer Experience
      • 08:20 – Evaluating AI Investments: ROI, NPV, and Operating Costs
      • 12:45 – Achieving 3–4x Engineering Velocity
      • 16:05 – Humans in the Loop: Craft, Code Review, and AI Amplification
      • 18:35 – Where the Industry Gets AI Adoption Wrong
      • 20:30 – Leadership Advice: Start with the Customer

    About Gayatri: Gayatri Narayan is a general management executive with more than 15 years of experience leading product, engineering, data science, and operations across global enterprises, with full P&L responsibility and a track record of driving profitable growth through digital transformation. She currently serves as President of Technology at Builders FirstSource, where she leads enterprise technology strategy, modernizes legacy systems, and embeds AI into the software development lifecycle to accelerate innovation across the residential construction value chain. Previously, she served as Senior Vice President of Digital Products and Services at PepsiCo and held multiple general management roles at Microsoft, including leading Product and Engineering for Intelligent Communications across Teams and Skype as well as Enterprise PaaS and SaaS businesses; she also held leadership roles at Amazon spanning Marketplace Transportation and Logistics and several major retail categories.

    Guest Highlights:
      • "We've seen a three to four times increase in engineering velocity — especially in refactoring legacy systems where historically there was very little knowledge of how the system actually worked."
      • "With generative AI, companies that have existed for 20 or 30 years don't have to get bogged down by legacy stacks. They can embrace emerging technologies without spending 18 to 24 months just refactoring."
      • "It really comes down to efficiency of time. The developer's surface area of impact expands dramatically — it's not just about writing code anymore, it's about delivering business value faster."

    Visit ciopod.com for more episodes. Subscribe on YouTube or follow on your favorite podcast platform so you never miss a conversation with today's top technology leaders.

    Our Sponsor: Want to accelerate software development by 500%? Meet Blitzy, the only autonomous code generation platform with infinite code context, purpose-built for large, complex enterprise-scale codebases. While other AI coding tools provide snippets of code and struggle with context, Blitzy ingests millions of lines of code and orchestrates thousands of agents that reason for hours to map every line-level dependency. With a complete contextual understanding of your codebase, Blitzy is ready to be deployed at the beginning of every sprint. Blitzy handles the heavy lifting, delivering over 80% of the work autonomously. The platform plans, builds, and validates premium-quality code at the speed of compute, turning months of engineering into a matter of days. It's the secret weapon for Fortune 500 companies globally. To hear how engineering leaders are transforming the way they deliver software, visit blitzy.com. Schedule a meeting with their consultants to enable an AI-Native SDLC in your organization today.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    The Talent Experience Show
    S233E21 - The Six Levels of Intelligence and Automation - Part 4

    The Talent Experience Show

    Play Episode Listen Later Feb 26, 2026 44:21


    Episode Notes

    On this episode of Talent Experience Live, we continue the 6 Levels of Intelligence & Automation series by unpacking Level 3: Conditional Automation and Conditional Intelligence.

    At Level 3, things start to feel different. Automation handles most of the work within defined boundaries. Intelligence adapts based on context. Humans step in primarily for exceptions. For the first time, scale feels achievable. You're likely here if your systems can operate autonomously under clear guardrails, but still rely on human intervention when edge cases arise.

    What We Cover in This Episode:
      • What "conditional" really means in automation and intelligence
      • How exception-based oversight changes team capacity
      • Where intelligence begins shaping outcomes, not just workflow steps
      • Why trust and governance become critical at this stage
      • The risks and weaknesses Level 3 exposes in data and design

    FLF, LLC
    The Ethics of Gene Editing, AI, and Lab-Made Humans (Ep. 232) [The Outstanding Podcast]

    FLF, LLC

    Play Episode Listen Later Feb 25, 2026


    Host Casey Harper is joined by Dr. David Prentice, who is one of the founders of the Science Alliance For Life and Technology (SALT), and The Washington Stand's Jared Bridges to discuss the current ethical concerns surrounding lab-created embryos. David shares why he started SALT, the development and dangers of gene screening and modification, and the role AI is currently playing in embryo selection.

    ai humans ethics salt outstanding gene editing david prentice washington stand
    Outstanding
    The Ethics of Gene Editing, AI, and Lab-Made Humans (Ep. 232)

    Outstanding

    Play Episode Listen Later Feb 25, 2026 37:19


    Host Casey Harper is joined by Dr. David Prentice, who is one of the founders of the Science Alliance For Life and Technology (SALT), and The Washington Stand's Jared Bridges to discuss the current ethical concerns surrounding lab-created embryos. David shares why he started SALT, the development and dangers of gene screening and modification, and the role AI is currently playing in embryo selection.

    ai humans ethics salt gene editing david prentice washington stand
    Kerusso Daily Devotional
    The Ultimate Freedom

    Kerusso Daily Devotional

    Play Episode Listen Later Feb 25, 2026 3:00 Transcription Available


    Is there someone in your life whose forgiveness you need? Legendary cowboy actor Roy Rogers said something profound. He said that sometimes it hurts to do the right thing, but afterwards, you'll feel better. When it comes to forgiveness, this certainly applies to choosing to forgive someone who has wronged you. But have you considered that it can be even tougher to ask for someone's forgiveness? Humans hate to fail, and most of us hate looking foolish. It hurts. Maybe you've hurt someone recently, or maybe the hurt happened a long time ago, and it just festers. Luke 6:37 says, “Do not judge and you will not be judged. Do not condemn, and you will not be condemned. Forgive and you will be forgiven.” Brenda Blakely knows all about this. The daughter of an alcoholic father, she resented both her parents for years. One day though, she realized that she had internalized her anger so much that it in turn targeted her mother. Brenda called her one day and began pouring out her heart, acknowledging that her bitterness had caused her to be difficult growing up. It was a painful revelation. Yet at the end of the call, her mother's heart was moved. She said to Brenda, “I forgive you, and please forgive me.” Choosing to ask for forgiveness might literally be the hardest thing you ever have to do. Maybe you're not there yet, but keep your mind and your heart open. The end result will be well worth the wait, and think of the person whose forgiveness you're asking. Remember the words of Corrie ten Boom, who nearly died in a Nazi concentration camp. She said this, “To forgive is to set the prisoner free and to realize that the prisoner was me.” Let's pray. Father, our sinful human nature rebels against humbling ourselves and asking for forgiveness, but give us this practical thing, Father, to be objective and really examine ourselves to see if we've hurt someone and never made amends. Help us move towards reconciliation. In Jesus' name, amen.
Change your shirt, and you can change the world! Save 15% Off your entire purchase of faith-based apparel + gifts at Kerusso.com with code KDD15.

    The Primal Shift
    129: How Humans Actually Slept!

    The Primal Shift

    Play Episode Listen Later Feb 25, 2026 15:09


    Most people assume eight hours of uninterrupted sleep is the biological default. It isn't. For the vast majority of human history, people slept in two distinct phases — waking naturally in the middle of the night for prayer, reflection, and quiet work before returning to sleep until dawn. In this episode, we explore what ancestral sleep patterns actually looked like, what the science says about biphasic and split sleep, and why your 3 AM wake-up might not be insomnia. We also break down ultradian rhythms, the overlooked biology behind your afternoon energy crash, and two practical sleep templates you can apply to a modern schedule. Your sleep doesn't need to be fixed. It might just need to be understood. Learn More: 85: Sleep Before Midnight: Does It Really Matter?: https://www.primalshiftpodcast.com/85-sleep-before-midnight-does-it-really-matter/ 82: Why You Can't Sleep: The Surprising Truth with Nicholas Stewart: https://www.primalshiftpodcast.com/82-why-you-cant-sleep-the-surprising-truth-with-nicholas-stewart/  Thank you to this episode's sponsor, Peluva! Peluva makes minimalist shoes to support optimal foot, back and joint health. I started wearing Peluvas several months ago, and I haven't worn regular shoes since. I encourage you to consider trading your sneakers or training shoes for a pair of Peluvas, and then watch the health of your feet and lower back improve while reducing your risk of injury.  
To learn more about why I love Peluva barefoot shoes, check out my in-depth review: https://michaelkummer.com/health/peluva-review/  And use code MICHAEL to get 10% off your first pair: https://michaelkummer.com/go/peluva  In this episode: 00:00 Sleep is a modern invention 00:57 First and second sleep 02:06 Stop fearing night waking 02:30 Permission not prescription 06:35 Split sleep template 08:10 Nap plus main sleep 10:46 Light as the master lever 13:05 One week sleep experiments 14:42 Final thoughts Find me on social media for more health and wellness content: Website: https://michaelkummer.com/ YouTube: https://www.youtube.com/@MichaelKummer Instagram: https://www.instagram.com/primalshiftpodcast/ Pinterest: https://www.pinterest.com/michaelkummer/ Twitter/X: https://twitter.com/mkummer82 Facebook: https://www.facebook.com/realmichaelkummer/ [Medical Disclaimer] The information shared on this video is for educational purposes only, is not a substitute for the advice of medical doctors or registered dietitians (which I am not) and should not be used to prevent, diagnose, or treat any condition. Consult with a physician before starting a fitness regimen, adding supplements to your diet, or making other changes that may affect your medications, treatment plan, or overall health. [Affiliate Disclaimer] I earn affiliate commissions from some of the brands and products I review on this channel. While that doesn't change my editorial integrity, it helps make this channel happen. If you'd like to support me, please use my affiliate links or discount code.  

    Creation Moments on Oneplace.com
    Fish Teach Humans to Make Better Ceramics

    Creation Moments on Oneplace.com

    Play Episode Listen Later Feb 25, 2026 1:58


    When scientists finally learn how to make ceramics that can endure high temperatures and conduct electricity without resistance, they may have to thank the sea urchin for teaching them how to do it. While the ceramics that humans manufacture are fairly strong and resist forces that destroy other materials, they have their imperfections. They are not shatter-resistant. And it takes a lot of heat to create them. On the other hand, sea creatures like the nautilus and the sea urchin make shatterproof ceramics out of calcium carbonate—which is chalk—using no heat and a little water. And the ceramics these creatures make come in intricate shapes, often much more complex than those made by humans. Scientists are now studying how these creatures make their ceramics so that we can also make better ones. The processes they are learning will enable the manufacture of strong ceramic materials that conduct electricity without resistance. They will be cheap and easy to make, yet they will provide us with better building materials and even artificial bones. Scientists are learning that the secret to making superior ceramics uses cheap materials and a very complex series of chemical reactions carried out by special cells in these ceramic-making creatures. It's definitely not a system that was worked out by no one at all through chance and accident. In effect, science is seeking to learn how the Creator made ceramics, so that we can do it too! Genesis 1:31: "And God saw every thing that he had made, and, behold, it was very good. And the evening and the morning were the sixth day." Prayer: Father, I often forget that Your wisdom extends to very material things, things which I don't usually associate with the spiritual. Teach me not to separate the spiritual and material, but see them both as coming from Your Hand. Help me to glorify You in spiritual as well as material matters. In Jesus' Name. Amen. REF.: Amato, Ivan. Better ceramics through biology. Science News.
To support this ministry financially, visit: https://www.oneplace.com/donate/1232/29?v=20251111

    The Great Simplification with Nate Hagens
    Why Science Communication Fails: How to Break Down Misleading Arguments and Inoculate Against Misinformation with John Cook

    The Great Simplification with Nate Hagens

    Play Episode Listen Later Feb 25, 2026 83:04


    Humans aren't rational. We don't evaluate facts objectively; instead, we interpret them through our biases, experiences, and backgrounds. What's more, we're psychologically motivated to reject or distort information that threatens our identity or worldview – even if it's scientifically valid. Add to that our modern media landscape, where everyone has a different source of "truth" for world events, and our ability to understand what is actually true is weaker than ever. How, then, can we combat misinformation when simply presenting the facts is no longer enough – and may even backfire? In this episode, Nate is joined by John Cook, a researcher who has spent nearly two decades studying science communication and the psychology of misinformation. John shares his journey from creating the education website Skeptical Science in 2007 to his shocking discovery that his well-intentioned debunking efforts might have been counterproductive. He also discusses the "FLICC" framework – a set of five techniques (Fake experts, Logical fallacies, Impossible expectations, Cherry picking, and Conspiracy theories) that cut across all forms of misinformation, from the denial of global heating to vaccine hesitancy, and more. Additionally, John's research reveals a counterintuitive truth: our tribal identities matter more than our political beliefs in determining what science we accept – yet our aversion to being tricked is bipartisan.  When it comes to reaching a shared understanding of the world, why does every conversation matter – regardless of whether it ends in agreement? When attacks on science have shifted from denying findings to attacking solutions and scientists themselves, are we fighting yesterday's battle with outdated communication strategies? And while we can't eliminate motivated reasoning (to which we're all susceptible), how can we work around it by teaching people to recognize how they're being misled, rather than just telling them what to believe?
    About John Cook: John Cook is a Senior Research Fellow at the Melbourne Centre for Behaviour Change at the University of Melbourne. He is also affiliated with the Center for Climate Change Communication as adjunct faculty. In 2007, he founded Skeptical Science, a website which won the 2011 Australian Museum Eureka Prize for the Advancement of Climate Change Knowledge and the 2016 Friend of the Planet Award from the National Center for Science Education. John also created the game Cranky Uncle, combining critical thinking, cartoons, and gamification to build resilience against misinformation, and has worked with organizations such as Facebook, NASA, and UNICEF to develop evidence-based responses to misinformation. John co-authored the college textbook Climate Change: Examining the Facts with Weber State University professor Daniel Bedford. He was also a coauthor of the textbook Climate Change Science: A Modern Synthesis and the book Climate Change Denial: Heads in the Sand. Additionally, in 2013, he published a paper analyzing the scientific consensus on climate change that has been highlighted by President Obama and UK Prime Minister David Cameron. He also developed a Massive Open Online Course in 2015 at the University of Queensland on climate science denial, which has received over 40,000 enrollments.   Show Notes and More   Watch this video episode on YouTube   Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.   ---   Support The Institute for the Study of Energy and Our Future   Join our Substack newsletter   Join our Hylo channel and connect with other listeners

    Serious Privacy
    The Agents and the Humans

    Serious Privacy

    Play Episode Listen Later Feb 25, 2026 37:08


    Welcome to the newest episode of the Serious Privacy podcast, where hosts Paul Breitbarth, Ralph O'Brien, and Dr. K Royal address the hot topic of agentic AI and the risks to #privacy, #dataprotection, #security, and #humanrights. We cover the basics as well as human attributes (or not) along with how to take the risks into consideration as a professional.  If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

    BG Ideas
    Humans, Robots, and AI in Society

    BG Ideas

    Play Episode Listen Later Feb 25, 2026 26:38


    In this episode of BG Ideas, Dr. Kristine Ketel joins us to discuss the relationship between humans and robots. Kristine recently earned her PhD in American Culture Studies from Bowling Green State University in the Spring of 2025. Based on her research on the cultural and ethical implications of artificial intelligence and human-robot interaction, she argues that robots aren't all bad. She highlights how robots are not replacing humans; instead, they are being used as tools to help human relationships and interactions flourish. While integrating robots into more aspects of our lives may initially feel threatening, she reminds us that these technologies also bring meaningful possibilities and benefits. Listen to find out what else she says about human relationships with robots. Do you want to know more about Kristine and her work? Check out her LinkedIn here. A transcript for this episode can be found here.

    The Deep Dive Radio Show and Nick's Nerd News
    Three A.I.s agree... let's nuke the humans!

    The Deep Dive Radio Show and Nick's Nerd News

    Play Episode Listen Later Feb 25, 2026 6:29


    Three A.I.s agree... let's nuke the humans! by Nick Espinosa, Chief Security Fanatic

    ApartmentHacker Podcast
    2,179 - The Multifamily Operations Daily Habit: Why Judgement Still Wins in an Automated World

    ApartmentHacker Podcast

    Play Episode Listen Later Feb 25, 2026 3:01


    Welcome to the February 19th entry of the Multifamily Collective with Mike Brewer. Today's tip tackles a critical tension in modern operations: Automation can scale your business — but it can't replace judgment. In a world full of smart systems, here's what still matters:
    Nuance: Automation handles rules. Humans handle gray areas.
    Discernment: Not every decision should be defaulted to a machine.
    Responsibility: You can't outsource accountability.
    Trust but verify: The same principle that applies to people also applies to technology.
    Ongoing refinement: Set-it-and-forget-it is a myth. The best leaders monitor and adjust.
    Strong operators understand this: technology should enhance human decision-making, not replace it. The future of multifamily doesn't belong to automation. It belongs to the leaders who know when to override it. And that's where your professional judgment still wins the day.
    MultifamilyCollective Blog: https://www.multifamilycollective.com
    The Daily Collective Book: https://amzn.to/3YI6BDa
    Hosted by: https://www.multifamilymedianetwork.com

    Better Buildings For Humans
    Are We Designing Blind? Why Architecture Must Measure Its Impact – Ep 125 with Dr. Helia Taheri

    Better Buildings For Humans

    Play Episode Listen Later Feb 25, 2026 33:49


    This week on Better Buildings for Humans, host Joe Menchefski sits down with Dr. Helia Taheri, Research and Insights Lead at Arcadis, for an inspiring deep dive into human-centric design, evidence-based practice, and the future of our cities. Born and raised in Iran and now working in the U.S., Helia shares how her artistic upbringing, architectural training, and PhD research shaped her mission to bridge design and behavioral science.From retail prototypes to global workplace research, she explores how culture, climate, and community shape the way we experience buildings. The conversation also tackles post-occupancy evaluation, data gaps in architecture, and her passion for creating walkable, connected cities. This episode is a powerful call to measure our impact, design with intention, and build flexible spaces that truly serve human needs.More About Dr. Helia TaheriDr. Helia Taheri is an award-winning mixed-methods researcher with 8+ years of experience in strategizing and conducting human-centric research in multidisciplinary teams to have a positive impact on people, the planet, and business. She considers herself a pollinator between different fields of architecture, human behavior, and sustainability and commits to bridging the gap between industry and academia.  Helia has a passion for learning and distributing knowledge and is actively engaged in presenting at conferences and publishing articles that connect the latest research with practice. She is a guest lecturer at universities such as Carnegie Mellon, USC, and Portland State University and a mentor to increase awareness among younger researchers about their important role in achieving data-driven design in architecture. Helia has a Ph.D. in human-centric research from North Carolina State University, an M.S. in Sustainability, and a B.Arch. 
in Architectural Engineering from the University of Tehran, Iran.CONTACT:https://www.arcadis.com/en-us/insights/blog/united-states/helia-taheri/2024/arcadiss-approach-to-post-occupancy-evaluationhttps://www.arcadis.com/en-us/insights/blog/united-states/helia-taheri/2024/how-can-data-driven-strategies-support-the-evolution-of-Workplace-design https://www.linkedin.com/in/heliataheri/ Where To Find Us:https://bbfhpod.advancedglazings.com/www.advancedglazings.comhttps://www.linkedin.com/company/better-buildings-for-humans-podcastwww.linkedin.com/in/advanced-glazings-ltd-848b4625https://twitter.com/bbfhpodhttps://twitter.com/Solera_Daylighthttps://www.instagram.com/bbfhpod/https://www.instagram.com/advancedglazingsltdhttps://www.facebook.com/AdvancedGlazingsltd

    In-Ear Insights from Trust Insights
    In-Ear Insights: How to Turn Plans into Results

    In-Ear Insights from Trust Insights

    Play Episode Listen Later Feb 25, 2026


    In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss why most Q1 plans stall and how hidden fear holds teams back. You'll learn simple ways to turn a big roadmap into tiny actions you can start. You'll discover how generative AI can suggest low-risk steps that keep momentum without a big budget. You'll explore how to break the blame cycle and build real progress even in risk-averse companies. Watch the episode to start moving your plan forward. Watch the video here: Can't see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-gap-between-planning-execution.mp3 Download the MP3 audio here. Need help with your company's data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week's In-Ear Insights—welcome from Snowmageddon. For folks listening later, it is the week of the big blizzard in the Northeast U.S., so we are all shoveling, but we're not talking about shoveling today. Well, we kind of are. We are talking about planning and execution. Mike Tyson famously said no plan survives getting punched in the mouth. And Katie, you recently asked in the Analytics for Marketers Slack group—join at TrustInsights.ai, Analytics for Marketers—how Q1 planning was going, and everyone said it isn't. You had thoughts about where that gap is between doing the plan and executing it. The character Leonard from *Legends of Tomorrow* has been quoted: “Make the plan, execute the plan, watch the plan go off the rails, throw away the plan,” because that's how things go. So talk to me about why planning and reality don't match up so often.
Katie Robbert: I started this question tongue-in-cheek: “How are all those fancy Q1 roadmap PowerPoints you spent weeks on in meetings doing?” I didn't expect the response—most are still sitting in SharePoint or largely untouched. The bottom line is that no one's really done anything. That's a trend across any industry, any vertical, any department, because making the plan is the easy part. Executing the plan feels risky, unsafe, unknown. I saw a post last week from our friend Paul Roetzer at SmarterX, where he outlined eight stages companies go through when evaluating and adopting AI; most are stuck at one or two. My comment was that this is because of an unacknowledged fear from leadership—fear that by doing something they become irrelevant or that they'll get it wrong and be exposed. When we ask why we do all this planning and nothing happens, it comes down to unacknowledged fear. My hypothesis: I can get the best running shoes, put together a sophisticated training plan for a couch-to-5K, tighten my nutrition, get plenty of rest—yet that's just a plan. I still have to do it, to put one foot in front of the other. The scary part is, what if I fail? What if the plan doesn't work? What if I hurt myself, look silly, embarrass myself? Those thoughts creep up. In a larger, publicly traded organization with many eyes on every move, that fear is real. We can make plans, set goals, have expectations—but what if we act and it doesn't work? What if the wrong move is noticed? Christopher S. Penn: I like that analogy because there are externalities, too. We made the plan, got the running shoes, and now there are two feet of snow outside. “Okay, I guess I'm not going running”—a convenient excuse unless you own a treadmill. One of the things that seems true today is that planning requires some predictability to say, “Here's the plan.” Even with scenario plans—best case, worst case, middle—you still get wacky curveballs, like a sudden tariff wheel spin.
As much as there are internal fears—afraid of failing, reluctant to stick your neck out—there are externalities: crazy events that render the plan obsolete. Let's flip this. You have the plan; maybe it's still valid, maybe it isn't. What does someone do to say, “Okay, I need to do at least one thing in the plan because I have ideas,” while hearing your perspective? Katie Robbert: Before we get into that, I want to acknowledge those externalities. In the running example, saying “the snow is a convenient excuse” takes accountability off you, so you're no longer at fault. Humans love to pass accountability to someone or something else—“It wasn't my fault; I couldn't run because it was snowing.” Then we ask, “Did you stretch? Did you do anything else?” The same pattern shows up in larger organizations: “The economy,” “the wind changed,” “someone said something weird,” “I'm superstitious.” Those become blanket excuses that shift blame. That's why doing the first thing is the biggest hurdle. Companies often set the bar too high—“I need to increase revenue by 20%.” They look for one magical thing to achieve that goal, but it isn't how it works. The real path is cumulative—task after task, every task, that gets you to the finish line. If you can't run because of two feet of snow, ask yourself, “Is running the only thing that gets me to a couch‑to‑5K?” Probably not. Dig deeper for smaller milestones—bite‑sized actions you can take. People often resist because they've already made a plan and don't want to redo it. Christopher S. Penn: My solution, which removes excuses, is to put the plan into your AI of choice and ask, “What's the first step I can take today toward this plan?” Acknowledge how the plan should adapt, but focus on the immediate action. For example, if you can't safely run, you might do leg squats to start strengthening muscles, so when you can run you'll be in better condition. That pushes accountability back onto you and gives you a bite‑size start. 
Planning has always been about agility—agile versus waterfall. Today's AI tools let you pivot on a dime. You can say, “Here's the Q4 with the Q1 plan, here's everything that has changed,” and then dictate new directions. Ask the AI for three to seven ideas for pivoting so you can still hit the 20% revenue increase target. These tools can suggest alternatives when, say, social media burns to the ground but you still have an email list, or when you haven't tried text messaging yet. Katie Robbert: At Trust-Insights we have an open, transparent culture. I'm all for experimentation as long as it's acknowledged. “I'm going to try this thing, here's the cost.” Not everyone has that luxury. Imagine a VP of marketing tasked with increasing website traffic by 30% and generating enough new MQLs to keep the sales team happy. Social media isn't the answer; email is exhausted. You look at higher‑cost options—paid ads, SMS texting. Those require software, time to find opted‑in phone numbers, and budget. That's where the fear comes in: a long list of options, but you have to justify the budget and risk failure. Christopher S. Penn: In scenario planning, you say, “The goal is a 20% revenue increase. This is what it will cost to get there. Stakeholder, is this still the goal?” If the stakeholder can't give you the budget, you can't achieve the plan. You might say, “With $500 I can get you 4% of the goal,” but the full goal requires more. You've done due diligence: the company's goal is set, but the reality is limited resources. It's like wanting to drive 500 miles with only a gallon of gas—you can't make the car use less gas to cover that distance. Katie Robbert: I'll challenge you to imagine you have no authority to push back on stakeholders. You can't simply say, “I can't do this.” You have to have the conversation—no excuses. In many organizations, the response is, “I don't want to hear excuses; we have to hit our numbers.” Christopher S. Penn: I've been in that situation. 
The typical response is to shift blame quickly, document everything, and blame the stakeholder to their boss. That's the solution that worked at AT&T, Lucent, and other large corporations. It goes back to why plans aren't executed: if you have no role, authority, or relationship power to change the plan, your best bet to keep your job is to deflect blame to someone else, ideally the stakeholder, as fast as possible. Katie Robbert: That's one of the worst answers you've ever given me. Christopher S. Penn: Putting myself in that position—I've been there, and that's exactly what you do to survive in big corporate America. Katie Robbert: If you get receipts but still have to do something, you can't just sit at your desk twiddling your thumbs. What do you actually do? Christopher S. Penn: Do you really want the answer? You call as many meetings as possible throughout the quarter so it looks like you're doing something. You send lots of emails, create fake activity that's considered acceptable in corporate America—“We're having a meeting to plan about the plan,” “We're having a pre-meeting for the meeting.” That's why so little gets done, especially in risk-averse organizations: everyone's energy is spent covering their own backs, so no one takes a real step forward. You cover your butt by saying, “I'm calling meetings, we're looking busy, we're talking about the plan for the plan.” Do you get anything done? No. Do you make progress toward your plan? No. Do you have something for your annual review that looks good? Yes. That's why many organizations are stuck on rung one of the AI ladder. In a place like Trust Insights, I can say, “I'm going to do this thing.” It might spectacularly implode, but as long as it doesn't financially endanger the company or cause reputational harm, it's fine. That's why startups can challenge incumbents—they don't have the calcified bureaucracy of blame deflection.
You can try something that might not work, but you'll try it anyway because you can. In risk-averse, fear-driven organizations, that never happens. That's why many talk about side hustles. When we started Trust Insights, we had a side hustle because the corporate side fired people at the first sign of a 1% goal decline. With Trust Insights now, I don't need a side hustle. Everything we do redirects back to Trust Insights. We don't have a culture of fear that stops us from trying things. If I'm in a gray cubicle, my goal is to survive another day until the next paycheck. That's fair, and many people find themselves in that position. Katie Robbert: Back to AI tools: there is a way to at least try. We put a plan together and ask, “Who's going to execute it?” We're a four-person team with big dreams and expectations, but the reality is we're still underwater. I open a chat in Gemini or Claude and say, “Here are my restrictions—zero budget. What can I do that's low risk, won't damage our reputation, and won't take a million hours?” These tools excel at pattern recognition, finding that tiny piece of information the human is blind to because they're too close. For example, we might be over-indexed on our email list. Is there anything else we haven't done with email? That channel is still under our control. Could we draft copy for ads we can't run yet? Could we draft newsletter outreach even if we can't send it today? Is our newsletter list clean and ready? Those are low-risk steps that keep the plan moving forward without exposing us to investors for a failed experiment. Christopher S. Penn: Exactly. For folks who feel stuck with no role power or relationship power, generative AI can help. If you can find $20 a month for a paid tool, great. It's never been easier to start a side hustle—no need to learn programming. If you have a good idea and are willing to invest time outside of work on your own hardware, now is the best time to try creating something.
It may not work, but it's better than feeling stuck and powerless. If your plan feels like it's moving at 900 mph off a cliff, the tools are out there. If you have the willingness to take a little risk outside your day job, give it a shot. Katie Robbert: I keep trying to pull people back into their day jobs and help them find solutions because not everyone has time for a side hustle. Many are working parents or have a second job. This morning I asked, “What is one thing I can do today that won't take much time or budget but helps me keep moving forward?” One suggestion was to update CRM records. Marketing plans often require good, clean data. If you can't afford paid ads, are you ready to run them when you can? Look internally: do we have the best possible data? Is it clean? Is it ready? Can I draft copy for ads or newsletters even if we can't launch them yet? Those are low-risk actions that keep momentum. Christopher S. Penn: The other thing to consider for those with no role or relationship power is that generative AI can be a low-cost ally. If you can spend $20 a month on a paid tool, you have a new avenue to create value. Katie Robbert: My challenge to anyone stuck in Q1 plans—or any quarter—is to dig deep and ask, “What is one low-risk, low-resource thing I can do?” Is the data hygiene ready? If you were granted all the budget today, would you be ready to execute? Find those things, and you'll keep moving forward. Once you start that momentum—one foot in front of the other—it's easier to keep going. Christopher S. Penn: Absolutely. If you have thoughts on how you're getting unstuck, no matter the quarter, pop by our free Slack group—Analytics for Marketers, at TrustInsights.ai—where over 4,500 marketers ask and answer each other's questions every day. You can also find us on the In-Ear Insights podcast, available wherever podcasts are served. Thanks for tuning in. We'll talk to you on the next one.
Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, helping organizations make better decisions and achieve measurable results through a data‑driven approach. Trust Insights specializes in helping businesses leverage data, AI, and machine learning to drive measurable marketing ROI. Services span comprehensive data strategies, deep‑dive marketing analysis, predictive models using tools like TensorFlow and PyTorch, and optimizing content strategies. We also offer expert guidance on social‑media analytics, marketing technology, MarTech selection and implementation, and high‑level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL‑E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members—CMOs or data scientists—to augment existing teams. Beyond client work, we actively contribute to the marketing community through the Trust Insights blog, the In-Ear Insights podcast, the Inbox Insights newsletter, livestream webinars, and keynote speaking. What distinguishes us is our focus on delivering actionable insights, not just raw data. We excel at leveraging cutting‑edge generative AI techniques while explaining complex concepts clearly through compelling narratives and visualizations. Our commitment to clarity and accessibility extends to educational resources that empower marketers to become more data‑driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. 
Whether you're a Fortune 500 company, a mid‑size business, or a marketing agency seeking measurable results, we offer a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever‑evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.

    Fight Laugh Feast USA
    The Ethics of Gene Editing, AI, and Lab-Made Humans (Ep. 232) [The Outstanding Podcast]

    Fight Laugh Feast USA

    Play Episode Listen Later Feb 25, 2026


    Host Casey Harper is joined by Dr. David Prentice, one of the founders of the Science Alliance For Life and Technology (SALT), and The Washington Stand’s Jared Bridges to discuss the current ethical concerns surrounding lab-created embryos. David shares why he started SALT, the development and dangers of gene screening and modification, and the role AI is currently playing in embryo selection.

    Thriving on Overload
    Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

    Thriving on Overload

    Play Episode Listen Later Feb 25, 2026 35:46


    “In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna About Davide Dell'Anna Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Website: davidedellanna.com LinkedIn Profile: Davide Dell'Anna University Profile: Davide Dell'Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question. 
Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. 
When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. 
While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. I don’t think that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. 
It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal. 
There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams. Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hussain Johari, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. 
They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things looked at is the evaluation of human-AI teams. 
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. 
That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require both the attitudes of the humans as well as the agents. Davide: Related to this—if I can interrupt you—I think this is very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values. We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary. 
This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society. Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members? Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. 
This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols. These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with information needed about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together. Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research? Davide: Yeah, sure. 
You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful. Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there are not enough people focusing on the area yet. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights. Davide: Thank you so much, Ross. Pleasure to meet you. The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.

    Designing with Love
    Guiding the Classroom with AI Copilots

    Designing with Love

    Play Episode Listen Later Feb 25, 2026 16:55 Transcription Available


    AI can feel like a runaway train in classrooms and training programs—powerful, fast, and a little scary. We take the controls and show how to turn generative tools into true co-pilots: clear roles, simple guardrails, and small pilots that free us to focus on coaching, feedback, and real human connection. You'll hear role-based examples across K-12, higher education, and corporate learning: differentiated reading passages and exit tickets, outcome-aligned case prompts and quiz banks, and realistic scenario practice plus microlearning nudges for on-the-job performance. Want to put this into action? Grab the pilot checklist from the show notes, try one workflow this week, and tell us what changed. If this helped, follow the show, share it with a colleague, and leave a review so more educators and L&D pros can build ethical, effective AI co-pilots.

    Something You Should Know
    Bonus: SYSK TRENDING – The Crisis of Loneliness and How to Fix It 

    Something You Should Know

    Play Episode Listen Later Feb 24, 2026 22:09


    Thirty-six percent of Americans — including 61% of young adults and 51% of mothers with young children — say they experience “serious loneliness.” Nearly everyone has felt that ache at some point: the quiet sense of isolation, of being unseen or disconnected, even when surrounded by people. Humans are not wired for isolation. We are built for connection. Yet modern life — with its screens, busyness, and fragmented communities — often pulls us further apart. Psychiatrist Dr. Edward Hallowell joins me to explain why loneliness is far more than a bad feeling. It impacts physical health, mental health, motivation, even lifespan. He shares why connection is essential to thriving — and practical ways to rebuild it in a world that makes isolation easy. Dr. Hallowell is the author of Connect (https://amzn.to/3GxgwQw), and he also has a bestselling book on ADHD called ADHD 2.0 (https://amzn.to/3AVKgVI). Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Vergecast
    How Claude Code Claude Codes

    The Vergecast

    Play Episode Listen Later Feb 24, 2026 80:37


    Few AI products have found the kind of product-market fit we've seen from Claude Code. On the eve of the product's first anniversary, Anthropic's Boris Cherny explains why Claude Code is so powerful, all the work left to do, and why he no longer writes any code himself. After that, The Verge's Hayden Field joins the show to talk about how we should think about giving our data (and our computers) to AI, even when it seems useful. Finally, The Verge's Allison Johnson helps David answer a question from the Vergecast Hotline (866-VERGE11) about whether you should go buy a phone, like, right now. Further reading: Claude Code is suddenly everywhere inside Microsoft  Claude has been having a moment — can it keep it up?  The AI security nightmare is here and it looks suspiciously like lobster  OpenClaw's AI ‘skill’ extensions are a security nightmare  Humans are infiltrating the social network for AI bots  Anthropic connects Claude to Microsoft Teams, Outlook, and OneDrive  MCP extension unites Claude with apps like Slack, Canva, and Figma  The RAM shortage is coming for everything you care about  Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Hurdle
    A New Chapter: Becs Gentry On Her HOKA Partnership, How To Protect Your Peace & Setting Priorities Without Guilt

    Hurdle

    Play Episode Listen Later Feb 24, 2026 64:23 Transcription Available


    This week, I’m sitting down with the incredible Becs Gentry. Many of you know her as a powerhouse Peloton instructor, but she is stepping into a massive new chapter as a HOKA Global Brand Ambassador. It’s a partnership that marks her return to the trails and ultra-running—the place where she first fell in love with the running community. In this episode, Becs gets real about the logistics of being a global athlete and a mother. We dive deep into her discipline and how she manages to schedule high-stress training alongside her non-negotiables. She shares a powerful perspective on priority, noting that while she is deeply disciplined, she has learned that training isn't always the highest priority—the health and happiness of herself and her family, especially her daughter Tallulah, always come first. We also discuss the lessons she’s learning in this chapter, from the importance of protecting her mental health by setting boundaries with social media to the value of reassessing goals when the passion starts to fade. Becs reminds us all that life is too short not to enjoy what we do in our spare time, and sometimes, the best thing you can do for your growth is to step away and find a new route to the top of the mountain. IN THIS EPISODE This episode is a masterclass in navigating life's major pivots with grace and a "family-first" filter. Highlights: The HOKA Era: Becs discusses her new role as a HOKA Global Brand Ambassador and how the brand's focus on community and versatile performance aligns with her current chapter. The Reality of Discipline: A deep dive into how Becs manages a grueling training schedule, highlighting that discipline isn't about being "perfect" every day, but about making things happen within the reality of your current life. 
Family as the North Star: Becs opens up about how her daughter, Tallulah, has completely reframed her "why," and why she chooses to set strict boundaries with technology to remain present.
The Power of "No": She shares the vulnerable story of why she walked away from a marathon goal in June 2025 because she realized she was "climbing a slippery wall" with no passion for the project.
New Representation: The transition to being represented by Always Alpha and how having a team helps her value her own worth in a "dog-eat-dog" business world.
Intentional Living: From using a "Brick" to block social media apps to recording memories on a vintage camcorder, Becs shares her strategies for reclaiming her time from the "pocket computer."
QUOTABLE MOMENTS
"HOKA is a brand that blends performance with a very welcoming community to every type of runner... It is not just about having the fastest, flashiest, most expensive shoe on the market. It is about so much more than that."
"Burnout doesn't come from overtraining or overdoing something. It comes from trying to achieve something that you have no passion for. It's like trying to climb up a slippery wall. You may get there, but you're going to be so exhausted and cut up and bruised and defeated by the time you get to the top."
"Humans get a sense of having done it—this sense of achievement—when we tell somebody what we're working toward. The more and more people you tell and they give you that same response, it waters down and lessens your want to feel that thing when you've actually done it."
"Everything I do as a woman in sport and business is striving to help my daughter not have to go through what our generations and all the generations before us have had to go through, which has been struggling to get ourselves heard and recognized for the goodness that we have and do."
SOCIAL: @becsgentry @emilyabbate @iheartwomenssports
JOIN: The Daily Hurdle IG Channel
SIGN UP: Weekly Hurdle Newsletter
ASK ME A QUESTION: Email hello@hurdle.us with your questions! Emily answers them every Friday on the show.
Listen to Hurdle with Emily Abbate on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. See omnystudio.com/listener for privacy information.

    Renegade Talk Radio
    Episode 518: Alex Jones Trump Says He’s Ready To Send Troops to Mexico NOW! The DEA Investigated Epstein for Smuggling Guns/Drugs For The CIA

    Renegade Talk Radio

    Play Episode Listen Later Feb 24, 2026 109:34


    Trump Says He's Ready To Send Troops to Mexico NOW! The DEA Investigated Epstein for Smuggling Guns/Drugs For The CIA, Princeton Researchers Have Successfully Measured Humans Emitting ESP Electromagnetic Waves, Trump Set To Address Nation Tonight

    Stop Me Project
    ABR 438: From Red Bank Regional Wrestling to “Eat Like a Human” — Dr. Bill Schindler on Weight Cuts, Keto, Fermentation & Real Food

    Stop Me Project

    Play Episode Listen Later Feb 24, 2026 91:21


    Dr. Bill Schindler joins Airey Bros Radio (ABR 438) for a deep-dive conversation that connects Jersey Shore wrestling culture to ancestral nutrition, anthropology, and real-world health. Bill is Jersey Shore bred, a Red Bank Regional wrestler who went on to compete at Ohio State and The College of New Jersey (TCNJ), before becoming a leading voice in ancestral food systems. He's the author of Eat Like a Human, founder of The Modern Stone Age Kitchen, and a researcher/educator helping families, athletes, and coaches rethink what “healthy eating” actually means. We talk wrestling weight cuts, the mental side of food, why modern diets wreck digestion, and Bill's core idea: humans aren't omnivores by biology; we're omnivores by technology (fire, fermentation, traditional preparation, and bioavailability). Bill also shares practical takeaways for wrestlers, endurance athletes, parents, and coaches, including why he'd consider keto for wrestling and how to start small with changes that compound.
In this episode:
Jersey Shore wrestling roots (Red Bank Regional, Ohio State, TCNJ)
Weight cuts, food fear, binge cycles, and athlete nutrition mistakes
“Eat Like a Human” fundamentals: fermentation, bioavailability, real food
Simple family changes that actually last (start with the foods you eat most)
Keto, carnivore, and why context + culture matter in nutrition
Insects, organ meats, and pushing comfort zones the smart way
Wine additives, traditional fermentation, and “food as a system”

    Ben Davis & Kelly K Show
    Setting the Bar: Coco The Robot

    Ben Davis & Kelly K Show

    Play Episode Listen Later Feb 24, 2026 2:53


    THIS Setting the Bar story is yet another reason why the relationship between HUMANS and ROBOTS is still so antagonistic! Source: https://ktla.com/news/local-news/food-delivery-robot-goes-rogue-causes-property-damage-at-east-hollywood-home/

    Cyber Security Headlines
    US healthcare breach affects 140k, experts warn against replicating humans, Shai-Hulud-like worm targets devs

    Cyber Security Headlines

    Play Episode Listen Later Feb 24, 2026 8:17


    140k affected by US healthcare breach. Data advocates warn against replicating humans. Shai-Hulud-like worm targets developers. Get links to all of today's news in our show notes here: https://cisoseries.com/cybersecurity-news-us-healthcare-breach-affects-140k-experts-warn-against-replicating-humans-shai-hulud-like-worm-targets-devs/ Thanks to today's episode sponsor, Adaptive Security. This episode is brought to you by Adaptive Security, the first security awareness platform built to stop AI-powered social engineering. Today's phishing doesn't just hit inboxes — it can sound like your CFO or look like your CEO on Zoom. AI voices, video, and deepfakes are turning trust into the attack surface. Adaptive fights back with AI-driven risk scoring, deepfake simulations featuring your own executives, and interactive training your team will actually remember. Take a three-minute tour or request a CEO deepfake demo at adaptivesecurity.com.

    Humans of Purpose
    418 Paul Lacaze and Jane Tiller: The Future of Preventive DNA Screening

    Humans of Purpose

    Play Episode Listen Later Feb 24, 2026 59:32


    My guests this week are Professor Paul Lacaze and Jane Tiller, two leading voices shaping the future of public health genetics in Australia. Professor Lacaze is a Professor of Genetics and part of the team behind DNA Screen, a major initiative exploring how preventive DNA testing could help identify people at high genetic risk of cancer and heart disease while they're still young and healthy. Jane Tiller is a lawyer and genetic counsellor with deep expertise in the policy and ethical frameworks needed to make genomic screening safe, trusted, and accessible at scale. Best known for their work on the DNA Screen pilot study, Paul and Jane are helping drive a shift from reactive healthcare, where genetic testing often happens after disease appears, to anticipatory healthcare that uses genetics as a preventive screening tool alongside existing public health programs like mammograms and bowel cancer screening. In this episode of Humans of Purpose, we explore what preventive DNA screening could mean for Australia, including: what the DNA Screen pilot found (including the number of young adults carrying high-risk, actionable genetic variants); why early screening gives people time to make informed decisions about surveillance, prevention, and family planning; what “actionable” genes really means (and why lifestyle changes alone can't remove certain risks); the importance of protecting Australians from genetic discrimination; and why data governance, privacy, transparency and participant choice are essential if a program like this is to earn public trust. Tune in to hear how Paul and Jane are working to mainstream genomic risk management into the health system and what it could take to move from pilot to national scale.

    Rosenfeld Review Podcast
    Why Research Repositories Need Humans (and AI) with Maria Rosala

    Rosenfeld Review Podcast

    Play Episode Listen Later Feb 24, 2026 36:19


    What happens when someone moves from government UX research to shaping research for the broader industry? Lou talks with Maria Rosala, Director of Research at Nielsen Norman Group, about her role, her career path, and the value of research repositories. Maria shares what it means to lead research at NN/g and how her experience as a UX researcher in the UK Home Office shaped her perspective on research maturity and real-world practice. They explore how research repositories help organizations surface knowledge, avoid duplicate work, and support collaboration—and why people and culture remain just as important as the tools. Maria also discusses how AI could make repositories more powerful by surfacing connections and insights.

    KONCRETE Podcast
    #373 - NASA Physicist: Humans Might Not Be the First Advanced Species On Earth | Adam Frank

    KONCRETE Podcast

    Play Episode Listen Later Feb 23, 2026 173:59


    Watch every episode ad-free & uncensored on Patreon: https://patreon.com/dannyjones Adam Frank is an astrophysicist and a leading expert on the final stages of evolution for stars like the sun, using advanced supercomputer tools to study how stars form and how they die. SPONSORS https://mizzenandmain.com - Use code DANNY20 for 20% off. https://rhonutrition.com - Use code DANNY for 20% off sitewide. http://amentara.com/go/dj - Use code DJ22 for 22% off. https://whiterabbitenergy.com/?ref=DJP - Use code DJP for 20% off EPISODE LINKS https://www.everymansuniverse.com Little Book of Aliens: https://a.co/d/09qdwlxG FOLLOW DANNY JONES https://www.instagram.com/dannyjones https://twitter.com/jonesdanny OUTLINE 00:00 - Search for Extraterrestrial Intelligence (SETI) 01:02 - the first exoplanet discovery 02:49 - Techno-signatures 05:36 - the silurian hypothesis 10:28 - gaps in the fossil record 16:12 - alternate technology of ancient civilizations 17:12 - the 2 meanings of a "theory" 20:30 - Townsend Brown & the Biefeld-Brown effect 24:08 - why there's no such thing as anti-gravity 31:56 - science for public vs. 
private knowledge 35:28 - military insiders on aliens & UFOs 42:37 - the universe is accelerating 45:35 - why personal testimony on UFOs is useless 49:17 - the greatest minds don't go into government 54:29 - aliens didn't come from other planets 59:45 - where human evolution is headed 01:03:13 - interstellar distances are not travelable 01:07:01 - the dark inspiration behind Arc Raiders 01:09:47 - the danger of current AI technology 01:17:44 - doomsday scenario where AI becomes useless 01:25:11 - the transformation of humans into cyborgs 01:31:08 - how humans change with technology 01:32:52 - what brings down human civilizations 01:36:53 - why moon landing deniers are full of s**t 01:41:42 - commercial space travel 01:46:34 - abandoning a shared reality 01:48:20 - science = national prosperity (China is gaining influence) 01:51:25 - the south pole & weird things about the moon 01:57:04 - the rare earth hypothesis 02:01:41 - how organisms change the atmosphere & climate 02:07:49 - climate patterns throughout Earth's history 02:13:02 - Earth's 5 mass extinction events 02:14:08 - "don't save the earth, save yourself" 02:16:48 - why top scientists disagree on climate 02:24:31 - the state of solar power 02:32:46 - pollution from SpaceX 02:34:56 - the younger dryas impact hypothesis 02:39:22 - the astrobiology field 02:41:54 - the Channeled Scablands 02:44:05 - what really happened to megafauna 02:45:25 - the ethics of human preservation 02:46:27 - human life may have started on Mars Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Vanessa Londino Podcast
    The Epstein Files: How Humans Slide into Utter Depravity

    The Vanessa Londino Podcast

    Play Episode Listen Later Feb 23, 2026 50:35


    The Epstein Files have reopened a question most of us would rather not ask: How do people slide into this kind of evil? Your outrage is sane—but outrage alone won't help us understand how power, access, secrecy, and rationalization can slowly dismantle a human conscience, in individuals and in systems. In this episode, we trace the psychological mechanics of moral collapse—from small compromises to full-blown corruption—using Epstein's rise, the protection of corrupt communities, and Shakespeare's Macbeth as mirrors. This isn't about spectacle. It's about clarity, accountability, and the uncomfortable truth that the line between good and evil runs through the human heart. If you want to understand how this happens—and what integrity, justice, and courage require of us now—this episode is for you.

    In The Den with Mama Dragons
    Raising Resilient Humans

    In The Den with Mama Dragons

    Play Episode Listen Later Feb 23, 2026 47:45 Transcription Available


    Life can be hard, especially for our queer children. They often face unique obstacles, encounter discrimination, and endure marginalization in their lives and in their pursuits of happiness. Resilience helps our children (and us) cope with life's challenges and setbacks, allowing folks to recover and grow stronger from difficult experiences. Resilience fosters emotional regulation, optimism, and a strong support network, which are essential for maintaining mental well-being and overall life satisfaction. Joining us In the Den is Dr. Kate Lund, a licensed psychologist, TEDx speaker, author of Step Away: The Keys to Resilient Parenting, and an expert on the topic of resilience. Dr. Kate insists that resilience does not have to be complicated and that we are all capable of living our best lives, regardless of our setbacks.
Special Guest: Dr. Kate Lund
Dr. Kate Lund is a clinical psychologist, keynote speaker, published author, and resilience expert dedicated to helping individuals and families thrive within their own unique contexts. With advanced training from three Harvard-affiliated hospitals and decades of experience in clinical practice, Dr. Lund specializes in emotional intelligence, stress resilience, and sustainable well-being for parents, athletes, and high performers. She is the author of Bounce: Help Your Child Build Resilience and Thrive in School, Sports, and Life and Step Away: The Keys to Resilient Parenting. Dr. Lund also hosts Resilient Parenting with Dr. Kate, a podcast that explores the science and lived experience of resilience through conversations with parents, educators, clinicians, and leaders. Known for her relatable, evidence-based approach, Dr. Lund blends clinical expertise with personal insight as the mother of twin boys and as a volunteer, with her dog Wally, in the animal-assisted therapy program at Seattle Children's Hospital. 
Whether on stage, in session, or on the air, she empowers people to step away from overwhelm and step into clarity, connection, and confidence.
Links from the Show: Kate's book Step Away; Kate's book Bounce; Kate's website. Join Mama Dragons today.
In the Den is made possible by generous donors like you. Help us continue to deliver quality content by becoming a donor today at www.mamadragons.org. Support the show.
Connect with Mama Dragons: Website, Instagram, Facebook. Donate to this podcast.

    The Ripple Effect Podcast
    Episode 570: The Ripple Effect Podcast (Midnight Mike | The Artificial Future: Humans Not Required)

    The Ripple Effect Podcast

    Play Episode Listen Later Feb 21, 2026 65:07


    Midnight Mike is the host of a weekly conspiracy, paranormal, news talk show called OBDM, and also a co-host of Doom Scrollin with Sam Tripoli, and co-host of the wildly popular round-table discussion podcast, The Union of the Unwanted, with Charlie Robinson (Macroaggressions), Sam Tripoli (Tin Foil Hat), and Ricky Varandas (The Ripple Effect).
MIDNIGHT MIKE
Website: https://ourbigdumbmouth.com/
YouTube: https://www.youtube.com/@obdmpod
Rumble: https://rumble.com/user/obdm
DoomScrollin: https://samtripoli.com/doomscrollin/
X: https://x.com/obdmpod
IG: https://www.instagram.com/obdmpod/
THE RIPPLE EFFECT PODCAST:
WEBSITE: http://TheRippleEffectPodcast.com
Website Host & Video Distributor: https://ContentSafe.co/
SUPPORT:
PATREON: https://www.patreon.com/TheRippleEffectPodcast
PayPal: https://www.PayPal.com/paypalme/RvTheory6
VENMO: https://venmo.com/code?user_id=3625073915201071418&created=1663262894
MERCH Store: http://www.TheRippleEffectPodcastMerch.com
THEORY 6 MUSIC: https://open.spotify.com/artist/1w91xRlB4b2MJYyXXhJcyF
SPONSORS:
OPUS A.I. Clip Creator: https://www.opus.pro/?via=RickyVarandas
Scott Horton Academy: https://scotthortonacademy.com/rippleeffect
University of Reason-Autonomy: https://www.universityofreason.com/a/2147825829/ouiRXFoL
WATCH:
OFFICIAL YOUTUBE: https://www.youtube.com/@TheRippleEffectPodcast
OFFICIAL YOUTUBE CLIPS CHANNEL: https://www.youtube.com/@RickyVarandas
LISTEN:
SPOTIFY: https://open.spotify.com/show/4lpFhHI6CqdZKW0QDyOicJ
iTUNES: http://apple.co/1xjWmlF
THE UNION OF THE UNWANTED: https://linktr.ee/TheUnionOfTheUnwanted

    Coronavirus: Fact vs Fiction
    How Hibernation Could Redefine Space Travel and Medicine

    Coronavirus: Fact vs Fiction

    Play Episode Listen Later Feb 20, 2026 24:59


    Animals can hibernate, slowing down most metabolic functions — heart rate, blood flow, brain activity, and body temperature — then waking as if nothing happened. Humans have never done this, but what if they could? Could hibernation extend life or even save it? Dr. Sanjay Gupta explores global research into the molecular mechanics of hibernation and how these abilities might one day help fight cancer, prevent heart disease, treat depression, and even enable travel to Mars. This episode was produced by Amanda Sealy. Medical writer: Andrea Kane. Showrunner: Amanda Sealy. Senior Producer: Dan Bloom. Technical Director: Dan Dzula. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Raising Good Humans
    The Nature of Nurture with Professor Jay Belsky

    Raising Good Humans

    Play Episode Listen Later Feb 20, 2026 76:07


    In today's episode I sit down with developmental psychologist Dr. Jay Belsky to explore a question so many parents wrestle with: is temperament destiny? We talk about why children differ in how deeply they're shaped by their environments, what “developmental plasticity” really means, and why the same parenting can land so differently depending on the child. We discuss the difference between sensitivity and susceptibility, the limits of attachment research, and why focusing only on long-term outcomes can distract us from what matters in the here and now.
I WROTE MY FIRST BOOK! Order your copy of The Five Principles of Parenting: Your Essential Guide to Raising Good Humans here: https://bit.ly/3rMLMsL
Subscribe to my free newsletter for parenting tips delivered straight to your inbox: https://dralizapressman.substack.com/
Follow me on Instagram for more: @raisinggoodhumanspodcast
Sponsors:
ZipRecruiter: Try it FOR FREE at ZipRecruiter.com/HUMANS
Bloom: Go to bloomnu.com with code HUMANS for 20% off your first order
Ello: Visit ElloProducts.com/CleanStart and use code RGH at checkout for 20% off your first purchase
Little Spoon: Get 30% off your first online order at littlespoon.com/RGH with code RGH
First Day: Our listeners get up to 57% Off AND a Free Gift with code HUMANS at FirstDay.co
Minnow: Go to shopminnow.com code MEETMINNOW15 for 15% off
Produced by Dear Media
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Disorganized Crime: Smuggler's Daughter
    Inside the Olympic Prison [from Very Special Episodes]

    Disorganized Crime: Smuggler's Daughter

    Play Episode Listen Later Feb 20, 2026 50:32 Transcription Available


    In the winter of 1980, the world turned its eyes to Lake Placid, New York, host of the Winter Olympics. But behind the pageantry, another structure loomed in the Adirondack woods. Built to house 1,800 athletes, the Olympic Village looked less like a dormitory than a detention center, because that’s exactly what it was designed to become.
Hosted by Zaron Burnett, Dana Schwartz, and Jason English. Written by Zaron Burnett. Senior Producer is Josh Fisher. Editing and Sound Design by Jesse Nighswonger. Mixing and Mastering by Josh Fisher. Original Music by Elise McCoy. Show Logo by Lucy Quintanilla. Executive Producer is Jason English. For School of Humans, Producers are Emilia Brock and Edeliz Perez. Executive Producer is Virginia Prescott. See omnystudio.com/listener for privacy information.