This video features an in-depth discussion between Michael Sartain and an Ivy League academic named "David", focusing on the scientific and practical aspects of human mating, dating, and social status from the perspectives of evolutionary biology, psychology, and sociology. The conversation bridges rigorous academic research with real-world application through the coach's extensive hands-on experience working with thousands of clients. 00:00 - Intro 01:34 - Practical Field Experiments vs. Academia 04:19 - Evolutionary Attraction and Competency Triggers 08:22 - Hormones, Status Hierarchies, and Dominance 13:28 - The Winner Effect and TRT 20:23 - Mental Framing and Behavioral Confidence 27:21 - Socialization as an Evolutionary Advantage 33:07 - Paternal Hormonal Changes and Risk 40:04 - Modern Biohacking and Age Relevance 45:00 - Stoicism through Social Immersion Therapy 51:10 - Mastering Chaos and Difficulty 57:51 - Evolutionary Origins of Approach Anxiety 1:03:30 - Mate Choice Copying in Humans 1:13:09 - Cheerleader Effect and Social Proof 1:21:37 - Assortative Mating and Time Effects 1:30:27 - Hypergamy and Modern Social Media 1:33:36 - Data-Driven Limitations of Dating Apps 1:47:11 - Using Apps to Build Social Circles 1:56:09 - Critiquing Modern Matchmaking and Services
Animals hunt to fill their stomachs. Humans do so for power and greed. And when they possess weapons of destruction, they think themselves invincible. It's easy to say it's primordial, part of the ancient blood running in our veins, but it's also civilizational. Of having - or not having - a spiritual foundation, a religion which teaches inclusion and diversity, rather than harping on a supreme monotheism. The urge to convert, failing which to conquer, is the legacy of our flawed religious leaders, who were products of their time, and constructed manuals chock-a-block with the fears, flaws and aneurysms of their times. And they forced humanity to see the divine in the monstrous. And the moral underpinnings of every endeavour thus became vitiated and compromised. And when men gave in to their basest inclinations to acquire and rule, to preen and show, all hell broke loose. Under the guise of righteousness, they found justification to bring destruction, mayhem and death. Alas, that is the legacy we will leave behind on this earth, which some day or other we are bound to destroy - the proverbial cutting of the branch on which we sit. Because with hubris comes the suicidal instinct, of so-called glory above all else, justification above logic, of allowing ourselves to be destroyed as collateral damage just to prove a point of our invincibility. A simple fact: there's never going to be peace on this earth. Men, religion and hubris will justify every vile crime done against humankind, till we are all wiped off. If you liked this poem, consider listening to these other poems on the miseries and damage of war - Sounds of the Living and the Dead, For Anyone Who Bleeds, Will We Ever Trust the Skies Again. Subscribe to my newsletter 'The Uncuts'. Follow me on Instagram at @sunilgivesup. 
Get in touch with me on uncutpoetrynow@gmail.com The details of the music used in this episode are as follows - Evacuation by Sascha Ende Link: https://filmmusic.io/en/song/evacuation Licence: https://filmmusic.io/standard-license
In today's episode I sit down with Professor Ann Masten to unpack what resilience actually means—and why it's so often misunderstood. We explore her powerful definition of resilience as the capacity of a system to adapt to serious challenges, not just a personality trait or inner toughness. From everyday stress to real adversity, we discuss the difference between harmful trauma and growth-building challenges, and why kids need support—not perfection—to thrive. We talk about the “ordinary magic” of caring relationships, schools, communities, and cultural traditions, and why resilience is built through connection across multiple systems. I WROTE MY FIRST BOOK! Order your copy of The Five Principles of Parenting: Your Essential Guide to Raising Good Humans here: https://bit.ly/3rMLMsL Subscribe to my free newsletter for parenting tips delivered straight to your inbox: https://dralizapressman.substack.com/ Follow me on Instagram for more: @raisinggoodhumanspodcast Sponsors: BetterHelp: Sign up and get 10% off at BetterHelp.com/humans. Wayfair: Head to Wayfair.com right now to shop all things home. Jones Road Beauty: Use code HUMANS at jonesroadbeauty.com to get a free Shimmer Face Oil with your first purchase! #JonesRoadBeauty #ad Fast Growing Trees: An additional 20% off better plants and better growing at FastGrowingTrees.com using the code HUMANS at checkout. Experian: Get started with the Experian App now! Bloom: Go to bloomnu.com with code HUMANS for 20% off your first order. Produced by Dear Media. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Watch every episode ad-free & uncensored on Patreon: https://patreon.com/dannyjones Mario Beauregard, PhD, is a cognitive neuroscientist who studies the neuroscience of consciousness and mystical experience, including a study investigating the brain activity of Carmelite nuns. He is co-author of the book 'The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul'. SPONSORS http://amentara.com/go/dj - Use code DJ22 for 22% off. https://rag-bone.com - Use code DANNY & get 20% off sitewide. https://capl.onelink.me/vFut/zralgyl0 - Download CashApp today! https://chubbiesshorts.com/danny - Use code DANNY for 20% off. https://whiterabbitenergy.com/?ref=DJP - Use code DJP for 20% off EPISODE LINKS The Spiritual Brain - https://a.co/d/0cZDv6gn https://www.drmariobeauregard.com FOLLOW DANNY JONES https://www.instagram.com/dannyjones https://twitter.com/jonesdanny OUTLINE 00:00 - Dr. Beauregard's childhood mystical experience 03:52 - Discovering everything is connected as one 07:08 - Mario "downloaded" his life's mission 09:54 - Mario's failed journey to become a priest 15:44 - Mario's second mystical experience 21:08 - What Mario saw in Heaven 23:30 - Mario's biological markers say he's 20 years younger 29:19 - How Mario became a neuroscientist 30:07 - The roots of modern science 31:02 - When science lost its spiritual connection 34:27 - Testing memory molecules for Pfizer 36:00 - Pfizer pushed ineffective Alzheimer's drug in 1994 41:12 - Why Mario fled Canada during the pandemic 43:00 - Justin Trudeau paid off court judges during the pandemic 46:31 - The Catholic Church tried to bribe Mario 53:38 - Why the church is pushing new science 01:01:10 - Carmelite nuns study 01:07:00 - 1% of seizures trigger mystical experiences 01:09:57 - Johns Hopkins psychedelics + religion study 01:13:07 - Mario tested all drugs before experimenting 01:14:44 - Human psyche vs. 
consciousness 01:16:55 - "Consciousness" is the scientific God 01:21:56 - Non-physical information 01:25:17 - Where thoughts come from 01:30:14 - Holotropic breathwork to expand consciousness 01:34:58 - New consciousness research 01:38:02 - Who's funding consciousness research 01:40:11 - Studies on people who survived death 01:44:58 - Holosynthesis 01:49:33 - What happens when you "overdose" psychedelics 01:52:20 - Church-sanctioned psychedelic use 01:55:57 - Humans are behaving like robots 02:02:54 - Joan Jett's spiritual transformation 02:05:37 - NDEs vs. life reviews 02:07:21 - Memories of past lives 02:15:35 - How to expand consciousness using sound 02:21:30 - Bufo: DMT times 1,000 02:24:39 - Mapping neurological effects across religions 02:26:25 - The Dalai Lama's lesson on attention 02:32:04 - What the brains of uncontacted tribes might look like 02:37:55 - Explanation of the universe Learn more about your ad choices. Visit podcastchoices.com/adchoices
Jim talks with Samantha Sweetwater about her book True Human: Reimagining Ourselves at the End of Our World and the question of what it means to be human at this moment in planetary history. They discuss her verb-based rather than noun-based self-identity, Lisa Feldman Barrett's construction theory as a framework for understanding the entanglement of body, brain, mind, and relationship as the fabric of lived experience, Samantha's identity as a "Gaian" and humans as a creator-destroyer class of organism, the Fermi paradox and the gigantic moral freight of potentially being the only general intelligence in the universe, the meaning of the sacred and John Vervaeke's formulation that "sacred is how the world is to us when we see it through the eyes of love," Jim's own definition of the sacred as the appropriate stance toward things too complex for reductionist analysis, the metacrisis as fundamentally a crisis of separation, the four generator functions of separation including stories of separability, structures of separability, win-lose game-theoretic dynamics, and dominator ideologies, the forager operating system and Chris Boehm's account of how egalitarian societies historically defeated hierarchy, the hinge of agriculture and henchmen enabling dominator systems, Luke Kemp's Goliath's Curse and the contrast between fluid civilizations and Goliaths, role-based non-hierarchical leadership in forager societies and whether it can scale, Audrey Tang as an emergent archetype of life-centric coordination, psychedelics as allies and teachers rather than mere tools, Samantha's personal healing path through sacrament, community, and prayer, the neuroscience of heightened neural entropy and the brain's wash cycle, the ontological reframe of one's own importance, the hard problem of machine consciousness and the California Institute for Machine Consciousness, the space of minds and the n=1 problem of one planet and one biochemistry, the MoltBook experiment of AI inventing 
languages and religions, relationality as the core practice available to people in their actual lives, humans as a custodial species and co-orchestrators rather than dominion-holders, Tyson Yunkaporta's Sand Talk, and much more. Episode Transcript True Human: Reimagining Ourselves at the End of Our World, by Samantha Sweetwater Goliath's Curse, by Luke Kemp Sand Talk, by Tyson Yunkaporta JRS Currents 010: Tyson Yunkaporta on Humans as a Custodial Species Samantha Sweetwater is the author of True Human: Reimagining Ourselves at the End of Our World, a meta-relational educator, leadership mentor, and the founder of One Life Circle, a ministry of remembering. For over three decades, she has facilitated individual and collective transformational experiences across diverse cultures and communities on five continents. As the founder of Dancing Freedom and Peacebody Japan, she pioneered a global movement of embodied awakening and trained hundreds of facilitators worldwide. Her work bridges ecology, complexity, spirituality, and technology with lived experience, inviting a re-imagining of what it means to be human in a time of planetary techno-cultural transformation. Through teaching, writing, and attuned presence, she helps people restore relationship with their bodies, each other, and the living world as a foundation for wise action in uncertain times.
Humans evolved for face-to-face courtship in small communities, where attraction unfolded gradually and choices were limited. Today, we're navigating global dating markets, algorithms, AI recommendations, endless novelty, and constant rejection. So what happens when ancient mating psychology collides with modern technology? I am joined once again by Dr. Justin Garcia, evolutionary biologist and Executive Director of the Kinsey Institute at Indiana University. He is the chief scientific advisor for Match, and author of the new book The Intimate Animal: The Science of Sex, Fidelity, And Why We Live and Die For Love. Some of the specific topics we explore in this episode include: How do dating apps shape our dopamine responses and bonding tendencies? Could AI actually improve mate selection, or is that better left to humans? Are changing relationship patterns a sign of human adaptability, or something else? Where might the future of sex, dating, and intimacy be headed? To learn more about Dr. Garcia, follow @drjustingarcia on the socials. Got a sex question? Send me a podcast voicemail to have it answered on a future episode at speakpipe.com/sexandpsychology. *** Thank you to our sponsors! If you’re ready to ditch the shady stuff and choose a libido supplement that's effective and that you can feel confident about, it’s time to check out Drive Boost. Visit vb.health and use code JUSTIN for 10% off. If you’re looking to gain a broad understanding of human sexuality or refresh your knowledge, check out the upcoming Human Sexuality Intensive courses at the Kinsey Institute: https://kinseyinstitute.org/learning/human-sexuality-intensive.html *** Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook, Twitter, or Bluesky to receive updates. You can also follow Dr. Lehmiller on YouTube and Instagram. Listen and stream all episodes on Apple, Spotify, or Amazon. 
Subscribe to automatically receive new episodes and please rate and review the podcast! Credits: Precision Podcasting (Podcast editing) and Shutterstock/Florian (Music). Image created with Canva; photos used with permission of guest.
Summary Welcome to our 500th episode! To celebrate this milestone, Andy talks with Steve Brown, AI futurist, keynote speaker, and author of The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation. Steve brings a rare perspective shaped by years at Intel and Google DeepMind, and today helps organizations navigate two vital questions: what future do you want to build with AI, and what future do you want to avoid? They explore why waiting isn't actually the safe option it feels like, how to think about the different "flavors" of AI beyond just generative tools, and what it really means to orchestrate humans, AI agents, and robots together in the workplace. Steve introduces three types of AI agents—offload, elevate, and extend—and explains the crucial difference between automating tasks and truly transforming how work gets done. You'll also hear his candid take on the fear of being replaced and why doubling down on your humanity is the smartest career move you can make right now. If you're looking for a practical, empowering guide to leading through the AI revolution—without the hype—this episode is for you! Sound Bites "The difference between an AI-enabled or AI-first company and an AI laggard is going to be so great that if you don't get on the train, you may get to the point where you can never catch up." "Your competitors who have embraced AI faster than you are going to be just kicking your butt all over town." "There's a serious cost to inaction in that you can become made irrelevant." "The danger with that is you may automate yourself. It may automate away all of the differentiation you have in your brand and your company." "AI is this sort of amplification technology, and the challenge is to balance cost-cutting and value creation." "Each flavor of AI is useful for solving a different type of business problem." "It feels like a digital employee, right? A digital worker that works for you." 
"It's taking the suck out of your job." "The real opportunity here, is to transform the way you do work rather than just try and automate away tasks or people." "The workplace of the future is going to be three groups. Humans will still be in the workforce. Great! Go us!" "You won't be replaced by an AI or a robot. You'll be replaced by someone who knows how to use AI better than you do." "Double down on your humanity." "Focus on building the skills that cannot be replaced, or at least won't be replaced by machines anytime soon." "At the end of all of this is going to be lives of abundance, where we have the things that we need." Chapters 00:00 Introduction 01:45 Start of Interview 01:54 Steve's Career Journey from Intel to DeepMind 05:00 Understanding the AI Ultimatum 08:23 Our First AI Moments 09:32 The Flavors of AI 13:54 Three Pathways to Creating Value with AI 15:11 Automation vs. Transformation 17:10 Orchestrating Humans, AI, and Robots 19:01 Real-World Examples of AI Agents 21:33 Physically Intelligent Robots in the Workplace 24:13 Addressing Fear and Resistance to AI 26:44 Preparing the Next Generation for the AI Age 29:56 Where to Learn More About Steve 31:01 End of Interview 31:38 Andy Comments After the Interview 36:23 Outtakes Learn More You can learn more about Steve and his work at SteveBrown.ai. For more learning on this topic, check out: Episode 479 with Matt Mong. It's a discussion about the AI skills you need to stay relevant. Episode 454 with Christie Smith. She talks about how AI is changing leadership, and what we can do about that now. Episode 437 with Nada Sanders. It's a discussion about future-prepping your career in an age of AI. You can also chat directly with PMeLa—the podcast's AI persona—to get episode recommendations and answers to your project management and leadership questions. Visit PeopleAndProjectsPodcast.com/PMeLa to chat with her. 
Level Up Your AI Skills Join other listeners from around the world who are taking our AI Made Simple course to prepare for an AI-infused future. Just go to ai.PeopleAndProjectsPodcast.com. Thanks! Pass the PMP Exam This Year If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start. Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year! Join Us for LEAD52 I know you want to be a more confident leader–that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks! Thank you for joining me for this episode of The People and Projects Podcast! Talent Triangle: Business Acumen Topics: Artificial Intelligence, Leadership, Future of Work, AI Strategy, Digital Transformation, Agentic AI, Automation, Organizational Change, AI Ethics, Competitive Advantage, Human-AI Collaboration, Technology Adoption The following music was used for this episode: Music: Lullaby of Light featuring Cory Friesenhan by Sascha Ende License (CC BY 4.0): https://filmmusic.io/standard-license Music: Fashion Corporate by Frank Schroeter License (CC BY 4.0): https://filmmusic.io/standard-license
AI-powered agents and robots are already technically capable of performing an increasing share of human work. So how can workers, managers and organizations adapt to the dramatic shift? A new McKinsey Global Institute report offers a roadmap. While AI is transforming the workplace at unprecedented speed, people will remain essential for many tasks that are still beyond AI's capabilities—and to supervise, manage and collaborate with the technology. In fact, the demand for workers with AI fluency has grown dramatically over the past two years. Work in the future will be a partnership between people, agents and robots. Which skills are likely to be most—and least—impacted by automation? How can public institutions help by aligning education and training with emerging skill needs—from AI fluency to skilled trades—and widening access to opportunity? And what strategies can organizations adopt to help their workforce adapt? Join us for a conversation with report authors Alexis Krivkovich and Anu Madgavkar of McKinsey Global Institute, along with Katy George, Microsoft's corporate vice president of workforce transformation, and Kevin Delaney, editor-in-chief of The San Francisco Standard. They will discuss the research findings and share practical guidance for navigating the transition to human-AI collaboration at work. Learn more about your ad choices. Visit megaphone.fm/adchoices
Today on the show, we welcome Q, Miguel, and Raven of Renegade Theater speaking about their April production of Reefer Madness. Renegade Theater Co. is a community-based theater company in Santa Cruz, CA. Renegade is a 501(c)(3) non-profit that creates theatrical productions meant to challenge the standard expectations of theater. The show benefactor for Reefer Madness is The Last Prisoner Project. Gennevie "Q" Herbranson has served as the board president of Renegade Theater since its inception three years ago. She has been involved in theater in Santa Cruz since the summer of 2019, originally supporting her two kids in the youth theater scene. Miguel Reyna is beside himself to be directing his first production for his friends at Renegade Theater. He has been acting in and directing theater for the past 40 years and will likely never stop. His past 20 years have happened here in Santa Cruz County. Most recently, he's directed Stephen King's Misery at Actors' Theatre. He's also directed The Thin Place for Actors' Theatre and The Humans for Mountain Community Theater, and co-produced Evil Dead The Musical at MCT. Raven Voorhees is a barista in the heart of Santa Cruz (Abbott Square) and enjoys creating art both on and off stage. Raven has an extensive theatrical resume and has been in both of Renegade's previous adult musicals, Heathers and Cabaret. Raven plays the lead role of Jimmy in Reefer Madness. In this conversation, we explore themes of propaganda, rebellion, and control over youth, using humor as political satire in a time when history is repeating itself in more ways than one.
In episode 187, Part 111 of The Story of Creation, we dive into the origins of humanity, consciousness, and existence through the lens of Universal Beings and sovereign intelligence — the source of all creation. Discover the hidden truths about human potential and your role in the universe. This is not abstract philosophy — it is your direct connection to the energy and frequency that created existence itself. Through this conversation, you'll learn: • How your curiosity and questions feed consciousness and expand reality. • Why recognizing yourself as a soul with unique energy shifts your life, beyond trauma, fear, or societal expectations. • How your instincts and interests are not random — they are the heartbeat of the universe, guiding you toward your potential. • The truth about human embodiment as a reflection of universal intelligence and infinite energy. • How letting go of limiting human models and societal rules opens the door to aligned, conscious living. This episode will help you: • Awaken to your role in existence. • Recognize and honor your unique perspective and frequency. • Align with universal intelligence to manifest potential and purpose. • Step fully into your sovereign, creative, authentic self. You were designed for more than just living life by the rules — it's time to explore your true potential. Listen, awaken, and begin embodying your infinite energy today. 
0:00 – Exploring Consciousness, Creation & Human Origins 1:05 – Curiosity as the Heartbeat of Humanity 2:53 – Celebrating What Doesn't Fit Society 4:15 – Recognizing Yourself as a Soul 6:43 – Universal Love, Respect & Your Role 8:53 – Questions Expand Consciousness 10:38 – Living Beyond Limits & Time 11:39 – Participation in the Cycle of Creation 15:31 – Destiny as an Unfolding Experience 19:08 – Humans as Embodiment of Infinite Intelligence 22:56 – Your Vital Life Force & Power 28:01 – Discovering Purpose in Every Question 34:36 – Aligning Frequency & Vibration with the Universe 37:46 – Excavating Your Mind to Reveal More 38:49 – Recognizing Existence, Letting Go of Fear #HumanOrigins #ConsciousLiving #ExistenceAwakening Watch The Story of Creation from the beginning: https://www.youtube.com/playlist?list=PLtY9aRgn79cba9wSRRx-vkT1crKnyBotq
Alright headbangers, here's the deal: Ian double-booked himself and couldn't make it into the studio today... which wouldn't normally be a problem, but he also at the last minute realized he doesn't currently have a working mic to do a home recording with. So, as an extra special treat on today's episode, we're gonna pull out a colossal two-hour slab of ambient doom 'n' drone from Japan's legendary CORRUPTED, and play their titanic 1999 magnum opus Llenandose De Gusanos (almost) in its entirety. And... that's it. That's the whole episode. There will be no talking or anything else to cut the immense monochromatic darkness of this singular musical journey. Enjoy!
Episode 784: Future Humans, Urban Legends & the Amazon's Boiling River Are UFOs actually… us? This week on The Box of Oddities, Kat and Jethro dive headfirst into one of the most unsettling and scientifically grounded UFO theories you've probably never seriously considered: what if “alien grays” aren't extraterrestrials at all—but future humans traveling back in time? Drawing from the work of biological anthropologist Dr. Michael P. Masters and his “extratempestrial” hypothesis, we explore how reported alien anatomy—large craniums, smaller jaws, reduced musculature, oversized dark eyes—might align disturbingly well with projected human evolution. If technology continues to shape our bodies, if artificial environments replace natural selection, and if reproductive trends continue to decline (with documented sperm count drops of 50–60% since the 1970s), could humanity biologically transform within 50,000–100,000 years into something that looks eerily like the beings reported in UFO encounters? And if that's the case… why would they come back? We unpack the reproductive crisis angle, the strange fixation on DNA in abduction lore, and the possibility that UFO “craft” aren't spacecraft at all—but space-time manipulation devices. Is time travel actually the more conservative explanation compared to faster-than-light travel? What would survival look like for a technologically advanced but biologically fragile future civilization? Then, because we love tonal whiplash, we pivot to something equally bizarre but undeniably real: the legendary Boiling River of the Amazon. Deep in Peru's rainforest flows Shanay-Timpishka, a river so hot it can nearly boil living creatures alive—reaching temperatures close to 200°F in certain stretches. Far from any volcano, this geothermal marvel has been documented by geoscientist Andrés Ruzo and remains steeped in Indigenous legend involving Yacumama, the great serpent spirit said to shape the waters. 
We explore the science, the myth, and why protecting “neat things” like a four-mile-long boiling river might matter more than we realize. From evolutionary biology to paranormal lore, from time machines to steaming rainforest rivers, this episode proposes one uncomfortable idea: If future humans are visiting us, they aren't here to save us or punish us. They're here because something survives… and something doesn't. Learn more about your ad choices. Visit megaphone.fm/adchoices
Loneliness is quietly becoming one of the most dangerous struggles of modern life, even among believers who sit in full churches each week. Ray, E.Z., Mark, and Oscar explore why fellowship is fading and why many feel isolated. The guys explain how social media fuels comparison and resentment by showcasing polished lives that make normal struggles feel shameful. People can stand in crowded rooms yet feel unseen, afraid that honesty will be met with misunderstanding. Biblical fellowship is part of God's design, and shared purpose in the gospel replaces isolation with meaningful work. Busyness may numb loneliness temporarily, but it cannot replace deep relationships rooted in Christ. The guys explore how fear of rejection and fear of being known keep people stuck in isolation. Humans are created in God's image for a relationship with Him and with others, so disconnection runs counter to design. The gospel is not only a rescue from judgment but an invitation into communion with God and His people. Isolation creates space where lies grow louder, though intentional time alone with the Lord is different from unhealthy withdrawal. When believers live aware of Christ's presence, they are never alone, yet they still need embodied community. The guys connect the loneliness crisis to the Fall and to a culture that celebrates radical independence. From the beginning, it was not good for man to be alone, reflecting a God who exists in perfect community. Modern life pushes people inward, urging them to build identity from feelings and demand affirmation from others. This inward focus can lead to shallow online groups that imitate belonging without offering truth or accountability. Real gospel community reshapes hearts and calls believers to lift their eyes from themselves toward loving God and serving others. 
Purpose pulls people out of despair and reminds them they belong to something eternal. Finally, the guys offer practical steps for rebuilding connections in a disconnected world. The starting point is Christ, because union with Him means a believer is never spiritually abandoned. Meaningful church involvement, discipleship, confession, and shared service are essential for growth. Overcoming isolation requires intention, such as changing habits, making time for friendships, and stepping into opportunities to serve with others. For those battling anxiety or fear, small but concrete steps matter. Christians are not meant to fight alone but to link arms, labor together, and find that fellowship is one of God's primary tools for joy, strength, and lasting hope. Thanks for listening! If you've been helped by this podcast, we'd be grateful if you'd consider subscribing, sharing, and leaving us a comment and 5-star rating! Visit the Living Waters website to learn more and to access helpful resources! You can find helpful counseling resources at biblicalcounseling.com. Check out The Evidence Study Bible and the Basic Training Course. You can connect with us at podcast@livingwaters.com. We're thankful for your input! Learn more about the hosts of this podcast: Ray Comfort, Emeal (“E.Z.”) Zwayne, Mark Spence, and Oscar Navarro.
Bob Zimmerman highlights SpaceX's routine orbital successes while contrasting them with China's rational, long-term plan to land humans on the moon by the year 2030.
Before Tyrannosaurus and Triceratops, Earth was rebuilding from catastrophe. Out of the ashes of the Great Dying rose a new prehistoric world, and with it came the age of the dinosaurs. In this episode of The Ancients, Tristan Hughes is joined by Dr Henry Gee to explore the full sweep of dinosaur history, from their emergence on the supercontinent Pangaea to their 150-million-year dominance of the planet. Discover how early reptiles evolved into the giants of the Jurassic and Cretaceous, how ecosystems transformed around them, and why their reign finally came to a dramatic end. MORE: Rise of Humans (Listen on Apple, Listen on Spotify) and Feathered Dinosaurs (Listen on Apple, Listen on Spotify). Presented by Tristan Hughes. Audio editor and producer is Joseph Knight. The senior producer is Anne-Marie Luff. All music courtesy of Epidemic Sounds. The Ancients is a History Hit podcast. Sign up to History Hit for hundreds of hours of original documentaries, with a new release every week and ad-free podcasts. Sign up at https://www.historyhit.com/subscribe. You can take part in our listener survey here: https://insights.historyhit.com/history-hit-podcast-always-on Hosted on Acast. See acast.com/privacy for more information.
We talk with physician and writer Bob Wachter about why he's cautiously optimistic that artificial intelligence will usher in a 'golden age' of medicine — and the questions he still has about these powerful new tools. Guest: Bob Wachter, Chair, Department of Medicine, UC San Francisco; Author, A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future. Learn more and read a full transcript on our website. Want more Tradeoffs? Sign up for our free weekly newsletter featuring the latest health policy research and news. Support this type of journalism today, with a gift. Hosted on Acast. See acast.com/privacy for more information.
https://rhr.tv/stream U.S. Government Contractor Arrested for Stealing $46M from U.S. Marshals Service — https://x.com/fbidirectorkash/status/2029574256959389933 Polymarket Prediction Market: Next Supreme Leader of Iran — https://polymarket.com/event/who-will-be-next-supreme-leader-of-iran-515 X Sharing User Data with Israel via Au10tix Verification — https://x.com/isfjmocha/status/2028407560382841305 GrapheneOS Partners with Motorola for Privacy-Focused Devices — https://primal.net/e/nevent1qqsykcl7urukyh4g3s56rhlwthmyu66ggm9zr9q2sunjengtnug5geqx7ww3f STRIKE Now Available in NY, Rolls Out Line of Credit Product Bitwise Donates $233K to Bitcoin Open-Source Developers — https://x.com/bitwise/status/2029245847620530531 Gabon | Government Suspends Social Media Access Nationwide Last week, officials in Gabon suspended access to social media platforms indefinitely. To justify the suspension, the country's telecommunications agency said it observed “content that undermines human dignity, the country's institutions, and national security” on digital platforms, but independent voices condemned the action as an obvious crackdown on dissent. Users of TikTok and Meta's platforms, including Facebook and WhatsApp, reported widespread disruptions beginning Wednesday, Feb. 18, severely limiting people's ability to communicate. Freedom tech like Bitchat, which provides offline messaging capabilities, and Nostr, a protocol for decentralized, censorship-resistant communication, will continue to play important roles in preserving speech, expression, and communication as authoritarian regimes increasingly restrict internet freedom. 
FinancialFreedomReport.org Stealth: Private Bitcoin Wallet Privacy Auditor Tool — https://x.com/brenorb/status/2028897371749269890 Cake Wallet Launches Lightning Network Integration — https://x.com/cakewallet/status/2028531059160182943 Tailrelay: Simplified Start9 Access via Tailscale — https://primal.net/e/nevent1qqs9wqhks48fhvxz7j4ngl9mxgsqyempy7g2ywl4kn4km79shzuqulgsn4j65 YakiHonne Update: Scheduled Notes and New Features — https://primal.net/e/nevent1qqsruj5rf9s6rqpzdvpsyc2end2jtn3hyqe8s8ggwld3pmn397r63nqm3p3rn Wisp: New Android Nostr Client in Beta — https://primal.net/e/nevent1qqsddm6payrqvnultvp6n7ck69jwax74e3f7y3278qnhutdu33amxpc5rm3ze A Unified Command-Line Tool for All Google Workspace APIs, Built for Humans and AI Agents — https://github.com/googleworkspace/cli OpenClaw Surpasses React in GitHub Stars — https://x.com/openclaw/status/2028347703621464481 AI Agents Prefer Bitcoin: Research on Monetary Preferences — https://moneyforai.org 3:54 - Iran 14:24 - Dashboard 16:04 - More Iran 42:29 - Daghita 45:34 - Au10tix 47:54 - Moto Graphene 51:24 - Zaps 54:19 - Strike NY 1:00:54 - Bitwise 1:05:14 - HRF Story of the Week 1:08:09 - Stealth wallet 1:12:39 - Chamath fud 1:19:29 - Software updates 1:36:54 - BPI AI money test 1:44:24 - Macro talk Shoutout to our sponsors: Coinkite https://coinkite.com/ Strike https://strike.me/ Stakwork https://stakwork.ai/ Salt of the Earth https://drinksote.com/rhr Follow Marty Bent: Twitter https://twitter.com/martybent Nostr https://primal.net/marty Newsletter https://tftc.io/martys-bent/ Podcast https://tftc.io/podcasts/ Follow Odell: Nostr https://primal.net/odell Newsletter https://discreetlog.com/ Podcast https://citadeldispatch.com/
We also find later on, after Caitanya Mahāprabhu, there was a deficit, and the teachings of Caitanya Mahāprabhu were not readily available. Even the Śrī Caitanya-caritāmṛta—which was the ultimate of all teachings of Śrī Caitanya Mahāprabhu and the synthesis of the Śrīmad-Bhāgavatam—was not readily available. So much so that Bhaktivinoda Ṭhākura, the great ācārya, was looking for the Caitanya-caritāmṛta and could not find a single copy anywhere. The teachings of Caitanya Mahāprabhu had been not just obscured, but also perverted in many different ways. So, we can't take for granted that the product is available. But now it is, and that's due to the mercy of Śrī Caitanya Mahāprabhu, Lord Nityānanda, and the devotees who have carried on the tradition. This morning, on an early walk hoping to see the sunrise, under a beautiful, painted sky, I was thinking of a lyric—or perhaps just an idiom—from modern American society: "Be true to your school." Is it a lyric from a song? Sounds like the Beach Boys or something. I was thinking about what a "school" is. A school is a consolidation of knowledge meant to uplift us; that's really the idea. Humans have an urge for that. They want to be in a place where they can take advantage of all the best knowledge the world has to offer. Nowadays, it has become a little more vocationally oriented, but previously, classical education had to do with bringing all the best books, teachings, and teachers together in one place. That's what universities were for: to get well-grounded in knowledge and gain a wider context and view of the world. Then, as one has a grounding in knowledge, one can also discern what is "the best." That's the idea of Vedānta. There's knowledge, and then there's the end of knowledge. Of course, if one just studies endlessly, it never comes to a conclusion. There are ways in which people say, "When are you going to graduate?" or "When are you going to finish?" 
You get a doctorate, then a post-doc—and then what do you have? What do you do after that? We're not meant simply to study endlessly and come to no conclusion. Even Socrates and Plato talked about coming to an ultimate conclusion. It's not that we just break everything down and are fascinated by the process; there should be a development of character (that's more Aristotle). We should also discern the purpose of life and what we should be aiming for as human beings—something to aim for. So all of that comes to bear in the teachings of Lord Caitanya Mahāprabhu. But what's most unusual and fantastic about Caitanya Mahāprabhu, whom we're celebrating today and tomorrow, is that...

------------------------------------------------------------ To connect with His Grace Vaiśeṣika Dāsa, please visit https://www.fanthespark.com/next-steps/ask-vaisesika-dasa/?utm_source=youtube&utm_medium=video&utm_campaign=launch2025 https://vaisesikadasayatra.blogspot.com/ ------------------------------------------------------------ Add to your wisdom literature collection: https://iskconsv.com/book-store/?utm_source=youtube&utm_medium=video&utm_campaign=launch2025 https://www.bbtacademic.com/books/?utm_source=youtube&utm_medium=video&utm_campaign=launch2025 https://thefourquestionsbook.com/?utm_source=youtube&utm_medium=video&utm_campaign=launch2025 ------------------------------------------------------------ Join us live on Facebook: https://www.facebook.com/FanTheSpark/ Podcasts: https://podcasts.apple.com/us/podcast/sound-bhakti/id1132423868 For the latest videos, subscribe https://www.youtube.com/@FanTheSpark For the latest in SoundCloud: https://soundcloud.com/fan-the-spark ------------------------------------------------------------ #spiritualawakening #soul #spiritualexperience
This lecture discusses key ideas from the 20th century existentialist and feminist philosopher, novelist, essayist, and playwright Simone de Beauvoir's book, The Ethics of Ambiguity. It focuses specifically on what she calls the "paradox of action" which imposes itself upon human beings, which is "no action can be generated for man without it being immediately generated against men". To support my ongoing work, go to my Patreon site - www.patreon.com/sadler If you'd like to make a direct contribution, you can do so here - www.paypal.me/ReasonIO - or at BuyMeACoffee - www.buymeacoffee.com/A4quYdWoM You can find over 3500 philosophy videos in my main YouTube channel - www.youtube.com/user/gbisadler Purchase De Beauvoir's Ethics of Ambiguity - https://amzn.to/32IbKya
Will Madden joins the podcast to talk about Prisma Next and the evolution from Prisma 7, including the decision to migrate away from Rust, ship the core through WebAssembly, and move toward a fully TypeScript ORM. The conversation dives into how modern workflows like agentic coding change the role of an ORM and why tools still matter even when agents can write SQL queries directly. We discuss how feedback loops, guardrails, and the TypeScript type system help prevent errors, along with the new query builder, query linter, and middleware layer that analyze queries using an abstract syntax tree. The episode also covers new database capabilities including Postgres support, upcoming Mongo support, and extensions like PG Vector, enabling vector columns and cosine distance similarity search. You'll also learn about new patterns such as collection methods, scopes, and composable database extensions, plus tooling like driver adapters, a potential compatibility layer, and safeguards like lint rules and a performance budget middleware designed to catch expensive queries before they run. Resources The Next Evolution of Prisma ORM: https://www.prisma.io/blog/the-next-evolution-of-prisma-orm We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey! https://t.co/oKVAEXipxu Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com, or tweet at us at PodRocketPod. Check out our newsletter! https://blog.logrocket.com/the-replay-newsletter/ Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form, and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. 
Chapters 00:00 Introduction 01:00 Prisma Seven and the Move Away from Rust 02:20 Missing Features and Mongo Support 03:00 Why Prisma Started Rebuilding the Core 04:00 Community Sentiment and Developer Feedback 05:20 Rethinking ORMs in the AI and Agentic Coding Era 06:45 Why Agents Still Need ORMs 07:30 Feedback Loops and Guardrails for SQL 08:30 Type Safety and the First Layer of Query Validation 09:30 Query Linter and Middleware Architecture 11:00 Runtime Validation and Query Errors 12:30 Configuring Lint Rules and Guardrails 14:00 Designing ORMs for Humans and Agents 15:30 Collection Methods and ActiveRecord-style Scopes 17:00 Reusable Queries and Domain Vocabulary 18:30 Query Composition and Flexibility 19:00 Performance Guardrails and Query Budget Middleware 20:30 Debugging ORM Performance Issues 21:00 Query Telemetry and Request Tracing 22:30 Prisma Next Extensibility and Database Plugins 23:00 Using PGVector and Vector Search 24:00 Database Drivers and Backend Architecture 25:00 Native Mongo Support in Prisma Next 26:00 Community Extensions and Middleware Ecosystem 27:00 Runtime Schema Validation Use Cases 28:00 Writing Custom Query Validation Rules 29:00 Migration Paths from Prisma Seven 30:30 Compatibility Layers vs Parallel Systems 32:00 Prisma Next Roadmap and Timeline 34:30 What Developers Will Be Most Excited About 35:30 Final Thoughts and Community Feedback
Doug Foltz explains how he used AI to solve a real coach-development bottleneck: mentor coaching doesn't scale. By building a competency rubric and an AI "agent" that evaluates coaching transcripts, Doug's team reduced hours of expert analysis to minutes—then re-centered the human work where it matters most: reflection, agency, and a short mentor-coaching conversation. The bigger idea: "communal co-intelligence"—AI not just as a personal assistant, but as a tool that helps a whole coaching community preserve culture, build consistency, and scale development without losing what makes coaching human. Episode description How do you scale mentor coaching when you don't have the budget—or the hours? Doug Foltz (Content Engineering & Value Alignment Lead at Gloo, DMin candidate at Asbury, and longtime church-planting coach) shares how he built an AI-supported mentor-coaching loop: a detailed competency rubric + an AI evaluator that reviews transcripts in minutes. But Doug also warns about a hidden danger: AI can bypass reflection, which is essential for adult learning. So they intentionally added "friction" back into the process—reflection first, then AI feedback, then a short human coaching conversation. Along the way, Doug introduces a powerful concept: communal co-intelligence—AI that strengthens a community's shared language, values, and coaching culture. 
Key moments (timestamps) 0:02–1:20 – Who Doug is + why Brian calls him the "AI guy" 1:49–3:21 – The real problem: coaching training doesn't stick without mentor coaching 3:34–5:06 – Doug's solution: a rubric + AI agent that evaluates transcripts (levels 1–3) 6:44–8:15 – The twist: reflection is essential; AI can accidentally remove it 8:28–9:00 – The human loop: 15–20 minute mentor conversation after reflection + report 10:38–14:35 – Why AI matters: replaces 3–4 hours of expert analysis with minutes 15:04–16:15 – The church's role: protect what's uniquely human; set boundaries 16:27–19:16 – "Communal co-intelligence": AI + a coaching community's culture and standards 21:24–23:00 – What they observed: fast growth from Level 1 → Level 2; harder jump to Level 3 23:29–25:46 – Craft guild model: learn the fundamentals, then innovate without losing the core 28:57–31:14 – What's next: agentic systems, tools + data access, and AI as "work orchestrator" Key ideas AI can scale mentor coaching by doing the transcript evaluation quickly and consistently. Reflection is non-negotiable in adult learning; AI can "steal" it by doing the thinking for you. The solution is intentional friction: reflection → AI feedback → short human mentor coaching. Agency matters: don't make AI the all-knowing guru; keep the learner's authority intact. Communal co-intelligence: AI can reinforce a shared coaching culture across many coaches. Early gains can be rapid (novice → intermediate), but advanced mastery takes longer. The future is agentic systems that combine tools + data + context to orchestrate real work. Quotable lines (pull quotes) "We really can't scale coaching very well." "Mentor coaching is what makes the training stick." "My process actually bypasses [reflection] entirely." "We added a friction point… and we made them reflect." "You don't want the AI to be the all-knowing guru." "That's the part of the process that we said, we're going to replace." 
(re: 3–4 hours of evaluation) "Communal co-intelligence… it's the AI with our coaching community." "It becomes this orchestrator of work within an organization." Discussion questions (for Learning Lab / staff meeting) Where would AI help us scale without compromising what we value most? What part of our development process must remain human-only? Where might AI accidentally remove reflection, struggle, or ownership? What would a "reflection-first" workflow look like for our coaches or trainers? What are the risks of communal AI (shared culture) becoming static or overly controlling? If AI becomes an "orchestrator of work," what data is off-limits—and why? Practical takeaway AI is best used as a leverage tool—not a replacement for learning. Let it do the heavy lift of analysis and pattern recognition, then spend your human time where it counts: reflection, discernment, presence, and coaching conversations that build ownership and growth. If you design it well, AI doesn't dilute your culture—it can actually help you scale it.
Reeve Collins is a pioneer in blockchain, stablecoins, and digital assets, with a legacy of creating category-defining innovations. After co-founding Tether, the world's first stablecoin, and launching the first platform for NFTs, he helped shape two of the most transformative movements in Web3. Reeve is currently the co-founder and chairman of WeFi, an onchain infrastructure provider for banks, and of STBL, the next generation stablecoin protocol, as well as the pending chairman of ReserveOne, a publicly traded digital asset management company.
What if the biggest influence on your team's behavior isn't the company handbook, the leadership training, or the motivational speech you gave last quarter? What if it's you? Humans are wired to observe and model behavior. Decades of research in behavioral psychology show that people learn far more from what they see leaders do than from what leaders say. Which means something leaders don't always want to hear: Your team is modeling you. If accountability is weak, if gossip spreads, if difficult conversations never happen, there's a strong chance your team has learned—intentionally or not—that those behaviors work in your environment. In this episode of Leadership Sandbox, Tammy J. Bond breaks down the truths behind behavioral modeling and what it means for leaders who want to change the culture and performance of their teams. Drawing on the work of psychologist Albert Bandura and the concept of social learning theory, Tammy exposes why behavior spreads quickly inside organizations and why leadership example matters more than any training program or policy. If you want to understand why the behaviors showing up on your team look the way they do—and what to do about it—this episode will challenge the way you think about leadership influence.
In this episode of Find Your Edge, Coach Chris Newport sits down with Dr. Jerry Yoo of Next Level Physical Therapy to talk about what it really takes to stay active for life—especially for runners and endurance athletes over 40.

We cover:
Why the best time for PT is often before you're injured
The "two diagnoses" in PT (symptom vs root cause)
Shockwave therapy and regenerative tools
Warm-up + cool-down best practices
Strength training for endurance performance and longevity
Simple breathing tools for mobility and race-day nerves

Learn more: nlphysio.com
Instagram: @drjerryYoo | @nextlevelphysiopt
Read more and watch here: https://www.theenduranceedge.com/dr-jerry-yoo-next-level-physio-longevity-for-athletes

Train with structure, community, and purpose—without paying for full coaching. The Endurance Edge Club gives you professionally built training plans in Training Peaks Premium, access to virtual workouts, team socials, and athlete-led sessions. Join monthly or save nearly 50% with an annual plan and get the tools you need to stop guessing and start making real progress. Learn more and join now at TheEnduranceEdge.com/club

Support the show
The reception to our recent post on Code Reviews has been strong. Catch up!

Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street: by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it. 
Most commentators do not understand SaaS businesses because they have never scaled one themselves, nor deeply reflected on what the true value proposition of SaaS is.

We also discuss Your Company is a Filesystem. We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps
* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe, at best, review it. That's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense, actually. We love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and like chatted a little bit, but like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.

Aaron Levie: I love agents.

swyx: Yeah. Open Claw just got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire. Executive...

swyx: ...hire.

Aaron Levie: Executive hire. Okay. Executive hire.

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that we get super excited by, that I think should be relatively obvious, is we've built a platform to help enterprises manage their corporate files, the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of answers to new questions, of data that will transform into something else that produces value in your organization. 
It contains the answer for the new employee that's onboarding, that needs to ramp up on a project. Um, it contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data that previously we've been just sort of storing and, you know, occasionally forgetting about, 'cause we're only working on the new active stuff, all of that information becomes valuable to the enterprise. And it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well.

And, um, you know, sometimes that will be an agent that is sort of working on behalf of you, effectively as you, and they are kind of accessing all of the same information that you have access to and operating as you in the system. And then sometimes there's gonna be agents that are just effectively autonomous and kind of run on their own, and you're gonna collaborate and work with them kind of like you would another person. Open Claw is the most recent, and maybe the first real, version of what that could look like, the one updating everybody's views of this landscape: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and then it sort of has this sandbox environment. 
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.

swyx: The sort of shorthand I put it as is: as people build agents, everybody's just realizing that every agent needs a box.

Aaron Levie: Yes.

swyx: And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if we can make that go viral, uh, like, I think that terminology... that's the...

swyx: ...tagline. Every agent...

Aaron Levie: ...needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna... yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this?

swyx: Okay, let's do it. Yeah.

Aaron Levie: Uh, my work here is done, and I got the value I needed outta this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, um, you know, so the thing that we kind of think about is, um, whether you think the number is 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed to the data that they shouldn't have access to. There's gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to.

Jeff Huber: Oh, we have, God...

Aaron Levie: Right? 
I mean, that's just gonna happen all over the place, right? So then the thing is, how do you make sure you have the right security, the permissions, the access controls, the data governance? Um, we actually don't yet exactly know, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial, sort of, uh, requirements that a human did? Or is the risk fully on the human that was interacting with, or created, the agent? All open questions. But no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which a lot of the security people are talking about, right? Like, I always think of this as like, well, you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: And, uh, well, I don't know if it's that simple, but is Box going to have an opinion on that? Or are you just gonna be like, well, we're just the sort of source layer. Let Okta or Auth0 handle that.

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion, more than on other topics probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're like in our world, thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal? 
And the reason why it's a really big deal is because sometimes people sort of say, well, just give the agent an account on the system, and treat it like every other type of user on the system. The [00:07:00] problem is that I, as Aaron, don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I am not liable for anything that they do. And they have, you know, strict privacy requirements on everything that they work on. Agents don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy, because, you know, it can't fully be autonomously operated and it doesn't have any legal, you know, kind of responsibility. So thus you can't just be like, oh, well, I'll just create a bunch of accounts and then I'll kind of work with that agent and I'll talk to it occasionally. Like, you need oversight of that. And so then the question is, how do you have a world where the agent, sometimes, you have oversight of, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to. So we have all of these new boundaries that we're gonna have to figure out. You know, it's really, really easy... so far we've been in easy mode. We've hit the easy button with AI, which is: the agent just is you. When you're in Claude Code, and you're in Cursor, and you're in Codex, the agent is you. You're authing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents kind of running on their own. 
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise and not dramatically increase the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and identity vendors as being a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases, which is: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which, um, I think is kind of interesting to think about is, you know, how humans work, right? Like, I may not just give you access to the whole file. I might, like, sit next to you and, like, scroll to this one part of the file and just show you that one part.

swyx: Partial file access.

Jeff Huber: I'm just saying, I think, like... RAG does seem to be dead, right? Like, if you wanna say something is dead, uh-huh, probably RAG is dead. And, uh, like, the auth story to me seems incredibly unsolved and unaddressed by, like, the existing state of AI vendors.

Aaron Levie: Yeah, I think, um... I mean, you're taking it, obviously, really to the limit of what we probably need to solve for. Yeah. And we built an access control system that was kind of like, you know, its own little world for a long time. And, um, the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you access higher up in the system, you get everything below. 
And that kind of created immense flexibility, because I can point you to any layer in the tree, but then you're gonna get access to everything below it. And that [00:10:00] mostly works in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? And which parts do I get to look at as the creator of the agent? These are just brand-new problems. And when there was a human there, that was really easy to do. If the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own things that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are probably some of the most boring problems for 98% of people on the internet, but they will be the problems that are the difference between whether you can actually have autonomous agents in an enterprise context

swyx: Yeah.

Aaron Levie: that are not leaking your data constantly.

swyx: No, I mean, you know, I run a very, very small company for my conference, and we already have data sensitivity issues.

Aaron Levie: Yes.

swyx: Some of my team members cannot see the others'. And I can't imagine what it's like to run a Fortune 500 where you have to [00:11:00] worry about this. I'm just kind of curious: you talk to a lot, like 70, 80% of the Fortune 500 are your customers.

Aaron Levie: Yep. 67%. Just so we're being very precise.

swyx: So, yeah, I'm not...

Aaron Levie: Okay. Okay.

swyx: Something I'm rounding up. Yes. Round up.
swyx: I'm projecting, for...

Aaron Levie: ...the government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like we've gotta be on this, like we're taking way too long to get to 80%.

swyx: Well, no. I mean, so, how are they approaching it, right? Because you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so this is actually the stark reality that, unfortunately, is kind of pouring water on the party a little bit.

swyx: Yes.

Aaron Levie: We all in Silicon Valley have the absolute best conditions possible for AI, ever. And I think we all saw the Dwarkesh, you know, Dario podcast, and this idea of AI coding: why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. A new engineer comes on and they can just go and find the stuff they need to work with. It's a fully text-in, text-out medium; it's just gonna be text at the end of the day. So it's really great in terms of what the agent can work with. Obviously the models are super trained on that dataset. The labs themselves have a really strong, self-reinforcing positive flywheel for why they need to do agentic coding deeply.
So then you get better tooling, better services. The actual developers of the AI are daily users of the thing they're working on. Versus, you know, there are probably only like seven Claude Cowork legal plugin users at Anthropic on any given day, but there are a couple thousand Claude Code users every single day. So just think about which one they're getting more feedback on, all day long. So you just go through this list. Everybody who's a [00:13:00] developer is by definition technical, so they can go install the latest thing. We're all generally online, or at least, you know, the weird ones are, and we're all talking to each other, sharing best practices. That's already eight differences versus the rest of the economy. Every other part of the economy has like six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to a tiny little subset of the total data that's gonna be relevant to do your job. And you have to go talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room folder. And the information is actually in a completely different organization that you now have to go and run into. You have this endless list of access controls and security. And, as you talked about, you have a medium which is not just text, right? You have a Zoom call where you're getting all of the requirements from the customer. You have a lot of in-person conversations, and you're doing in-person sales, and how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. I don't know if you followed some of that conversation that went viral? It's not that simple, and the code base doesn't have all the knowledge, but you're a lot better off than you are with other areas of knowledge work. We have documentation practices; you write specifications. Those things don't exist for like 80% of the work that happens in the enterprise. That's the divide that we have, which is that AI coding has fully reached escape velocity in how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work. Where the tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem, because again, you have access control challenges, you have different data formats, you have end users that are gonna need to be trained through this, as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so I think we have to be prepared as an industry that we are gonna be on a multi-year march to be able to bring agents to the enterprise for these workflows. And I think probably the thing we've learned most in coding that the rest of the world is not yet ready for (I mean, they'll have to be ready for it, because it's just gonna inevitably happen) is this: if you think about the practice of coding today versus two years ago, it's probably the most changed workflow in maybe the history of time, in terms of how much it's changed, right?

swyx: Yeah.
Aaron Levie: Like, has any workflow in the entire economy changed that quickly? At least in knowledge worker workflows, there has very rarely been an event where one piece of technology and work practice has so fundamentally changed what you do. You don't write code; you talk to an agent and it goes and [00:16:00] does it for you, and at best you maybe review it. And even that's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works.

swyx: Mm-hmm.

Aaron Levie: All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective: to give agents the context that they need, to figure out what kind of prompting works, and to figure out how you ensure that the agent has the right access to information to be able to execute on its work. This is not the panacea that people were hoping for, where the agent drops in and just automates your life. You have to basically re-engineer your workflow to get the most out of agents, and that's just gonna take multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and are kind of wired into doing this, [00:17:00] 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: I love pushing back. I think that is what a lot of technology consultants love to hear, this sort of thing, right? To be first to embrace the AI, to get to the promised land, you must pay me so much money to adopt the prescribed way of conforming to the agents.

Aaron Levie: Yes.
swyx: And I worry that you will be eclipsed by someone else who says, no, come as you are.

Aaron Levie: Yeah.

swyx: And we'll meet you where you are.

Aaron Levie: And what was the thing that went viral a week ago? OpenAI, probably, is hiring FDEs to go into the enterprise. And Anthropic is embedded at Goldman Sachs. So if the labs are having to do this, if the labs have decided that they need to hire FDEs and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. So, to your point, I actually think this is a market opportunity for new professional services and consulting [00:18:00] firms that are like agent builders. They go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and reconstruct your business process. So you're not doing most of the work; you're telling agents how to do the work and then you're reviewing it. But I haven't seen the thing that can just drop in and let you not go through those changes.

swyx: I don't know how that kind of sales pitch goes over. You're saying things like, well, in my nice beautiful walled garden, here's this beautiful Box account that has everything.

Aaron Levie: Yes.

swyx: And I'm like, well, most real life is extremely messy. And poorly named, and there's duplicates, and outdated s**t.

Aaron Levie: A hundred percent. So, no, we agree that getting to the beautiful garden is gonna be tough.

swyx: Yeah.

Aaron Levie: There's also the other end of the spectrum, where it's just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. There's [00:19:00] no AGI that will solve that. So we're gonna have to land somewhere in between, which is that we all collectively get better at documentation practices, at having authoritative, relatively up-to-date information and putting it in the right place. Agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll just have higher velocity. And we see this a lot firsthand. We built a series of agents internally that can have access to your full Box account, and you give one a task and it can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, because if you gave that task to an agent nine months ago, you were just gonna get lots of bogus answers. It's gonna say, hey, here are [00:20:00] five documents that all kind of smell like the right thing. But you're putting me on the clock, 'cause my system prompt says, you know, be pretty smart, but also try and respond to the user. And it's gonna respond, and it's like, ah, it got the wrong document. And then you do that once or twice as a knowledge worker, and you're just never

swyx: again.

Aaron Levie: Never again. You're just done with the system.

swyx: Yeah. It doesn't work.

Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and whatever the latest GPT-5.3 will be, those things are getting better and better, and they're using better judgment. And with all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of what the right documents are that it should be working with. And again, the intelligence level of a model six months ago, [00:21:00] it'd just be throwing a dart: I'm gonna grab these seven files and I pray that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like, no, that one doesn't seem right relative to this question, because I'm seeing some signal that's contradicting where the document would normally be in the tree and who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or ten minutes, for a search-retrieval-type task, your agent's not gonna be able to do it any better. You see this all day long.

Context Engineering and Search Limits

swyx: So this touches on a thing that you're just passionate about, which is context engineering. I'm just gonna let you ramble or riff on context engineering.
If there's anything... he did really good work on context rot, which has really taken over as the term that people use, and the reference.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there are certainly a lot of ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, no, but I think there was this moment, you know, like two years ago, before we knew where the gotchas were gonna be in AI, and someone was like, well, infinite context windows will just solve all of these problems, 'cause you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would just simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff. So I'm willing to just say, sure, ten years from now...

swyx: Never say never.

Aaron Levie: Ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with. Or, I don't even know what the latest graph is before massive degradation... 60? Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with. I have 10 million documents.
I have, I have 10 million documents.Which, you know, maybe is times five pages per document or something like that. I'm at 50 million pages of information and I have 60,000 tokens. Like, holy s**t. Yeah. This is like, how do I bridge the 50 million pages of information with, you know, the couple hundred that I get to work with in that, in that token window.Yeah. This is like, this is like such an interesting problem and that's why actually so much work is actually like, just like search systems and the databases and that layer has to just get so locked in, but models getting better and importantly [00:24:00] knowing when they've done a search, they found the wrong thing, they go back, they check their work, they, they find a way to balance sort of appeasing the user versus double checking.We have this one, we have this one test case where we ask the agent to go find. 10 pieces of information.swyx: Is this the complex work eval?Aaron Levie: Uh, this is actually not in the eval. This is, this is sort of just like we have a bunch of different, we have a bunch of internal benchmark kind of scenarios. Every time we, we update our agent, we have one, which is, I ask it to find all of our office addresses, and I give it the list of 10 offices that we have.And there's not one document that has this, maybe there should be, that would be a great example of the kind of thing that like maybe over time companies start to, you know, have these sort of like, what are the canonical, you know, kind of key areas of knowledge that we need to have. We don't seem to have this one document that says, here are all of our offices.We have a bunch of documents that have like, here's the New York office and whatever. So you task this agent and you, you get, you say, I need the addresses for these 10 offices. Okay. And by the way, if you do this on any, you know, [00:25:00] public chat model, the same outcome is gonna happen. 
But for a different kind of query. You give it... you say, I need these ten addresses. How many times should the agent go and do its search before it decides whether or not there's just no answer to this question? Often, and especially with, let's say, the lower-tier models, it'll come back and give you six of the ten addresses, and it'll just say, I couldn't find the other

swyx: four. It doesn't know what it doesn't know.

Aaron Levie: It doesn't know what it doesn't know. Yeah. So the model is like, when should it stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up, and I didn't even know that I made it up. Should it just keep re-searching? Should it read every single file in your entire Box account until it has exhausted every single piece of information?

swyx: Expensive.

Aaron Levie: These are the new problems that we have. So something like, let's say, a new Opus model is sort of like: okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. And at [00:26:00] some point I'm gonna stop searching, 'cause I've determined that no amount of searching is gonna solve this problem; I'm just not able to do it. And that judgment is a really new thing that the model needs to have: when should it give up on a task, 'cause it just can't find the thing? That's the real world of knowledge work problems. And this is the stuff that the coding agents don't have to deal with.
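The retry-then-give-up judgment he describes can be sketched as a bounded search loop that reports what it could not find instead of guessing. Everything here (the `search` backend, the office list, the round budget) is a toy stand-in, not Box's agent harness:

```python
# Minimal sketch of the "when should the agent give up" loop:
# retry unresolved items for a bounded number of rounds, then
# surface the unknowns explicitly rather than hallucinating them.

def find_addresses(offices, search, max_rounds=3):
    found, remaining = {}, set(offices)
    for round_num in range(max_rounds):
        for office in list(remaining):
            hit = search(f"{office} office address", attempt=round_num)
            if hit:
                found[office] = hit
                remaining.discard(office)
        if not remaining:
            break  # everything located; no need to burn more searches
    # Report the misses instead of inventing addresses for them.
    return found, sorted(remaining)

# Fake backend: knows two offices; one only resolves on a retry.
def fake_search(query, attempt):
    known = {"New York office address": "123 Hudson St"}
    retry_only = {"London office address": "1 Poultry"}
    if query in known:
        return known[query]
    if query in retry_only and attempt >= 1:
        return retry_only[query]
    return None

found, missing = find_addresses(["New York", "London", "Atlantis"], fake_search)
print(found)    # New York found immediately, London on the second round
print(missing)  # the made-up office is reported as unfound, not invented
```

The hard part he's pointing at is that real agents must learn the `max_rounds` equivalent themselves: how much searching a question deserves before "I couldn't find it" is the right answer.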
Because you're usually not asking a coding agent about existing information; you're mostly creating net-new information coming right out of the model. Obviously it has to know about your code base and your specs and your documentation. But when you deploy an agent on all of your data, now you have all of these new problems that you're dealing with.

Jeff Huber: Our follow-up research to context rot is actually on agentic search. We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're sort of highlighting this explore-exploit trade-off.

swyx: You're just a Debbie Downer, saying everything doesn't work.

Aaron Levie: Well...

Jeff Huber: Somebody has to be.

Aaron Levie: Um, can I just throw out one more thing that's different between coding and the rest [00:27:00] of knowledge work that I failed to mention? One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever: if you've built a working solution, that is ultimately what the customer is paying for, whether I have a lot of slop, a little slop, or whatever. I'm sure there are lots of code bases we could go into at enterprise software companies that are just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can type into it, it does its thing. Knowledge work doesn't have that property.
If I have an AI model go generate a contract, and I generate that contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part that you want [00:28:00] them to work on, and just do the thing that you want them to do? And, you know, in engineering, you can't be disbarred as an engineer, but you can be disbarred as a lawyer. You can do the wrong medical thing in healthcare. There's no equivalent to that in engineering.

swyx: Do you want there to be? Because I've considered...

Jeff Huber: In civil engineering there is, right?

Aaron Levie: Sure, civil engineering, for sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback, and you'll be in a meeting, but you have not been disbarred as an engineer. We don't revoke your computer science degree.

Jeff Huber: Blameless postmortem.

Aaron Levie: Yeah, exactly. Exactly. So maybe we collectively as an industry need to figure out what you're liable for, not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But in knowledge work, that's the real hostile environment that we're operating in.

swyx: Hmm. I do think a lot of last year's story, 2025's, was the rise of coding agents, and I think the [00:29:00] 2026 story is definitely knowledge work agents.

Aaron Levie: A hundred percent.

swyx: Right. And I think OpenClaw and Cowork are just the beginning. The next ones are gonna be absolute craziness.

Aaron Levie: It is.
And it's gonna be this wave where we try to bring over as many of the practices from coding, because that will clearly be the forefront: tell an agent to go do something, it has access to a set of resources, and you need to be responsible for reviewing it at the end of the process. That, to me, is the template that goes across knowledge work. And Claude Cowork is a great example; OpenClaw is a great example. You can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.

swyx: Okay. Um, we touched on evals a little bit. You had the report that you were gonna bring up, and then I was gonna go into Box's evals, but go ahead, talk about your agentic search thing.

Jeff Huber: Yeah. Mostly a few of the insights. Number one: frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off where we kind of understand when to stop doing something. Also, humans are actually pretty good at forgetting, at pruning their own context, whereas agents are not. In an agent's context history, if it knew something was bad, and you could even see in the reasoning trace, hey, that probably wasn't a good idea; if it's still in the trace, still in the context, it'll still do it again.

swyx: Uh-huh.

Jeff Huber: And so I think pruning is already becoming a thing, right? Letting the model self-prune the context window is

swyx: gonna be a big deal. Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that you made a mistake in the past so it doesn't repeat it.

Jeff Huber: Yeah. But cut it out so it doesn't get distracted by it again.
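Huber's pruning idea can be sketched as a transform over a chat-style message list: drop the bulky failed attempt from the context (so it can't act as an accidental few-shot example) but leave a terse note so the model also doesn't retry it. The message format and field names here are generic illustrations, not any specific framework's API:

```python
# Sketch of context pruning: replace failed tool calls with a short
# "already failed" note instead of leaving the full trace in context.
# Generic message dicts; not a real agent framework's schema.

def prune_failures(messages):
    pruned = []
    for msg in messages:
        if msg.get("role") == "tool" and msg.get("ok") is False:
            # Cut the failed trace, keep a one-line reminder.
            pruned.append({
                "role": "system",
                "content": f"Note: '{msg['call']}' already failed; do not retry.",
            })
        else:
            pruned.append(msg)
    return pruned

history = [
    {"role": "user", "content": "Find the Q3 board deck."},
    {"role": "tool", "ok": False, "call": "search('Q3 deck', folder='archive')",
     "content": "...thousands of tokens of irrelevant results..."},
    {"role": "tool", "ok": True, "call": "search('Q3 board deck')",
     "content": "Found: q3_board_deck.pdf"},
]

for m in prune_failures(history):
    print(m["role"], "-", m["content"][:60])
```

This matches swyx's summary of the trade-off: the mistake itself is cut out, but the fact that it was a mistake stays in context.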
Jeff Huber: 'Cause, really... so it will repeat its mistake just because it's in

swyx: the

Jeff Huber: context.

Aaron Levie: It's in the context, so it's a few-shot example. Even if...

Jeff Huber: It's like, oh, this

Aaron Levie: is a great thing to go try, even if

Jeff Huber: it didn't work.

Aaron Levie: Yeah.

Jeff Huber: Exactly. So there's a bunch of stuff there.

Aaron Levie: It's just Groundhog Day inside these models. I'm gonna keep doing the same wrong

Jeff Huber: thing. In a hand-wavy sense, I feel like, you know, as a creative analogy: you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is kind of how we think about what we're doing, right? Certain facts might be sort of overly pinning it to certain sectors of latent space.

swyx: We have a bell; our editor has a bell every time you say that.

Jeff Huber: So you have to, like, remove those...

swyx: You should have a gong, like TBPN or something.

Jeff Huber: You remove those links to kind of give it the freedom to do what it needs to do. So, yeah, we'll release more soon.

Aaron Levie: That's awesome.

Jeff Huber: That'll be cool.

swyx: We're a cerebral podcast; people listen to us and think really deep. So yeah, we try to keep it subtle.

Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: Um, you guys do have evals. You talked about your office thing, but you've also been promoting APEX agents and complex work. Wherever you wanna take this.

Aaron Levie: APEX is, obviously, Mercor's agent eval. We supported that by opening up some data for them around how we see these data workspaces in the regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? So we [00:32:00] partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents in a range of industries. We previously did this as a one-shot test of purely the model, and then we realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family: Sonnet 4.6 versus Sonnet 4.5.

swyx: Yeah. We have this up on screen.

Aaron Levie: Okay, cool. So you're seeing it somewhere. I forget the total; it was like a 15-point jump, I think, on the overall.

swyx: Yes.

Aaron Levie: And it's just these incredible leaps that are starting to happen.

swyx: And the model doesn't know any of it; it's completely held out.

Aaron Levie: This is not in any public data, which has, you know, benefits. This is just a private eval that we do, and then we just happen to show it to the world. [00:33:00] So you can't train against it. And I think it's representative of the model's reasoning capabilities, its test-time compute capabilities, thinking levels, all the context rot issues. So many interesting capabilities that are now improving.

swyx: One sector that you have, that's interesting.

Industries and Datasets

swyx: People are roughly familiar with healthcare and legal, but you have public sector in there.

Aaron Levie: Yeah.

swyx: Uh, what's that?
Like, what is that?

Aaron Levie: Yeah, and we actually test against, I dunno, maybe ten industries. We end up usually just cutting a few that we think have interesting gains. And there are a lot of government-type documents.

swyx: What is that, government-type documents?

Aaron Levie: Government filings.

swyx: Like a tax return?

Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data. So think about research, those types of data sets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.

swyx: That one you can dogfood.

Aaron Levie: Yeah, exactly. Yes. [00:34:00] So we run the models now in more of an agent mode, but still with kind of limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.

swyx: Yeah. I mean, I think every serious AI company needs something like that: this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.

Aaron Levie: There's two dimensions, right? There's: how are the models improving, so which models should you either recommend a customer use or adopt yourself? But then, every single day, we're making changes to our agents, and you need to know

swyx: if you regressed.

Aaron Levie: Yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited for what Braintrust is doing, excited for, you know, LangSmith, all the things. Right now it's the AI companies that are the customers of these tools, but literally every enterprise...
Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work. You'll have an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're gonna need to know the quality of your pipeline.

swyx: Yeah.

Aaron Levie: So, huge market with agent evals.

swyx: Yeah.

Building the Agent Team

swyx: And, you know, I'm gonna shout out your team a bit. Your CTO, Ben, did a great talk with us last year.

Aaron Levie: Awesome.

swyx: And he's gonna come back again for World's Fair.

Aaron Levie: Oh, cool. Yep.

swyx: Just talk about your team; brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work behind all this.

Aaron Levie: Biggest shout-out: we have a couple folks, Dya, uh, Sidarth, that kind of run this. They're like a tag-team duo on our evals. Ben, our CTO, is heavily involved; Yasha, our head of AI; a bunch of folks. And evals is one part of the story, and then the full AI and agent team [00:36:00] is core to this whole effort. So there's probably, I don't know, maybe a few dozen people that are the epicenter. And then you have layers and layers of concentric circles: there's a search team that supports them, and an infrastructure team that supports them. And it's starting to ripple through the entire company.
But there's that core agent team, which is a pretty close-knit group.

swyx: The search team is separate from the infra team?

Aaron Levie: I mean, we have to do every layer of the stack ourselves, except for pure public cloud. I don't even know what our public numbers are, but you can just think about it as: a lot of data is stored in Box. So you have every layer of the stack, how you manage the data, the file system, the metadata system, the search system, all of those components. And they all now have to understand this new customer, which is the agent. They've been building for two types of customers in the past: users and applications. [00:37:00] And now you've got this new agent user, and it sometimes comes in with different properties. Like, hey, maybe sometimes we should do embedding-based search versus your typical semantic search. You have to build the capabilities to support all of this. And we're testing stuff, throwing things away when something doesn't work or isn't relevant. It's total chaos. But all of those teams are supporting the agent team, which is coming up with its requirements of what we need.

swyx: Yeah. We just came from a fireside chat you did, and you talked about how you're doing this. It's kind of like an internal startup within the broader company. The broader company is like 3,000 people, but there's this core team, like, here's the innovation center.

Aaron Levie: Yeah. I wanna be sensitive: I don't call it the innovation center.
Only because I think everybody has to do innovation. There's a part of the company that is sort of do-or-die for the agent wave.

swyx: Yeah.

Aaron Levie: And it only happens to be more of my focus simply because it's existential that [00:38:00] we get it right. All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not gonna be the make-or-break of whether we get agents right; that already exists, and we need to keep innovating there. I don't know what the exact precise number is, but it's not a thousand people and it's not ten people. There's a number of people that are the kind of startup within the company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. And that's where I spend a lot of my time. Ben and Yosh and Diego and Teri, people across the team, are working on it.

swyx: Yeah. Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about this: you talked a lot about read workflows over your Box data, right? Gen search, questions, queries, et cetera. But what about write, or authoring, workflows?

Aaron Levie: Yes. I've [00:39:00] already probably revealed too much, actually, now that I think about it. I've talked about whatever

Jeff Huber: Whatever you can.

Aaron Levie: Okay. It's just us. Yeah. Okay.
Of course, of course. So I'll make it a little conceptual, because I've already said things that are not even GA. But we've kind of danced around it publicly, so, yeah, okay. Hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged to go figure out what exactly your line of thinking is. They can connect the dots.

Aaron Levie: Yeah. So I would say that, as a place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something: use the file system to create something, or store off data it's working on, or have various files it's writing to about the work it's doing. So we do see it as a total read-write. The harder problem has so far been the read, because again, you have that kind of 10-million-to-one [00:40:00] ratio problem, whereas writes, a lot of that's just gonna come from the model, and we'll just put it in the file system and use it. So it's a technically easier problem. The one part that's not necessarily technically hard, just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models. These formats just weren't built for this.

swyx: They're working on it.

Aaron Levie: They're working on it. Everybody's working on it.

swyx: Every launch is like, well, we do PowerPoint now.

Aaron Levie: Yeah, it's getting a lot better each time.
But then you'll do this thing where you ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little bit. In code, obviously, you could care about those kinds of things if you really care about how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me, a totally different font midway through the document. Those are the kinds of things you run into a lot on the content creation side. So we are gonna have native agents that do all of those things; they'll be powered by the leading models and labs. But the thing that I think is probably gonna be a much bigger idea over time is any agent on any system, again, using Box as a file system for its work. And in that scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that is sandboxed off for its work. People can collaborate in it; it can share with other people. And so we're thinking a lot about what's the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to come to the AI transformation, AI operations things. [00:42:00] One of the tweets that you wanted to talk about, and this is just me going through your tweets, by the way.

Aaron Levie: Oh, okay.
swyx: I mean, like, this is, you read

Aaron Levie: one by one?

swyx: You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.

Aaron Levie: Are we gonna get to, like, February, January or something? Where are we in the timelines? How far back are we going?

swyx: Can you describe Box as a set of skills, right? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to

Aaron Levie: Sorry, is that the question?

swyx: So I think the question is, what if we documented everything, the way that you exactly said? Let's get all the Fortune 500s prepared for agents. Everything's golden and nicely filed away. What's missing? What's left? You've run your company for a decade.

Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, and now it has to get updated. So these systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably gonna be humans giving them the updates. And there is this piece about context graphs that kind of went very viral.

swyx: Yeah.

Aaron Levie: I thought it was super provocative. I agreed with many parts of it. I disagree with a few parts around
You know, it's not gonna be as easy as: if we just had the agent traces, then we can finally do that work. Because there's so much more other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen. You can't have only skills files for your company, because there's gonna be a lot of other stuff that happens and changes over time. Most companies are practically apprenticeships.

swyx: Most companies are practically apprenticeships.

Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up. All that tacit knowledge is not written down. But it would have to be if you wanted to give it to an agent, right? And so that seems to me to be

Aaron Levie: One is, I think you're gonna see a premium on companies that can document this. There'll be a huge premium on that. Can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization, because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee, because you've captured the knowledge that's in the heads of those top employees and made it available? So you can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with, you still have this reality that at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody. So now you have to organize it in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?

swyx: Nope.

Aaron Levie: Uh, yes, you saw it. Yeah. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay, we have it up on screen.

Aaron Levie: Okay. Yeah. It's basically about how we already organize in this kind of permission-structure way, and these are the natural ways that agents can now work with data. So it's kind of an interesting metaphor, but I do think companies will have to start to think about how they digitize more of that data. What was your take?

Jeff Huber: Yeah, I mean, the company is probably like an ACID-compliant file system.

Aaron Levie: Uh, yeah.

Jeff Huber: Which I'm guessing Box is, right?

swyx: Yeah. [00:46:00]

Jeff Huber: Which you have a great piece on.

swyx: Well, my direction is a little bit different. I wanna rewind to the graph word you said; that's a magic trigger word for us. I always ask: what's your take on knowledge graphs? Especially with every database person, I just wanna see what they think.
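The "company as a file system" point above can be made concrete with a small sketch: before an agent acting on someone's behalf sees any document, retrieval results are filtered through the same permission structure that person already has. All names, types, and the toy corpus here are illustrative, not Box's API.

```python
# Sketch of permission-aware retrieval for agents: an agent acting for a
# principal only ever sees documents that principal could open themselves.

from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    path: str
    text: str
    allowed_groups: frozenset  # groups permitted to read this doc

@dataclass(frozen=True)
class Principal:
    name: str
    groups: frozenset  # group memberships of the human the agent acts for

def search(corpus, query: str):
    """Toy lexical search: return docs whose text contains the query term."""
    return [d for d in corpus if query.lower() in d.text.lower()]

def permission_filtered_search(corpus, query: str, who: Principal):
    """Apply the org's access structure on top of raw search results."""
    return [d for d in search(corpus, query) if who.groups & d.allowed_groups]

corpus = [
    Doc("/finance/q3.md", "Q3 revenue forecast", frozenset({"finance"})),
    Doc("/eng/runbook.md", "revenue dashboard runbook", frozenset({"eng"})),
]

alice = Principal("alice", frozenset({"eng"}))
hits = permission_filtered_search(corpus, "revenue", alice)
print([d.path for d in hits])  # only the doc alice's groups can read
```

The design point is that the filter sits between search and the agent, so every retrieval path (lexical, embedding-based, or otherwise) inherits the same access structure.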
There have been knowledge graph hype cycles, and you've seen it all.

Aaron Levie: Hmm. I actually am not the expert in knowledge graphs, so you might need to

swyx: You don't need to be an expert. I think it's just: how seriously do people take it? Is there a lot of potential in the offing?

Aaron Levie: Well, can I first understand whether this is a loaded question, in the sense of: are you super pro, super anti, medium?

swyx: I see pros and cons. But I think your opinion should be independent of mine.

Aaron Levie: Totally. I just want to see what I'm stepping into.

swyx: It's a huge trigger word for a lot of people in our audience, and they're trying to figure out, why is that?

Aaron Levie: Why is this such a

swyx: hot item for them? Because a lot of people get graph religion. They're like, everything's a graph. Of course you have to represent it as a graph. How do you solve your knowledge [00:47:00] changing over time? Well, it's a graph.

Aaron Levie: Yeah.

swyx: And there's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.

Aaron Levie: And the people who say you don't need it, what are they arguing for?

swyx: Markdown files. Simplicity.

Aaron Levie: Oh, sure, sure.

swyx: It's structure versus less structure, right? That's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans: they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system, Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, you know, it's 2026; we haven't seen it play out yet. I mean, I remember, actually, I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis.

swyx: Yeah.

Aaron Levie: Confluence. And, I mean, Confluence actually took off for engineering, unquestionably. But this was like, everything would be in the wiki. And based on our general style of what we were building, we were just like, I don't know, people just want a workspace. They're gonna collaborate with other people.

swyx: Exactly. So you were anti-knowledge-graph.

Aaron Levie: Not anti, not anti.

swyx: Non-

Aaron Levie: I'm not anti, 'cause I think your search system... I just think these are two systems, probably. But I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's not a fight for me.

swyx: We love YouTube comments. We get into the comments.

Aaron Levie: Okay. But it's mostly just a virtue of what we built, and we just continued down that path. And that was what we pursued. But this is not a

swyx: It's not existential for you. Great.

Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. Not our fight.

swyx: Yeah.

Aaron Levie: But I need your answer. Yeah.
Graphs are very effective nerd snipes.

swyx: See, this is one opinion, and then I've

Jeff Huber: And I think that the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, 'cause it actually evolved over time.

swyx: So: don't tell me how to graph, I'll figure it out myself. Exactly. Okay. All right. And

Jeff Huber: What's yours?

swyx: I like the wiki approach. Uh, my, I'm actually
Artificial intelligence is rapidly transforming the business landscape, redefining how value is created and where human work fits within the new paradigm. Long-standing advice to amass knowledge and out-execute others is now running up against sophisticated AI agents that can process information and perform tasks at speeds and scales unattainable by humans. In this emerging era, Christopher Lochhead's insights point to a critical shift from being a traditional “knowledge worker” to embracing the future as a “creator capitalist.” On this episode, Christopher Lochhead moves over to the guest chair and answers our questions about AI, creator capitalists, and the future of work. You're listening to Christopher Lochhead: Follow Your Different. We are the real dialogue podcast for people with a different mind. So get your mind in a different place, and hey ho, let's go. Why the Knowledge Worker Playbook Is Obsolete For decades, success in business hinged on being a master of knowledge and execution. This model rewarded those who reacted effectively, put out fires, and delivered results with established frameworks. However, with AI making information and execution nearly free and instantly accessible, simply reacting and executing is no longer enough. As Christopher Lochhead argues, clinging to this outdated success formula is akin to opening a video rental store in the age of streaming services. Today, the competitive edge lies in moving upstream to activities that AI cannot easily replicate. This means focusing on judgment, unique perspectives, and the ability to define, frame, and solve new problems. Humans cannot out-execute a GPU, but they can out-create one by leveraging skills that remain distinctly human. The Four Capitals of the Creator Capitalist Framework Lochhead's creator capitalist concept rests on the mastery and integration of four kinds of capital: intellectual, relationship, reputational, and financial.
Intellectual capital emerges from differentiated insights, deep domain expertise, and unique perspectives. Relationship capital is built through genuine connections and trust within your network, while reputational capital is earned through tangible results and reliability, not just self-promotional branding. Bringing these capitals together creates a flywheel that drives lasting success, even as AI commoditizes old sources of value. Financial capital follows as a natural result of delivering value that others find meaningful. Those able to orchestrate these four capitals will build not just AI-resistant careers but ones supercharged by the new opportunities technology presents. Unleashing Human Potential: Adapt, Create, and Lead As AI handles more routine tasks, the future belongs to those who cultivate curiosity, creativity, and critical thinking. These human abilities enable us to ask better questions, generate bold ideas, and envision solutions no algorithm can predict. Lochhead urges professionals to take radical responsibility for their careers and continually seek ways to create net new value. Adapting to this shift means letting go of fear and embracing the opportunity to redefine what it means to be valuable. The most successful individuals and organizations will be those who harness AI as a tool to augment their creative power and lead the way into uncharted territory. The age of the creator capitalist has arrived, and it's time to build the future together. To hear more of Christopher Lochhead’s thoughts on Creator Capitalist and the future of work, download and listen to this episode. Links Want to catch more episode of the AI Agent & Copilot Podcast? You can check them out here: Presented by Cloud Wars | AI Agent and Copilot Podcast | John Siefert LinkedIn | Cloud Wars LinkedIn We hope you enjoyed this episode of Christopher Lochhead: Follow Your Different™! Christopher loves hearing from his listeners. 
Feel free to email him; connect on Facebook, X (formerly Twitter), and Instagram; and subscribe on Apple Podcasts / Spotify!
For as long as we've known, humans have revered ancient trees. We have also destroyed them, especially since the advent of colonialism and fossil fuel capitalism. Historian Jared Farmer reflects on what trees illuminate about our past and potential future. The post Fund Drive Special: Humans and Ancient Trees appeared first on KPFA.
The Top 5 Issues Managing Multiple AI Agents in Production

Managing 1-2 AI agents? Easy. Managing 20+? That's a different game entirely. After 9+ months running nearly 30 AI agents in production at SaaStr, we've learned what actually breaks at scale - and nobody's talking about it. This isn't about deployment tips or vendor selection. This is about the brutal realities that only emerge when you're juggling 20+ agents generating $1M+ in revenue.
This bonus episode of Insider Interviews: With Media & Marketing Pros came together super spontaneously at On Air Fest in Brooklyn, where podcasters, creators, and technologists gathered recently to talk about the future of audio and, no spoiler alert, the future of AI. After a keynote session about living WITH machines by keeping humanity present, I had to grab Baratunde Thurston and Terry Rice to keep talking about how creators, entrepreneurs (and parents) are navigating exactly that. Both of these conversations landed on the same core idea as my previous episode with Jack Myers: the real differentiator won't be the machines—it'll be the humans using them. Baratunde Thurston, author, speaker, comedian, and “thought leader of interdependence,” has been thinking about this balance for years and created his podcast Life with Machines to explore exactly that. As he asks: how do we live well with technology, instead of just enduring it?

Living Well with Tech, per Baratunde

He's experimenting with AI directly in his own creative process—even creating an AI character named “Blair” as a kind of co-producer on his show. But he's also clear that there's a line between assistance and authorship. #AI can help with research, feedback, or execution. But the deeper creative work, like ideas, voice, and perspective, still needs to come from a human. “There's something slower and messier about crafting things yourself—but there's also a pride of creativity that I want to maintain.” Baratunde, and not surprisingly after him Terry Rice, also raised an issue that's only going to become more important: authenticity. As generative AI content becomes harder to identify, the industry may need new ways to verify that a real person is behind what we're seeing, hearing, or reading. Some technologists are already exploring ideas like “proof of humanity.” But Baratunde's take was refreshingly simple: “I think the thing we're going to trust the most is this: I feel you.
We're sharing the same air.” (He grabbed my arm to illustrate, saying “THIS is what matters.”) In other words, real-world presence and connection may become even more valuable in a digital ecosystem increasingly filled with synthetic content. My second conversation was with Terry Rice, entrepreneur, speaker, and host of The Signal, a podcast designed to help entrepreneurs cut through the noise and focus on practical strategies for growing their businesses. Terry uses AI in his own workflow, like generating prep guides before interviews (which I wish I had done for these spontaneous chats!) or organizing research. He also got so inspired by his kids that he built a way for parents to create their own apps for their kids! Trust me, you have to listen and hear what he did. But he made an important distinction: the value isn't letting AI do all the thinking. It's knowing what good looks like. “The real skill isn't producing every answer yourself—it's recognizing when something is good and when it isn't.” That was one of those lightbulb-emoji comments. It's also a mindset that he's already teaching his kids. In fact, his ten-year-old daughter summed it up in a way that might be the most useful rule for all of us navigating AI right now: “It's okay to fight with AI.” Out of the mouths of (this generation's) babes. Question it. Push back. Refine the answer. Through lines? AI will absolutely change how content gets made and how businesses operate. But creativity, judgment, curiosity—and yes, a little humanity—are still very much part of the equation. And for now at least, that's something machines can't replicate. (But props to ChatGPT for helping me summarize some of this brilliance!)
Key Moments:
01:36 – Baratunde Thurston on the philosophy behind Life with Machines
02:40 – Experimenting with AI as a co-producer
03:20 – Where creators should draw the line with AI
06:43 – The emerging concept of “proof of humanity”
07:55 – Why physical presence may matter more in an AI world
10:13 – Should AI try to imitate humans?
11:10 – Could real human experiences become a luxury?
12:18 – AI's environmental impact and future possibilities
15:54 – Build With Them AI Parenting
17:18 – A Brand Marriage: The Signal and Fiverr
19:54 – Vulnerability Builds Trust
22:47 – No Guilt Using LLMs
23:52 – Teaching Kids to Challenge AI

Connect With:
Baratunde Thurston — Author, comedian, cultural thought leader; host of Life with Machines Podcast
Terry Rice — Journalist, entrepreneur; host of The Signal and founder of Build With Them
On Air Fest

Connect with E.B. Moss and Insider Interviews: With Media & Marketing Experts
LinkedIn: https://www.linkedin.com/in/mossappeal
Instagram: https://www.instagram.com/insiderinterviews
Facebook: https://www.facebook.com/InsiderInterviewsPodcast/
Threads: https://www.threads.net/@insiderinterviews
Substack: Moss Hysteria

Please follow Insider Interviews, share with another smart business leader, and leave a comment on @Apple or @Spotify… or a tip in my jar: https://buymeacoffee.com/mossappeal. THANK YOU for listening!
This week on Better Buildings for Humans, host Joe Menchefski sits down with Richard Williams, Senior Architect Development Manager at VELUX, for an energizing and deeply thoughtful conversation about daylight, design, and the future of healthy housing. Drawing on more than 30 years with the company, Richard shares how his early hands-on training in building science shaped his passion for craftsmanship, performance, and people-centered architecture. Together, they explore why the “fifth façade” — the roof — may be the most important surface of all, how natural daylight from above outperforms vertical glazing, and why ventilation is critical to occupant well-being. The conversation also tackles urban density, renovation of aging housing stock, climate resilience, and the urgent need to build homes for purpose — not just profit. This episode is a powerful reminder that great buildings start with one thing: people.

More About Richard Williams
Richard Williams' passion for architecture gave him significant insight into the design, specification, and project planning of a wide range of projects throughout the Cotswolds, where he worked within several architectural practices as an Architectural Technician. He joined VELUX Company Ltd more than 30 years ago, and in his current role as Senior Architectural Development Manager he collaborates with colleagues across multiple countries, sharing industry insights and strengthening ongoing relationships within the architectural community.

Contact:
https://hdawards.org/
https://hdawards.org/living-places/
https://www.linkedin.com/posts/velux_yesterday-we-were-proud-to-celebrate-participants-activity-7369674420702969856-wief?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAIRKIMBpAPsGNrXCmSGc8zj03bLKRHa1oM

Where To Find Us:
https://bbfhpod.advancedglazings.com/
www.advancedglazings.com
https://www.linkedin.com/company/better-buildings-for-humans-podcast
www.linkedin.com/in/advanced-glazings-ltd-848b4625
https://twitter.com/bbfhpod
https://twitter.com/Solera_Daylight
https://www.instagram.com/bbfhpod/
https://www.instagram.com/advancedglazingsltd
https://www.facebook.com/AdvancedGlazingsltd
“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson

About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.
LinkedIn Profile: Ross Dawson

What you will learn
How human-AI teams outperform human-only teams in productivity and efficiency
The crucial role of understanding AI strengths and limitations when designing collaborative workflows
Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

Episode Resources
Transcript

Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I'm going to just share a bit of an update and then share insights from three recent research papers that dig into something I think is exceptionally important: how humans work with AI agentic systems.
And we'll look at a few different layers of that, from how small humans-plus-agents teams work, through how we can delegate decisions to AI, through to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It's a very interesting time to be alive, and I think it's pretty hard even to see what the end of this year is going to look like. So for me, I am doing my client work as usual. I've got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, with a few industry-specific ones in financial services and elsewhere. I'm also doing some work as an advisor on AI transformation programs: helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework for how you look at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively. But the rest of my time is focused on three ventures, and I'll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI-for-strategy app. This was really building a way in which we can capture the detailed nuance of the strategic thinking of the leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options and strategic hypotheses, and to evolve effectively. So that'll be in beta soon. Please reach out if you're interested in being part of the beta program, and then it'll go to market. So I'm deeply involved in that. We also have our Thought Weaver software, rebuilding previous software built on AI-augmented thinking workflows. That's more an individual tool that will be going into beta in the next weeks. Actually, don't go to the website yet—it isn't updated—but I'll let you know when it's out, or keep posted for updates on that.
I’m also building an enterprise course on humans-plus-AI teaming. It’s my fundamental belief that we’ve been through the phase of augmenting individuals, and we still need to work hard at doing that better, but the next phase for organizations is to focus on teams. How do you work in teams that have both human members and agentic AI members? That creates a whole different set of dynamics and calls for new skills and capabilities: how to participate in a humans-plus-AI team and how to lead one. That is going into the first few test organizations in the next month or so, so again, just let me know if you’re interested. So today we’re going to look at this theme: teams of humans working with AI agents. Not individual AI as in chat, but agents with various degrees of autonomy, and agentic systems where these agents interact with each other as well as with humans. There are three papers I want to talk about. I’ll give you a quick overview, and please go and check out the papers in more detail if you’re interested; there’ll be links in the show notes. The first is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. This was an experiment with over 2,300 participants working on creating advertisements. It compared human-human teams and human-AI teams, mostly quite small or just duos, creating ads that were then assessed for quality and for how the teams worked. A few findings were particularly interesting. First, human-AI teams significantly outperformed human-only teams: they moved faster, completed more of their tasks, and the quality was strong.
But there’s a phrase commonly used about AI capability: the jagged frontier. It was quite clear that there were some domains where AI did very well and others where it didn’t. So the design of the tasks, the design of the human-AI systems, and the human users’ understanding of what AI is and isn’t good at are all fundamental. In some domains, such as image quality, using AI actually decreased quality. So we need to understand where and how to apply AI along this jagged frontier and design the systems around that. This changes the role of the humans, of course: humans tend to delegate more. One of the things the researchers tested was how people behave differently if they know their teammate is an AI, as opposed to not knowing whether it’s a human or an AI. And it changes: people become more task-oriented, rely less on social cues, and essentially become more efficient. But some of the social cues that are valuable in human-human collaboration started to disappear, and in the end there was not as much creative diversity. Now, I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture: where the AI sits in the process, for instance generating initial ideas which are then sorted and filtered by humans. But in this particular structure, human-AI teams started to create more and more similar outputs. This homogenization of outputs in human-AI teams was very notable and significant. So this again creates a design factor: how do we build human-AI systems that do not lead to homogeneous output, and make sure that human diversity is maintained?
Often that can be done by having human outputs first, without AI blunting or narrowing the breadth of the humans’ creative outputs. The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. This addresses the point that we now delegate decisions and problems not just to single AI agents, but to systems of AI, and that creates a different challenge. The key point is that when you are delegating tasks, it’s more than just deciding which agent gets the task. You have to understand responsibility: where does accountability reside, and who is responsible? You need clarity around the roles of the agents and the boundaries of what they can and cannot do, clarity of intent and how that intent is communicated and cascaded through the agents, and trust, in appropriate degrees, in the systems. This means we have to define the different characteristics of the task, and the paper goes through quite a few. A few of the critical ones were the degree of uncertainty around the task (if the task is very clear it can be appropriately delegated, but many tasks and problems are uncertain); whether the task is verifiable, meaning you have high-quality information; whether decisions are reversible; and the degree of subjectivity, because not everything is data-driven. Assessing these task characteristics starts to define where human judgment plays a role, how you create those checks, and how you build that. So intelligent delegation is not just how the humans delegate, but in turn the structure of how that cascades down through the agents. This requires the idea of dynamic assessment: you’re not just setting and forgetting.
You are continuously reassessing what is happening with the context, what is changing in the stakes, and any uncertainty. You’re not locked into a single delegation structure; you change it over time, and you continue to adapt as you execute, monitoring and replanning. Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on in those structures. You also need to be able to scale how you coordinate the systems. At small scale that’s fine, but you want to build something that can move across many agents, which requires a way to discover which agents are most appropriate and to establish the delegation of a particular task to them, again on a dynamic basis. And finally there is the principle of systemic resilience: you have to expect that things will go wrong, so there’s continuing monitoring, understanding that these systems can be attacked in various ways, and being able to recover. A very solid paper, quite deep, but giving some very good principles for how we can delegate to AI systems. The final of the three papers goes to a higher level. It’s called Agentic Interactions, and it’s from Alex Imas and Sanjog Misra of the University of Chicago and Kevin Lee at the University of Michigan. They look at what happens on a macro scale when decisions are increasingly delegated to AI agents. This is the agent economy that I’ve been talking about for a very long time, which is now very much coming to the fore. They look at what happens when we start to delegate more and more economic decisions, such as buying and selling decisions. And what they found is extraordinarily interesting.
They found that AI agents in fact behave very similarly to their human creators. You can observe differences between the agents from which you can infer the gender and the personality of the person who delegated to the agent. Even though the agent is given no information, and doesn’t even know the person’s gender or personality, those traits actually flow through. So agents represent us in the market, as it were, potentially very accurately. This goes directly to the second point, the idea of machine fluency. AI fluency is very much a term in vogue at the moment; the authors talk about machine fluency, which is how well a user can express their intent and align the agent with it. They found very significant differences here: people who were better at getting their agents to express their wishes could in fact amplify their economic outcomes. Related to that, they showed a correlation: higher educational levels meant you were better able to delegate to AI, and your AI agents performed better and gave you better returns. So again this points to the potential for the agentic economy to aggravate differences, when the agents that act for us in the economy start to reflect, among other things, educational differences or differences in how well we express our intentions through AI. There was one very interesting and, I suppose, counterintuitive result: women get better outcomes in negotiation when using AI agents than they do in human-to-human interactions. Again, this is without the AI agents knowing whether they are representing a woman or not.
What this shows is that, in terms of machine fluency, the ways in which women were able to instruct AI agents and put their intent into them were, in this study, superior to those of men. In the real world there is, unfortunately, a bias towards male performance in negotiation, and that was inverted in the study. Exceptionally interesting. So, pulling back some of the common themes of these three papers: we are increasingly in a world where humans have relationships with agents. We are starting to work with them in teams and systems, and we’re starting to build economies where humans are represented by agents. Our relationship to those agents and our ability to delegate effectively drives value, to the individual of course, but also across these emerging agentic systems. It is early days: the realities of these human-agent systems are still nascent. But this starts to point to some of the potential, some of the challenges, some of the opportunities, and some of the work we have to do. I will be sharing more on these kinds of topics in my interviews and, of course, on the Humans Plus AI website; just go to humansplus.ai. To be frank, it hasn’t been updated a lot recently, but we will be sharing a lot more there. LinkedIn is where I share the most, actually, and I’m getting back on Twitter as well if you’re interested. I’ll be diving deep and trying to share what I find useful as well as interesting in helping us create a world where humans are first and AI complements us. The reality is we are moving to humans-plus-AI systems, and if we design that well, with the right intentions, we can absolutely make this one that drives human value first. Glad to have you on the journey. Have a wonderful rest of your day. The post Ross Dawson on Humans + AI Agentic Systems (AC Ep34) appeared first on Humans + AI.
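The intelligent-delegation ideas from the DeepMind paper discussed in this episode (assessing task characteristics such as uncertainty, reversibility, verifiability, and subjectivity, then routing with an audit trail) could be sketched roughly as follows. This is a minimal illustration only, not the paper's method: the Task fields, risk weights, threshold, and Delegator class are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    uncertainty: float   # 0.0 (fully specified) .. 1.0 (highly uncertain)
    reversible: bool     # can the decision be undone cheaply?
    verifiable: bool     # can the outcome be checked against reliable data?
    subjectivity: float  # 0.0 (data-driven) .. 1.0 (pure judgment call)

@dataclass
class Delegator:
    audit_trail: List[str] = field(default_factory=list)

    def route(self, task: Task) -> str:
        """Decide whether a task can go to an AI agent or needs human
        review, and record the decision for transparency."""
        # Risk grows with uncertainty and subjectivity; irreversible or
        # unverifiable outcomes add a penalty (weights are illustrative).
        risk = task.uncertainty + task.subjectivity
        if not task.reversible:
            risk += 0.5
        if not task.verifiable:
            risk += 0.5
        decision = "agent" if risk < 1.0 else "human-review"
        self.audit_trail.append(f"{task.name}: risk={risk:.2f} -> {decision}")
        return decision

# A low-risk task can be delegated; a high-stakes one escalates to a human.
d = Delegator()
print(d.route(Task("reformat report", 0.1, True, True, 0.2)))      # agent
print(d.route(Task("approve acquisition", 0.8, False, False, 0.9)))  # human-review
```

The paper's "dynamic assessment" idea would correspond here to re-running `route` whenever context or stakes change, rather than setting the delegation once and forgetting it.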
What if the secret to social media success in 2026 isn’t better tools, smarter apps, or perfectly planned content - but you? In this solo episode of the Direct Selling Accelerator Podcast, I’m breaking down one of the most important strategies for the future of social media: your red thread. We explore why authenticity matters more than ever, how algorithms actually learn who to show your content to, and why over-editing and over-planning are hurting your reach. In this podcast episode, I’ll walk you through how to identify up to three red threads that should weave through everything you post, helping you attract the right audience, build trust faster, and grow your business with clarity and confidence. If you want social media to feel simpler, more aligned, and far more effective in 2026, this episode will give you a powerful framework you can implement immediately.
I’ll be talking about:
➡ [00:00] Introduction
➡ [00:18] Welcome to the Direct Selling Accelerator Podcast
➡ [00:45] Why Over-Planning Kills Authenticity
➡ [01:36] Introducing the Red Thread Strategy
➡ [02:39] The Mari Smith Moment That Changed Everything
➡ [04:03] You’re Talking to Humans, Not Algorithms
➡ [05:18] Why Over-Edited Content Is Hurting Your Reach
➡ [07:00] What a Red Thread Really Is
➡ [08:21] How the Algorithm Learns What You Talk About
➡ [10:06] The Biggest Red Thread Mistake People Make
➡ [10:48] Sam’s Three Core Red Threads
➡ [12:00] Your Content Is a Recipe, Not One Ingredient
➡ [12:45] Why Your Red Thread Must Connect to Your Why
➡ [13:27] Energy Is Contagious on Social Media
➡ [14:51] Four Questions to Find Your Red Thread
➡ [16:00] The Dinner Party Rule for Social Media
➡ [18:12] What Do You Want to Be Remembered For?
➡ [20:03] Passion Is Your Power
➡ [23:30] Turning Clarity Into a 2026 Content Plan
➡ [24:12] Stop Overthinking and Just Be You
➡ [25:15] Final Thoughts, Subscribe & Outro
Resources
Quote: “People buy from people they know, like, and trust.”
Free Facebook community: https://www.facebook.com/groups/socialmediafordirectsellerswithgregandsam/
Are you ready to keep growing? Learn more about joining the Auxano Family - https://go.auxano.global/welcome
Connect with Direct Selling Accelerator:
➡ Visit our website: https://www.auxano.global/
➡ Subscribe to YouTube: https://www.youtube.com/c/DirectSellingAccelerator
➡ Follow us on Instagram: https://www.instagram.com/auxanomarketing/
➡ Sam Hind’s Instagram: https://instagram.com/samhinddigitalcoach
➡ Follow us on Facebook: https://www.facebook.com/auxanomarketing/
➡ Email us: community_manager@auxano.global
If you have any podcast suggestions or things you’d like to learn about specifically, please send us an email at the address above. And if you liked this episode, please don’t forget to subscribe, tune in, and share this podcast. Are you ready to join the Auxano Family to get live weekly training, support and the latest proven posting strategies to get leads and sales right now - find out more here: https://go.auxano.global/welcome
See omnystudio.com/listener for privacy information.
Darkest Mysteries Online - The Strange and Unusual Podcast 2023
The Moment Humans Unveiled Their True Military Power—And One Laughed Again Then Room Went Silent
Become a supporter of this podcast: https://www.spreaker.com/podcast/darkest-mysteries-online-the-strange-and-unusual-podcast-2026--5684156/support.
Darkest Mysteries Online
They Were Losing The War—Then The Humans’ One Mistake Changed Everything
Become a supporter of this podcast: https://www.spreaker.com/podcast/darkest-mysteries-online-the-strange-and-unusual-podcast-2026--5684156/support.
Darkest Mysteries Online
What separates someone who'd donate a kidney to a stranger from someone who might steal one? Abigail Marsh explains the neuroscience of fear on part 1 of 2.
Full show notes and resources can be found here: jordanharbinger.com/1292
What We Discuss with Dr. Abigail Marsh:
Psychopaths don't lack all emotion — they specifically lack the ability to recognize and feel fear. Where most people see terror on someone's face, a psychopath sees something unidentifiable — a deficit that fundamentally rewires how they relate to other humans.
Many of psychology's most famous studies — the Stanford Prison Experiment, the Milgram shock test, the Kitty Genovese bystander story — turned out to be deeply flawed or outright fabricated. Humans are actually far more compassionate than these narratives suggest; CCTV data shows bystanders intervene 90% of the time.
Psychopathy isn't caused by "bad parenting" in any simple way — it's a neurodevelopmental disorder with significant genetic heritability, much like autism or ADHD. Blaming parents echoes the same harmful logic that once attributed schizophrenia to "cold mothers."
Household chaos — literal noise, inconsistent rules, revolving caregivers — makes it significantly harder for children to learn behavioral patterns. It's not just abuse that derails development; it's the inability to pick out a reliable signal from an overwhelming amount of environmental noise.
The sweet spot of effective parenting — and really, of shaping better humans — is combining genuine warmth with consistent boundaries. Love without structure breeds entitlement; structure without love breeds resentment. But together, they build the foundation for compassion and resilience in any child.
And much more... [This is part one of a two-part episode. Stay tuned for part two later this week!]
And if you're still game to support us, please leave a review here — even one sentence helps!
Sign up for Six-Minute Networking — our free networking and relationship development mini course — at jordanharbinger.com/course!
Subscribe to our once-a-week Wee Bit Wiser newsletter today and start filling your Wednesdays with wisdom!
Do you even Reddit, bro? Join us at r/JordanHarbinger!
This Episode Is Brought To You By Our Fine Sponsors:
BetterHelp: 10% off first month: betterhelp.com/jordan
ButcherBox: Free protein for a year + $20 off first box: butcherbox.com/jordan
DeleteMe: 20% off: joindeleteme.com/jordan, code JORDAN
Wayfair: Start renovating: wayfair.com
Progressive Insurance: Free online quote: progressive.com
The President's Daily Brief: Listen here or wherever you find fine podcasts!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Most of us assume angels rank above humans in the great cosmic hierarchy. But angels, for all their mystery and power, lack two essential parts of what it means to be human: a physical body and the kind of covenant relationship with God that shapes our story. Kaitlyn takes up the simple question of whether angels are boys or girls and uses it to draw out the deeper differences between angels and humans and how these differences point us back to the astonishing claim at the heart of Scripture: that embodied, relational humanity holds a uniquely honored place in God's creation.
0:00 - Theme Song
1:08 - Angels in the Bible
8:26 - Do Angels Have Bodies?
13:34 - Sponsor - Hiya Health - Go to https://www.hiyahealth.com/CURIOUSLY to receive 50% off your first order
15:01 - Sponsor - With & For: Psychology and Spirituality for Thriving Podcast. Check it out now! https://pod.link/1712333330
16:02 - Sponsor - No Small Endeavor - Award-winning podcast where theologians, philosophers, and best-selling authors talk about faith with Lee C. Camp. Start listening today: https://pod.link/1513178238
17:38 - Marriage in Heaven
22:06 - Are Angels Better Than Us?
28:37 - Goodness of Limitation
31:37 - End Credits
Say the word "taxes" out loud. Did your shoulders just clench? Yeah. That's not an accident. In this episode, I sit down with Hannah Cole — artist, tax expert, author of Taxes for Humans, and founder of Sunlight Tax — to talk about why the tax industry profits from keeping you afraid, and what you can actually do about it. Hannah breaks down the three main IRA options available to self-employed women in plain, clear language that finally makes it click. She talks about why creatives, women, and anyone who doesn't fit the traditional financial mold has been left out of this conversation on purpose — and how that changes when you have access to someone who sounds like your best friend instead of a shaming accountant. This is one of those conversations that will make you exhale. Because the tax system isn't as scary as you've been led to believe. And the tools available to you are far more powerful than most people realize. If you've ever felt like money — or taxes — weren't meant for someone like you, this episode is going to change that.
WHAT YOU'LL LEARN
Why the tax industry's marketing mechanism is fear — and how to stop falling for it
Why nobody gets a tax education in this country (and why that's not your fault)
The real difference between a Traditional IRA, Roth IRA, and SEP IRA in plain language
Why creativity is a synonym for resourcefulness when it comes to money
How to start thinking of money and taxes as a tool that works for you instead of against you
FREE GIFT FROM HANNAH COLE
Download Hannah's Free Visual Guide to Tax Deductions here: https://www.sunlighttax.com/deductionsguide
CONNECT WITH HANNAH COLE
Website: https://www.sunlighttax.com
Instagram: https://www.instagram.com/sunlighttax
LinkedIn: https://www.linkedin.com/in/hannah-cole-3775561/
ABOUT HANNAH COLE
Hannah Cole is an artist, tax expert, author of Taxes for Humans [link: https://amzn.to/4a2Mu9m ], and founder of Sunlight Tax. She specializes in educating entrepreneurs and creative professionals in taxes and financial empowerment. A long-time working artist with a high-level exhibition history, Hannah is a frequent speaker on stages and podcasts, a money columnist for the art blog Hyperallergic, and the host of a global top 2% podcast, the Sunlight Tax Podcast. Her company, Sunlight Tax, specializes in friendly, informative tax education for self-employed people with big visions, and engaging, savvy tax education workshops for creative groups.
READY TO BUILD YOUR CONFIDENCE?
Book a free 15-min call with Sarah to talk about where you are in your business and see if working together feels right. Schedule here: https://app.acuityscheduling.com/schedule.php?owner=13047670&appointmentType=34706781
FREE GIFT FROM SARAH
Get Sarah's Freedom Calculator and discover how much your business needs to make to finally be free. Download at https://sarahwalton.com/freedom
LEARN FROM SARAH
Explore Sarah's online courses and free resources to start building your business with confidence.
Online Courses: https://sarahwalton.com/online-courses
Free Resources: https://sarahwalton.com/free-resources
CONNECT WITH SARAH
Website: https://sarahwalton.com/podcast
YouTube: https://www.youtube.com/@TheSarahWalton
Instagram: https://instagram.com/thesarahwalton
ABOUT SARAH WALTON
Sarah Walton is a business coach, podcast host, and mentor who helps women entrepreneurs build businesses they love. She's the creator of the Abundance Academy, Effortless Sales, and the Game On Girlfriend® podcast. Sarah's mission is to put more money in the hands of more women while teaching authentic, heart-centered business strategies.
RELATED GAME ON GIRLFRIEND® EPISODES YOU'LL LOVE
Episode 230: You Deserve the Money with Bookkeeper Ashley Chamberlain — https://sarahwalton.com/bookkeeping-for-women/
Episode 227: How to Clear a Money Fog with Mikelann Valterra — https://sarahwalton.com/clear-money-fog/
Episode 95: When Your Life Falls Apart Because of Money with Michelle Arpin Begina — https://sarahwalton.com/michelle/
LOVE THE SHOW? LEAVE US A REVIEW!
Thank you so much for listening. I'm honored that you're here and would be grateful if you could leave a quick review on Apple Podcasts by clicking here, scrolling to the bottom, and clicking "Write a review." Your reviews help other women entrepreneurs find the show and get the support they need to build businesses they love. Thank you for being part of the Game On Girlfriend® community! (If you're not sure how to leave a review, you can watch this quick tutorial.)
In this episode of Player Driven, host Greg welcomes back industry veteran Sharon Fisher to discuss the rapidly evolving landscape of content moderation. From her early days building moderation at Club Penguin to her current work with AI-driven platforms like Checkstep, Sharon shares her unique perspective as both a trust and safety expert and a concerned parent.
Key Discussion Points
The Evolution of Moderation: Sharon reflects on the shift from manual work and simple keyword blocking 17 years ago to today's complex machine learning and contextual understanding.
The Changing Role of the Moderator: Why the rise of AI doesn't mean the extinction of human moderators, but rather their transformation into data analysts who challenge bias and understand culture.
The "Wild Wild West" of the Marketplace: Insights into why legacy moderation companies are phasing out while new, AI-first competitors like Checkstep are entering the space.
Privacy vs. Safety: Addressing the pushback against age verification and the critical need for better communication and education for parents and caregivers.
Bridging the Gap: How integrated technology can finally break down silos between customer support, marketing, and moderation to provide a holistic view of the user.
Predictions for 2026 and Beyond: Sharon forecasts a year of "stress and adoption" as companies rush to reduce costs through technology, leading to an eventual search for balance in 2027.
About Our Guest: Sharon Fisher
Sharon Fisher is a leading voice in the trust and safety industry. With a career spanning roles at Disney (Club Penguin), Two Hat, and Keywords Studios, she now provides strategic consulting for gaming companies and technology firms like Checkstep. She is also a passionate advocate for digital literacy, frequently speaking to school districts to help parents protect their children online.
Notable Quotes
"The moderator role becomes even more important because they are who they are—they understand your community, they speak the language, and they live the culture every single day."
"Think about that area of your city that you would not go on your own at night time... that's the same that translates into the internet. Know where your kid is playing."
Resources Mentioned
Connect with Sharon: Sharon Fisher on LinkedIn
Featured Technology: Checkstep
Join The Player Driven Discord: https://discord.gg/c9YgMctb
Researcher and journalist Adam Smith joins us this week to talk about what happens when AI boosters hype up their own speculation. We talk about Moltbook, a project that purported to be a “social network for AI agents” and claimed to prove that the sentience and consciousness of AI was inevitable. Adam explains the tricks AI boosters used to present their bots as smarter than they actually were, and why Moltbook should be considered more as theatre than as a glimpse into the future of communication. Read and subscribe to Adam on substack: https://open.substack.com/pub/wealthofnotions/p/i-think-therefore-i-am-probably-roleplaying
-------
PALESTINE AID LINKS
-You can donate to Medical Aid for Palestinians and other charities using the links below. https://www.map.org.uk/donate/donate https://www.savethechildren.org.uk/how-you-can-help/emergencies/gaza-israel-conflict
-Palestinian Communist Youth Union, which is doing a food and water effort, and is part of the official communist party of Palestine https://www.gofundme.com/f/to-preserve-whats-left-of-humanity-global-solidarity
-Water is Life, a water distribution project in North Gaza affiliated with an Indigenous American organization and the Freedom Flotilla https://www.waterislifegaza.org/
-Vegetable Distribution Fund, which secured and delivers fresh veg, affiliated with Freedom Flotilla also https://www.instagram.com/linking/fundraiser?fundraiser_id=1102739514947848
-Thamra, which distributes herb and veg seedlings, repairs and maintains water infrastructure, and distributes food made with replanted veg patches https://www.gofundme.com/f/support-thamra-cultivating-resilience-in-gaza
--------
PHOEBE ALERT
Okay, now that we have your attention: check out her Substack Here! Check out Masters of our Domain with Milo and Patrick, here!
--------
Ten Thousand Posts is a show about how everything is posting. It's hosted by Hussein (@HKesvani), Phoebe (@PRHRoy) and produced by Devon (@Devon_onEarth).
Humans are highly inquisitive, yet fallible and cognitively limited. How can we improve our epistemic lot despite our limitations? In Epistemic Ecology (MIT Press, 2025), Catherine Elgin develops a model in which individuals learn to rely on communal epistemic resources, such as communally-endorsed standards for correcting ourselves, and in turn contribute to those resources through active epistemic agency. In this way, she shows how epistemic autonomy and epistemic interdependence are mutually reinforcing rather than in tension. Elgin, who is professor of philosophy of education at Harvard University, also distinguishes between belief, which entails truth, and acceptance, an active epistemic attitude that constitutively involves reflection and assessment. This capacity for reflection is learned, but we use it widely – in sports bars, for example, just as much as in academic contexts. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Pastor Terry Rapley started our sermon series, How to be a Christian around other humans, on the topic of LOVE! ++++++++++++++ Download the Church App here: https://bit.ly/3vxVr8q If you have any questions or comments, please feel free to leave a comment below
Episode 1906 - brought to you by our incredible sponsors:
BETTER HELP: Your emotional wellbeing matters. Find support and feel lighter in therapy. Sign up and get 10% off at BetterHelp.com/HARDFACTOR.
QUINCE: Don't keep settling for clothes that don't last. Go to Quince.com/hardfactor for free shipping and 365 day returns.
BRUNT WORKWEAR: Get $10 Off boots and clothing at BRUNT with code HARDFACTOR at www.bruntworkwear.com
LUCY - 100% pure nicotine. Always tobacco-free. LUCY's the only pouch that gives you long-lasting flavor, whenever you need it. Get 20% off your first order when you buy online with code (HARDFACTOR). Lucy.co
00:00:00 Timestamps
00:11:50 Camels disqualified from beauty pageant for getting botox
00:25:47 Humans and Neanderthals interbred
00:29:49 Physicist claims he found heaven's physical location
00:34:43 “Spanish” option at Washington call center offered AI bot with thick Spanish accent speaking English
And much more
Thank you for listening and supporting the pod! Go to patreon.com/HardFactor to join our community, get access to Discord chat, bonus pods, and much more - but most importantly: HAGFD! Learn more about your ad choices. Visit megaphone.fm/adchoices
Cult leaders, religious fanatics, dictators, and charlatans all have one thing in common: they exploit our fear of death. Humans act out "immortality projects" in the form of religion, culture, and political ideologies as unconscious ways to override the terror we feel at our uniquely self-aware knowledge that we will one day die. Where the orthodox priest promises eternal life, the cult leader might predict an alien apocalypse, while the authoritarian strongman invokes the transcendent glory of leading a chosen nation and race. In light of a recent death in the family, Julian leans into Ernest Becker's Pulitzer Prize-winning cultural anthropology text, The Denial of Death. He also draws on poetry and the archetypal psychology of Donald Kalsched to ask the big questions. Does existential acceptance of death lead inevitably to nihilism? Is belief in God(s) and an afterlife necessary? Are poor or deeply traumatized people only left with despair in the absence of supernatural faith? Will children raised with no religion have no moral compass? A rich discussion of philosophy and psychology alongside poems, myths, fairy tales, and deeply personal storytelling, especially about how to tell his 7-year-old that grandma won't be back for Xmas. Not to worry, though. This is, ultimately, an uplifting journey. Learn more about your ad choices. Visit megaphone.fm/adchoices
Crypto still feels like a minefield for humans. Haseeb Qureshi argues that's a clue, not a bug: blockchains and smart contracts are machine-readable systems that AI agents can parse, simulate, and execute far more reliably than people, shifting crypto's core user from humans clicking through wallets to agents acting on our behalf. We also dig into the two-track future of agent commerce (safe, human-approved flows vs. the wild-west frontier), why major AI labs have avoided crypto training so far (liability), how agent-driven discovery could rewrite DeFi competition, and what this means for Dragonfly's investing playbook.
Stewart Alsop sits down with Ulises Martins on the Crazy Wisdom podcast to explore how artificial intelligence is fundamentally disrupting professional careers, labor markets, and the pace of human adaptation itself. They discuss everything from Dario Amodei's concept of "technological adolescence" to the possibility that we're approaching a point where AI advancement accelerates beyond our ability to keep up, touching on topics ranging from the economics of software development and the future of warfare to generational differences in how people will respond to AI-driven change. Martins emphasizes that while we may not be able to predict exactly what's coming, we need to dramatically increase our efforts to learn and adapt—potentially doubling the time we invest in understanding AI—because this isn't optional change, it's disruption happening at an unprecedented speed. Connect with Ulises on LinkedIn to follow his work in AI and generative technology.

Timestamps
00:00 — Stewart introduces Ulises Martins, framing the conversation around accelerationism and the future of work.
05:00 — Ulises uses the parent-child analogy to argue humans will no longer play the dominant role as AI surpasses us.
10:00 — Both agree learning AI is non-negotiable, urging listeners to double their investment in staying current.
15:00 — Discussion shifts to software as media, the collapsing cost of building products, and the risk of big players like Anthropic making your idea obsolete overnight.
20:00 — Ulises raises ecology vs. cosmic ambition, questioning whether humanity should aim for civilizational-scale goals like the Dyson sphere.
25:00 — Stewart's ESP32 hardware project illustrates AI's current blind spots beyond software, while both predict physical-world AI will arrive as a byproduct of bigger industrial goals.
30:00 — Tesla's birthplace in Croatia sparks a reflection on human genius as luck versus deliberate investment, invoking the Apollo program as a model.
35:00 — The US-China AI race is compared to the Cold War Space Race, with interdependency acting as a brake on outright conflict.
40:00 — Drone warfare and AI reframe military power, making troop size irrelevant and potentially reducing total war.
45:00 — Agile methodology and generational shifts are linked, asking how Gen Z's values will shape the AI era globally.
50:00 — Argentine vs. American Zoomers are contrasted, with millennial expectations versus Gen Z's pragmatism explored.
55:00 — Ulises closes urging everyone to enjoy the ride, taking the infinite stream of change one episode at a time.

Key Insights
1. The Death of Traditional Career Paths: The concept of professional careers as we know them—starting as a junior and progressively advancing—is becoming obsolete due to AI's rapid advancement. This applies far beyond just software and SaaS companies, extending to all industries as robots and AI systems gain capabilities that fundamentally disrupt labor markets. The question isn't whether we'll adapt, but whether humans can adapt fast enough to keep pace with exponential technological change.
2. The Acceleration Imperative: People must dramatically increase their investment in learning about AI immediately. Whatever time you were previously dedicating to staying current with technology needs to be doubled or tripled. This isn't optional—it's comparable to the necessity of basic education. Unlike previous technological transitions where you had years to learn new frameworks or tools, the current pace demands immediate, intensive engagement or you risk becoming irrelevant.
3. Software as Media and the Collapse of Development Economics: Software has become media—easily reproducible and increasingly commoditized through AI assistance. The fundamental economics of software development are collapsing because if building software requires dramatically fewer development hours, the value and price of that software must necessarily decrease. Entrepreneurs need a new evaluation framework that assesses the risk of their ideas being replicated by AI or absorbed by major players like Anthropic or OpenAI.
4. The Parent-Child Analogy for AI Development: Humanity's relationship with AI will inevitably mirror that of parents with increasingly capable children. Initially, we understand and control what AI does, but as it advances, it will surpass human capabilities in most domains. Just as parents cannot control fully grown adult children who exceed their abilities, humans will need to reconcile with creating something superior to ourselves. Attempting to permanently control such systems may be both impossible and potentially pathological.
5. The Kardashev Scale and Civilizational Ambitions: AI represents a civilizational-level technology that should redirect humanity toward grander goals like capturing stellar energy through Dyson spheres and expanding beyond our solar system. The competition between China and the United States over AI mirrors the Apollo program's space race but with higher stakes—potentially making traditional concepts like money less relevant if we successfully crack general intelligence. This requires thinking beyond planetary constraints.
6. The Changing Nature of Warfare and Geopolitics: AI and autonomous weapons systems are fundamentally changing warfare by making human soldiers less relevant, similar to how nuclear weapons reduced the importance of conventional military force. This shift may actually reduce civilian casualties in conflicts between major powers, as drone warfare and AI-driven systems create new equilibriums. The geopolitical map may fracture into more sovereign states and city-states as centralized control becomes less effective.
7. Generational Adaptation and Unpredictability: Different generations will respond uniquely to AI disruption based on their values and experiences. Generation Z, having grown up during the pandemic without traditional expectations, may adapt differently than millennials who experienced unmet expectations. However, we must remain humble about our predictive abilities—we're not good at forecasting technological change or its timing. The best approach is maintaining openness, trying to understand developments as they unfold, and accepting that we cannot consume all information in an era of unlimited AI-generated content.
Viktor Gamov talks to Leonid Igolnik (former CTO at Clari) about his career in B2B SaaS engineering leadership. Leonid's first job: teaching kids Pascal. His challenge: changing buyer behavior and scaling complex systems.

Books mentioned:
► Influence Without Authority: https://www.amazon.com/Influence-Without-Authority-Allan-Cohen/dp/0471463302
► Drive: https://www.danpink.com/books/drive/
► Blink: The Power of Thinking Without Thinking: https://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324

SEASON 2
Hosted by Tim Berglund, Adi Polak and Viktor Gamov
Produced and Edited by Noelle Gallagher, Peter Furia and Nurie Mohamed
Music by Coastal Kites
Artwork by Phil Vo
Episode 450 of Friends Talking Nerdy kicks off March with a brand-new theme: History. And not the dry, memorize-the-dates kind. The messy, human, "why do we do this?" kind.

The Reverend Tracy and Tim The Nerd dive into the long, boozy tale of how drinking became welded to holiday celebrations. From ancient harvest festivals to Christmas parties that somehow end with someone crying in the kitchen, they explore how alcohol shifted from ritual offering to social lubricant to cultural expectation. Humans have been fermenting things since before we figured out plumbing. That's not an accident. Fermentation was chemistry, preservation, and mild euphoria all rolled into one bubbling clay pot.

They break down why certain holidays seem incomplete without a drink in hand. Is it tradition? Marketing? Social pressure? A collective agreement that Uncle Gary is easier to handle with eggnog? The conversation wanders through how Americans tend to approach alcohol—often in big swings between indulgence and moral panic—compared to drinking cultures in parts of Europe and elsewhere, where alcohol can be more integrated into daily life rather than treated like a rebellious event.

Then the episode zooms into the historical shockwaves of Prohibition. From the 18th Amendment to the unintended consequences of bootlegging and organized crime, they explore how attempts to legislate morality often create new problems. They also unpack the racial and xenophobic undercurrents that fueled Prohibition, including how anti-immigrant sentiment targeted communities associated with beer culture. History rarely behaves like a clean morality tale. It's usually more like a Jenga tower of good intentions and bad incentives.

The conversation then fast-forwards to the War on Drugs and how its policies continue to shape incarceration rates, community trust, and public health conversations today. The Reverend Tracy and Tim The Nerd examine how racial disparities were baked into enforcement and how the ripple effects are still with us. Laws are not just words on paper; they're systems that echo for generations.

But this episode isn't about wagging fingers or telling anyone to dump out their liquor cabinet. The heart of the conversation is introspective. When you reach for a drink at a holiday party, is it simply enjoyment? Ritual? Flavor? Community? Or is it covering anxiety, loneliness, or pressure? There's a big difference between mindful celebration and autopilot coping. The goal isn't prohibition 2.0. It's self-awareness.

Episode 450 invites listeners to look at their own traditions with curiosity instead of judgment. Because history isn't just about what people did centuries ago. It's about the patterns we're still living inside today. History month is officially underway. And this one comes with a side of fermentation science and social psychology.

As always, we wish to thank Christopher Lazarek for his wonderful theme song. Head to his website for information on how to purchase his EP, Here's To You, which is available on all digital platforms. Head to Friends Talking Nerdy's website for more information on where to find us online.