Podcasts about inherently

  • 653 PODCASTS
  • 1,187 EPISODES
  • 29m AVG DURATION
  • 1 WEEKLY EPISODE
  • Dec 21, 2025 LATEST

Best podcasts about inherently

Latest podcast episodes about inherently

Second Cup of Joe...and John
Willy Daunic – Play-by-Play Nashville Predators TV, Radio Host

Second Cup of Joe...and John

Play Episode Listen Later Dec 21, 2025 55:35


Little did the Florida native know that playing video games while toiling in the minor leagues would lay the foundation for his broadcasting success decades later. Inherently likeable, the former Vanderbilt University basketball recruit explains how opportunity and luck fused together, leading to the Nashville TV and radio airwaves. Also, he opens up with poise and grace about how he copes with the recent death of his wife, Erin. AMONG THE TOPICS: TURNING DOWN ERIN'S FATHER, DRAFTED BY THE WORLD CHAMPIONS, WHY MAXIM KUZNETSOV IS A TRICKY HOCKEY NAME, TAKING A PUCK IN THE HEAD, AND A SPECIAL GIFT FROM VANDY'S FOOTBALL COACH.

Yoga Inspiration
#218 Is Yoga Inherently Healing? Trauma, Activation & the Power of Presence with Terri Cooper and Kino MacGregor

Yoga Inspiration

Play Episode Listen Later Dec 19, 2025 54:25


In this episode, Kino speaks with trauma-informed yoga educator and activist Terri Cooper to explore the deep connection between yoga and healing. What is trauma, really? Is yoga inherently trauma-sensitive? And how can teachers and students use yoga to navigate emotional activation and create space for true transformation? Terri shares her insights from years of work with Connection Coalition, a nonprofit bringing trauma-informed yoga to youth in underserved communities. You'll also learn accessible tools for emotional regulation, why healing is essential for anyone who teaches, and what society gets wrong about trauma. Listen in to discover how yoga can become a path of profound presence, self-inquiry, and collective healing.
Resources & Links:
The Connection Course on Omstars
Connection Coalition
Practice LIVE with me exclusively on Omstars! Start your journey today with a 7-day trial at omstars.com.
Stay connected with us on social @omstarsofficial and @kinoyoga
Practice with me in person for workshops, classes, retreats, trainings and Mysore seasons. Find out more about where I'm teaching at kinoyoga.com and sign up for our Mysore season in Miami at www.miamilifecenter.com.

Just Break Up: Relationship Advice from Your Queer Besties
Episode 666: Are Long-Lasting Relationships Inherently Better?

Just Break Up: Relationship Advice from Your Queer Besties

Play Episode Listen Later Dec 8, 2025 30:14


Sam and Sierra answer a letter from someone who is wondering if a longer relationship will lead to a better one. Join us on Patreon for an extra weekly episode, monthly office hours, and more! SUBMIT: justbreakuppod.com FACEBOOK: /justbreakuppod INSTAGRAM: @justbreakuppod Learn more about your ad choices. Visit megaphone.fm/adchoices

Sex Talk
Commitment Not The Problem Freedom Is

Sex Talk

Play Episode Listen Later Dec 6, 2025 2:44 Transcription Available


Today, we're tackling a pervasive myth about relationships: that men are inherently afraid of commitment. But what if that's not quite right? New research suggests men aren't necessarily shying away from commitment itself, but rather from the perceived loss of something deeply valued: their personal freedom and independence. Become a supporter of this podcast: https://www.spreaker.com/podcast/lets-talk-sex--5052038/support.

RNZ: The Detail
The "inherently unsafe" brakes in some 70,000 vehicles

RNZ: The Detail

Play Episode Listen Later Nov 14, 2025 25:16


For years, a father has been fighting for Waka Kotahi to do more about the dangers of a vehicle braking system involved in his son's death. Now a coroner's report backs him up, but NZTA still disagrees. After a death on a construction site, a coroner's report has called a braking system found in some 70,000 vehicles around New Zealand "inherently unsafe". Waka Kotahi disagrees.
Guests: Louisa Cleave - Checkpoint senior producer; Selwyn Rabbits - safety campaigner
Learn more: Read more reporting on cardan shaft brakes, starting in 2021, here, here, here, here, here, here, here, here, and here. See NZTA Waka Kotahi's guidance on cardan shaft park brakes. Go to this episode on rnz.co.nz for more details.

The Mike Hosking Breakfast
Mike's Minute: You win in court but suffer financially - how does that work?

The Mike Hosking Breakfast

Play Episode Listen Later Nov 4, 2025 1:58 Transcription Available


Here is a line-up: Alex Salmond, former First Minister of Scotland, Dame Noeline Taurua, and Siouxsie Wiles, as in the microbiologist. The Salmond family wants his estate made bankrupt. It comes out of a judicial review over the handling of a couple of complaints against him by civil servants that turned out to be “tainted”. In other words, his defence was successful, but the cost of winning proved too high. Noeline, I have no idea what her lawyers cost, but you would hope as part of the deal she gets the bill covered. But I doubt it. And then Siouxsie Wiles, who you may remember took her employer, Auckland University, to court and won. She took mediation and arbitration – it went back and forward for a while, but ultimately ended in court. During Covid she was harassed, and she claimed her employer should have done more to protect her. She has now launched a crowdfunding page to help pay her bills. The commonality here is all three appear to be on the right side. They have been wronged, they have had to defend themselves, and yet all three appear out of pocket for the experience. Wiles has spent thousands – hundreds of thousands. She and her husband have taken out loans; she won, but she is still paying them off. Inherently, there is a fault with the law here. The costs, even when awarded your way, never cover the bill. My question: why not? Is justice really served, or seen to be done, if you can be victorious, if you can defend your name, your honour, or your reputation, and still go broke? Doesn't that mean the deepest pockets will always triumph? The State v Salmond. A sport v a coach. The university v a microbiologist. It's one thing to settle – yes, it saves court time – but do you settle because you will be broke if you don't? Is being broke and right worth it? Is launching a crowdfunding bid acceptable when you didn't do anything wrong? Is the justice system serving us properly when even the victorious and validated aren't really winners? See omnystudio.com/listener for privacy information.

Cafeteria Christian
#336 Confession: you're not inherently sinful

Cafeteria Christian

Play Episode Listen Later Oct 27, 2025 63:41


Emmy and Natalia answer a listener email about sin and confession, and whether we're all inherently bad or good. You know, light, easy topics, just in time for Halloween. CW: church language around inherent sinfulness.
www.patreon.com/cafeteriachristian
Join us & Mary Danho this Advent
KPDH Blog post

This Week in Google (MP3)
IM 841: Dust and Deli Meat - Open Source AI Revolution

This Week in Google (MP3)

Play Episode Listen Later Oct 16, 2025 184:47


Can open-source AI models really be truly neutral, or are they just another conduit for hidden agendas? Hear how the founder of Nous Research is battling Silicon Valley giants to put ethical, user-controlled AI in everyone's hands. Opinion | The A.I. Prompt That Could End the World AI videos of dead celebrities are horrifying many of their families Sam Altman on X: "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have" / X The AI water issue is fake California becomes first state to regulate AI companion chatbots | TechCrunch Walmart Announces It Will Sell Products Through ChatGPT's Instant Checkout Protein Powders and Shakes Contain High Levels of Lead AI is changing how we quantify pain Kids who use social media score lower on reading and memory tests, a study shows Social media must warn users of 'profound' health risks under new California law Google will let friends help you recover an account AI content on the net AI writing hasn't overwhelmed the web yet Karpathy tweet Humanity AI Commits $500 Million to Build a People-Centered Future for AI Sal Khan is the new TED You won't believe what degrading practice the pope just condemned Nano Banana is coming to Google Search, NotebookLM and Photos. Paper: Machines in the Crowd? Measuring the Footprint of Machine-Generated Text on Reddit THOUSANDS OF AI AUTHORS ON THE FUTURE OF AI A Twitch streamer gave birth live, with Twitch's CEO in the chat DirecTV will soon bring AI ads to your screensaver Japan wants OpenAI to stop ripping off manga and anime What Is Really Going on With All This Radioactive Shrimp? Inherently funny word Boah, Bahn! A book is being marketed with mayo-scented ink. Jealous? Me?
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeffrey Quesnelle Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: spaceship.com/twit pantheon.io Melissa.com/twit threatlocker.com/twit

Pratt on Texas
Episode 3815: Protected speech on campus vs. being expelled for such | Race-based districts are “inherently racist” – Pratt on Texas 9/17/2025

Pratt on Texas

Play Episode Listen Later Sep 17, 2025 44:21


The news of Texas covered today includes:
Our Lone Star story of the day: What is protected political speech on campus, and is being expelled from a public university okay for engaging in protected speech? I contrast the Texas Tech story versus the Texas State story.
Our Lone Star story of the day is sponsored by Allied Compliance Services, providing the best service in DOT, business and personal drug and alcohol testing since 1995.
Race pimp Democrats say redrawing some of Texas' Congressional districts is “inherently racist.” I say having special districts for people based on their race is inherently racist.
Constitution Day: The Shining Light on a Hill: America's Constitution
Listen on the radio, or station stream, at 5pm Central. Click for our radio and streaming affiliates.
www.PrattonTexas.com

1000 Hours Outsides podcast
1KHO 548: Creativity and Inherently Human Skills Will Be at a Premium in the Future, How to Prepare Your Kids for the AI Revolution| Matt Britton, Generation AI

1000 Hours Outsides podcast

Play Episode Listen Later Aug 13, 2025 51:59


In this urgent and eye-opening episode of The 1000 Hours Outside Podcast, host Ginny Yurich sits down with Matt Britton, a pioneering voice in AI and author of Generation AI, to unravel what the rapid rise of artificial intelligence truly means for parents and the future of childhood. Matt reveals why memorization and traditional education models are becoming obsolete, and how creativity—the uniquely human skill that no AI can replicate—will be the currency of tomorrow's workforce. As AI reshapes every facet of our lives, this conversation offers critical insight into how parents can nurture resilience, imagination, and real-world problem-solving skills in their children before the AI wave changes everything. From the promise of AI-powered “second brains” to the risks of digital loneliness and emotional dependency on chatbots, Matt and Ginny explore the paradox of embracing technology while fiercely protecting human connection and creativity. This episode is a must-listen for any parent who wants to prepare their children not just to survive—but to thrive—in a world where 85% of the jobs of 2030 don't even exist today. Tune in for a thoughtful, hopeful, and practical roadmap to parenting in the Age of AI. Get your copy of Generation AI here. Learn more about Matt and all he has to offer here. Learn more about your ad choices. Visit megaphone.fm/adchoices

THE Leadership Japan Series by Dale Carnegie Training Tokyo, Japan
Stop Procrastinating And Start Delegating

THE Leadership Japan Series by Dale Carnegie Training Tokyo, Japan

Play Episode Listen Later Aug 6, 2025 12:00


The most fatal words ever spoken by a leader are, “it will be faster if I do it myself”. No it won't. If you want to scare yourself, sit down and write down all the tasks that you face, both regular and irregular. That is one long, long list for leaders. Are you really going to be able to get through all of these items and take care of filing your taxes on time, see the kids' sports events, have a romantic dinner with your partner, lie on the couch and read a book, magazine or the newspapers? In short, you won't, because you will be working all of the time, putting off life to earn a living. The treadmill you should be on is the one down at the gym, not the one where you are working like a dog because you are trying to do it all yourself.

Inherently, we know we should delegate, but we have had prior bad experiences with it and are now gun-shy about using this important tool in our leader toolkit. When I was growing up in Australia there was a common expression that “a good workman doesn't blame his tools”. Delegation gets a bad rap because it is a misused tool; the tool itself is fine. What we are mistaking is dumping for delegating. What does dumping look like? My old boss at Jones Lang LaSalle literally dumped two huge file collations on my desk, with a “whump”, they were so thick. He just said “take care of this” and walked away. I had to take on the work in those files, but there was no guidance, no instructions; I just had to work it out by myself. Is there a simple and better way to make sure that as the leader we are only working on the most high-level tasks that only we can do? Here is an eight-step process to make delegation work for you.

Step One: Identify The Need
Among the many tasks facing us, which ones will lend themselves to being delegated, and what does a successful delegation outcome look like in our mind?

Step Two: Select The Person
This may sound counterintuitive, but select the person on the basis of how this delegated task will help them achieve their goals. Wait a minute? Isn't the delegation about me achieving my leader goals of getting work off my leader desk? Actually, no. We are focused on using delegation to build leader bench strength in the organisation, not playing “pass the parcel” at work. Think about the team and identify which strengths need attention and how this piece of work will build this person's capabilities.

Step Three: Plan The Delegation Meeting
We don't plan to fail, but we fail to plan, and this is one of the big missing pieces in the delegation puzzle. Leaders will just willy-nilly grab the person and start downloading what they want them to do, without thinking the conversation through in any meaningful way. There are three sub-goals involved here.
Desired outcome – what is the outcome to be accomplished and what does success look like? Think ahead to be able to explain what is in it for the person receiving the task.
Current situation – clearly analyse where we are today, both internally and externally. What factors may hinder or help this delegation?
Goals – define and set goals which are reasonable and yet challenging.

Step Four: Hold The Delegation Meeting
There are four subset goals.
Identify their vision or goals. We are trying to align the task with their own goals, so we need to be clear what is in it for them.
Identify specific results to be achieved. We need to make success clear and also talk about the strengths they have which will allow them to succeed in this task.
Outline the rules and limitations. There are bound to be resource limitations around time, money and people. These need to be made clear from the start.
Review the performance standards. To what level of sophistication are they required to deliver results?

Step Five: Create A Plan Of Action
We don't create the plan – they do. This is important to give them authority and ownership of how this task gets done.

Step Six: Review Their Plan
They create it, but we must check it so that we are all on the same page and have a clear understanding of what happens next.

Step Seven: Implement The Plan
If there are other people going to be impacted by the plan, then the leader's job is to clear the way and provide any needed air cover while the task is under way.

Step Eight: Follow Up
Without micromanaging the task, the leader needs regular progress updates so that everything is going as expected and there are no surprises at the end.

None of these steps are diabolically difficult or complex. Well then, why don't all leaders follow them? It could be because they haven't thought about a process for delegation, or they fear the time required for Steps Three and Four. Stop procrastinating. These two steps, Three and Four, are not that big a time steal, so suck it up and get going. You will never have the time available which you need unless you start seeing delegation as a tool to develop the talents of your subordinates and treat the whole process that way. Delegation is just Latin for coaching!
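The eight steps described in the episode are a people process, not a program, but for readers who like to track delegated work explicitly, they can be sketched as a simple checklist. Everything below (the function name, the step wording) is a hypothetical illustration, not something from the episode itself.

```python
# Illustrative checklist of the eight delegation steps from the episode.
DELEGATION_STEPS = [
    "Identify the need",
    "Select the person",
    "Plan the delegation meeting",
    "Hold the delegation meeting",
    "Create a plan of action (the delegatee drafts it)",
    "Review their plan",
    "Implement the plan",
    "Follow up",
]

def next_step(completed: int) -> str:
    """Given how many steps are done, return the next one to work on."""
    if completed >= len(DELEGATION_STEPS):
        return "Delegation complete"
    return DELEGATION_STEPS[completed]
```

Walking the list in order mirrors the episode's point: the leader owns steps one through four, hands ownership over at step five, and returns only to review and follow up.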

Law School
Torts Lecture Twenty-Three: Vicarious Liability: Employer Responsibility and Agency Principles

Law School

Play Episode Listen Later Aug 6, 2025 48:24


This conversation delves into the intricate world of vicarious liability, a fundamental concept in tort law that holds one party legally responsible for the tortious acts of another based on their relationship. The discussion covers key elements such as the employer-employee relationship, the scope of employment, and the distinctions between minor detours and major frolics. It also explores the implications of intentional torts, the treatment of independent contractors, and the principles of agency law. The conversation highlights various defenses against vicarious liability claims, policy justifications for the doctrine, and modern trends in the gig economy and institutional accountability.
Explain the core difference between "vicarious liability" and "direct negligence" of an employer. Vicarious liability holds an employer responsible for an employee's tortious actions, even if the employer themselves did nothing wrong, based solely on the employment relationship. Direct negligence, conversely, means the employer is liable for their own wrongful conduct, such as negligent hiring or supervision, which directly contributed to the harm.
What is the purpose of the "respondeat superior" doctrine, and where did the term originate? The purpose of respondeat superior is to hold employers liable for their employees' torts committed within the scope of employment, based on the idea that employers benefit from their employees' work and should bear associated risks. The term is Latin for "let the master answer" and has roots in Roman law.
Provide an example that clearly illustrates the distinction between a "frolic" and a "detour" for an employee. If a delivery driver takes a slightly longer route to see a new billboard (a minor deviation), that's a detour, and the employer could still be liable for any accidents. However, if the same driver skips work for several hours to attend a baseball game and causes an accident en route to the game (a major departure for personal benefit), that's a frolic, likely absolving the employer of vicarious liability.
List three factors courts consider when determining whether an employee's actions fall within the "scope of employment." Courts consider: (1) Was the act the kind of work the employee was hired to perform? (2) Did it occur within the authorized time and space limits? (3) Was it motivated, at least in part, by a purpose to serve the employer?
Why are employers generally not held vicariously liable for the torts of independent contractors? Employers are generally not vicariously liable for independent contractors because they do not exercise direct control over the "manner and means" of the contractor's work. Independent contractors typically operate their own distinct business and are not economically dependent on a single hiring party in the same way an employee is.
Identify two common exceptions to the general rule regarding employer non-liability for independent contractors. Two common exceptions are: (1) Non-delegable duties, where certain public safety obligations cannot be shifted to a contractor (e.g., maintaining safe premises for customers). (2) Inherently dangerous activities, where the work itself carries a significant risk of harm (e.g., demolition work).
Beyond the employer-employee relationship, name two other relationships where vicarious liability principles can apply. Vicarious liability principles can also apply in principal-agent relationships (where an agent acts with authority on behalf of a principal) and partnerships (where one partner can be liable for another's torts in the ordinary course of business). Parental vicarious liability is also possible under certain circumstances.
How does "negligent entrustment" differ from vicarious liability in the context of an employer's responsibility? Negligent entrustment is a form of direct negligence where an employer is liable for entrusting property (like a company vehicle) to an employee known to be unfit or reckless, and that known unfitness leads to the harm.

#STRask with Greg Koukl
Could Inherently Sinful Humans Have Accurately Recorded the Word of God?

#STRask with Greg Koukl

Play Episode Listen Later Jul 7, 2025 20:14


Questions about whether or not inherently sinful humans could have accurately recorded the Word of God, whether the words about Moses in Acts 7:22 and Exodus 4:10 contradict each other, and why we're told to say, “If it is the Lord's will,” in James 4 but not James 5.
How should I respond to the objection that humans, who are inherently sinful, could not have accurately recorded the Word of God?
How do we reconcile the seeming contradiction between Acts 7:22, which says Moses was mighty in word and deed, and Exodus 4:10, where Moses says he is slow of speech and tongue?
James 4:13–16 instructs us to qualify our plans by saying, “If it is the Lord's will,” but his words in the next chapter about our prayers healing the sick include no qualifiers regarding God's will. How does James 5 fit with James 4?

My Healing Guide
Settling Isn't Inherently Bad...

My Healing Guide

Play Episode Listen Later Jun 26, 2025 30:40


Settling isn't as bad as everyone makes it sound. Hence why it's so easy to do! It's juuussstttt under what you truly want. Close to the thing, but not it fully. How can you make a decision without settling? How can you make a decision from a sound and grounded place when your friends and family are saying otherwise and your mind is jumbled? Autumn covers this topic in-depth with humor, lightheartedness, and real truth. Message on IG if this resonated with you or if you are going through this right now in your life! @myhealingguide Send me a text message on your takeaways from this episode! Peaceful, bliss state. Meditative introduction into the My Healing Guide oasis. Implement, rest, or take action: whatever it is that you feel called to do for yourself...go for it.

Secrets of Organ Playing Podcast
SOPP732: I think some pieces are just inherently "mismatched" with the performer

Secrets of Organ Playing Podcast

Play Episode Listen Later Jun 18, 2025 24:29


Let's start episode 732 of Secrets of Organ Playing Podcast. These questions were sent by Rien and Benas and they write:
Maybe you could answer this question in a podcast (referring to Benas): with some pieces you “feel” while practicing that everything comes together. And if you are there, you stay there. Even if you don't play the piece for a while, it still flows (maybe with some light practice) out of your hands in the right way, while other pieces don't seem to “stick”. What's the reason? Not enough practice, wrong practice routines? Or just a mismatch between the piece and performer?
Benas: Hi, Rien, that's a very interesting topic You've touched upon - yes, I think some pieces are just inherently "mismatched" with the performer (I've had quite a few when I was learning piano in music school), but after a while I tried revisiting them and often found that the issue was the skill level required to play as well as understand the piece. But some of them just can't seem to be done right no matter how hard You try yet they flow in other performers' hands and feet beautifully - maybe they could offer insight into how they perceive the piece? It surely would be interesting to know all the factors that go into "matching" the performer and the piece of music.
Rien: it would be nice if Ausra and Vidas make a Podcast over it. I myself struggle with Toccatas (my brain seems to have troubles with repeating patterns) and with pieces with lots of accidentals. That's the reason I still haven't published “Priere de Notre Dame” by Boellmann. There are lots of accidentals in it. So I prefer a piece written in 5 flats over a piece in F full with accidentals. But why? I think that's interesting…
If you liked this conversation, you can buy us a coffee at https://buymeacoffee.com/organduo

Conversing
Global Displacement and Refugee Crisis, with Myal Greene

Conversing

Play Episode Listen Later Jun 17, 2025 49:08


“More of the church is committed to their immigrant neighbours than the media or politicians would like the public to believe.” (Myal Greene, from the episode) Myal Greene (president and CEO of World Relief) joins host Mark Labberton to discuss the global humanitarian crises, refugee resettlement, and the church's responsibility to respond with courage and compassion. From Rwanda's post-genocide reconciliation following 1994 to the 2025 dismantling of humanitarian aid and refugee programs in the US, Greene shares how his personal faith journey fuels his leadership amid historic humanitarian upheaval. Rooted in Scripture and the global moral witness of the church, Greene challenges listeners to imagine a more faithful Christian response to suffering—one that refuses to turn away from the world's most vulnerable. Despite the current political polarization and rising fragility of moral consensus, Greene calls on the church to step into its biblical role: speaking truth to power, welcoming the stranger, standing with the oppressed, and embodying the love of Christ in tangible, courageous ways.
Episode Highlights “Inherently, reconciliation of people who have done the worst things imaginable to you is not a human thing.” “To truly be a follower of Christ, you can't be completely for a politician or completely for a political party.” “What we've seen is that more of the church is committed to their immigrant neighbours than the media or politicians would like the public to believe.” “The challenge for pastors is: How do I talk about this issue without losing my job or splitting my congregation?” “If we're failing to define our neighbour expansively—as Christ did—we're always going to get it wrong.” Helpful Links and Resources World Relief Open Doors World Watch List 2025 2024 Lifeway Research on Evangelicals & Immigration PEPFAR Program – US Department of State National Association of Evangelicals Rich Christians in an Age of Hunger, by Ron Sider Good News About Injustice, by Gary Haugen Walking with the Poor, by Bryant Myers About Myal Greene Myal Greene has a deep desire to see churches worldwide equipped, empowered, and engaged in meeting the needs of vulnerable families in their communities. In 2021, he became president and CEO after serving for fourteen years with the organization. While living in Rwanda for eight years, he developed World Relief's innovative church-based programming model that is currently used in nine countries. He also spent six years in leadership roles within the international programs division. He has previous experience working with the US government. He holds a BS in finance from Lehigh University and an MA from Fuller Theological Seminary in global leadership. He and his wife Sharon have three children. 
Show Notes Myal Greene's call to faith-rooted leadership in alleviating poverty Greene's path from Capitol Hill to World Relief, shaped by his conversion in his twenties and a deepening conviction about God's heart for the poor “God was working in me and instilling a deep understanding of his heart for the poor.” Rich Christians in an Age of Hunger, by Ron Sider Good News About Injustice, by Gary Haugen Walking with the Poor, by Bryant Myers Psalm 31:7–8: “I'll be glad and rejoice for you have seen my troubles and you've seen the affliction of my soul, but you've not turned me over to the enemy. You've set me in a safe place.” “Not only will God transform your life, but what it means to actually have experienced that and to feel that and to make that a very real personal experience.” 2007 in Rwanda Rwanda's one-hundred-day memorial period for the 1994 genocide “The effects of the genocide were always there. You wouldn't be able to see it, but it was always there.” Gacaca courts (system of transitional justice to handle the numerous legal cases following the 1994 genocide). “People would come and talk about what happened. … The attempts at apology, the attempts at reconciliation were powerful.” ”There are so many stories from Rwanda of true reconciliation where people have forgiven the people who've killed their family members or have forgiven people who've done terrible things to them.” ”How did the Gacaca courts see an interweaving or not of Christian faith in the process of the acts of forgiveness?” The church's role: “The hard part and the amazing part of Rwanda is that reconciliation is deeply connected to individual cases.” “Inherently, reconciliation of people who have done the worst things imaginable to you is not a human thing.” World Relief's Legacy & Mission Founded in 1944 at Park Street Church, Boston, in response to World War II European displacement.
“Feeding 180,000 people a day in Korea during the Korean War.” “We boldly engage the world's greatest crises in partnership with the church.” The global displacement crisis Over 122 million forcibly displaced people worldwide—up from under 40 million in 2007 (a fourfold increase) “A handful of the most fragile nations of the world are experiencing extreme violence, fragility, rising poverty, the effects of climate change, and people are being forced to flee and put into desperate situations.” “The generosity of the country is not being seen at a time when people in crisis face the greatest need.” World Relief is “one of ten refugee resettlement agencies, and we have been a refugee resettlement agency partnering with the US government since 1980 to do the work of welcoming refugees who come to this country. And we've partnered with every presidential administration since Jimmy Carter to do this work and have done so proudly.” Trump's immigration and refugee resettlement policies Refugee resettlement has been halted since January 20, 2025—an estimated one thousand people per month left unwelcomed “At a time when people experiencing crisis are facing the greatest need, the generosity of the country is not being seen.” 120,000 refugees were welcomed in 2024. “We expected around 12,000” in 2025. “Should Christian organizations receive federal funding?” Cuts to federal humanitarian funding USAID interruptions directly affect food, health, and medical services in fragile states like Sudan, Haiti, and DRC. On PEPFAR: HIV-AIDS specific program established by George W. Bush PEPFAR: “25 million lives have been saved … now it's among the casualties.” “Have these [federal cuts to humanitarian aid] increased philanthropic giving or has philanthropic giving dropped almost as a mirror of the government policy change?” Church response and misconceptions How should we manage uncertainty? When to use one's voice to speak truth to power?
“Polling shows evangelicals overwhelmingly support refugee resettlement—even Trump voters.” “Over 70 percent of evangelicals believe the US has a moral responsibility to welcome refugees to this country. Sixty-eight percent of evangelicals who voted for Trump agree with that statement as well.” Lifeway Research found only 9 percent of evangelicals cite the Bible or their pastor as their main source on immigration. “It would sit uncomfortably to any pastor if that were true about any other major issue.” “Pastors find themselves in this difficult place where they're trying to figure out, ‘How do I talk about this issue without losing my job and splitting my congregation?'” ”The dissonance between the way the press represents evangelical opinions about immigration” “Whether the church's voice has enough authority to be able to actually affect people's real time decisions about how they live in the world” “To truly be a follower of Christ, you can't be completely for a politician or completely for a political party because then you put that ahead of your faith in Christ.” “You have to be able to have that freedom to disagree with the leader or the party.” “A dog with a bone in his mouth can't bark. … I think that's where we find ourself as a church right now. We want certain victories through political means, and we're willing to sacrifice our moral authority in order to get those.
And I think that's a very dangerous place to be in as a church.” How Lifeway Research approaches their understanding of “evangelical Christian” “What is the authority of the church, and how is it exercising or failing to exercise its voice right now?” Hope for a compassionate church “The real movement happens when the church unites and uses its voice.” “One in twelve Christians in America will either be deported or live with someone who is subject to deportation.” Production Credits Conversing is produced and distributed in partnership with Comment magazine and Fuller Seminary.

The Mordy Shteibel's Podcast (Rabbi Binyomin Weinrib)
Tikun HaKlali (19) Escaping the Cycle of Negativity; You Are Inherently Good!

The Mordy Shteibel's Podcast (Rabbi Binyomin Weinrib)

Play Episode Listen Later Jun 17, 2025 44:32


Soul Mates!
S2E6A He's Inherently Doggy-Woggy

Soul Mates!

Play Episode Listen Later Jun 16, 2025 127:30


In this episode, the midpoint of Hot Heath Summer, we zoom in on our favorite bat-wielding, head-bashing bullheaded bri'ish boy Heathcliff, (or at least, look at the first half of his art, with the rest coming next episode) and ask a question that is fairer and kinder to him than anyone has been to him in his life: Why they always gotta make him either sad or stupid? A lot of the time, he's both!  Like I know he's Heathcliff from Wuthering Heights but why's he gotta be so bad and sad at nail sharpening? Read this episode's official companion fanfic, "Dog Days":  https://archiveofourown.org/works/66612928 Follow along:  https://limbuscompany.wiki.gg/wiki/Heathcliff Support the show:  https://ko-fi.com/ivyfoxart Follow the show on Tumblr:  https://soul-mates-podcast.tumblr.com/ Follow the show on YouTube:  https://www.youtube.com/@Soul-Mates-Podcast Listen to Together We'll Shine: An Utena Rewatch Podcast:  https://bunnygirlbrainwave.substack.com/archive Art by Ryegarden:  https://www.instagram.com/ryegarden Music by Sueños Electrónicos:  https://suenoselectronicos.bandcamp.com/ Follow and support ash:  https://ko-fi.com/asherlark

Inherently Happy
The Inherently Happy Rule - Ep. 406

Inherently Happy

Play Episode Listen Later Jun 11, 2025 3:00


Always Aim for Balance and Growth. For when you reach for dynamic sustainability, It helps you achieve not just one but both. [full text below] Ep. 406 - The Inherently Happy Rule We begin as always  with the Happy Creed. We believe in Happy,  in Balance and Growth,  of being Mindful and Grateful, Compassionate and Understanding. Yowza Haha My Happy Friends! There's lots of rules out there describing good behavior and proper conduct, There's the Silver Rule, the Golden Rule and even the Platinum Rule too, But there's still exceptions to those rules--allow me to deconstruct: Let's start with the Golden Rule: Treat others as you want them to treat you. Sounds good--at first, but what if I don't want to be treated the same way that you do? Maybe you like brutal honesty, so you decide to attack someone while they're down, You're just following the Golden Rule--so how can that plan possibly fall through? It's because not everyone needs the same things, overwatering can make you drown. A cousin to that is the Silver Rule: Don't treat others how you don't want them to treat you. That's fine, right? Well, what if you hate it when people ask you for help when they need it?  So, you, in turn, never ask for help either, even when you could really use it too,  So, you just create a situation where nobody helps anybody and we all lose bit by bit. Then there's the Platinum Rule--well, that's got to be the best one, right? Treat others the way they want to be treated. What could go wrong with that? Well, what if they want to be mistreated because they feel they deserve spite, And the reason they feel bad is because they are bad--when they are not a doormat! The problem is--all those rules are relative, They depend on who likes to be treated how. Except one, of course, The Inherently Happy Rule: Always Aim for Balance and Growth. 
Now you could say that's relative too--Balance and Growth depends on what's happening now, But when you reach for dynamic sustainability it helps you achieve not just one but both. Let's say you're overwhelmed, things are getting frantic and you're getting stressed, You want to give up, but you can't because you haven't finished your work yet, But if, instead, you stop trying to fight everything, which is only making you more obsessed, You'll see that treating things as connected challenges--not hardships--may be your best bet. Haha Yowza

The Primal Beast Podcast
The Uncomfortable TRUTH About Women & $&X

The Primal Beast Podcast

Play Episode Listen Later Jun 6, 2025 125:32


In today's show I take the fellowship into the often confusing and polarizing sexual undercurrent of how women really view sex. What are the triggers, motivations, and goals regarding intercourse that many males are not widely cognizant of when dealing with women? I promise this is a show guaranteed to garner an intriguing, informative, and eye-opening perspective on why women choose the men they do based on circumstances and what the woman's objectives are. Inherently, these sexual dynamics apply to all women in a free-enterprising society -- regardless of age, educational background, social class, or upbringing. Support the show: https://cash.app./$MainoManedadon http://paypal.me/theprimalbeast Interested in a consultation or business? If you have a business inquiry, please shoot me an email: theprimalbeast1@gmail.com Shows are currently streaming live on Apple Podcast, Spotify, Google and iHeart radio streaming apps and many more! Simply go to your favorite listening platform and enter 'The Primal Beast' Podcast in the "search"

Relationship Insights with Carrie Abbott
The Inherently Predatory Nature of Commercial and Social Gambling

Relationship Insights with Carrie Abbott

Play Episode Listen Later May 20, 2025 28:01


Recently, Baltimore sued DraftKings and FanDuel over their targeting of problem gamblers. Emily Washburn, Issues Analyst for the Daily Citizen at Focus on the Family, joins us to shed light on this disturbing trend harming people, families, and their futures due to the addiction trap and the ongoing targeting using big data. (https://dailycitizen.focusonthefamily.com/baltimore-sues-fanduel-draftkings-targeting-problem-gamblers/)

Ideas from CBC Radio (Highlights)
Why music — even sad music — is 'inherently joyful'

Ideas from CBC Radio (Highlights)

Play Episode Listen Later May 19, 2025 54:38


Music is joy, declares Daniel Chua. The renowned musicologist says music and joy have an ancient correlation, from Confucius to Saint Augustine and Beethoven to The Blues. Of course there is sad music, but, Chua says, it's tragic because of joy. Chua delivered the 2025 Wiegand Lecture, called Music, Joy and the Good Life.

VivaLife SPF ME
I AM INHERENTLY VALUABLE

VivaLife SPF ME

Play Episode Listen Later May 19, 2025 9:12


In this healing and heart-centered episode, Dr. Kelly O. Elmore invites listeners to reflect, reclaim, and rise. You'll learn five soul-anchored ways to know you are still worth it, with denials to clear the noise and affirmations to rebuild your inner sanctuary. Share, like, and follow this Vivalife SPF ME podcast on Spotify/Amazon/Google platforms VivaLife SPF ME • A podcast on Spotify for Podcasters Subscribe to our YouTube: https://youtube.com/@vivalifehealthhub8261?si=zLFMLAZ126ss6qyO Click the link below to join our mailing list, events, and experiences https://vivalifespfme.com/dr-kelly-o-md-linktree Book Dr. Kelly O., MD: https://vivalifespfme.com/speaker Buy your journal: https://vivalifespfme.myshopify.com/products/vivalife-spf-me-journal We can't be erased, T-shirt & Hat! https://vivalifespfme.myshopify.com/products/we-cant-be-erased-tshirt #Affirmation #365DaysofAffirmation #VivalifeSPFMEPodcast #VivalifeSPFME #VivalifeHealthHUB #DrKellyOMD

The Heidelcast
Heidelcast: Superfriends Saturday: The Role of a Prophet in the New Covenant | Are There Inherently Sinful Forms of Music? | The Perpetual Virginity of Mary?

The Heidelcast

Play Episode Listen Later May 10, 2025 48:55


All the Episodes of the Heidelcast Subscribe to the Heidelcast! Browse the Heidelshop! On X @Heidelcast On Insta & Facebook @Heidelcast Subscribe in Apple Podcast Subscribe directly via RSS Call The Heidelphone via Voice Memo On Your Phone The Heidelcast is available wherever podcasts are found including Spotify. Call or text the Heidelphone anytime at (760) 618-1563. Leave a message or email us a voice memo from your phone and we may use it in a future podcast. Record it and email it to heidelcast@heidelblog.net. If you benefit from the Heidelcast please leave a five-star review on Apple Podcasts so that others can find it. Please do not forget to make the coffer clink (see the donate button below). SHOW NOTES How To Subscribe To Heidelmedia The Heidelblog Resource Page Heidelmedia Resources The Ecumenical Creeds The Reformed Confessions The Heidelberg Catechism Recovering the Reformed Confession (Phillipsburg: P&R Publishing, 2008) Why I Am A Christian What Must A Christian Believe? Heidelblog Contributors Support Heidelmedia: use the donate button or send a check to: Heidelberg Reformation Association 1637 E. Valley Parkway #391 Escondido CA 92027 USA The HRA is a 501(c)(3) non-profit organization

The Max Frequency Podcast
"Nintendo Gave Me a Gift" with Peter Spezia

The Max Frequency Podcast

Play Episode Listen Later Apr 10, 2025 95:26


Peter Spezia is here! Nintendo Switch 2 has finally been revealed and it felt very much like the E3 conferences of old. The Direct was packed with highs and lows and left a wake of confused fervor behind. Join us as we dissect and react to the next generation of Nintendo home consoles! You can download a copy of this episode's transcript here. Show Notes Last Minute Predictions Peter's Prediction Tweet Max's Big Three Predictions for 2025 Kirby Air Ride (Game Concepts) - Masahiro Sakurai The Switch 2 Experience Nintendo Switch 2 Direct Nintendo Switch 2 – Overview Trailer Austin Evans Reactions to Smooth Control Sticks Nintendo Treehouse Live Four Swords GameChat Example Ask the Developer Vol. 16: Nintendo Switch 2 — Part 3: Inherently added value Virtual Game Cards Third Party Games Phil Spencer's Comments on Series S and the handheld PC Market Mario Kart World Mario Kart World Trailer Nintendo Switch 2 Welcome Tour Nintendo Switch 2 Welcome Tour Treehouse Live Gameplay Nintendo says Switch 2 could've been called ‘Super Nintendo Switch' Nintendo Switch 2 Enhanced Editions The Legend of Zelda games - Nintendo Switch 2 Enhanced Editions Kirby and the Forgotten Land – Nintendo Switch 2 Edition + Star-Crossed World Super Mario Party Jamboree – Nintendo Switch 2 Edition + Jamboree TV Metroid Prime 4: Beyond – Nintendo Switch 2 Edition Treehouse Live Gameplay Switch 1 Games with Free Updates Donkey Kong Bananza Donkey Kong Bananza Trailer Wrap-Up “A Passion for Smash” – Celebrating 15 Years of Super Smash Bros. Brawl with Peter Spezia Peter Spezia Original Soundchat Peter's Twitter @PeteSpeakEasy Peter's Bluesky @petespeakeasy.bsky.social Max Frequency - Max's home online Chapter Select - A seasonal, retrospective podcast where we bounce back and forth between a series exploring its evolution, design, and legacy.

Love Music More (with Scoobert Doobert)
A Step-Up Guide for Spotify Metrics: Shifting Mentality

Love Music More (with Scoobert Doobert)

Play Episode Listen Later Apr 8, 2025 17:36


A few musician friends were asking me how I grew my project. In this pod, I go through the mindset and perspective that worked well for me, as well as a bit of psychology I learned from Daniel Levitin's "This Is Your Brain on Music." For 30% off your first year with DistroKid to share your music with the world click DistroKid.com/vip/lovemusicmore Want to hear my music? For all things links visit ScoobertDoobert.pizza Subscribe to this pod's blog on Substack to receive deeper dives on the regular

That Early Childhood Nerd
NERD_0357 Play is Inherently Political with Kelsie Olds, the Occuplaytional Therapist

That Early Childhood Nerd

Play Episode Listen Later Apr 2, 2025 51:57


In February 2025, occupational therapist Kelsie Olds, who hosts a facebook page called The Occuplaytional Therapist, published a post that struck a nerve--leading to almost 4000 reactions to date, and almost 2000 shares. The post is a powerful statement on the political nature of play, of respecting children, of supporting families, and taking care of each other. Listen in as she talks us through it.For more of Kelsie's work, find her on Facebook as The Occuplaytional Therapist, or follow her substack here: https://occuplaytionaltherapist.substack.com/Want to support the show? You can make a one time $5 or more gift, or become a member for bonus content! Find more information here: buymeacoffee.com/heatherf Thanks for listening! Save 10% on professional development from Explorations Early Learning and support the show with the coupon code NERD. Like the show? Consider supporting our work by becoming a Patron, shopping our Amazon Link, or sharing it with someone who might enjoy it. You can leave a comment or ask a question here. Click here for more Heather. For a small fee we can issue self-study certificates for listening to podcasts.

The Underground
198: Movies Are Inherently Political and More Movie Trailer Slop

The Underground

Play Episode Listen Later Mar 19, 2025 81:36


The creator of the Terrifier franchise was bothered by some of the most annoying people on the internet, and we got new trailers for Fantastic 4 and Jurassic Park. Also, Buffy the Vampire Slayer is probably getting a remake and almost no one is happy about it. Leave a comment below! We try to respond to as many as we can! Don't forget to Like and Subscribe! #movies #games #tv Donate Here - https://www.paypal.com/donate?hosted_button_id=Y6TSU94STL9PU All our Links - https://direct.me/theunderground What is our Value for Value System? Value for Value is a listener based business model where you determine the value our content is worth. If you feel you are getting value from our content, please consider becoming a supporter by donating your time, talent, & treasure. Time: meaning any effort you put in to improving or developing our content or sharing it. Talent: meaning any skills you possess that you want to contribute to help us develop our platform (i.e., artwork for podcast episodes, branding design, editing, etc.). Treasure: pay a one-off amount or a recurring contribution for the value you think our service is worth. Please be sure with any payment you send via PayPal to include a note, so that we can read it on the livestream, if you'd like. Your donations keep our content advertisement free. Thank you. Where do you support us? Click the direct.me link to find our PayPal link for contributions as well as our YouTube, Odysee, TikTok, Instagram, and Twitter links! We appreciate the engagement from all of you! Contribution Amounts: Donors of less than $100 will automatically become Producers of the corresponding episode! Donors of $100 and above will automatically become Associate Executive Producers of the corresponding episode! Donors of $200 and above will receive the Executive Producer credit for that episode! We will list the credits in our show notes as Executive Producer, Associate Executive Producer, & Producer, and it is a genuine credit we will vouch for.
Generally, executive producers are primarily responsible for financing the project. Therefore, this is a legitimate credit for your resume. Please note any amount will remain anonymous upon request.All donors will receive a special mention on the show unless otherwise noted!Special Note: The Value for Value business model originated with Adam Curry & John C. Dvorak of the No Agenda Podcast.https://www.youtube.com/watch?v=PgihPtnBSek

AWAYKEN SPACE with Chris Banisch
Ep 308: Is life ACTUALLY inherently PAINFUL & DIFFICULT?!

AWAYKEN SPACE with Chris Banisch

Play Episode Listen Later Mar 9, 2025 34:12


On this episode of the Awayken Space podcast I dive into the misconception many people have about life being inherently painful and difficult.

No Stupid Questions
11. Are Ambitious People Inherently Selfish?

No Stupid Questions

Play Episode Listen Later Mar 2, 2025 36:03


Also: why do we habituate to life's greatest pleasures? This episode originally aired on July 26, 2020.

Unashamed with Phil Robertson
Ep 1048 | Jase Calls Out the Yankees' ‘Duck Dynasty' Beard Policy & Is Alcohol Inherently Bad?

Unashamed with Phil Robertson

Play Episode Listen Later Feb 27, 2025 54:55


Jase gets riled up at the New York Yankees' beard policy snub, insisting there's only one reason a man should ever trim his chin hairs. Zach does his best to fulfill his oath to track down a treasure hunt for Jase in England and offers a unique perspective on Jesus' water-into-wine miracle. The guys explore why organized religion is often unappealing to modern society. In this episode: John 2 “Unashamed” Episode 1048 is sponsored by: https://tnusa.com/unashamed — Call 1-800-958-1000 or visit the website for more details. https://preborn.com/unashamed — Click the link or dial #250 and use keyword BABY to donate today. Get your tickets now for LAST BREATH, rated PG-13. Opens Friday, February 28th in theaters everywhere! Listen to Not Yet Now with Zach Dasher on Apple, Spotify, iHeart, or anywhere you get podcasts. — Learn more about your ad choices. Visit megaphone.fm/adchoices

The Annie Frey Show Podcast
Legos are inherently anti-LGBT, says London Museum | Bee or Not the Bee

The Annie Frey Show Podcast

Play Episode Listen Later Feb 7, 2025 10:13


Late night comedians decide to make jokes about the president again. Real or not?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and Pydantic AI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provide standard definitions for tracking performance, like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just as web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend.
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in at ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic in AI.

Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around exactly when you convert and when you don't are complicated, but because of that, it's more than just validation.
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time; it was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python, and indeed lots of other people have copied that route. But yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization, on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schema ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output council in open source that people were talking about, or was it just random?

Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along, and like the first I ever heard of him was over a weekend: I got like 50 emails from him as he was committing to Pydantic, adding JSON schema, long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI, but it was obviously very convenient for us, because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it kind of can be one source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land.
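The behavior Samuel describes, type hints as the schema plus lenient coercion, can be illustrated with a stdlib-only toy. This is not Pydantic's actual implementation (which lives in Rust); the `validate` helper here is invented purely for illustration:

```python
from dataclasses import dataclass
from typing import get_type_hints

def validate(cls, data):
    """Toy validator: read the class's type hints and coerce compatible
    values, e.g. the string "123" for an int field (Pydantic's default,
    non-strict behavior)."""
    hints = get_type_hints(cls)
    coerced = {}
    for name, typ in hints.items():
        value = data[name]
        if not isinstance(value, typ):
            value = typ(value)  # lenient coercion, like Pydantic's lax mode
        coerced[name] = value
    return cls(**coerced)

@dataclass
class User:
    id: int
    name: str

# "123" is coerced to the int 123, as in the interview's example.
user = validate(User, {"id": "123", "name": "Samuel"})
```

With the real library this is a `pydantic.BaseModel` subclass validated via `User.model_validate({...})`, and `User.model_json_schema()` produces the JSON schema that serves as the single source of truth for structured outputs and tools he mentions.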
Every now and then there is a new sort of in-vogue validation library that takes over for quite a few years, and then maybe something else comes along. Is Pydantic, is it done, like the core Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2, as in v2 was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to basically store the data in Rust types after validation. Not completely, so we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type; we reckon that can give us another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example NumPy arrays, are validated and serialized. But there's also stuff going on, and for example Jiter, the JSON library in Rust that does the JSON parsing, has a SIMD implementation at the moment only for AMD64, so we need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster and continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization, for when you just want to put the data into a database and probably load it again from Pydantic.
So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is that Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup: like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries; I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of performance, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests, was actually Pydantic, which shows like how widely it's used.
In answer to your question about, like, how do we prioritize AI: I mean, the honest truth is we've spent a lot of the last year and a half building good general-purpose observability inside LogFire and making Pydantic good for general-purpose use cases, and the AI has kind of come to us. Not that we want to get away from it, but like the appetite, both in Pydantic and in LogFire, to go and build with AI is enormous, because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say, globally; obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out, so much like space to do things better in the ecosystem, in a way that, like, to go and implement a database that's better than Postgres is a like Sisyphean task, whereas building tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, you released Pydantic AI recently, which is, you know, an agent framework. And early on, I would say everybody, like Langchain, gave Pydantic kind of like first-class support; a lot of these frameworks were trying to use you to be better. What was the decision behind, we should do our own framework? Were there any design decisions that you disagreed with, any workloads that you think people didn't support well?

Samuel [00:08:05]: It wasn't so much like design and workflow, although I think there were some things we've done differently. Yeah. I think, looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem.
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would basically skip some stuff.

Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which just seems to be opportunism, and I have little time for that. But like the early ones, I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think the gap we saw, and the thing we were frustrated by, was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics, and it's probably easier to use it if you've written a bit of Rust and you really understand generics. We're not claiming that that makes it the easiest thing to use in all cases; we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there's also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run in Python as part of tests, and every single print output within an example is checked during tests, so it will always be up to date.
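The guarantee Samuel describes for the docs, every example executed and every printed output checked, is the same one Python's stdlib doctest provides. A minimal sketch (the `double` function is made up for illustration):

```python
import doctest

def double(x):
    """Return twice x.

    >>> double(21)
    42
    """
    return x * 2

# Find the examples in the docstring and execute them, comparing the
# actual output against the expected output written in the docstring.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(double, globs={"double": double}):
    runner.run(test)

summary = runner.summarize(verbose=False)  # TestResults(failed=..., attempted=...)
```

If the function's behavior drifts from its documented example, the doc example fails the test run, which is exactly the "always up to date" property he is after.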
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem but are, surprisingly, not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera. I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.

Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the LLM-calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks; like, what does Pydantic AI do?

Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I will tell you, when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want, and I go and build it, and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite, not that we would want to go and rename them to agent-lite, but like the point is you probably build them together into something that most people will call an agent. So an agent in our case has, you know, things like a prompt, like a system prompt, and some tools, and a structured return type if you want it; that covers the vast majority of cases. There are situations where you want to go further, and the most complex workflows are where you want graphs, and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful.
But then we have the problem that by default, they're not type safe, because if you have a like add-edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, and not all the graph libraries are AI-specific. So there's a graph library called, but it allows, it does like basic runtime type checking, ironically using Pydantic to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe; there's a reason that static type checking is so powerful. And so, from a lot of iteration, we eventually came up with a system of using, normally, data classes to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have to interact with gen AI, right? It's going to be like web. There'll no longer be like a web department in a company; it's that there's just, like, all the developers are building for web, building with databases. The same is going to be true for gen AI.

Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs, in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?

Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning.
That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower-level tool that allows you to build these complex workflows. Our agents are technically one of the many graphs you could go and build, and we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work, and that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for, and my like, yeah, but you could do that in standard flow control in Python, became a less and less compelling argument to me, because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that, like, just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it. I think the question always is, and I have gone back and forth: I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like, it looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean.
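Inferring the graph from type hints can be sketched with stdlib introspection alone. The node classes below are hypothetical stand-ins, not Pydantic AI's real node API; the point is only that each node's return annotation names the next node, so the edge set is recoverable without running anything:

```python
import typing
from dataclasses import dataclass

@dataclass
class Fetch:
    url: str
    def run(self) -> "Parse":
        return Parse(data="<html></html>")

@dataclass
class Parse:
    data: str
    def run(self) -> "Done":
        return Done()

class Done:
    pass

def edges(*node_types):
    """Build the edge map by reading each node's return type annotation."""
    out = {}
    for node in node_types:
        hints = typing.get_type_hints(node.run)  # resolves "Parse", "Done"
        out[node.__name__] = hints["return"].__name__
    return out

graph_edges = edges(Fetch, Parse)  # {"Fetch": "Parse", "Parse": "Done"}
```

From such a mapping you can emit the mermaid diagram Samuel mentions (`Fetch --> Parse`, `Parse --> Done`), and a static type checker can verify the annotations without executing the graph, which is the type-safety property he contrasts with runtime-checked add-edge APIs.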
And that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run, and then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of, like, the workflow for, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function, and some of those lines are: wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo project or your like proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now how do we manage that concept, to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph, and it continues to run.
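The run loop Samuel describes ("call a node, get a node back... if you get an end, you're done") fits in a few lines of plain Python. These node names are invented for illustration, not Pydantic AI's real API:

```python
from dataclasses import dataclass

class End:
    """Terminal marker carrying the graph's result."""
    def __init__(self, value):
        self.value = value

@dataclass
class AddOne:
    x: int
    def run(self):
        return Double(self.x + 1)  # each node returns the next node

@dataclass
class Double:
    x: int
    def run(self):
        return End(self.x * 2)

def run_graph(node):
    # Call a node, get a node back, call that node; stop at End.
    while not isinstance(node, End):
        node = node.run()
    return node.value

result = run_graph(AddOne(3))  # AddOne(3) -> Double(4) -> End(8)
```

Because the loop only ever holds "the current node", resuming days later reduces to persisting that node and feeding it back into the same loop, which is the across-time property he highlights.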
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said? Or is this just like a theoretical thing we can get to someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into sort of the more like orchestrated things like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that, right now at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual call, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out, not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.

Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think, just generally, also having been a workflow engine investor and participant in this space, it's a big space; like, everyone needs different functions. The one thing that I would say is, like, yours, you know, as a library, you don't have that much control over the infrastructure. I do like the idea that each new agent, or whatever unit of work you call that, should spin up in these sort of isolated boundaries, whereas yours, I think, everything runs in the same process. But you ideally want to sort of spin out its own little container of things.

Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in theory, as long as you can serialize the calls to the next node, all of the different containers basically have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now, because I'm super excited about that as a like compute level for some of this stuff, where, exactly what you're saying, basically, you can run everything as an individual, like, worker function and distribute it. And it's resilient to failure, et cetera, et cetera.
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in scripting, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because these workers where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have a share. Shared memory for all the different Pydantic installations, effectively. That's the thing they work out. They're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles the WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile the WebAssembly and then you're calling that shared library from Python. 
And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI swarms would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI swarms. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what swarms would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically saying, how can we give people the same feeling that they were getting from swarms that led us to go and implement graphs? Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's not like, let us to get to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not. 
Yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.

Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them, but like, it's only been a couple of weeks. And of course, the point is that because they're relatively unopinionated about what you can go and do with them, like, you can go and do lots of things with them, but they don't have the structure to go and have like specific names, as much as perhaps some other systems do. I think what our agents are, which have a name and I can't remember what it is, but this basic system of, like, decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit, is one form of graph. Which, as I say, like, our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.

Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs, and then they have to learn a whole bunch of DSLs, and then they're like, actually, I didn't need that, I need this. And they scale back a little bit.

Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compounding AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument is that instead of putting everything in one model, you have more control and more maybe observability to if you break everything out into composing little models and changing them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they. Even if you have the observability through log five that you can see what was going on, if you don't have a nice hook point to say, hang on, this is all gone wrong. You have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But like what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to to break out of it. But yeah, basically, it's the same point, which is like if you have two bigger unit of work to some extent, whether or not it involves gen AI. But obviously, it's particularly problematic in gen AI. You only find out afterwards when you've spent quite a lot of time and or money when it's gone off and done done the wrong thing.Swyx [00:25:39]: Oh, drop on this. 
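Samuel's description of an agent as a graph — a central "decide" node that picks a tool, returns to the center, and eventually exits — plus the ability to "call next in a loop" as an escape hatch, can be sketched in plain Python. This is a hand-rolled illustration of the pattern, not Pydantic AI's actual graph API; all names below are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    history: list = field(default_factory=list)

class End:
    """Sentinel node meaning the graph run is finished."""
    def __init__(self, value):
        self.value = value

def decide(state):
    # "Center" node: decide what tool to call, or exit.
    if len(state.history) >= 2:
        return End("done")
    return call_tool

def call_tool(state):
    # Tool node: do some work, then go back to the center.
    state.history.append("tool-result")
    return decide

def run_stepwise(start, state, max_steps=10):
    # Iterating node-by-node (rather than running the whole graph)
    # gives a hook point to inspect progress or bail out early.
    node = start
    for _ in range(max_steps):
        nxt = node(state)
        if isinstance(nxt, End):
            return nxt.value
        node = nxt
    raise RuntimeError("step limit exceeded")

result = run_stepwise(decide, State())
```

The step limit in the loop is the "relatively blunt instrument" Samuel mentions; the per-step iteration is where finer-grained control flow can live.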
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we we developers talk about this. And then the machine learning researchers look at us. And laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that because I think AGI is kind of this hand wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do a chain of thoughts with graphs and you could manually orchestrate a nice little graph that does like. Reflect, think about if you need more, more inference time, compute, you know, that's the hot term now. And then think again and, you know, scale that up. Or you could train Strawberry and DeepSeq R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And I like took a certain amount of self-control not to describe that it wasn't exponential. But my main point was. If models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model and, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. 
Whereas, you know, so if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to tell them, like structure what they go and do and constrain the routes in which they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Cloud to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is generative language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that like some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of like, well, I think we go the extra mile in terms of engineering is we run on every commit, at least commit to main, we run tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, yeah, GLA is a little bit better. GLA1 was failing every single run. One of their tests would fail. And we, I think we might even have commented out that one at the moment. So like all of the models fail more often than you might expect, but like that one seems to be particularly likely to fail. 
But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain and every single framework has to have its own little thing, a version of that. I would put to you, and then, you know, this is, this can be agree to disagree. This is not needed in Pydantic AI. I would much rather you adopt a layer like Lite LLM or what's the other one in JavaScript port key. And that's their job. They focus on that one thing and they, they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use deep seek, I'm out of luck because Pydantic AI doesn't have deep seek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code or should it live in a layer that's kind of your API gateway that's a defined piece of infrastructure that people have?Samuel [00:29:49]: And I think if a company who are well known, who are respected by everyone had come along and done this at the right time, maybe we should have done it a year and a half ago and said, we're going to be the universal AI layer. That would have been a credible thing to do. I've heard varying reports of Lite LLM is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail. Part of their business model is proxying the request through their, through their own system to do the generalization. That would be an enormous put off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the model. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around open AIs. Open AI's API is the one to do. So DeepSeq support that. Grok with OK support that. Ollama also does it. 
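The convergence Samuel describes — DeepSeek, Grok, and Ollama all exposing OpenAI-style endpoints — is why a unification layer is mostly a table of base URLs plus auth. A minimal sketch; the URLs are illustrative defaults, not guaranteed to be current:

```python
# Providers that speak the OpenAI wire format: swapping between them
# is mostly a matter of base_url and API key. URLs are assumptions
# for illustration.
OPENAI_COMPATIBLE = {
    "openai":   "https://api.openai.com/v1",
    "deepseek": "https://api.deepseek.com",      # assumed default endpoint
    "ollama":   "http://localhost:11434/v1",     # assumed local default
}

def client_config(provider: str, api_key: str) -> dict:
    """Return kwargs you could pass to an OpenAI-style SDK client."""
    if provider not in OPENAI_COMPATIBLE:
        raise ValueError(f"no OpenAI-compatible endpoint known for {provider!r}")
    return {"base_url": OPENAI_COMPATIBLE[provider], "api_key": api_key}

cfg = client_config("deepseek", "sk-test")
```

The table is the whole trick: once providers agree on the wire format, "unifying the model" reduces to configuration rather than per-provider adapters.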
I mean, if there is that library right now, it's more or less the open AI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also. There's Vertex and Bedrock, which to one extent or another, effectively, they host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock. So I don't know about it that well. But they're kind of weird hybrids because they support multiple models. But like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing model and picking models like evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR because it just, you know, it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you are busy like this test model. We're like just through Python. You try and figure out what the model might respond without actually calling the model. And then you have the function model where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get. 
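The mocking support Alessio mentions comes down to one design choice: if "the model" is just a callable the agent receives, a test can inject a deterministic function instead of a live LLM. A library-free sketch of that idea — it mirrors the spirit of Pydantic AI's TestModel/FunctionModel, but is not the library's actual API:

```python
def run_agent(model, prompt: str) -> str:
    # The agent depends on "model" as an injected callable,
    # so tests never need to hit a real LLM endpoint.
    messages = [{"role": "user", "content": prompt}]
    return model(messages)

def fake_model(messages):
    # A FunctionModel-style stub: inspect the input, craft the output
    # deterministically.
    assert messages[-1]["role"] == "user"
    return f"echo: {messages[-1]['content']}"

out = run_agent(fake_model, "hello")
```

In a unit test you assert on `out` directly, with no network, no API key, and no flakiness.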
On the evals, I think watch this space. I think it's something that like, again, I was somewhat cynical about for some time. Still have my cynicism about some of the well, it's unfortunate that so many different things are called evals. It would be nice if we could agree. What they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire to try and support better because it's like it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your doc that anyone who claims to know for sure exactly how your eval should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need we need a snapshot of this today. And so let's talk about eval. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building. Right. Because you cannot really like test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LEMPs? And yeah, as people think about evals, even like what are the right things to measure? What are like the right number of samples that you need to actually start making decisions?Samuel [00:33:33]: I'm not the best person to answer that is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back of the envelope statistics calculations to work out that like having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact like how many examples do you need? 
For example, that's a much harder question to answer because, you know, it's deep within how models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, and we allow you to write SQL directly against your data, and we're trying to build the like powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL connected to the API and effectively query the data like it's a database with SQL allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of like testing what's possible by basically writing SQL directly against LogFire as any user could. I think the other really interesting bit that's going on in observability is OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project. A lot of it's still being added at the moment. But basically the idea is that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And having that unification allows us to go and basically compare different libraries, compare different models much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is basically, I suspect, Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're like plowing their own furrow. And, you know, they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTEL ties into the AI workflows? There's kind of like the question of, you know, is a trace or a span an LLM call? Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR that I think may have now been merged from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that because I don't think that like that's actually by any means the common use case. But like, I suppose it's fine for it to be there. The majority of the stuff in OTEL is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call, what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTEL is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is if you think about observability traditionally: sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider. But none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist because it's all just messed up in the text. If you have that same patient asking an LLM what drug they should take or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog. But that would be seen as a mistake. Whereas in GenAI, a lot of data is going to be sent. And I think that's why companies like Langsmith and others are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have like the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like a self-hosting for the platform, basically. Yeah.

Samuel [00:38:23]: Yeah. So we have scrubbing roughly equivalent to what the other observability platforms have. So if we, you know, if we see password as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance is depending on a third party. You know, like if you're looking at Datadog data, usually it's your app that is driving the latency and like the memory usage and all of that.
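The key-based scrubbing Samuel describes a few lines up — redact a value when its key looks like "password" — and the reason it fails for GenAI can both be shown in a few lines. A simplified sketch; the key list and span shape are invented for illustration:

```python
# Key-based scrubbing: redact values whose *key* looks sensitive.
# This is roughly what traditional observability platforms do, and
# it breaks down for GenAI because the sensitive part lives inside
# free text under an innocuous key like "content".
SENSITIVE_KEYS = {"password", "api_key", "card_number"}

def scrub(attrs: dict) -> dict:
    out = {}
    for key, value in attrs.items():
        if key.lower() in SENSITIVE_KEYS:
            out[key] = "[REDACTED]"
        elif isinstance(value, dict):
            out[key] = scrub(value)  # recurse into nested attributes
        else:
            out[key] = value
    return out

span = {
    "http.route": "/chat",
    "password": "hunter2",                     # caught: key is sensitive
    "content": "my card number is 4111 1111",  # missed: key looks benign
}
clean = scrub(span)
```

The second attribute sails straight through, which is exactly why GenAI observability data ends up more sensitive than what you would normally send to Datadog.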
Here you're going to have spans that maybe take a long time to perform because the GLA API is not working or because OpenAI is kind of like overwhelmed. Do you do anything there since like the provider is almost like the same across customers? You know, like, are you trying to surface these things for people and say, hey, this was like a very slow span, but actually all customers using OpenAI right now are seeing the same thing. So maybe don't worry about it or.Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTA. So we send. We send information at the beginning. At the beginning of a trace as well as sorry, at the beginning of a span, as well as when it finishes. By default, OTA only sends you data when the span finishes. So if you think about a request which might take like 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top level span. And so if you're using standard OTA, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing Gen AI calls or when you're like running a batch job that might take 30 minutes. That like latency of not being able to see the span is like crippling to understanding your application. And so we've we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's like the OpenLEmetry project, which doesn't really roll off the tongue. But how do you see the future of these kind of tools? Is everybody going to have to build? Why does everybody want to build? 
They want to build their own open source observability thing to then sell?Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTEL and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLEmetry, like interesting project. But I suspect eventually most of those semantic like that instrumentation of the big of the SDKs will live, like I say, inside the main OpenTelemetry report. I suppose. What happens to the agent frameworks? What data you basically need at the framework level to get the context is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public, because I was on the call with the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not like natively implemented. And obviously they're having quite a tough time. And I was realizing, hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that is moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. 
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation off. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have genai.usage.prompt tokens and genai.usage.completion tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in or sort of reifying things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel is you can always go and send another attribute and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like any learnings building LogFire? 
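The attribute names Swyx reads out of the spec can be collected into a small helper. A sketch only — the GenAI semantic conventions are still churning, and individual keys may be renamed as the spec evolves, so treat these exact names as a snapshot rather than a stable contract:

```python
def genai_span_attributes(model: str, prompt_tokens: int,
                          completion_tokens: int, top_p: float) -> dict:
    # Keys follow the gen_ai.* names discussed above; the value for
    # gen_ai.system and the model name are illustrative.
    return {
        "gen_ai.system": "openai",
        "gen_ai.request.model": model,
        "gen_ai.request.top_p": top_p,
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
    }

attrs = genai_span_attributes("gpt-4o", 120, 48, 0.9)
```

Because OTel lets you send arbitrary extra attributes, anything the convention hasn't standardized yet (reasoning tokens, other sampling parameters) can still ride along under your own keys.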
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But like, we've got to the right one in the end. I think we could have realized that Timescale wasn't right. I think ClickHouse. They both taught us a lot and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is like an extension on top of Postgres. Not super meant for like high volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on because apparently it does now. So they've obviously gone and built their proper JSON support. But like back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything happened to be a map and maps are a pain to try and do like looking up JSON type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff. You can choose to make them top level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse. 
Also, ClickHouse had some really ugly edge cases like by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they caused it to raise an error and just say you have to have the same unit. Then I complained a bit more. And I think as I understand it now, they have some. They convert between units. But like stuff like that, when all you're looking at is when a lot of what you're doing is comparing the duration of spans was really painful. Also things like you can't subtract two date times to get an interval. You have to use the date sub function. But like the fundamental thing is because we want our end users to write SQL, the like quality of the SQL, how easy it is to write, matters way more to us than if you're building like a platform on top where your developers are going to write the SQL. And once it's written and it's working, you don't mind too much. So I think that's like one of the fundamental differences. The other problem that I have with the ClickHouse and Impact Timescale is that like the ultimate architecture, the like snowflake architecture of binary data in object store queried with some kind of cache from nearby. They both have it, but it's closed sourced and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, we would end up like, you know, they would want to be taking their 80% margin. And then we would be wanting to take that would basically leave us less space for margin. Whereas data fusion. Properly open source, all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, data fusion, which is implemented in Rust, we can literally dive into it and go and change it. 
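The ClickHouse edge case Samuel hit — two nanoseconds comparing as longer than one second because only the numbers were compared, not the units — is a unit-normalization bug. A toy illustration of the broken and the correct comparison (Python's `timedelta` gets this right for free):

```python
from datetime import timedelta

# Nanoseconds per unit, for normalizing before comparison.
UNIT_NS = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}

def broken_compare(a, a_unit, b, b_unit):
    # The original bug: units ignored, so 2 ns "beats" 1 s.
    return a > b

def unit_aware_compare(a, a_unit, b, b_unit):
    # Normalize both sides to nanoseconds first.
    return a * UNIT_NS[a_unit] > b * UNIT_NS[b_unit]

assert broken_compare(2, "ns", 1, "s")          # wrong answer
assert not unit_aware_compare(2, "ns", 1, "s")  # correct
assert timedelta(seconds=1) > timedelta(microseconds=2000)
```

When most of your queries compare span durations, a comparison operator that silently ignores units is exactly the kind of footgun that matters for end users writing their own SQL.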
So, for example, I found that there were some slowdowns in data fusion's string comparison kernel for doing like string contains. And it's just Rust code. And I could go and rewrite the string comparison kernel to be faster. Or, for example, data fusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do. It's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that like data fusion is like for us the perfect mixture of a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that like if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But like as a team of people who are not C++ experts, that's much scarier than data fusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know, so but I think you obviously have an open source first mindset. So that makes a lot of sense.Samuel [00:48:05]: I think if we were probably better as a startup, a better startup and faster moving and just like headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with data fusion. We like we're quite engaged now with the data fusion community. Andrew Lam, who maintains data fusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just like building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do Pydantic run and all the other stuff. 
But, you know, my last question on LogFire is really, you know, at some point you run out sort of community goodwill just because like, oh, I use Pydantic. I love Pydantic. I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the honeycombs. Yeah. So where are you going to really spike here? What differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arise and Langsmith, you've got to do first class. You've got to do general purpose observability with first class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much like scarier company to compete with than the AI specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general purpose observability platform with first class support for AI. And that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done. If you search Datadog logging Python. And you just try as a like a non-observability expert to get something up and running with Datadog and Python. It's not trivial, right? That's something Sentry have done amazingly well. 
But like there's enormous space in most of observability to do DX better.Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, your MIT license, you don't have any rolling license like Sentry has where you can only use an open source, like the one year old version of it. Was that a hard decision?Samuel [00:50:41]: So to be clear, LogFire is co-sourced. So Pydantic and Pydantic AI are MIT licensed and like properly open source. And then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing and the like weird pushback the community gives when they take something that's closed source and make it source available just meant that we just avoided that whole subject matter. I think the other way to look at it is like in terms of either headcount or revenue or dollars in the bank. The amount of open source we do as a company is we've got to be open source. We're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is like openly for profit, right? As in we're not claiming otherwise. We're not sort of trying to walk a line if it's open source. But really, we want to make it hard to deploy. So you probably want to pay us. We're trying to be straight. That it's to pay for. We could change that at some point in the future, but it's not an immediate plan.Alessio [00:51:48]: All right. So the first one I saw this new I don't know if it's like a product you're building the Pydantic that run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreter for lamps. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah. 
What's the Pydantic.run story?

Samuel [00:52:09]: So Pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or, like, what the spend is. The other thing we wanted to b
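Samuel's claim that AI-specific observability is only useful inside a general-purpose trace can be pictured as a single trace tree holding ordinary app spans and LLM-call spans side by side. The sketch below is illustrative, stdlib-only Python; the `Tracer` class, span names, and attributes are invented for this example and are not LogFire's actual API:

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    attributes: dict
    children: list = field(default_factory=list)
    duration_ms: float = 0.0

class Tracer:
    """Minimal in-memory tracer: spans nest via a stack of open spans."""
    def __init__(self):
        self.roots = []
        self._stack = []

    @contextmanager
    def span(self, name, **attributes):
        s = Span(name, attributes)
        # Attach to the currently open span, or start a new root trace.
        (self._stack[-1].children if self._stack else self.roots).append(s)
        self._stack.append(s)
        start = time.perf_counter()
        try:
            yield s
        finally:
            s.duration_ms = (time.perf_counter() - start) * 1000
            self._stack.pop()

tracer = Tracer()

with tracer.span("POST /chat", route="/chat"):
    with tracer.span("db.query", table="users"):
        pass  # ordinary application work
    with tracer.span("llm.call", model="gpt-4o", prompt_tokens=120):
        pass  # the AI-specific span, nested in the same trace

root = tracer.roots[0]
print(root.name, [c.name for c in root.children])
# POST /chat ['db.query', 'llm.call']
```

A real tool would export these spans (for example via OpenTelemetry, which LogFire is built on) rather than keep them in memory; the point is only that the `llm.call` span lives in the same tree as the database query and the HTTP handler, so AI calls are never observed in isolation from the rest of the app.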
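The "limit per day of what you can spend" idea Samuel mentions for the hosted model proxy is simple to sketch. The class and numbers below are hypothetical (Pydantic.run's actual mechanism isn't described beyond that sentence); the proxy just resets a counter each day and refuses calls once the cap is reached:

```python
import datetime

class DailySpendCap:
    """Track cumulative request cost and enforce a per-day spending limit."""
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self._day = None
        self._spent = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Record a request's cost; return False if it would exceed today's cap."""
        today = datetime.date.today()
        if today != self._day:  # first request of a new day: reset the counter
            self._day, self._spent = today, 0.0
        if self._spent + cost_usd > self.cap_usd:
            return False        # reject: the proxy should refuse this model call
        self._spent += cost_usd
        return True

cap = DailySpendCap(cap_usd=1.00)
allowed = [cap.charge(0.40), cap.charge(0.40), cap.charge(0.40)]
print(allowed)  # [True, True, False] -- the third call would exceed $1/day
```

A production version would persist the counter (per user or per API key) rather than keep it in process memory, but the gate logic is the same.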

Love Music More (with Scoobert Doobert)
Imposter Syndrome and Your Musical Purpose

Love Music More (with Scoobert Doobert)

Play Episode Listen Later Jan 28, 2025 17:42


What's success to you? How does your own psyche hold you back? Let's dive deep into musical meaning, and tackle the hardest question of them all: Does the world really need more music? For 30% off your first year with DistroKid to share your music with the world click DistroKid.com/vip/lovemusicmore Want to hear my music? For all things links visit ScoobertDoobert.pizza Subscribe to this pod's blog on Substack to receive deeper dives on the regular

Packet Pushers - Full Podcast Feed
NB510: CISA Says US Tech Inherently Insecure; AI Now Included in Google Workspace

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Jan 20, 2025 47:46


Take a Network Break! Guest co-host John Burke joins Drew Conry-Murray for this week’s analysis of tech news. They discuss a string of serious vulnerabilities in Wavlink Wi-Fi routers, Fortinet taking a one-two security punch, and CISA director Jen Easterly calling out US hardware and software companies for being “inherently insecure.” Microsoft and Google put...

The Untitled Gaming Podcast
I Have Inherently Not Trusted You Since the Day We Met

The Untitled Gaming Podcast

Play Episode Listen Later Jan 6, 2025 107:58


Pat, Zach, Rick, and Chance announce the nominees for the Tuggys 2024 and commence the draft for their Fantasy Critic League for 2025!

Alex Wagner Tonight
Greed and gray areas: Cryptocurrency seems inherently appealing to Trump; a wildcard in his new term

Alex Wagner Tonight

Play Episode Listen Later Dec 25, 2024 41:02


Plus, Trump's anti-immigration plans face resource challenge at the state level

Be It Till You See It
458. Own Your Journey to Better Health

Be It Till You See It

Play Episode Listen Later Dec 12, 2024 23:02


In this special health-focused round-up, Lesley and Brad revisit conversations with four inspiring guests: Uma Naralkar, Jenn Pike, Celeste Holbrook, and Jenny Swisher. From understanding your menstrual cycle and hormones to embracing pleasure and advocating for yourself, this episode delivers practical insights to help you live your healthiest life.

If you have any questions about this episode or want to get some of the resources we mentioned, head over to LesleyLogan.co/podcast. If you have any comments or questions about the Be It pod shoot us a message at beit@lesleylogan.co. And as always, if you're enjoying the show please share it with someone who you think would enjoy it as well. It is your continued support that will help us continue to help others. Thank you so much! Never miss another show by subscribing at LesleyLogan.co/subscribe.

In this episode you will learn about:
The connection between nutrition, movement, lifestyle, and mindset for optimal health.
Understanding the four phases of the menstrual cycle and how they affect daily life.
Shifting perspectives on intimacy to find pleasure and reduce stigma.
How to advocate for your health by asking the right questions and knowing your body.

Episode References/Links:
Ep. 25 ft. Uma Naralkar - https://beitpod.com/ep25
Uma's Website: https://omwithatwist.com/
Ep. 55 ft. Jenn Pike - https://beitpod.com/ep55
The Hormone Project: https://jennpike.com/thehormoneproject
Ep. 85 ft. Celeste Holbrook - https://beitpod.com/ep85
Website: https://www.drcelesteholbrook.com/
Ep. 139 ft. Jenny Swisher - https://beitpod.com/139
SYNC Your Life Podcast: https://jennyswisher.com/podcast/

If you enjoyed this episode, make sure and give us a five star rating and leave us a review on iTunes, Podcast Addict, Podchaser or Castbox. DEALS! DEALS! DEALS!
DEALS! Check out all our Preferred Vendors & Special Deals from Clair Sparrow, Sensate, Lyfefuel, BeeKeeper's Naturals, Sauna Space, HigherDose, AG1 and ToeSox.
Be in the know with all the workshops at OPC
Be It Till You See It Podcast Survey
Be a part of Lesley's Pilates Mentorship
FREE Ditching Busy Webinar

Resources:
Watch the Be It Till You See It podcast on YouTube!
Lesley Logan website
Be It Till You See It Podcast
Online Pilates Classes by Lesley Logan
Online Pilates Classes by Lesley Logan on YouTube
Profitable Pilates

Follow Us on Social Media:
Instagram
The Be It Till You See It Podcast YouTube channel
Facebook
LinkedIn
The OPC YouTube Channel

Episode Transcript:
Lesley Logan 0:00  Welcome to the Be It Till You See It podcast where we talk about taking messy action, knowing that perfect is boring. I'm Lesley Logan, Pilates instructor and fitness business coach. I've trained thousands of people around the world and the number one thing I see stopping people from achieving anything is self-doubt. My friends, action brings clarity and it's the antidote to fear. Each week, my guest will bring bold, executable, intrinsic and targeted steps that you can use to put yourself first and Be It Till You See It. It's a practice, not a perfect. Let's get started. Lesley Logan 0:42  Welcome back to Be It Till You See It. You guys, we are continuing our, what do you call it? A round up, babe? You call it collection? Brad Crowell 0:49  Yeah, we call it the December round-up. Lesley Logan 0:51  Yeah. It's basically like a reflection review. And this particular episode has four of our favorite guests that have to do with health. We have these, have had multiple episodes that have to do with health. Brad Crowell 1:03  Many, many, many. Lesley Logan 1:04  Many. And so we are going to span the wide-ranging topic of health, which can be a lot of things. We've got the tripod of health. We've got hormones in this one. We're gonna have sex in this one.
Brad Crowell 1:13  Yeah, food is as part of the tripod. Lesley Logan 1:15  Yes, yes. We got lots of stuff so. Brad Crowell 1:18  Fitness, of course. Lesley Logan 1:19  So if you have been wondering, what health episode should I listen to during this chaotic month of December when most of my podcasts aren't listing anything new? The Be It Pod has given you four awesome ones, and we'll link even the numbers. You can go back and listen to the full interview in our catalog when you're ready.Brad Crowell 1:38  Let's dig in the first episode that we're gonna talk about today, that we're bringing back is episode number 25.Lesley Logan 1:45  Twenty-five.Brad Crowell 1:45  Twenty-five all the way back towards the very beginning.Lesley Logan 1:49  It's like 2022.Brad Crowell 1:51  We had a chance to interview Uma Naralkar, who talks a lot about food and nutrition, and we have two sections of this that we thought were really spectacular. So.Lesley Logan 2:07  Yeah, so first up, I really, I thought it was really cool and vulnerable that she talked about when she moved to the US and what the food was like, and how that challenged her and got her interested in what she has become known for, and being a nutritionist and things like that. So I'm really excited for us to hear her story of moving to the US.Brad Crowell 2:27  Yeah, so, and also she talked about this, her process of how she works with her clients, and she created something called the Tripod of Optimal Health. And I'm not going to tell you what it is, because you're going to hear it just after this. So tune in.Uma Naralkar 2:41  The biggest difference for me was the food, right? So in India, we have a lot of health. Inherently, there's health and cooks and food is never something that I had to even think about. So that's the reason why it was always so well -balanced and healthy, because it was like home-cooked Indian food and all the beautiful dals and vegetables, and it was primarily vegetarian. 
We ate meat on the weekends as like a treat. Dessert would always be homemade, something made in ghee, like, very, very like, decently portioned. And I came to America where everything was supersized, right? And I was a student. And, I mean, I was, first, it was shocking, then it was exciting, and then it was kind of like, I didn't have a choice. I was hungry, and I had to eat, and I was a student, so it was like, McDonald's and all the other and it was truly exciting, I have to say, in the beginning, because I was like, what is going on? Why are these people eating so much? But it was a huge adjustment. And you know, when you're asking me about how I, you know, the thing that I had to kind of like, get over and just be like, I'm going to embody this. I am. You know, the book Atomic Habits. Have you read that?Lesley Logan 4:01  Yes. Uma Naralkar 4:01  James Clear. He talks about shifting your identity to who you want to be. Do you remember that part of the book? What he's saying is that if you, you know you, if you want something, if you truly believe that you want something, you need to believe that you have it, and you need to shift your identity in the sense that you know I am a confident 20 year old girl in the United States, where I don't know shit about this country and I truly don't understand, have the words that they use. And at 20, was I clear about what I'm saying now? No, not at all, because it was nerve-racking. And the reason why I'm bringing it up is because the biggest obstacle, apart from the food, my biggest challenge, was speaking, or just speaking out in class, or just raising my hand, or just standing in front of an audience and saying, like anything, it was something that I didn't grow up with. In India, you never get an opportunity to speak anything. Everything is crowded and they don't have time for anybody speaking. 
So I think it was a true challenge, and it sounds so, it doesn't sound like a big deal because my children, both of them, grew up here. They're Californians, and, you know, I can see how speaking is so inherent, right? Like you're in a group setting, or if you're in a big crowd, just saying what you feel is pretty standard. First off, yes, to therapy. I think all kinds of therapy is, I appreciate all of it. And I think people, it's still, it's very interesting. Still, people have a lot of resistance to see a therapist or to, you know, just to open up and talk to someone else about what's going on. So yes, to therapy, but more than that, yeah, nutrition, what you're eating, is going to be foundational movement and how active you are and what you're doing there, as well as your stress levels, your sleep, all that, I think ties in. It is pretty holistic. I don't think it's one or the other. And I have a lot of really fit clients who are like, I mean, as fit as they can be, who are miserable, who are so unhappy, who are, who are they like, constantly looking for ways to, you know, get to the next level. And, quite frankly, they don't even know what the next level is. So I think it's, everyone's very different. And for one person, maybe it's like, you know, your nutrition is seriously lacking, and we need to make some switches so that you start, like, having a better relationship with food. But for someone else, it might just be something as simple as, you know, like doing yoga or getting out in nature, someone who's like, stuck in front of their computer all day and doesn't even like, realize it like, for example, like the best, I think the best example I can give is like being in a casino, right? Like, in inside a casino, like, how clever is that? It's like the lights are always the same, it's always bright, it's always entertaining. There's enough blue light to kick the melatonin out, so you're always in that cortisol rush. 
They want that because they want you to play. But that's how we are pretty, pretty much living our life like, like we're in a casino, right? Because we're indoors, we are in front of the computer, then we are watching something, and then we expect to have a good night's sleep. So I feel like it's, it's just, it all ties in, and it's not one thing I call it, I call it the Tripod, actually, of Optimal Health, which is what you're eating, what your movement, your life activity, your lifestyle, and then your mental health, your mindset, right? They all tie in. And then your health is sort of like sitting on that tripod. So if one of those legs is like wobbly, then the whole thing is going to collapse.Lesley Logan 7:59  So that was Episode 25 and we would love to know, we would love for you to share with us what part of the Tripod of Health that you're going to work on as we come into 2025 and no, it won't be a New Year's resolution. It will just be a thing that you're doing. Now we have Episode 55, so we're going way back in the catalog today's episode, and it's how are hormones dictating your life? And one of the things. Brad Crowell 8:21  With? Lesley Logan 8:21  With Jenn Pike. Brad Crowell 8:22  With Jenn Pike. Lesley Logan 8:23  Yeah, one of the things that we talk about that I'm really excited for you to talk, like, here is that the four different phases in your cycle, and this is really, really important, because I have a lot of people ask me a lot of questions about perimenopause. I want more episodes on this. But if you are not perimenopausal yet, or maybe you still have your cycle, but you're kind of, you know, that's what perimenopause is. You got to know what parts of the cycle you're in, because it affects how you work out. It affects what you should be eating. I had, there's some dream guests on my list that I want to have in future episodes, but we need to know these parts for those guests to make any sense. 
So like, dive into that first part with the different phases of your cycle, even if you think you know them.Brad Crowell 9:00  Yeah, the second part of this episode, though, I thought was really beneficial, was talking about educating both men and women on this. So I remember listening to this the first time, you know, a couple years ago, and I was taking notes because I knew none of this. I don't know how (inaudible)Lesley Logan 9:17  And you have a mom and a sister.Brad Crowell 9:18  And I went through high school and college, and never learned any of this stuff. Lesley Logan 9:22  And you had a wife before me. Brad Crowell 9:23  And I did have a wife before you, still didn't know any of this stuff. So, so the, she, Jenn talks about stigmatism, shame and embarrassment and the value of educating her son. I think she has sons. I can't remember. Son. She's one son. She's talking about how he knows just as much about the female body as her daughter and the value like, they, as a couple, decided to educate their son on purpose to avoid stigmatism and shame and embarrassment. So I thought that was really great.Lesley Logan 9:57  I love it. I love her. I love her for that already.Brad Crowell 10:00  Yeah. It's a win. There you go.Jenn Pike 10:03  So we go through four different phases in our cycle. So our cycle and our period are not the same thing. Your cycle is from day one of your bleed all the way through until you have your next bleed. That's a full cycle. Most women, it's going to range anywhere from 23 to 35 days. And in that cycle you have four different phases. So you have the phase that you bleed in, which is your actual period, when you come out of your period, you actually have what's referred to as the first phase, which is the follicular phase. And this is where your body, your hormones and estrogen and testosterone are starting to climb. Your uterine lining is starting to thicken again. This is typically where we actually feel more connected to our body. 
We do well with the estrogen surge. We feel clear, more focused, energized, happier. We're like gung-ho. We want to create new projects. We're super, you know, on point. Leading into ovulation, ovulation comes, it tends to be much more of a you know, I want to put myself out there. Confidence can peak a little higher, sex drive, typically. And the way I'm painting this picture, this isn't going to be for every woman. I'm just going to kind of give one example, and then I'll apex it on the other side. Once ovulation happens, you've now had this dip in estrogen and testosterone, and your luteinizing hormones increase as long as you've ovulated, your progesterone also increases. And that actually is a much more calming hormone. It helps us to integrate. It brings us into a place that is much more reflective, in that luteal phase, which are the couple weeks coming into your period. It's a time to really look at like what is working and what is not. It's time to finish projects. It's a time when you can feel really connected to your body, and then this is one of the times where you'll also know if things are out of balance, if that like seven to 10 day period of time before you bleed again, your mood's all over the place, you're emotional, your sleep is off, your gut is off, you're spotting. Your breasts are tender, like you're just like, oh my God, here we go again. My skin's breaking out. All the things are happening. That's a really strong indication that something is out of balance in your system. And it could be that you didn't ovulate, that you have lower progesterone, you have too much estrogen, it could be that all the hormones are sitting flat. It could be that testosterone and DHA is too high. So this is why testing and testing at the appropriate time of the month is such a valuable tool for women, because when you see it and someone's explaining it to you you're like, oh my gosh, I feel like you just described me to a tee. 
Yeah.Lesley Logan 12:34  No, I'm like, I'm like, sitting here, and I'm like, taking it all in, and I, like that whole part where it's like, that 7, 10, days before you just said, like, this is what you're gonna feel like, but this is also you could feel a look where things are out of whack. And I think we're taught, or at least I felt like, I felt like that's just the normal thing, like things are out of whack. And, yeah, what it sounds like, is it, and I did experience this, I did seed cycling for a long time because I felt like my swings were too big. And I was like, y'all, my boobs are a little bigger because of COVID and age, but they were very small back then. And I was like, they are too small to be this tender. Like this is not fun for me. And so I heard about seed cycling, and I did it consistently for three years. Not only did I literally make myself like clockwork with my cycle, I stopped breaking out. I don't have tenderness, and I've weaned off of it, and it hasn't been an issue, but I did notice that difference in that time before, it was almost like my period was a surprise each time, because I was like, oh, I didn't even know it's coming. (inaudible) Was feeling so good. That's so fascinating. Okay, so thank you for walking us through that. I think that it's helpful to know, like, just when you have the information, like you said, you just can expect things a little different, and you can know more about how you should be feeling, as opposed to like. Why do I feel like this versus yesterday? I felt better.Jenn Pike 13:55  I just want to say something quick on that before you go and you're talking about, you know, doing the recap with your husband. So I have two kids, a girl and a boy. My son knows just as much about the female body and cycles as my daughter, and that's on purpose, because part of the stigmatism and the shame and embarrassment ends when we stop excluding men and boys from the conversation as well. 
It, you know, it's like there's going to come a time in a boy's life where he's gonna, you know, you're either gonna be around a woman or your girlfriend or whatever it is, and you need to be able to understand what she's going through. And as I always say to my son, like bud, you wouldn't even be here if it weren't for our bodies doing this. So you should be darn grateful. Brad Crowell 14:33  All right, so that was Episode 55 with Jenn Pike. Hope you found it super helpful and educational. Lesley Logan 14:40  Her entire episode is so, has so, it's chock-full of information. You can, you could do, if you just used her episode to figure out what your health changes are for 2025 you would have enough to work on.Brad Crowell 14:54  Yeah, she's got a lot going on, and it's amazing. All right, next up we got Episode 85 let's talk about sex baby with Celeste Holbrook.Lesley Logan 15:02  I'm obsessed with her. Just so you know, I'm actually having a call with her tomorrow morning (inaudible) on the day that I, because I just love her. Brad Crowell 15:09  Well, she basically talked about, it's kind of a tack on to what we were just talking about with with Jenn Pike, about removing shame and embarrassment. This is about destigmatizing sex and the language around sex. And one thing she said that I thought was amazing was she pets her dog because she wants to feel calm. She rides her bike because she wants to, well, feel free. She has sex because she wants to feel pleasure, right? And it's like, we make it this taboo, weird, awkward thing, and she's like, but it shouldn't be that, you know? And she talks, she goes really in-depth about how, you know, how you might find pleasure in sex.Lesley Logan 15:48  Just so you know, I loved her so much we had her on the podcast twice. And we actually talked about bodies and all that stuff. So she's just fabulous. 
And especially for any of you who are raised in the purity culture, this episode is extremely freeing and informative.Brad Crowell 16:04  Yeah, yeah. So enjoy.Celeste Holbrook 16:06  I always think about what we want to feel in sex. Because everything that we do behaviorally, we do it because we want to feel something. So, like, I pet my dog because I want to feel calm. I ride my bike because I want to feel free. I do certain sexual activities because I want to feel pleasure, connection, erotic, intimate, loving, whatever it is that I want to feel in sex. And so start with the feeling. So, write down my dream sexual experience would feel like, and then write those words down, and then you can work your way backwards, like, okay, if I want to feel confident, what do I need to do behaviorally in order to feel confident? Maybe I need to learn more about my body. Maybe I need to establish a better relationship with my vulva and, like, clitoris. Maybe I need to have a masturbation practice. Maybe I need to read some more books, right? So start with what you want to feel and then work your way backwards. I want to feel connected. Okay, maybe I need to work on communication styles with my partner. Maybe I need to learn how to ask more for what I want, and maybe I don't know what I want. So maybe I need to take one more step back and figure out what I like and what I don't like, and do some more creative exploration in sex, you know. So I like to start out with that list of what we want to feel, because then you can build behaviors behind that.Lesley Logan 17:23  All right. So that was Celeste Holbrook's Episode 85 at the Be It Pod. If you want to go listen to the whole thing.Lesley Logan 17:31  Up next, we actually have Episode 139, Cycle Thinking Fitness & Balancing Your Hormones with Jenny Swisher. This is really, so again, we're having hormones, this is a totally different thing. 
So, we're actually going to be talking more about advocating for yourself, and ladies, but also gents listening, we always have a few good men, we often have been raised that like the doctors know best but really you know your body best and I think that this episode is one of those reminders that you can be your own best doctor and when you know your body best you can actually advocate for yourself and get the best health for yourself but especially for your hormones. And Jenny Swisher is really, I mean, like, what she's been doing since being on the podcast, really helping people understand their hormones, has been pretty epic.Brad Crowell 18:19  I just want to say that while we don't know medicine, because we're not doctors and didn't dedicate ourselves to study that there generally is logic behind the medicine. So if you're being given advice that is completely illogical or confusing to you before you just say yeah, let's do it, ask them to explain that further and understand it more. And it's okay to say that doesn't make sense to me.Lesley Logan 18:45  We didn't put the clip here. But if you want more, if you're inspired to be an advocate for yourself, definitely listen to Lindsay Miller's episode, Lindsay Moore's episode on, on being an advocate. And I do think, Brad, you make a, bring up a good point, like there is logic to it, but also they have to listen to you, like, they're not, at least in the States, they're not allowed to leave the room until you're done and you say, I have no more questions. And it is a practice. It's called a medical practice, and so they're practicing just like you'd have a Pilates practice, and so it's really, you should not feel ashamed or embarrassed to be like, hmm, I think I'm going to get another opinion on that.Brad Crowell 19:25  Yeah, yeah. That's okay. Lesley Logan 19:27  Yeah. So here is Jenny Swisher to inspire you to be your own best doctor.Jenny Swisher 19:31  I think you have to be your own best doctor. 
And I think, but you have to go into the appointment knowing that, I mean, I don't know about anybody listening, but I know for me, especially after I feel like I'm an expert in sitting in doctor's offices after years of doing it, I felt like I got to the point where they were just going to diagnose or give me whatever I was leading them to. You know what I mean, like you're leading the doctor to the eventual answer. And so the more hormone literate you can become about your own body and your own cycle, for example, and in the case of hormone health, the easier it's going to be for the doctor to make those connections or to really, truly help you. I find that most people don't have the awareness that they need, the self-awareness and the body awareness of their own body to be able to go and get a proper answer from a doctor. And so it starts with that. But then when you are in that situation, when you go into it knowing like, this is how my body is supposed to operate. This is how it's supposed to feel. These are the things that I've learned about hormone health. And I'm low in energy, or I'm this, or I'm that, then you can go into the appointment and say, hey, I think this is how I'm supposed to be feeling. But instead, I feel this way. What are some things that we can look into?Lesley Logan 20:35  All right. That was Episode 139, with Jenny Swisher, so you can go and listen to her full episode, if you'd like here in the Be It catalog. Again, this is a round-up of just a few of our favorite health episodes, and we hope that you're enjoying getting just some reminders of some of the epic guests we've had, or maybe we're piquing your interest in a topic that you're wanting to go back and learn more about. All of our guests are pretty amazing. And I can't believe that was like almost 300 episodes ago. Some of these are like 400 episodes ago. So, but also, like, I still take these tips. I still remember these people's tips in my daily life.
I reflect back upon them, and so they really meant a lot to me.Lesley Logan 21:18  I'm Lesley Logan. Brad Crowell 21:19  And I'm Brad Crowell.Lesley Logan 21:20  Thank you so much for being a listener of the Be It Till You See It Podcast. We hope that you would love this. Send one of these episodes to a friend who needs it. Especially right now, you know, sometimes we think we have to do holiday gifts. And really, you can actually be like, here's someone to listen to on your long drive to go see your family in a chaotic time, you know, like, these can be the thing that keeps people warm at night. Really, you can, like, listen, they can curl up and listen to a good podcast. And so, until next time.Brad Crowell 21:46  Bye for now.Lesley Logan 21:47  No. Until next time, Be It Till You See It. Brad Crowell 21:52  Oh. Lesley Logan 21:53  And then. Brad Crowell 21:53  So, until next time. Lesley Logan 21:55  Be It Till You See It.Brad Crowell 21:58  Bye for now.Lesley Logan 22:01  That's all I got for this episode of the Be It Till You See It Podcast. One thing that would help both myself and future listeners is for you to rate the show and leave a review and follow or subscribe for free wherever you listen to your podcast. Also, make sure to introduce yourself over at the Be It Pod on Instagram. I would love to know more about you. Share this episode with whoever you think needs to hear it. Help us and others Be It Till You See It. Have an awesome day. Be It Till You See It is a production of The Bloom Podcast Network. 
If you want to leave us a message or a question that we might read on another episode, you can text us at +1-310-905-5534 or send a DM on Instagram @BeItPod.Brad Crowell 22:43  It's written, filmed, and recorded by your host, Lesley Logan, and me, Brad Crowell.Lesley Logan 22:48  It is transcribed, produced and edited by the epic team at Disenyo.co.Brad Crowell 22:53  Our theme music is by Ali at Apex Production Music and our branding by designer and artist, Gianfranco Cioffi.Lesley Logan 23:00  Special thanks to Melissa Solomon for creating our visuals.Brad Crowell 23:03  Also to Angelina Herico for adding all of our content to our website. And finally to Meridith Root for keeping us all on point and on time.
Support this podcast at https://redcircle.com/be-it-till-you-see-it/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Rainer on Leadership
Why Churches Are Not Inherently Safe Places

Rainer on Leadership

Play Episode Listen Later Dec 3, 2024 21:22


It's understandable why people would think, “This place is full of Christians. It must be one of the safest places in my community.” But churches are not inherently safe. Quite the opposite. Churches have a target on them! You are a spiritual target for the powers of darkness. Sam Rainer and Matt McCraw discuss some key issues involving church safety.

Avoiding Babylon
Is Protestantism Inherently Racist?

Avoiding Babylon

Play Episode Listen Later Nov 23, 2024 82:32 Transcription Available


Want to reach out to us? Want to leave a comment or review? Want to give us a suggestion or berate Anthony? Send us a text by clicking this link! Embark on a journey with us as we unravel the rich tapestry of Catholicism, interwoven with personal anecdotes and profound reflections. Picture yourself on a high-speed bullet train from Florence to Rome, surrounded by laughter and camaraderie, as we share the excitement of our upcoming Italian escapade. Our discussion promises to enlighten, as we explore the vibrant diversity within the Catholic Church and compare it to the seemingly homogeneous landscape of American mega churches. Moving from light-hearted travel tales to thought-provoking issues, we tackle the serious topic of the Catholic Church's response, or lack thereof, to cultural challenges. Reflect on the story of a desecrated Virgin Mary statue in Switzerland and the remarkable resilience of the Catholic community. We navigate the complexities of church leadership and historical precedents, pondering the future of Catholicism. Enrich your understanding with insights into art, architecture, and the enduring influence of literary figures like G.K. Chesterton. As we prepare for our pilgrimage, we invite you to reflect on the deeper meanings of faith, unity, and tradition in a rapidly changing world. Support the show https://www.avoidingbabylon.com Merchandise: https://shop.avoidingbabylon.com Locals Community: https://avoidingbabylon.locals.com RSS Feed for Podcast Apps: https://feeds.buzzsprout.com/1987412.rss SpiritusTV: https://spiritustv.com/@avoidingbabylon Odysee: https://odysee.com/@AvoidingBabylon

The Dana & Parks Podcast
Is it inherently racist? Hour 3 10/3/2024

The Dana & Parks Podcast

Play Episode Listen Later Oct 3, 2024 32:22


Is it inherently racist? Hour 3 10/3/2024. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want!

Jesse Lee Peterson Radio Show
Jehovah Jireh, my provider!

Jesse Lee Peterson Radio Show

Play Episode Listen Later Sep 25, 2024 180:00


JLP Wed 9-25-24 Bill Lockwood; black callers; great advice… Hr 1 GUEST, Bill Lockwood: Communism. Amnesty. Assassination. Kamala, Neocons. Christians, soft Mike Pence. // Hr 2 Pro-black callers: blame gov't! Supers… JLP sings. Calls: Canada. Little Malcolm X. How to slow down? // Hr 3 Manhood Hour: Israel-Hezbollah war. Calls… Distraught wife. Thoughts. School "scream boxes." // Biblical Question: Why is your life one collision after another? GUEST INFO: Check out Patriotic Pulpit and Bible Studies with Bill Lockwood. Support via https://americanlibertywithbilllockwood.com Today's show sponsored by SEVENWOOD FINANCIAL SERVICES — Your experts in insuring retirement income — Schedule free consultation https://www.sevenwoodfinancialservices.com/eric.html TIMESTAMPS (0:00:00) HOUR 1 (0:04:50) Bill Lockwood: End game. Amnesty; illegal population. (0:12:00) Second assassination attempt. Useful idiots. (0:19:45) Kamala Harris, puppet. Will Trump win? (0:25:00) Democrats, Neocons: Socialists BREAK (0:32:05) …Christians not voting. Little guy Mike Pence. God in control? (0:38:10) Israel-Hezbollah, Iran. UN done any good? Air-headed Reps (0:44:25) MAURICE, NY, 1st: Clown! Why Trump? Why Kamala? (0:51:23) MAURICE: Love Trump? Personal attack! Stoop to his level. Live your own life? (0:55:00) NEWS: Inflated eggs. Tel Aviv. Storm Helene. Secret Service. (1:00:55) HOUR 2: BQ for the lost. (1:03:40) ARMANTE, 27, NV, 1st: Inherently harmonious. Black Panthers (1:12:00) ARMANTE: Asperger's? We gave you Obama. I'm black. (1:19:19) JLP: Catch yourself when you're about to blame. (1:20:34) JOHN, KY: Agree, super articulate. Gov't keeps us down. C—n character. (1:23:10) Supers: BQ, Jesus, Bible Thumper, tongues (1:32:19) Supers: Guardian angel? JLP sings "Jehovah Jireh". Read guests' books? (1:40:10) ELI, Canada… sense, Haitian, thank you, nice call! (1:46:35) WILLIAM, CA: Armante, little Malcolm X; black parents, Panthers (1:51:25) JAY, PA, 1st: Fast talker, slow learner. 
How to slow down? Forgave mama. James 1: 19 (1:55:00) NEWS: Ukraine aid. Drug prices. Brett Favre. 988 hotline. (2:00:55) HOUR 3 (2:03:45) Manhood Hour: Kamala chirp (2:05:45) Israel-Hezbollah war, beautiful rugs, Biden (2:11:00) War not the answer. JLP visited Israel. Reveals secrets? (2:13:33) DEBORAH, IL: Victims, accountability, black slums; voting (2:22:00) ASHLEY, CA: wants husband to tell her. Extremes. Relax, let life happen. (2:31:55) Announcements (2:33:55) ASHLEY: Calm down (2:36:03) AARON, MD: Identified with thoughts, emotions unnecessary (2:41:45) School "scream boxes" (2:43:22) Man and woman fight: Anger is evil (2:45:20) CHARLES, MI: Punchie! Church. Let people live their lives! (2:46:57) JENNIFER, CO, 1st, mother of 8. Cambodian Hebrew husband. Home school. (2:48:55) Supers: Not worried. BQ. Blame. Gates of Hell. Blacks. (2:56:40) Closing