What happens when people “fear the Lord” in a healthy way? Explore how this can lead to wisdom and relationship with God!

Receive
The Bible's wisdom books remind us again and again that the fear of the Lord is where real wisdom begins—and that's the kind of wisdom we need for everyday life. From big decisions to ordinary moments, we all need God's perspective to help us navigate what's in front of us. Proverbs offers that wisdom with a hopeful, “glass half-full” lens, giving practical guidance for daily choices. Ecclesiastes meets us in the tension, naming life's frustrations and uncertainties with a more “glass half-empty” honesty. And Job takes us deeper, showing that a healthy fear of the Lord isn't about being afraid—it's about being drawn into a real, trusting relationship with God as we learn to revere all of who He is, even when life doesn't make sense.

Reflect
Read the verses connected with this episode below. As you reflect on the Scripture, what stands out to you? Proverbs 1:5; Proverbs 24:5; Ecclesiastes 1:18; Proverbs 24:3-4; Ecclesiastes 2:20-21; Ecclesiastes 9:11; Ecclesiastes 12:13-14; Job; Psalm 22:22-24; Psalm 19:7-10; Psalm 25:8-14

Would you say you identify more with the “glass half-full” approach to wisdom in Proverbs, or the “glass half-empty” approach of Ecclesiastes?
Job's story demonstrates that fearing God is relational and not transactional (see Job 1:9). Why is it important to fear God for who He is and not for what He does? How does this type of reverent fear strengthen your relationship with God?
God assured Job of His power, presence, and unlimited perspective. How did this help Job have a healthy fear of God (see Job 42:1-6)? How can this help you have a healthy fear of God?
Fear of the Lord invites you into a deeper relationship with Him (see Psalm 25:14). How can that help you live wisely today?

Respond (Use this prayer to start a conversation with God)
“Jesus, help me have a healthy fear of God that allows me to live with wisdom throughout my life. Enable me to revere God for who He is and not for what He does so that I may deepen my relationship with You. Open my eyes to see more clearly the wonder and awe of who You are.”

Discover more about the topics in this episode with these recommended resources.
Mentioned in this episode: Ecclesiastes | Week 1; Ecclesiastes | Week 2; Our Daily Bread Mobile App
Listen: Proverbial Wisdom; A Life of Wisdom and the Proverbs 31 Woman
Read: Reverent Fear; Understanding the Bible: The Wisdom Books
Watch: Mount Arbel - Sermon on the Mount and the Great Commission
This chapter brings you into a lively conversation about golf and the unique experiences that come with playing in different regions. We start with a discussion of the contrasting weather conditions between Arizona and the Low Country, highlighting the stark differences in humidity. The conversation then shifts to memorable golfing experiences, particularly at the distinctive Calusa Pines in Florida, where the terrain mimics Pinehurst with its impressive elevation changes and lush pine settings. We share anecdotes from the U.S. Open at Pinehurst, emphasizing the challenging nature of Donald Ross-designed courses with their notoriously difficult greens. Additionally, we spotlight young golfer Ava Bunker's recent performance at Pinehurst, navigating the notorious course with notable success despite its demanding conditions. Throughout, we touch on the evolution of golf course maintenance, particularly how advancements have made greens faster and more challenging, posing a unique test to amateur and professional golfers alike.

(12:49) Golf Wind and Stroke Strategy
This chapter focuses on the intricacies of improving golf performance and understanding how external factors like wind can affect play. I share insights from a day filled with various golf training sessions, including a putting school, a short game school, an AimPoint class, and a power school. During the power school, we measure club head speed and ball speed, emphasizing the importance of hitting the ball in the center of the clubface for maximum distance. I explain a simple technique using alignment sticks or tees to ensure a straight and effective swing. Additionally, I discuss the impact of wind on putting, illustrating how wind direction and strength can alter the path of a putt, especially on downhill or uphill greens. By setting up practice scenarios, I demonstrate how players can adjust their strategies to account for these elements, enhancing their overall game understanding and performance.

(23:14) Improving Golf Swing Impact With Drill
This chapter focuses on techniques for putting and driving in windy conditions, emphasizing the importance of squaring the club face to reduce spin and achieve faster ball speeds. I share creative strategies to deal with wind, such as using playing partners to block it, and highlight a scoring clinic where participants practiced driving into the wind. To help golfers square the club face, I describe a unique drill using a car wash sponge to provide auditory feedback, with the impact sound indicating how square the hit was. By wetting the sponge, golfers experience additional feedback through resistance, enhancing their feel of powering through the swing. While the primary focus is on using drivers, the methods can be adapted slightly for irons. This engaging approach offers practical tips for improving performance in challenging conditions.

(29:40) Strategies for Playing in the Wind
This chapter focuses on mastering golf techniques to improve performance, especially when dealing with challenging conditions like wind. We discuss using alignment tools like pennies or tees to enhance swing accuracy and how to adapt swings for better ball control. The key takeaway is the importance of hitting the ball solidly and squarely, as this ensures a higher ball speed and reduced spin, which helps the ball pierce through the wind effectively.
We also touch on common misconceptions about adjusting stance or shot height and emphasize that without a solid strike, such adjustments may lead to weaker shots. By understanding these principles, golfers can maintain control and confidence on the course, even in windy conditions.

(43:19) Putting Strategies for Golf Success
This chapter explores the innovative putting technique I introduced, which involves putting towards the fringe rather than directly at the hole. Initially met with skepticism and laughter, this approach soon gained traction as I explained the reasoning behind it. Now, it's a strategy embraced by my entire group, underscoring the value of listening to our show for fresh golfing insights. Additionally, we touch upon playing in the wind and encourage consistent practice to improve skills.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Hi my loves
S.3 ep.76 - Really? And how many do you cross on autopilot? In this episode we talk about how the brain is wired to run on automatic, and why, over the long run, this increases stress, reactivity, and dissatisfaction. In a direct, concrete style, we explore what happens when we lose contact with direct experience, and how to bring presence back into the simplest gestures of everyday life.
Former Sen. Baertschiger, now not under consideration for Jo Co Commissioner, comments on the odd process of getting a new commissioner or two. Open phones, the D62 quiz, and emails of the day wrap up the week.
The Ringer's Bill Simmons is joined by Chris Ryan, Van Lathan, and Rob Mahoney LIVE at the Wiltern in Los Angeles to hand out quotes from ‘Heat' as awards for this NBA season (1:11). Host: Bill Simmons Guests: Chris Ryan, Van Lathan, and Rob Mahoney Producers: Chia Hao Tat and Eduardo Ocampo #ULTRAinstructor could get you closer to the action! https://michelobultra.com/instructor MICHELOB ULTRA® ULTRA Instructor. No Purchase Necessary. Open to US residents 21 plus. Begins on January 30, 2026 and ends on February 22, 2026. See Official Rules at https://michelobultra.com/rules for free entry, entry deadlines, prizes, and details. The Ringer is committed to responsible gaming. Please visit www.rg-help.com to learn more about the resources and helplines available. Learn more about your ad choices. Visit podcastchoices.com/adchoices
It's time to take a trip down memory lane! Sean and Tommy rank their five favorite Open announcements from the last 13 years. From garage throwdowns to clashes of champions, they take a look back at the most memorable productions from the start of seasons gone by. This episode is presented by Thirdzy. Head to thirdzy.com and use the code "TEF" to save 20% on their Rest and Recovery Collagen and improve how you sleep and recover.
Jacob Frey and Tim Walz plead for more Federal dollars for the “damage” done by ICE. Dana reacts to a trend where grown liberal women are dressing up their American Girl Dolls and posing them in F*** ICE t-shirts. Sir Jim Ratcliffe, the billionaire co-owner of Manchester United, is being slammed for rightfully saying that the UK has been colonised by immigrants. On Theo Von's podcast, RFK Jr. reveals he used to snort cocaine off toilet seats. The polar bears are reportedly thriving, to the dismay of Al Gore. The father of the Tumbler Ridge trans shooter is reportedly distancing himself from his son. AOC gets asked in Munich about taxing the rich when she runs for President. Dana runs a montage of all the words Candace Owens has trouble pronouncing. Nicole Curtis, host of the HGTV show Rehab Addict, says the “N-word” on video and quickly tries to have it deleted. President Trump speaks in Fort Bragg, NC about talks with Iran. The term “pizza” was mentioned 911 times in the new Epstein files dump as the FBI code word for “girl,” most often used next to the term “slicing.” An aide to NYC Mayor Zohran Mamdani is blasted as a ‘whiny bi*ch' after raging over being denied airport lounge access and fancy perks in resurfaced posts. NBC polls reveal over half of Latino-Americans have never even heard of the term “LatinX.”

Thank you for supporting our sponsors that make The Dana Show possible…

Noble Gold
https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.

Relief Factor
https://ReliefFactor.com OR CALL 1-800-4-RELIEF
Try Relief Factor's 3-week Quickstart for just $19.95—tell them Dana sent you and see if you can be next to control your pain!

Patriot Mobile
https://PatriotMobile.com/DANA or call 972-PATRIOT
Switch to Patriot Mobile in minutes—keep your number and phone or upgrade, then take a stand today with promo code DANA for a free month of service!

Humann
https://HumanN.com
Get simple, delicious wellness support when you pick up Humann's Turmeric Chews at Sam's Club next time you're there and see why they're such a fan favorite!

Byrna
https://Byrna.com/Dana
Make 2026 the year you protect your family with solid options—Get the Byrna today.

WebRoot
https://Webroot.com/Dana
Take your cybersecurity seriously! Get 60% off Webroot Total Protection at Webroot.

Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: Youtube | Facebook | Instagram | X | More Info | Website
Dana runs a montage of all the words Candace Owens has trouble pronouncing. Nicole Curtis, host of the HGTV show Rehab Addict, says the “N-word” on video and quickly tries to have it deleted. Meanwhile, Dana reacts to a trend where grown liberal women are dressing up their American Girl Dolls and posing them in F*** ICE t-shirts.

Noble Gold
https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.

Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: Youtube | Facebook | Instagram | X | More Info | Website
Pamela Gagnon joins Todd and Kristin to break down how to coach handstand movements for the CrossFit Open, including what's shown up historically, common no-rep standards, and where athletes lose reps. We'll cover practical progressions and prep strategies coaches should be using now to make sure their athletes are confident, efficient, and Open-ready.--Ready to develop your coaching, advance your career, and level up your credential?
In this week's episode of the Rich Habits Radar, Robert Croak and Austin Hankwitz cover the Dow Jones Industrial Average hitting 50K, January's top-notch jobs numbers, and the rise of the AI agent, i.e., OpenClaw.
Beyond Belief: The Neuroscience of Reality, Perception, and the Biology of Belief

Belief feels like truth. It feels earned. It feels safe. But what if your brain isn't revealing reality… it's predicting it? In this episode of Makes Sense, Dr. JC Doornick explores the neuroscience of belief and perception, revealing how the brain constructs reality inside a “dark box” using prediction, familiarity, and past experience. Drawing from modern brain science, epigenetics, the Biology of Belief, placebo research, and the work of thinkers like Lisa Feldman Barrett and Bruce Lipton, this conversation exposes how beliefs stabilize identity and calm the nervous system—while quietly limiting cognitive flexibility, curiosity, and long-term brain health. If belief is a shortcut the brain uses to reduce uncertainty, what happens when certainty becomes a prison? You'll learn how rigid beliefs shape physiology, influence gene expression, and impact longevity—and how the Interface Response System (IRS) restores choice by transforming belief from identity into hypothesis. This episode isn't about telling you what to believe. It's about helping you pause long enough to ask: Hmmm… what else might be possible?

MAKES SENSE with Dr. JC Doornick
Welcome to Makes Sense with Dr. JC Doornick — the podcast where neuroscience, philosophy, performance, and perception converge. This show is built on a simple but disruptive truth: It's not what you do that determines your results — it's who you are while you're doing it. Each episode explores the psychology of belief, the mechanics of perception, and the power of conscious awareness through Dr. JC's Interface Response System (IRS). When you change the way you look at things, the things you look at begin to change. If you're ready to reclaim authorship of your life, sharpen your awareness, and awaken from autopilot… Welcome to the uprising of the sleepwalking masses.

Resources:
Article on the correlation of openness and curiosity with longevity - https://newsroom.ucla.edu/releases/curiosity-can-help-brain-stay-sharp-as-they-age#:~:text=If%20you're%20curious%20about,even%20other%20forms%20of%20trivia.%E2%80%9D

Follow Dr. JC Doornick and the Makes Sense Academy:
► Makes Sense Substack - https://drjcdoornick.substack.com
► Instagram: /drjcdoornick
► Facebook: /makessensepodcast
► YouTube: /drjcdoornick

MAKES SENSE PODCAST
Welcome to the Makes Sense with Dr. JC Doornick Podcast. This podcast explores topics that expand human consciousness and enhance performance. On the Makes Sense Podcast, we acknowledge that it's who you are that determines how well what you do works, and that perception is subjective and an acquired taste. When you change the way you look at things, the things you look at begin to change. Welcome to the uprising of the sleepwalking masses.

SUBSCRIBE/RATE/REVIEW & SHARE our new podcast.
FOLLOW Podcast: You will find a "Follow" button in the top right. This will enable the podcast software to alert you when a new episode launches each week.
Apple: https://podcasts.apple.com/ca/podcast/makes-sense-with-dr-jc-doornick/id1730954168
Spotify: https://open.spotify.com/show/1WHfKWDDReMtrGFz4kkZs9?si=003780ca147c4aec

Podcast Affiliates:
Kwik Learning: Many people ask me where I get all these topics, which I've been covering for almost 15 years. I have learned to read nearly four times faster and retain information 10 times better with Kwik Learning. Learn how to learn and earn with Jim Kwik.
Get his program at a special discount here: https://jimkwik.com/dragon

OUR SPONSORS:
Makes Sense Academy: A private mastermind and psychologically safe environment full of the mindset and action steps that will help you begin to thrive. https://www.skool.com/makes-sense-academy/about
The Sati Experience: A retreat designed for the married couple that truly loves one another, yet wants to take their love to that higher magical level. Relax, reestablish, and renew your love at the Sati Experience. https://www.satiexperience.com

Chapters
0:00 - Intro
3:10 - Live Audience Attacks as Opportunities to Run the IRS
6:05 - Beyond Belief
7:18 - Live Audience Attack #2 - This guy isn't a doctor of anything
9:32 - When we have a belief, it feels true and earned and worthy of protection
12:31 - The Core Signal of the Day: Belief is a shortcut to reduce uncertainty
13:49 - Why does the brain cling to belief?
16:48 - We don't cling to beliefs because they are true; we cling because they make us feel safe
18:40 - Cognitive Flexibility
23:44 - The Reverse Inference Problem
24:42 - The Biology of Belief and Bruce Lipton
27:45 - The Placebo, Nocebo, and Flow Burglars
30:18 - Belief, Longevity, and Mental Health
32:11 - Run Belief through the IRS (Interface Response System)

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Hello Beautiful, I'm so grateful you're here with me.
This week Naluna stopped by the studio and had a great chat with us about the importance of getting out of your comfort zone and taking this music game seriously!

Naluna can be found at the following links:
Instagram: https://www.instagram.com/david.terner?igsh=cGw3YWpmd296Yndt&utm_source=qr
Website: https://linktr.ee/david.terner?utm_source=linktree_profile_share&ltsid=d64c476b-b3a4-4208-9e01-252cac7ec976

Enjoy their music on Spotify, or you can find them on the new 561 Music Playlist we created of various local artists that we will be continually updating.
On Spotify: https://open.spotify.com/artist/5CzuHqOYFFHRfyLg9k1ETx?si=86644b2fa5e2499e
561 Music Playlist: https://open.spotify.com/playlist/7y2i0AgJTGRMtxMADgZ7AZ?si=Zp77sq_BTue_wWTDouxH2g

561 Music Links:
Facebook: https://www.facebook.com/561musicpodcast
Instagram: https://www.instagram.com/561musicpodcast
Twitter: https://twitter.com/561musicpodcast
YouTube: https://www.youtube.com/c/561musicpodcast

A huge thank you to our sponsors this week.

JUPITER INLET BOAT RENTALS
Jupiter Inlet Boat Rentals is Palm Beach County's Premier Boat Rental Company and Boat Rental Club. As an alternative to boat ownership, our membership club ranks #1 in boat quality, availability, and customer satisfaction.

OASIS ROOT COFFEE AND KAVA LOUNGE
Oasis Root Coffee and Kava Lounge in Jupiter is a fun, relaxing place to come by and drink kava, java, or tea, and hang out… South Pacific style! Open daily from 8am-1am. Located at 185 E. Indiantown Rd., Suite 111, Jupiter, FL 33477.

561 MUSIC SCHOOL AND STUDIO
Thank you to Justin and 561 MUSIC SCHOOL AND STUDIO for all they do to make our podcasts as professional as possible. If you are looking to do a podcast, record an album, do a live stream, or anything of that type, Live Music Community is the place to go. 561 MUSIC SCHOOL AND STUDIO is also a music school that takes it up a notch by not only teaching the foundations of music theory and songs on instruments and vocals but also teaching the students the full band experience. They team your child up with like-minded individuals who then go on to play shows, do live streams, and learn the dos and don'ts of being in a successful working band. You can find them online at https://www.561musicstudio.com and on Facebook and Instagram.

561 Music Podcast was recorded by our producer Justin Hucker at 561 Music School and Studio, which offers podcasting, video production, live streams, music lessons, recording, and so much more. Check them out and take a virtual studio tour.

Special Guest: Naluna.
In this episode of the Say Yes to Holiness podcast, host Christina Semmens sits down with Katie Zulanas, Executive Director of the Couple to Couple League (CCL), to talk about ways to help your marriage flourish. To that end, they focus on fertility awareness and the Peak Day app, a Catholic fertility and period tracking application. They discuss the risks associated with mainstream fertility apps, and how the Peak Day app empowers women and couples to take charge of their reproductive health. The conversation also highlights the significance of open communication between mothers and daughters regarding fertility, the role of Natural Family Planning (NFP) in strengthening marriages, and the resources available through the Fertility Science Institute. Katie shares her faith journey and the challenges and rewards of her work in promoting restorative reproductive medicine.

Takeaways
Fertility awareness improves communication and strengthens marriages.
The Peak Day app is designed to empower women in tracking their fertility.
Mainstream fertility apps often contain immoral content and hidden dangers.
Open communication about fertility is crucial between mothers and daughters.
Restorative reproductive medicine offers a natural alternative to IVF.
The discipline of tracking fertility can lead to significant rewards.
The Fertility Science Institute provides valuable resources for families.
Women should be aware of the risks associated with using secular fertility apps.
The Peak Day app integrates with wearable technology for better tracking.
Everyone deserves access to accurate and supportive fertility resources.

Sound Bites
“A healthy cycle is a sign of health.”
“Everyone needs to hear about this app.”
“The discipline of NFP has huge rewards.”

Chapters
00:00 Introduction to Peak Day and Katie Zulanas' Faith Journey
02:47 The Importance of Fertility Awareness
05:28 Risks of Mainstream Fertility Apps
07:59 Overview of Peak Day App Features
10:32 Empowering Conversations Between Mothers and Daughters
13:24 The Role of NFP in Strengthening Marriages
15:53 Resources and Support from Fertility Science Institute
18:24 The Need for Restorative Reproductive Medicine
21:04 Challenges and Rewards in the Work
23:10 Final Thoughts and Call to Action
(Part 2 of 2) My guest today, Melanie, believes that her mom worries too much about her eating habits and pesters her to eat more. As Melanie does this Mother-Work, it becomes clear what her mom really cares about. Open your mind, get still, and together let's do The Work. To catch Byron Katie live every Monday, Tuesday, and Wednesday, 9am/PT on Zoom, register here: athomewithbyronkatie.com
Canadian police say they are respecting the Tumbler Ridge trans shooter's “preferred gender pronouns”. Meanwhile, Orlando's “Gay Days” is being put on hold for a year after their sponsors DROP OUT.

Noble Gold
https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.

Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: Youtube | Facebook | Instagram | X | More Info | Website
Sen. Josh Hawley DESTROYS Minnesota AG Keith Ellison before the Senate on ICE and immigration enforcement. Canadian police say they are respecting the Tumbler Ridge trans shooter's “preferred gender pronouns”. Dana slams the woke remake of the classic Tom Hanks movie, “The Burbs”, for using words like “microaggression” and for its poor casting and storylines. Anthropic's “Claude” AI has shown in testing that it's willing to blackmail and kill in order to avoid being shut down. Dana breaks down more Epstein emails, including a code word that references beef jerky. Pam Bondi had a rough hearing where she went on a completely unhinged rant about the stock market when asked why she has not indicted any clients of Jeffrey Epstein. How is asking for ID when voting racist?? A Dem Rep from Michigan claims the SAVE Act will make it so women in her district can't vote for a political candidate. France is now urging less meat in a new Health-Climate Plan. The Wall Street Journal slams Millennials for buying rotisserie chickens due to overwhelming student debt. Orlando's “Gay Days” is being put on hold for a year after their sponsors DROP OUT. Rep. Chip Roy joins us to share commentary on why some Republicans are AGAINST requiring voter ID.

Thank you for supporting our sponsors that make The Dana Show possible…

Noble Gold
https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.

Relief Factor
https://ReliefFactor.com OR CALL 1-800-4-RELIEF
Try Relief Factor's 3-week Quickstart for just $19.95—tell them Dana sent you and see if you can be next to control your pain!

Patriot Mobile
https://PatriotMobile.com/DANA or call 972-PATRIOT
Switch to Patriot Mobile in minutes—keep your number and phone or upgrade, then take a stand today with promo code DANA for a free month of service!

Humann
https://HumanN.com
Get simple, delicious wellness support when you pick up Humann's Turmeric Chews at Sam's Club next time you're there and see why they're such a fan favorite!

Byrna
https://Byrna.com/Dana
Make 2026 the year you protect your family with solid options—Get the Byrna today.

WebRoot
https://Webroot.com/Dana
Take your cybersecurity seriously! Get 60% off Webroot Total Protection at Webroot.

Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: Youtube | Facebook | Instagram | X | More Info | Website
About this episode: Sexual education often focuses on the potential risks of unplanned pregnancies and STIs. But an approach to sexual health that includes frank discussions of what feels good could yield better health outcomes. In this episode: Sexual health expert Joshua O'Neal talks about the value of starting sexual health conversations with enjoyment and comfort. Note: This episode was produced in collaboration with the National Coalition of STD Directors. Guests: Joshua O'Neal, MA, is a sexual health educator and program director at the Southeast HIV/STI Prevention Training Center. Host: Lindsay Smith Rogers, MA, is the producer of the Public Health On Call podcast, an editor for Expert Insights, and the director of content strategy for the Johns Hopkins Bloomberg School of Public Health. Show links and related content: Promoting protection and pleasure: amplifying the effectiveness of barriers against sexually transmitted infections and pregnancy—The Lancet Pleasure and PrEP: Pleasure-Seeking Plays a Role in Prevention Choices and Could Lead to PrEP Initiation—American Journal of Men's Health Pleasure as a measure of agency and empowerment—Medicus Mundi Schweiz Pleasure As Tool For STI Prevention: Part 2—NCSD Real Talk Transcript information: Looking for episode transcripts? Open our podcast on the Apple Podcasts app (desktop or mobile) or the Spotify mobile app to access an auto-generated transcript of any episode. Closed captioning is also available for every episode on our YouTube channel. Contact us: Have a question about something you heard? Looking for a transcript? Want to suggest a topic or guest? Contact us via email or visit our website. Follow us: @PublicHealthPod on Bluesky @PublicHealthPod on Instagram @JohnsHopkinsSPH on Facebook @PublicHealthOnCall on YouTube Here's our RSS feed Note: These podcasts are a conversation between the participants, and do not represent the position of Johns Hopkins University.
Adding water to corn--and other hot crops--has been around a while, but has recently caused a stir. Some say it's short-stopping ducks. Especially mallards. Others say it's among countless changes, both good and bad, that have redistributed waterfowl in the Mississippi Flyway, at least in recent years. Tony Vandemore, Ira McCauley and Justin Martin weigh in. In no particular order we discuss hot-cropped corn, moist-soil, ethanol corn versus nesting cover, massive habitat expansions in Missouri, changes in Louisiana, the secret to becoming a poster boy, why the recent firestorm, sanctuaries and refuges, mallards versus other species, benefits of adding water to corn versus corn to water, factors other than corn, putting differences aside and working together for the ducks--and more. Open, honest, friendly dialogue you're sure to appreciate. And be sure to tune in to next week's multi-perspective "EP 663 Wading into the Corn Maze"---y'all DO NOT want to miss it!

Visit the Legendary Brands That Make MOJO's Duck Season Somewhere Podcast Possible:
MOJO Outdoors
Alberta Professional Outfitters Society
Benelli Shotguns
Bow and Arrow Outdoors
Ducks Unlimited
Flash Back Decoys
GetDucks.com
Migra Ammunitions
onX Maps (use code GetDucks25 to save 25%)
Sitka Gear
SoundGear (use code GetDucks20 to save 25%)
Tom Beckbe
USHuntList.com

Like what you heard? Let us know!
• Tap Subscribe so you never miss an episode.
• Drop a rating—it's like a high-five in the duck blind.
• Leave a quick comment: What hit home? What made you laugh? What hunt did it remind you of?
• Share this episode with a buddy who lives for duck season.

Want to partner? Have or know a story to share? Contact: Ramsey Russell ramsey@getducks.com
Ever felt like your math classroom is too noisy, too messy, or too chaotic when students are working on open-ended tasks? You're not alone. Many math teachers—and leaders—grapple with this tension: we want students to engage deeply, but we're uncomfortable when that engagement doesn't look like quiet order. In this episode, we unpack a listener's concern: “Open tasks feel chaotic. This isn't what I thought good classroom management looked like.”

Listeners Will Learn
Why noise and movement are not signs of lost control—but of thinking
The classroom management structures that support, not prevent, exploration
How to set clear routines that create space for student agency
What administrators can do to support—not sabotage—risk-taking teachers
How beliefs about “how kids learn best” impact the way we manage learning
The role of coherence across classrooms, schools, and districts in changing norms
Why teacher and student buy-in depend on emotional, not just logical, shifts

If you're ready to make your math classroom a place of active learning without losing your sanity—or your students—this episode offers honest insights, practical strategies, and a path forward for teachers and leaders alike.

Not sure what matters most when designing math improvement plans? Take this assessment and get a free customized report: https://makemathmoments.com/grow/
Math coordinators and leaders – Ready to design your math improvement plan with guidance, support, and structure? Learn how to follow our 4-stage process: https://growyourmathprogram.com
Looking to supplement your curriculum with problem-based lessons and units? Make Math Moments Problem Based Lessons & Units

Show Notes Page

Love the show? Text us your big takeaway!

Are you wondering how to create K-12 math lesson plans that leave students so engaged they don't want to stop exploring your math curriculum when the bell rings? In their podcast, Kyle Pearce and Jon Orr—founders of MakeMathMoments.com—share over 19 years of experience inspiring K-12 math students, teachers, and district leaders with effective math activities, engaging resources, and innovative math leadership strategies. Through a 6-step framework, they guide K-12 classroom teachers and district math coordinators on building a strong, balanced math program that grows student and teacher impact. Each week, gain fresh ideas, feedback, and practical strategies to feel more confident and motivate students to see the beauty in math. Start making math moments today by listening to Episode #139: "Making Math Moments From Day 1 to 180."
My guest on this episode of The Back of the Range is Emily Odwin from SMU Women's Golf. Emily has quite literally been a trailblazer for her country, Barbados. She is the first person from Barbados to compete in the U.S. Junior Amateur, the U.S. Women's Amateur, and the U.S. Open. Most recently, she received an invitation to compete in this year's Augusta National Women's Amateur. We spoke about her start in the game and why competing for SMU has meant so much to her amateur and collegiate career.

Emily Odwin - SMU Women's Golf
The Back of the Range - All Access

Subscribe to The Back of the Range in Apple Podcasts and Spotify! Also subscribe in YouTube, Google Play, Overcast, and Stitcher.

Follow on Social Media!
Email us: ben@thebackoftherange.com
Website: www.thebackoftherange.com

Voice Work by Mitch Phillips
In this episode of the Healthy, Wealthy and Smart podcast, Dr. Karen Litzy speaks with Arielle Loupos, founder of Flower Girl, about the often-stigmatized topic of menstrual care and period products. They discuss harmful ingredients in traditional menstrual products, the importance of sustainable, non-toxic alternatives, and cycle syncing as a tool for self-awareness and empowerment. Arielle shares her journey creating Flower Girl and emphasizes the need for open conversations about menstruation to break the stigma and promote women's health.

Takeaways
Menstrual care is often surrounded by misinformation and stigma.
Traditional period products may contain harmful chemicals and toxins.
Organic labeling on menstrual products can be misleading.
Sustainable and non-toxic alternatives are essential for women's health.
Cycle syncing can enhance self-awareness and optimize daily life.
Women should honor their menstrual cycles and allow for rest.
Understanding menstrual health is crucial for overall well-being.
Open conversations about menstruation can empower women.
Education about menstruation should start at a young age.
Women's health research needs to be prioritized and expanded.

Chapters
00:00 Breaking the Silence on Menstrual Care
02:55 The Hidden Dangers in Period Products
06:10 Creating a Sustainable Solution: Flower Girl Underwear
08:55 Understanding Menstrual Health and Cycle Syncing
11:55 Empowering Women Through Menstrual Awareness
14:46 The Emotional and Societal Impact of Menstruation
18:02 The Future of Women's Health Conversations

More About Arielle:
Arielle Loupos is the founder of Flower Girl, a new sustainable and non-toxic period underwear brand designed to help women feel safe, confident, and in flow with their bodies. With over a decade of experience in eCommerce and digital marketing, Arielle launched Flower Girl to challenge harmful menstrual products and create underwear women can wear on or off their period, made with body-safe materials. Beyond selling underwear, Arielle's mission with the brand is to empower women to live in harmony with their cycles vs. working against them.

Resources from this Episode:
Flower Girl Website
Flower Girl on Instagram
Arielle's Instagram

Jane Sponsorship Information:
Book a one-on-one demo here
Mention the code LITZY1MO for a free month

Follow Dr. Karen Litzy on Social Media:
Karen's Instagram
Karen's LinkedIn

Subscribe to Healthy, Wealthy & Smart:
YouTube
Website
Apple Podcast
Spotify
SoundCloud
Stitcher
iHeart Radio
In this week's episode of the Rich Habits Podcast, Robert Croak and Austin Hankwitz answer your questions!---
EPISODE DETAILS
Thinking about starting an Etsy shop? In this episode, I break down exactly how to earn your first $10K on Etsy--what to do first, what to sell, how Etsy actually works, and the biggest mistakes new sellers make that slow them down. Whether you're brand new, stuck without results, or ready to try again, this episode gives you a realistic, doable roadmap to real Etsy income.

**"How to Sell Your Stuff on Etsy" is not affiliated with or endorsed by Etsy.com

STUFF I MENTIONED:
Get the Ebook "How to Make Your First $10k on Etsy": https://www.howtosellyourstuff.com/offers/oXFKeSgN/checkout
Profittree Link (lifetime access $67): https://lifetime.profittree.io/?via=lizzie87
Tutorial: https://www.youtube.com/watch?v=VO7Ra18ZPTw&t=1s
Everbee (free account): https://www.everbee.io/?via=lizzie
Tutorial: https://youtu.be/MucPFkvC8sk?si=iyaD0RbMbIp3echw
How to find best sellers on Etsy (tutorial): https://youtu.be/IES_sR5WZ2E
Open your shop and get 40 listings free with my link (save $8): https://etsy.me/4bx6yli
Hannah's Free Ads Masterclass: https://www.howtosellyourstuff.com/request-etsy-ads-masterclass
Scaling Society (all inclusive): https://www.howtosellyourstuff.com/scaling-society
Trendspotting (weekly trend reports): https://www.howtosellyourstuff.com/trendspotting
----------------------
Join Dave and Tom as they engage in an in-depth, verse-by-verse examination of the Gospel of John. We hope you will be challenged and convicted as you listen to these insightful, exegetical discussions compiled from nearly four years of Search the Scriptures Daily radio programs. Open your Bible and get ready for an edifying pilgrimage into God's Word.
Bored back home for the New Year? Done chatting with relatives, with nothing left to do but scroll your phone? Open SUGO and use the "Nearby" feature to find people close by to chat with and make plans! No swiping left or right, no waiting for a match: start chatting the moment you're online! Someone to keep you company anytime, so you're never alone over the New Year ✨ Tap the link to download
Maybe not! ;) Registration is OPEN for the retreat! Sign up for updates on the retreat home and newsletters right here!
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks
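To make the “Energy-based thinking” bullet above concrete before the transcript, here is a back-of-the-envelope sketch in Jeff's style. The constants are rough public rule-of-thumb figures (order of magnitude only) and the batch size is a made-up example, not Google's internal numbers:

```python
# Rule-of-thumb energy costs (illustrative assumptions, order of magnitude):
PJ_MULTIPLY = 1        # one low-precision multiply-add: ~1 picojoule
PJ_DRAM_ACCESS = 1000  # moving one word in from DRAM: ~1000 picojoules

# Serving one request unbatched: each weight is fetched from memory for a
# single multiply, so data movement dominates the energy bill ~1000:1.
unbatched_pj = PJ_DRAM_ACCESS + PJ_MULTIPLY

# Batching B requests amortizes each weight fetch across B multiplies,
# driving the energy per useful multiply toward the pure compute cost.
B = 256
batched_pj = (PJ_DRAM_ACCESS + B * PJ_MULTIPLY) / B

print(f"unbatched: ~{unbatched_pj} pJ per useful multiply")
print(f"batched (B={B}): ~{batched_pj:.1f} pJ per useful multiply")
```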
Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last-year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful; they're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with the solution in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, like, you know, sparse models: how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? Like, you worked on so many ideas that end up being influential, but in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance if you then treat that whole set of maybe 50 models you've trained as a large ensemble. But that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish them into something that actually fits in a form factor you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution, but it might be lossy in other areas; it's kind of an uneven technique. But you can probably distill it back, and I think the sort of general dream is to be able to advance capabilities without regressing on anything else. That whole capability merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model, behavior that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is you can get very close to your largest model's performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
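A minimal sketch of the logit-based distillation Jeff is describing here (the soft targets of the 2015 Hinton, Vinyals & Dean paper). The temperature, blend weight, and toy arrays are illustrative assumptions, not Gemini's actual training recipe:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL against the teacher with hard-label cross-entropy."""
    # Soft targets: the teacher's full output distribution carries far more
    # signal per example than a one-hot label, which is why many passes over
    # the same data keep paying off.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - log_p_student), axis=-1)
    # Hard targets: ordinary cross-entropy against the true labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    ce = -log_p[np.arange(len(hard_labels)), hard_labels]
    # T^2 rescales the soft-target term to balance the hard loss (per the paper).
    return float(np.mean(alpha * temperature**2 * kl + (1 - alpha) * ce))

# Toy batch: 2 examples over a 5-way vocabulary.
teacher = np.array([[4.0, 1.0, 0.5, 0.2, 0.1],
                    [0.1, 3.5, 0.3, 0.2, 2.9]])
student = np.random.randn(2, 5)
labels = np.array([0, 1])
print(distillation_loss(student, teacher, labels))
```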
And I think for most of the things people use models for, at some point the Flash model two generations out will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if the distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. I used to try our models a year ago for some sort of coding task, and they were okay at some simpler things but wouldn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And I think that's true not just of coding. Now it's "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," or whatever; that's a much more complicated task than people would have asked a year ago. So you are going to want more capable models to push the frontier of what people ask the models to do. And that also gives us insight into: okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards? Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are maybe 10 to 30%, but not higher. Then you can work on improving that capability, whatever it is the benchmark is trying to assess, and get it up to 80 or 90%. I think once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of the public data, or very related data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the benchmark wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing how we make the model better at those kinds of things. Do we need a different kind of data to train on that's more specialized for this particular kind of task?
Do we need a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?
Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just jumping on that because you mentioned it.
Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came first in 1.5, really was about looking at: okay, we want to have...
Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah.
Jeff Dean [00:13:23]: I mean, as you say, the single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or so, and most models actually offer much more than 128K these days. We're trying to push the frontier to 1 million or 2 million tokens of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, into the context and actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, multi-needle or more realistic "take all this content and produce this kind of answer" benchmarks that better assess what people really want to do with long context, which is not just "can you tell me the product number for this particular thing?"
Shawn Wang [00:14:31]: Yeah, that's retrieval, retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you see the architectural thing you need to do to go fix it. But should you do it? Because sometimes that's an inductive bias, basically, exactly the kind of thing Jason Wei, who used to work at Google, would point out. You're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo it.
Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. Right? What you would really want is: can I attend to the internet while I answer my question? But that's not going to happen by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do; you're not going to do that for a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the deeper representations we can find, not just for a single video, but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have.
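A quick back-of-envelope, with made-up round numbers, shows why "just scale the quadratic thing" cannot get there:

```python
# Self-attention computes a score for every (query, key) token pair, so the
# work per layer grows with the square of context length. Round numbers only.
for n_tokens in (1e6, 1e9, 1e12):
    pairs = n_tokens ** 2
    print(f"{n_tokens:8.0e} tokens -> {pairs:.0e} pairwise scores per layer")

# 1e+06 tokens -> 1e+12 scores: expensive but feasible (roughly today's limit)
# 1e+09 tokens -> 1e+18 scores: a million times more work
# 1e+12 tokens -> 1e+24 scores: hopeless without a different algorithm
```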
I think that would be really, really useful. And the question is: how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.
Shawn Wang [00:16:26]: By the way, I think I did some math: if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of about a hundred thousand tokens, which very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, the classic example is that you start going beyond language into proteins and whatever else is extremely information-dense. Yeah.
Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is that we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities: LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities, X-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality with a certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in the trade-offs of your main pre-training data mix, at least including a little bit of it is actually quite useful, because it shows the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text; DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, video as opposed to static images. There's a reason evolution has evolved eyes something like 23 independent ways: it's such a useful capability for sensing the world around you. And that's really what we want these models to be able to do: interpret the things we're seeing or paying attention to, and then help us use that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion, you know. I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, some soccer goals, things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of as possible: turning a video into a SQL-like table.
Alessio Fanelli [00:20:11]: Has there been any discussion inside Google of, you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, whereas for an LLM you might expect 20 links that are all highly relevant. How do you internally figure out how to build the AI mode that is maybe a much broader search in span, versus the more human one? Yeah.
Jeff Dean [00:20:47]: I mean, even in pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods; you're down to, say, 30,000 documents. Then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar. You're going to want to attend to trillions of tokens, but you're going to identify, say, the 30,000-ish documents, maybe 30 million interesting tokens, with very lightweight kinds of models operating in a highly parallel way. Then you have some system that helps you narrow down from 30,000 to the 117 documents you really should be paying attention to in order to carry out the task the user asked, maybe with a somewhat more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, is your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens, sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.
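A toy sketch of that cascade, using the stage sizes Jeff mentions; the scoring functions here are stand-ins (a word-overlap count and a pretend "small model"), not real rankers:

```python
def cheap_score(query: str, doc: str) -> int:
    # Stage-1 stand-in: lightweight lexical overlap over a huge corpus.
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def small_model_score(query: str, doc: str) -> float:
    # Stage-2 stand-in: pretend this is a small neural scorer.
    return cheap_score(query, doc) / (1 + len(doc.split()))

def funnel(query: str, corpus: list[str]) -> list[str]:
    # Trillions of tokens -> ~30,000 candidates with the cheap scorer...
    coarse = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:30_000]
    # ...-> ~117 survivors with the mid-tier scorer...
    fine = sorted(coarse, key=lambda d: small_model_score(query, d), reverse=True)[:117]
    # ...which get handed to the most capable model for the final read.
    return fine
```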
Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was basically put inside Google search almost immediately, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you do; that's obviously the most important number at Google. Yeah.
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high-traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing where every token in the vocab is a YouTube video, predicting the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.
Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have a history of the progression? Oh yeah.
Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was that we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system, where you have more and more shards as the index grows: you have, say, 30 shards, and if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms that you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.
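The arithmetic behind that switch, using the numbers as Jeff quotes them (treat them as illustrative):

```python
shards = 60        # index partitions, sized to bound per-query latency
replicas = 20      # copies of each shard, added to absorb growing traffic
machines = shards * replicas
print(machines)    # 1200 machines, each already holding a shard on disk

# The realization: those same 1200 machines had enough combined RAM for one
# full copy of the index. A memory reference (~100 ns) replaces a disk seek
# (~10 ms) per term per shard, roughly a 100,000x difference, which is what
# made 50-term synonym-expanded queries affordable.
```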
Alessio Fanelli [00:26:47]: What are the principles you use to design these systems, especially when, I mean, in 2001 the internet is doubling or tripling every year in size. And I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this? Yeah.
Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will the system still work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is: if you design a system for X, and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but at a hundred X makes total sense. Like going from a disk-based index to an in-memory index: that makes a lot of sense once you have enough traffic, because now you have enough replicas of the on-disk state that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively. We were also growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in under a minute. Okay.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.
Shawn Wang [00:29:11]: News is a special beast. Was there any... you could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages; you have to decide which pages should be updated, and at what frequency. Oh yeah.
Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they've changed might be low, but the value of having them updated is high.
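A hypothetical sketch of that scheduling idea: rank pages by the expected value of a recrawl (importance times the chance the page changed), not by change frequency alone. The URLs and probabilities are made up.

```python
def recrawl_priority(importance: float, p_changed: float) -> float:
    # A very important page with a modest chance of having changed can
    # outrank an unimportant page that churns constantly.
    return importance * p_changed

pages = [
    ("news-homepage.example", 0.99, 0.30),
    ("obscure-blog.example",  0.01, 0.95),
]
for url, importance, p_changed in sorted(
        pages, key=lambda p: recrawl_priority(p[1], p[2]), reverse=True):
    print(url, recrawl_priority(importance, p_changed))
# news-homepage.example 0.297  (crawl first despite changing less often)
# obscure-blog.example  0.0095
```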
Shawn Wang [00:29:50]: Yeah, yeah. Well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, it's like eight or ten different kinds of metrics. How long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
Jeff Dean [00:30:25]: We had a data center in the Netherlands. I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in this particular kind of...
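For reference, the list being discussed, with the commonly cited circa-2010 ballpark values from Jeff's talks (modern hardware differs), plus the thumbnailing thought experiment:

```python
LATENCY_NS = {
    "L1 cache reference": 0.5,
    "branch mispredict": 5,
    "main memory reference": 100,
    "read 1 MB sequentially from memory": 250_000,
    "disk seek": 10_000_000,
    "read 1 MB sequentially from disk": 20_000_000,
    "packet round trip US -> Netherlands -> US": 150_000_000,
}

# Thumbnail thought experiment: a result page with 30 images, one disk seek
# each, costs ~300 ms in seeks alone, so pre-compute the thumbnails and store
# them contiguously rather than thumbnailing on the fly.
print(30 * LATENCY_NS["disk seek"] / 1e6, "ms in disk seeks")
```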
Shawn Wang [00:31:51]: ...which is a simple byte conversion, that's nothing interesting. I wonder, if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.
Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low. Depending on your precision, I think it's sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And moving data from the SRAM on the other side of the chip, not even off the chip, but the other side of the same chip, can be a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of the model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency you get is quite large. So, yeah.
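The amortization argument in two lines, using the illustrative energies from the conversation (about 1,000 pJ to move a weight across the chip, about 1 pJ for the multiply):

```python
MOVE_PJ = 1000.0  # move one weight from far SRAM into the multiplier unit
MAC_PJ = 1.0      # one multiply-accumulate using that weight

def energy_per_use(batch_size: int) -> float:
    # The weight move is paid once per batch; the multiply once per example.
    return MOVE_PJ / batch_size + MAC_PJ

for b in (1, 16, 256):
    print(f"batch {b:3d}: {energy_per_use(b):7.1f} pJ per multiply")
# batch   1:  1001.0 pJ  (dominated by data motion)
# batch  16:    63.5 pJ
# batch 256:     4.9 pJ  (movement nearly amortized away)
```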
Shawn Wang [00:34:04]: Is there a similar trick to the one you did with putting everything in memory? Obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something you already saw with the TPUs; to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a much higher cost in time and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC; that's the most extreme version. How much is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime, which takes you three, four, five years out. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And having people with interesting ML research ideas, things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N+1, but bigger changes require the chip design to be earlier in its lifetime. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it makes something ten times as fast; and if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.
Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you adapt the model architecture so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. And sometimes you can take advantage of, say, lower-precision support that's coming in a future generation; you might train at that lower precision even if the current generation doesn't quite do it. Mm.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary...
Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount: it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. And I think people have gotten a lot of mileage out of having very-low-bit-precision values, but then having scaling factors that apply to a whole block of those weights.
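A minimal NumPy sketch of that idea, low-bit codes with one scale per block of weights (a hypothetical symmetric 4-bit scheme; real formats differ in block size and code layout):

```python
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 64):
    blocks = w.reshape(-1, block)
    # One scale per block maps that block onto the symmetric range -7..7.
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0 + 1e-12
    codes = np.clip(np.round(blocks / scale), -7, 7).astype(np.int8)
    return codes, scale  # store 4-bit codes plus one small float per block

def dequantize_blockwise(codes, scale, shape):
    return (codes * scale).reshape(shape)

w = np.random.randn(1024).astype(np.float32)
codes, scale = quantize_blockwise(w)
w_hat = dequantize_blockwise(codes, scale, w.shape)
print(float(np.abs(w - w_hat).max()))  # small per-weight error, ~8x fewer
                                       # bits than fp32 plus scale overhead
```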
Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. Interesting. While we're on this topic: the concept of precision at all is weird when we're sampling, right? At the end of all this, we're going to have chips that do very precise math, and then we just throw a random number generator at the output. So there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, what's your commentary?
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, is another. Speculative decoding is a way you can get an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor: you predict, say, eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five-X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them from the lens of energy (real energy, not energy-based models) and also latency and throughput. If you look at things from that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
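A greedy-match toy of that speculative-decoding loop. Real systems verify all draft positions in one batched pass of the big model and accept probabilistically; both "models" here are canned stand-ins for illustration.

```python
import random

def draft_model(prefix: str, k: int) -> list[str]:
    return [random.choice("ab") for _ in range(k)]  # cheap, often-right guesser

def big_model_next(prefix: str) -> str:
    return "a"  # stand-in for the expensive model's next-token choice

def speculative_step(prefix: str, k: int = 8) -> list[str]:
    out: list[str] = []
    for tok in draft_model(prefix, k):
        target = big_model_next(prefix + "".join(out))
        if tok == target:
            out.append(tok)      # draft token accepted essentially for free
        else:
            out.append(target)   # first mismatch: take the big model's token
            break
    # 1..k tokens emitted per expensive pass; Jeff's example: ~5-6 of 8,
    # so the cost of moving the big model's weights is amortized ~5x.
    return out

print(speculative_step("", 8))
```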
Shawn Wang [00:41:03]: Yeah. I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.
Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be very low power. But you often end up wanting to interface them with digital systems, and you lose a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and more specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Any other interesting research ideas, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they'd already proved you can do it, with deep research. And you kind of have it with AI Mode, in a way that's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, including the retrieval. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which are the 50 most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system. Yeah.
Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff, and the next part is super hard and nobody's figured it out. But it always feels like that every year. Exactly as with this RLVR thing, where everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like: I don't know, LLM judge?
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in certain areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM-8K problems, right? Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now: IMO and Erdős problems, in pure language. That is a really amazing jump in capabilities in a year and a half or so. And for other areas, it would be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better. Yeah.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.
Shawn Wang [00:46:20]: That would be, as far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons, and activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought, and to roll them back: now that approach for solving the problem doesn't seem like it's going to work; I'm going to try this one. In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.
Shawn Wang [00:47:59]: Interesting. Yeah. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.
Jeff Dean [00:48:06]: I mean, I do think that IMO progression, translating to Lean and using Lean, plus a specialized geometry model, one year, and then the next year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows you that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train a separate model for each different problem. I want to recognize street signs, so I train a street sign recognition model. I want to do speech recognition, so I have a speech model. I think the era of unified models that do everything is really upon us now, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how they work. I don't know where the IMO competition was held, I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is
this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But take the Gemma models, for example. A lot of people want the open-source local models, and those have some knowledge which is not necessary, right? They can't know everything. You have the luxury of the big model, which should be capable of everything. But when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...
Shawn Wang [00:51:49]: Yeah.
Jeff Dean [00:52:01]: ...and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini: we're not going to train Gemini on my email. Probably we'd rather have a single model that can use retrieving from my email as a tool, have the model reason about it, retrieve from my photos or whatever, then make use of that over multiple stages of interaction.
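A hypothetical sketch of that loop; `model` and the two search tools are canned stand-ins, not a real Gemini API:

```python
def search_email(q: str) -> list[str]:
    return [f"email matching '{q}'"]    # stand-in for an email index

def search_photos(q: str) -> list[str]:
    return [f"photo matching '{q}'"]    # stand-in for a photo index

def model(prompt: str) -> str:
    # Canned stand-in: first request a retrieval, then answer from context.
    if "Context: []" in prompt:
        return "SEARCH:email:flight confirmation"
    return "DONE: your flight is at 9am"

def answer(question: str, max_stages: int = 3) -> str:
    context: list[str] = []
    for _ in range(max_stages):
        step = model(f"Question: {question}\nContext: {context}\n"
                     "Reply DONE:<answer> or SEARCH:<tool>:<query>")
        if step.startswith("DONE:"):
            return step[len("DONE:"):].strip()
        _, tool, query = step.split(":", 2)
        context += (search_email if tool == "email" else search_photos)(query)
    return "ran out of stages"

print(answer("When is my flight?"))
```

The point of the design, as described above, is that personal data stays in the tools; the base model only ever sees what it retrieves, stage by stage.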
Shawn Wang: That makes sense.
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM: are those kind of short-term stopgaps, or...?
Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for, say, robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities that may suffer, because we didn't get to expose the model to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Right? If I have a health-related thing, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things. Yeah.
Shawn Wang [00:54:36]: Installable knowledge. Yeah.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download it as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is: how many billions of tokens do you need to outpace the frontier-model improvements? If I have to make this model better at healthcare, and the main Gemini model is still improving: do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there. You know, I think that's really the question.
Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on, say, public data. Yeah.
Shawn Wang [00:55:58]: Yeah. By the way, and this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's essentially no written text.
Shawn Wang [00:56:20]: So you can just do it that way: put it in the context. But you can put a whole data set in the context, right?
Jeff Dean [00:56:27]: If you take a language like, say, Somali, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in the context, you'll improve the capabilities of the model for those languages.
Shawn Wang [00:56:49]: Yeah.
Radical Honesty and the Many Kinds of Love with Jesse Poppick Radical Honesty (and the Many Kinds of Love) In this episode of The Open Nesters, we dive into the transformative journey of Jesse Poppik, a guest who brings a wealth of insights about love, relationships, and navigating life as an open nester. With his unique experiences, Jesse shares how his perspective on relationships has evolved, particularly through his transitions after two divorces and his experience as a father to three daughters. We explore concepts such as radical honesty and non-monogamy as he reflects on how these themes have informed his life choices and relationship dynamics. As our conversation unfolds, Jesse discusses his journey to embracing the open nester lifestyle, which he describes as facing new adventures rather than feeling the void of an empty nest. We emphasize that this stage of life allows for deeper exploration of personal identity, relationships, and experiences. His candid recount of a conversation with his youngest daughter reveals the profound shifts that have occurred as their family structure changes. What began as a necessary adjustment for their circumstances evolved into a broader understanding of freedom, support, and connection, prompting Jesse to reassess his role as a father and an individual. We dive deeper into Jesse’s relational intelligence, which has transformed through his exploration of non-monogamy and the concept of polyamory. He details his experiences in these realms, emphasizing that non-monogamous relationships aren’t just about sexual freedom but also about the capacity to love and connect with multiple partners in diverse ways. Jesse articulates a distinction between non-monogamy, which often entails sexual relationships, and polyamory, which focuses on loving connections without a sexual component. This insight prompts a broader discussion about the rich, fluid nature of love and the importance of understanding our desires and boundaries within these structures. Throughout the episode, we explore Jesse’s six principles of sexual health and how these guidelines can foster better communication and ultimately healthier relationships. Consent, non-exploitation, honesty, shared values, protection, and pleasure serve as essential pillars for navigating the complexities of intimacy. Jesse introduces the RBDSM-T framework, urging listeners to bring explicit conversations to dating and relationships. He highlights that understanding the meanings behind our interactions, setting clear boundaries, and addressing past traumas are crucial for nurturing connections—even in long-term relationships. As the discussion progresses, Jesse shares poignant reflections about his estranged relationships with his older daughters and how the principles of radical honesty and patience have played a pivotal role in rebuilding those connections. He emphasizes the importance of creating space for his children to engage with him on their terms, acknowledging the challenges and emotional weight involved in such situations. Listeners seeking guidance or inspiration are encouraged to connect with Jesse through his website, where he offers workshops and resources aimed at enhancing sexual health and communication in relationships. He also shares his upcoming appearances at festivals, demonstrating his commitment to spreading awareness about the complexities of modern relationships, the importance of emotional intelligence, and fostering a deeper understanding of love. 
About Tessa Tessa Krone is the engine behind and the face of The Open Nesters. Tessa holds an MA in Consciousness Studies and is a speaker, coach, program, and journey facilitator & leader, author, and, of course, Podcaster. Her offerings are based on her mission to help people open to their most self-expressed, loving selves. Tessa's specialties include embodiment from all the senses and elements of our inner and outer lives, ranging from mindfulness, dance, play, and sensory exploration in nature. If she had one superpower, it would be to help people, especially as they age, to live more open-hearted lives. Please email Tessa to make a connection. And visit her page here on the Open Nesters Website. If you like, please answer the question: What do you need to OPEN your NEST? In your LIFE. In your BODY. In your SPIRIT. Do you need MORE… Adventure Freedom of Expression Exploration and Fun Body Movement New circles of friends Deep love relationships
In this episode, Brendan and Hunter sit down with Rob Novak, Surestep's Director of Sales and a certified orthotist, for an in‑depth conversation about pediatric orthoses. Rob shares his path into the O&P field, the unique challenges clinicians face when fitting children, and how Surestep approaches designing orthotics that balance function and comfort.Learn more about Surestep and connect with Rob on Linkedin.Many thanks to Ottobock for sponsoring this episode! Ottobock's new Taleo Adapt takes the trusted Taleo family of feet to the next level of mobility. With its integrated hydraulic ankle offering 12 degrees of motion, your patients experience a smoother, more natural ride, even on slopes and hills. The Taleo base delivers the ideal balance of shock absorption, energy return, and simple alignment for an easier fit. Discover the new 1C59 Taleo Adapt today, backed by our 60-day satisfaction guarantee. Click here to learn more. If you need to earn CEU credits, click on over to SPS University. With over 15 courses, choose from topics ranging from microprocessor feet and knees, to AFOs and KAFOs, and more. Open an SPS University account today!Visit spsco.comAlso, email us! The O&P Check-in is a bi-monthly podcast featuring the latest orthotics and prosthetics news, trends, best practices, regulations and policies. Designed for O&P professionals, join Brendan Erickson and a rotating co-host as they interview guests and share the latest advancements in the industry.
We trace the real Saint Valentine: priest, healer, and martyr who defended Christian marriage against imperial bans and turned February 14 into a witness of agape love. From secret weddings to a prison miracle, we call listeners to live covenant over sentiment and holiness over hype.• origins in Rome's persecutions and catacomb ministry• clandestine marriages as sacrament not civil contract• arrest, imprisonment, and fearless witness to Christ• miracle of the jailer's daughter and the first note• theological meaning of martyrdom and covenant love• liturgical memory in the Roman Martyrology• patronage for lovers, engaged couples, widowed, and lonely• transformation from Lupercalia to Christian feast• symbols and iconography that catechize through art• practical calls to live Eucharistic devotion and family holinessTake action and deepen your walk with God. Live Eucharistic devotion. Equip your spiritual arsenal, browse our unrivaled Catholic store, connect and evangelize, join virtual pilgrimages. Support the mission. Visit journeysoffaith.com website today. Open by Steve Bailey Support the showDownload Journeys of Faith Free App link. https://apps.apple.com/us/app/journeys-of-faith/id6757635073 Journeys of Faith brings your Super Saints Podcasts ***Our Core Beliefs*** The Eucharist is the Source and Summit of our Faith." Catechism 132 Click Here "This is the will of God, your sanctification." 1 Thessalonians 4 Click Here "... lay up for yourselves treasures in heaven..." Matthew 6:19-2 Click Here The Goal is Heaven Click Here Please consider subscribing to this podcast or making a donation to Journeys of Faith we are actively increasing our reach and we are seeing good results for visitors under 40! Help us Grow! Buy Me a cup of Coffee...
The show OPEN... happiness... black gloves... and ribs!
So, what does this overflow righteousness look like when it's communal rather than private? Private righteousness claims: I don't murder, I don't steal, I'm polite to others, I volunteer sometimes, I wash my hands after using the bathroom, I'm a good person. Overflow righteousness says: Our community redistributes resources so nobody goes hungry. We shelter people the state wants to cage, and absorb legal liability to protect the vulnerable. The budget reflects that we value other people's survival over institutional preservation. We organize collectively to confront systems that grind people down. Private righteousness is manageable, respectable, safe. Overflow righteousness can get you in trouble. Subscribe to us on iTunes! Sermon text: web | doc
This week the Dads talk about the Super Bowl, Bad Bunny, and Green Day. Some players excel only when they are in the right organization. Foo's son participates in a student-led walkout against ICE. Open campus vs. closed campus high schools. Gym finally gets his wife's car stereo squared away with an Alpine. Big storm coming with snow expected. Piper kills her car battery but is saved with a jumper pack and FaceTime help from Dad. E-bike discussion coming soon. Plus more!
Let's talk about something that doesn't get people excited. No machines, nothing about forklifts, and no mention of productivity or numbers. I'd like to talk about paperwork. I know, I know, but this isn't boring paperwork. This is the paperwork of life. The kind of documents that quietly follow you from your first job all the way to retirement. The kind that, when handled correctly, make life easier, and when ignored, can create stress, delays, lost money, or even lost opportunities. I was looking for the right word here. I had highlighted the words personal responsibility, and that's not quite what I'm looking for, but there are things we, ourselves, need to make sure we get right. So instead of harping on what we need to do, I'll just speak to it in an "I've seen how this plays out" kind of way. Because here's the truth: no company, no HR department, no recruiter, no government agency cares about your paperwork more than you do, and they never will. When someone gets a job offer, they're excited. And they should be. But onboarding isn't just about orientation videos and a badge. From day one, you're asked to complete documents like the I-9 employment verification, W-4 tax forms, direct deposit information, benefit elections, emergency contacts, and policy acknowledgments. And these aren't just forms. These documents determine whether you can legally work, how and when you get paid, how much tax is withheld, whether you have insurance, and who gets called if there are any problems or emergencies. When onboarding paperwork is filled out incorrectly, or rushed through, problems can start immediately. Delayed paychecks. Incorrect tax withholdings. Missed benefits. And the worst part? Most of those problems are preventable. Here's a tip, or an opinion I guess: if a document affects your pay, your health, or your job security, slow down. Ask questions if you do not understand something, especially anything involving deductions. Read what you're signing. If you don't understand a box, don't guess. Guessing on official paperwork almost always comes back around on us. The I-9 form is one of the most misunderstood documents in employment, and one of the most important. This form verifies your identity and your legal authorization to work in the United States. It requires specific documents, completed within a specific timeframe. If our hiring agent can't properly complete the I-9, you may not be allowed to start work. Your employment could be delayed, or you could be terminated, not for performance, but for a compliance issue. This isn't personal. It's just the law. As workers, our responsibility is simple but serious. We need to bring valid, acceptable documents, make sure names match exactly, and pay attention to dates and signatures. Just this week I've heard about three individuals who met all the qualifications for a position, interviewed great, and were offered the position, only to say that they didn't bring two forms of ID. Their hiring process was delayed until they could return with their documents. For one of them, the position was filled before she could return. And to our recruiter, being unprepared for the I-9 and onboarding sends a message, fair or not, that you didn't take the process seriously. Taxes are another area where people often say, "I'll just fill it out the way I always do." That mindset can cause problems for us. Your W-4 determines how much money is withheld from each paycheck. Too little withheld? You might owe money at tax time. Too much withheld?
You're giving the government an interest-free loan all year. And it's important to remember that life changes: marriage, kids, second jobs, and side work all affect how your W-4 should be filled out. Here's another tip, or opinion! Our paycheck is our responsibility. If something looks off, ask about it immediately. Waiting six weeks doesn't fix it; it only multiplies the problem. I want to mention our personal records too: health records, immunizations, vaccinations, physicals. In warehousing, manufacturing, transportation, and logistics, these come up more than people realize. Certain jobs, sites, or clients may require proof of tetanus shots, hepatitis vaccinations, physical capability exams or ergonomic testing, even drug screening history. Yes, these requests are rare in our field, but if you can't produce records, you may be delayed from starting a job, be excluded from certain assignments, or have to repeat tests at your own expense. Keeping copies of our health records is important; it's about preparedness. Create a simple system: a physical folder at home, or digital copies on a secure drive with clear file names and dates. This is one of those "future you will be thankful" habits. Oh, and many people assume education records don't matter once they're working. That's not always true. High school diplomas, GEDs, college transcripts, certifications, licenses: these documents can come up when applying for leadership roles, moving into safety or compliance positions, transitioning into office or management roles, or applying for specialized training. Saying "I completed it" is not the same as proving it. If you've earned something, keep the documentation. You worked for it. Don't let missing paperwork slow your progress later. And here's another free opinion! Your resume should never be written in a panic. It should be updated after each role, after learning new equipment, when gaining certifications, and after taking on leadership tasks. Too many people try to rebuild their entire work history the night before applying for a job, and details get lost. Dates get fuzzy. Job titles blur, and we leave off some of our accomplishments. A resume isn't just for job hunting. It's a record of our career. Here's another unsolicited opinion of mine! Keep a running document. Add bullet points as you go. That way, when opportunity shows up, you'll be ready. Now let's talk about open enrollment; this is where people can get hurt financially. Open enrollment windows are practically written in stone. Miss them, and you may be locked out of health insurance, dental and vision, life insurance, or disability coverage until the next enrollment period. Saying "I didn't know" doesn't reopen the window. This happened to me last year. I asked about the dental and vision offerings, but I didn't follow up when no one got back to me. So I didn't have dental and vision insurance! Understanding your benefits isn't an optional part of adulthood; it's more like survival planning. If you don't understand a benefit, ask HR. That's what they're there for. And don't hesitate to follow up if you haven't heard back. Ignoring enrollment because it feels overwhelming can cost thousands of dollars later. Here is a hard truth: deadlines don't care about your schedule, your stress, or your intentions. Miss a form deadline and benefits don't activate, coverage can lapse, and pay adjustments don't happen. Professionals respect deadlines, even when the task isn't exciting. And we are professionals, right?
That's part of being dependable. And all this documentation follows us right into retirement as well. At the end of your career, the paperwork doesn't stop; believe it or not, it actually increases! Retirement accounts. Pension records. Social Security documentation. Healthcare elections. People who kept records throughout their careers transition more smoothly. People who didn't often end up scrambling at the worst possible time. Your future self deserves better than that last-minute chaos! I recently read something from a government agency. It said that paperwork isn't the enemy; neglect is. It made me think a bit! The paperwork of life isn't glamorous, but it is important. Careers don't fall apart because of one bad day on the floor. They fall apart because of missed details spread out over time. Let's all be sure to handle our paperwork with the same pride we bring to our work ethic. Oh, and I mentioned retirement a minute ago. One of the biggest myths is that retirement planning begins when you're close to retirement. It doesn't. It begins with your first benefit election, your first 401(k) form, and your first beneficiary designation. The people who retire smoothly didn't magically get organized at 60; they stayed consistent for decades. Every form you complete correctly today reduces stress tomorrow. Every document you keep track of becomes a gift to your future self. Let me leave this part with something simple and honest. Paperwork is how the world keeps score. It records who you are, what you've earned, what you're entitled to, and how you're protected. Ignoring it doesn't make it go away; it just hands control to someone else. So let's take ownership of it, ask questions, respect those deadlines, and keep records. OK, I'll leave it at that. I don't want to sound like I'm standing on a soapbox here, but I've seen so many people struggle and take financial hits over the very things we discussed today. If you have any questions about anything I brought up, check with your HR department or a member of your management team and ask questions. And as always, feel free to send us an email at hose@warehouseandoperationsasacareer.com and I'll help find you an answer. Thanks for checking in, and as always, please be safe in all you do.
Is Leftist Rage About to Turn as BLOODY as the French Revolution?! The Glenn Beck Podcast. Watch this video at: https://youtu.be/X5trx7DtdtE?si=jMNcv2NZydqM2xpb What if the rage tearing through America today is the exact same rage that turned the French Revolution into a bloodbath? Nationally acclaimed legal scholar Jonathan Turley sits down with Glenn to unpack his new book, "Rage and the Republic: The Unfinished Story of the American Revolution." Through the sharp lens of Thomas Paine — the revolutionary firebrand who played a role in both the American triumph and the French catastrophe — Turley delivers a chilling warning: We've been here before. He draws parallels between the mob-driven chaos of history and today's furious calls to trash the Constitution, pack the Supreme Court, and let raw majorities run wild. Turley spotlights the Minnesota riots: Are they an "insurrection" or a stark symptom of something far more dangerous? Turley suggests the Clinton-Epstein scandal should be "the world's fastest trial" and confronts the AI and robotics revolution head-on, warning of mass unemployment and proposing a solution. The American experiment hangs in the balance. Will we repeat the French nightmare or rediscover the genius that saved us the first time? GLENN'S SPONSORS: Relief Factor: If you're living with aches and pains, see how Relief Factor, a daily drug-free supplement, could help you feel better and live better. Try the three-week QuickStart for just $19.95 by visiting https://ReliefFactor.com. Subscriptions for Torch are now OPEN! Become a Torch Founding Member at https://glennbeck.com/torch if you subscribe during the month of February. ► Click HERE to subscribe to Glenn Beck on YouTube: https://bit.ly/2UVLqhL ► Click HERE to sign up for Glenn's newsletter: https://www.glennbeck.com/st/Morning_... Connect with Glenn on social media: /glennbeck #glennbeck #glennbeckpodcast #history #americanrevolution #supremecourt #scotus #clintons
While sitting on the beach meditating and knitting, Ashley heard that this is what you need to hear today -- and so, the episode was made. In this short episode, Ashley explores the relationship between the nervous system and intuition. When the body is in survival mode, clear consciousness is unavailable. Intuition cannot rise above urgency, pressure, or chronic activation. In a world that rewards anger, reaction, and constant engagement, choosing peace is a powerful act. This episode is a reminder to rest, to slow down, to feel what is present without overwhelm, and to create space by saying no, asking for help, delegating, and stepping back from what keeps the system activated. A regulated nervous system creates clarity. From that clarity, intuition becomes strong, trustworthy, and grounded. Applications for The North Node are Open - 2 spots left. The Clearing, a new workshop, is available today inside GUIDED. * * * * * * * * * * * * APPLY for The North Node beginning February 25th The GUIDED Membership: Awaken Your Inner Guide — workshops, live satsang, tools and community all in one place. BOOK Journey Home Akashic Records Reading with Faith O'Higgins Path to Home on iOS Path to Home on Android How to do the Line Activation Receive a FREE Line Activation SHOP Juuso's Paintings Learn more about our work, offerings, and upcoming events at alnwithin.com Follow on Instagram @alnwithin and TikTok @alnwithin
El Paso's flight restrictions after the U.S. disables Mexican cartel drones that 'breached US airspace'. Why was the full stop originally supposed to last 10 days? A 13-year-old boy shouted "Allahu Akbar" during a stabbing rampage at a London school. A Democrat Rep asks Acting ICE Director Todd Lyons if he thinks he's going to Hell during a committee hearing. Florida gubernatorial candidate James Fishback posts a video on his porch holding an AR-15 and declaring he would shoot anyone threatening his staff, following an attempted arson at his home. Dana breaks down how the "Woke Reich," like Carrie Prejean Boller and Candace Owens, is hijacking the conservative movement. A trans shooter carried out one of the worst school shootings in Canadian history, and the police referred to him as a "gun-person" instead of a gunman. Congressman Randy Fine calls for an immediate investigation over the Bad Bunny Halftime Show. Disney loses $170 million on 'Snow White' as the studio reveals the movie blew its budget. Olympians Eileen Gu and Alysa Liu spark nationality debates at the Winter Olympics over tensions around athletes' heritage and national choices with the CCP. Actor James Van Der Beek has died at the age of 47.
Thank you for supporting our sponsors that make The Dana Show possible…
Noble Gold: https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.
Relief Factor: https://ReliefFactor.com OR CALL 1-800-4-RELIEF
Try Relief Factor's 3-week Quickstart for just $19.95—tell them Dana sent you and see if you can be next to control your pain!
Patriot Mobile: https://PatriotMobile.com/DANA or call 972-PATRIOT
Switch to Patriot Mobile in minutes—keep your number and phone or upgrade, then take a stand today with promo code DANA for a free month of service!
Humann: https://HumanN.com
Get simple, delicious wellness support when you pick up Humann's Turmeric Chews at Sam's Club next time you're there and see why they're such a fan favorite!
Byrna: https://Byrna.com/Dana
Make 2026 the year you protect your family with solid options—get the Byrna today.
Webroot: https://Webroot.com/Dana
Take your cybersecurity seriously! Get 60% off Webroot Total Protection at Webroot.
Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: YouTube | Facebook | Instagram | X | More Info | Website
GOP Congressman Randy Fine calls for an immediate investigation over the Bad Bunny Halftime Show's lyrics and twerking. Meanwhile, Disney loses $170 million on 'Snow White' as the studio reveals the movie blew its budget.
Noble Gold: https://NobleGoldInvestments.com/Dana
This is the year to create a more stable financial future. Open a qualified account with Noble Gold and receive a 3 oz Silver Virtue coin free.
Subscribe today and stay in the loop on all things news with The Dana Show. Follow us here for more daily clips, updates, and commentary: YouTube | Facebook | Instagram | X | More Info | Website
About this episode: Last month's abrupt cancellation and reinstatement of $2 billion in grants is just the most recent ordeal in SAMHSA's long year of funding cuts and administrative upheaval. In this episode: Dr. Yngvild Olsen, formerly the director of the Center for Substance Abuse Treatment at SAMHSA, chronicles the challenges facing the agency and their possible implications for efforts to reduce opioid overdose deaths and improve mental health outcomes. Guests: Dr. Yngvild Olsen, MPH, is a nationally recognized leader in addiction medicine, public health policy, and clinical care integration. She currently serves as a national advisor with Manatt Health. Host: Lindsay Smith Rogers, MA, is the producer of the Public Health On Call podcast, an editor for Expert Insights, and the director of content strategy for the Johns Hopkins Bloomberg School of Public Health. Show links and related content: 24 hours of chaos as mental health grants are slashed then restored—NPR SAMHSA Strategic Priorities—SAMHSA Progress on overdose deaths could be jeopardized by federal cuts, critics say—Stateline Transcript information: Looking for episode transcripts? Open our podcast on the Apple Podcasts app (desktop or mobile) or the Spotify mobile app to access an auto-generated transcript of any episode. Closed captioning is also available for every episode on our YouTube channel. Contact us: Have a question about something you heard? Looking for a transcript? Want to suggest a topic or guest? Contact us via email or visit our website. Follow us: @PublicHealthPod on Bluesky @PublicHealthPod on Instagram @JohnsHopkinsSPH on Facebook @PublicHealthOnCall on YouTube Here's our RSS feed Note: These podcasts are a conversation between the participants, and do not represent the position of Johns Hopkins University.
God's grace will either make you angry or it will make you worship. Pastor Colin explains why no one remains neutral when they finally understand what “grace” is. That's next time on Open the Bible.
In this powerful conversation, world-renowned astrologer Lynn Bell breaks down the major astrology of 2026 — from the rare Mars–Venus–Sun conjunction to the Saturn–Neptune era shift and the deeper, slower timeline of real transformation. We explore why this moment feels intensely personal, how the feminine (Venus) becomes the key stabilizing force, and why nervous system regulation, embodiment, and conscious choice matter more than ever. This episode is an essential guide for navigating the push–pull between the old world dissolving and the new one still forming. ARRIVE — The Free 3-Day ReWilding Challenge A rare, live global immersion held inside the Fire Horse Solar Eclipse and Zero-Degree Aries creation window — a moment that doesn't repeat. 3 days. Live. Free. Global. Open to all. → Sign up here The Path of the Priest/ess In-Person Retreat This is our only in-person Priestess Training offered this year — a 5-day advanced retreat in Ibiza, Spain (22–26 April 2026), limited to 24 participants and available by application only. Early Bird Pricing available through March 1st, 2026. → Details & application here Ways to work with Lynn Bell: Website The Cosmic Speed of Change. Lynn Bell & Caroline Myss The Dark Side of Venus Webinar Omega Institute Wounds and Remedies Workshop Greece Retreat Listen to the “You Can Feel It: The Era Is Changing (with Lynn Bell)” podcast here… Topics Explored in the “You Can Feel It: The Era Is Changing (with Lynn Bell)” podcast (times based on the audio version): (0:00) Lynn Bell on 2026 Astrology: The Era Shift Is Here (But Not Overnight) (5:32) Permanent Imbalance + Nervous System Language: Why It Feels So Fast (6:55) Venus Enters Pisces + Saturn Into Aries (Valentine's Day): Sweetness Meets Reality (10:21) Mars–Venus–Sun Triple Conjunction: Personal Awakening + The New vs Old Push–Pull (13:20) Mercury Retrograde in Pisces + Mercury–Mars Encounters: Resistance to the “New World” (15:23) Where to Look in Your Birth Chart: 0° Aries + The Pisces House Behind It (17:01) Venus Starpoint (Arielle Guttman) + Mars Dominance: Why Venus “Can't Just Be Herself” (20:47) Feb 17 Aquarius Eclipse: Venus Conjunct North Node (Feminine as the North Star) (23:14) Equinox Portal: Venus Stations Direct on 8° Pisces (The “Stepping Stones” Year) (25:25) Venus Embodiment: Noticing the New & Preparing for Key Portals (30:26) Epstein Files + Saturn in Aries: Ownership, Power, and the Feminine Response (35:16) Saturn Myth + The “Second Womb”: Hestia/Vesta Swallowed & the Priestess Fire (40:04) Sedna + Eris + Chiron: Trickster Patterns, Wounding, and Warrior Truth (42:39) Saturn–Neptune in Aries: Slavery vs Autonomy + The “Collective Hypnosis” Theme (45:37) Choosing Magic in 2026: Venus as the Only Force That Can Disarm Mars (57:08) Second Half of 2026: Extreme Fire, Chiron Into Taurus (June), Node Shift (Leo/Aquarius) (1:01:05) Pluto in Aquarius Revelations + Closing Invites (Feb 24 Classes, Greece, Ibiza, Vesta) You can leave a comment or question for Sabrina on the YouTube version of this episode. Listen to these after “You Can Feel It: The Era Is Changing (with Lynn Bell)”: 2026 is a Turning Point (the episode that made Sabrina sick) Watch Part 1 — “Are You in the First Wave?” STAY CONNECTED ReWilding Weekly (free, embodied astrology) IG Website Disclaimer: Educational/spiritual perspectives; not medical/mental-health advice. #2025Shift #NewHuman #SpiritualAwakening Welcome to ReWilding with Sabrina Lynn & ReWilding for Women!
A gifted facilitator of revolutionary inner work and the world's leading archetypal embodiment expert, Sabrina Lynn is the creator of the groundbreaking ReWilding Way and founder of ReWilding For Women. Sabrina has led more than 100,000 people through programs based on the ReWilding Way, a modality of healing and awakening that strips away the false, the deep wounds from early life, and the fears that hold people back, to reveal their true and unique soul light and help them build their innate capacity to shine it in the world. Her work includes in-person retreats and events, the monthly ReWilding Membership, Living Close to the Bone, Priest/ess Trainings, Mystery Schools, ReWilding with the Archetypes, and the wildly popular 6 Faces of the Feminine workshop series. Welcome to ReWilding!
If you're already giving to charity, you may be leaving thousands of dollars in tax deductions on the table. What is a donor-advised fund, and why should you care? Mike sits down with Adam Nash, CEO of Daffy, to break down how Donor-Advised Funds (DAFs) work and why they can be a powerful tax strategy for business owners and high-income earners. If you regularly give to your church, your kids' school, your alma mater, or other charities, this episode shows you how to give more strategically, reduce taxes, and increase your impact.
Experience behavior change breakthroughs, neural shifts, lasting results—discover a counterintuitive method to finally break old habits and ignite new ones. In this episode, guest psychotherapist Allyson Maida reveals how rewiring your brain can create real, lasting change in health, relationships, and more. If you've struggled to stick with resolutions or wondered why change feels so hard, you won't want to miss this transformative conversation! LET'S TALK THE WALK! Join here for support, motivation and fun! Wellness While Walking Facebook page Walking to Wellness Together Facebook GROUP Wellness While Walking on Instagram Wellness While Walking on Threads Wellness While Walking on Twitter Wellness While Walking website for show notes and other information wellnesswhilewalking@gmail.com RESOURCES AND SOURCES (some links may be affiliate links) Allyson Maida, Ed.D., LCSW 180: A Counterintuitive Method for Long-Lasting Personal and Professional Change Subscribe to Allyson's blog Whole Life Workshop with Coach Carolyn Get on the waitlist for our next virtual workshop by emailing wellnesswhilewalking@gmail.com HOW TO RATE AND REVIEW WELLNESS WHILE WALKING How to Leave a Review on Apple Podcasts on Your iOS Device
1. Open the Apple Podcasts app (purple app icon that says Podcasts).
2. Go to the icons at the bottom of the screen and choose "Search."
3. Search for "Wellness While Walking."
4. Click on the SHOW, not the episode.
5. Scroll all the way down to the "Ratings and Reviews" section.
6. Click on "Write a Review" (if you don't see that option, click on "See All" first).
7. Then you will be able to rate the show on a five-star scale (5 is the highest rating) and write a review!
8. Thank you! I so appreciate this!
How to Leave a Review on Apple Podcasts on a Computer
1. Visit the Wellness While Walking page on Apple Podcasts in your web browser (search for Apple Podcasts or click here): https://www.apple.com/apple-podcasts/
2. Click on "Listen on Apple Podcasts" or "Open the App."
3. This will open Apple Podcasts; type "Wellness While Walking" in the search bar at top left.
4. This should bring you to the show, not a particular episode; click on the show's artwork.
5. Scroll down until you see "Ratings and Reviews."
6. Click on "See All," all the way to the right, near the Ratings and Reviews section and its bar chart.
7. To leave a written review, please click on "Write a Review."
8. You'll be able to leave a review, along with a title for it, plus you'll be able to rate the show on the 5-star scale (with 5 being the highest rating).
9. Thank you so very much!!
OTHER APPS WHERE RATINGS OR REVIEWS ARE POSSIBLE Spotify Goodpods Overcast (if you star certain episodes, or every one, that will help others find the show) Castbox Podcast Addict Podchaser Podbean HOW TO SHARE WELLNESS WHILE WALKING Tell a friend or family member about Wellness While Walking, maybe while you're walking together or lamenting not feeling 100%. Follow up with a quick text with more info, as noted below! (My favorite is pod.link/walking because it works with all the apps!) Screenshot a favorite episode playing on your phone and share it to social media or to a friend via text or email!
Wellness While Walking on Apple – click the up arrow to share with a friend via text or email, or share to social media Wellness While Walking on Spotify -- click the up arrow to share with a friend via text or email, or share to social media Use this universal link for any podcast app: pod.link/walking – give it to friends or share on social media Tell your pal about the Wellness While Walking website Thanks for listening and now for sharing! : ) DISCLAIMER Neither I nor many of my podcast guests are doctors or healthcare professionals of any kind, and nothing on this podcast or associated content should be considered medical advice. The information provided by Wellness While Walking Podcast and associated material, by Whole Life Workshop and by Bermuda Road Wellness LLC is for informational and entertainment purposes only. It is not intended to be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition or treatment, and before undertaking a new health care regimen, including walking. Thanks for listening to Wellness While Walking, a walking podcast and a "best podcast for walking"!
Will Compton and Taylor Lewan are joined by NFL legends Clay Matthews and Delanie Walker to break down Super Bowl LX between the Seattle Seahawks and New England Patriots. The boys dive into Sam Darnold winning his first Super Bowl and just how dominant Seattle looked all season long. They also discuss Kenneth Walker III taking home Super Bowl MVP and what his huge performance could mean as he heads toward free agency and a potential payday. In our Jim Beam Taste of the Offseason segment, the crew looks ahead to the future of the New England Patriots, pouring one out for our guy Mike Vrabel while recognizing how much this team overachieved this year. With Drake Maye at quarterback, New England’s future is still bright. The guys also recap Bad Bunny’s Super Bowl halftime show, while Delanie and Clay share what it’s really like playing in a Super Bowl and how the week leading up to the game compares to a normal NFL game week. It’s the final football recap show of the season, and the boys bring the juice. Tap in. Timestamp Chapters: 0:00 Open 1:35 Super Bowl Recap 8:30 Nerves Of The Super Bowl 15:40 What Happened At Halftime During Delanie’s Super Bowl 18:25 Patriots D Balled Out 20:00 Patriots O Was Basic 23:00 Seahawks Roster 27:00 Halftime Show 45:00 San Francisco Was Kind Of Clean 49:00 Alcatraz 55:27 JIM BEAM TASTE OF THE OFFSEASON 59:12 Titans Outlook 1:00:22 Packers Outlook 1:04:12 Post Season Awards See omnystudio.com/listener for privacy information.