Podcasts about Engine

A machine that converts one form of energy into mechanical energy.

  • 5,773 PODCASTS
  • 11,390 EPISODES
  • 39m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Feb 28, 2026 LATEST
[Popularity chart for "Engine", 2019–2026]



Latest podcast episodes about Engine

The Extramilest Podcast
#121: How to Build an Unstoppable Aerobic Engine (Even If Starting From Zero) | Scott Johnston

The Extramilest Podcast

Feb 28, 2026 · 73:26


Thanks to LMNT for sponsoring this video. Get a free sample pack with any purchase at https://DrinkLMNT.com/FLO

This is one of my favorite podcast episodes I've ever recorded. Scott Johnston is an established coach and co-author of the book Training for the Uphill Athlete. We dive into aerobic development, thresholds, Zone 2 training, and how to become a stronger, healthier and happier athlete. Watch the full video on YouTube: https://youtu.be/vAqQPDXtq5k

Chapters:
0:00 – The aerobic foundation every endurance athlete needs
1:30 – LMNT sponsorship
2:40 – Introducing Scott & Training for the Uphill Athlete
6:38 – Why aerobic capacity matters most
11:49 – How long it takes to build an aerobic engine
14:31 – Aerobic vs anaerobic thresholds
18:38 – Aerobically developed vs deficient athletes
25:08 – Scott's high-level advice for aerobic development
29:47 – How to test aerobic and anaerobic thresholds
36:33 – The second threshold test: duration and intensity
38:50 – Training stress, recovery, and stagnation
43:00 – Injury risk and ramping volume too fast
44:48 – Advice Scott would give his younger self
54:02 – Fast vs slow twitch athletes
59:30 – Threshold workouts and training smarter
1:06:16 – When double threshold workouts make sense
1:07:24 – Where to find Scott
1:09:46 – How to be a healthier, stronger, happier athlete

FIND SCOTT JOHNSTON
► Website: https://evokeendurance.com/
► Instagram: https://www.instagram.com/coach_scott_johnston/
► Evoke Instagram: https://www.instagram.com/evokeendurance/
► Reddit: https://www.reddit.com/r/evokeendurance/
► Podcast: https://podcasts.apple.com/us/podcast/evokecast/id1652132598
► Spotify: https://open.spotify.com/show/5hEgBO6OHjnya5S6CWL024

LINKS & TOOLS MENTIONED
► Training for the New Alpinism Book: https://a.co/d/056KjXT1
► Training for the Uphill Athlete Book: https://a.co/d/00NXdpdR
► Jeff Pelletier Badwater Movie: https://youtu.be/S774m29AYr4
► My new book Running Breakthroughs: https://florisgierman.com/
► More options to buy the book worldwide on Amazon: https://geni.us/running-breakthroughs
► Path Projects: https://pathprojects.com/
► Extramilest Episode with Mark Allen: https://youtu.be/YlBUJTIggyA

YOU CAN FIND ME, FLORIS GIERMAN HERE:
► My Personal Best Running Coaching Program: https://www.pbprogram.com/
► Podcast: https://extramilest.com/podcast/
► Strava: https://www.strava.com/athletes/1329785
► Instagram: https://www.instagram.com/florisgierman/
► Buy my new book Running Breakthroughs: https://florisgierman.com/
► Extramilest Website: https://extramilest.com
► Path Projects Website: https://pathprojects.com

Affiliate Disclosure: I may earn commissions if you purchase items via my affiliate links. "As an affiliate I earn from qualifying purchases." Affiliate links do not increase cost to you, and you do not need to use them; you can also search for the same items on Amazon or any search engine/shopping site of your choice and buy/research them that way.

ABOUT THE EXTRAMILEST SHOW: A podcast and YouTube channel where host Floris Gierman interviews world-class athletes, coaches and health experts on the topic of how to become a stronger, healthier and happier athlete. More info about our running coaching program can be found at https://www.pbprogram.com

Subscribe and hit the bell to see new videos: https://bit.ly/Flo-YT

Live Wire with Luke Burbank
Scaachi Koul and Emma Ruth Rundle

Live Wire with Luke Burbank

Feb 27, 2026 · 52:28


Slate writer Scaachi Koul unpacks her latest book of essays Sucker Punch, in which she delves into her unexpected birth, the dissolution of her marriage, and how her friends have come to know her as "the divorce doula." Multidisciplinary artist Emma Ruth Rundle explains how she crafted her debut poetry collection The Bella Vista – which touches on love lost, addiction, and discovering oneself – while traveling on tour, then performs “Blooms of Oblivion” from her album Engine of Hell. 

Search Buzz Video Roundup
Search News Buzz Video Recap: Google Discover Core Update Done, Search Volatility, Search Serving Bug, AI Prompt Injection, Google Ads, Local & Bing

Search Buzz Video Roundup

Feb 27, 2026


This week, we covered the completion of the Google Discover core update and gave a status update on the Google Search volatility. Google had a brief serving issue with Google Search. Google is testing showing vertical...

Worship Online Podcast
You Can't Change the World Without This One Thing w/ Michael Bethany

Worship Online Podcast

Feb 26, 2026 · 28:56


Michael Bethany was leading worship, building wells in Sri Lanka, writing songs, and serving the church—but something was missing. Then in 2017, he watched a Francis Chan YouTube video about Isaiah 6. And everything changed. That night, he felt guilty praying from his bed. He got on his knees, humbled himself, and started waking up 1.5 hours earlier for a daily prayer closet—even though he "didn't know how to pray that long." What happened next reshaped his entire ministry.

In this powerful conversation, Michael shares:
✅ Why intimacy with God is the ENGINE for mission (not the other way around)
✅ The teenage moment when he spontaneously sang "Holy" and everything shifted
✅ How getting tricked into a Sri Lanka missions trip changed his relationship with music
✅ Why "you don't have enough energy to change the world without intimacy with the Lord"
✅ The #1 reason worship leaders get fired (hint: it's not musical ability)

If you're running on fumes, trying to lead well while feeling distant from God—this episode will recalibrate everything. Listen now and rediscover the secret place.

Mentioned in the Episode: Overflow Conference, Life in the Wild

Worship Online is your new secret weapon for preparing each week. With detailed song tutorials and resources, you and your team will save hours every single week, and remove the stress from preparing for a set. Try a free trial at WorshipOnline.com and see the transformation! If you like what you hear, please leave us a review! Also, shoot us an e-mail at podcast@worshiponline.com. We want to know how we can better serve you and your church through this podcast. Don't forget to sign up for your FREE 2-week subscription to Worship Online at WorshipOnline.com! The Worship Online Podcast is produced by Worship Online in Nashville, TN.

Lets Have This Conversation
Human Engagement is the Engine of Business Performance with Stephen Baer

Lets Have This Conversation

Feb 26, 2026 · 54:21


Impact of feedback: when employees believe their feedback is actually used to make improvements, they are 37% less likely to look for a new job (Pew Research Center). On average, engaged employees see a 20% individual performance improvement and an 87% reduction in the desire to leave. A 2024 research survey with The Harris Poll found that managers play a critical role in moving employees from burned out and checked out to thriving; for employees who say they are thriving, the top indicator is a manager who is "invested in their success." Employee thriving is driven by three key drivers.

Stephen Baer is the Co-Founder and Managing Partner of Engagency, a firm built on his core belief that human engagement is the engine of business performance. He leads a team of behavioral experts who help organizations build meaningful, measurable connections with their workforce and customers. With a 30-year career focused on the science of connection, motivation, and activation, Stephen brings a rare blend of behavioral insight, creativity, and operational discipline. He previously co-founded and led The Game Agency, a learning and engagement company acquired by ELB Learning, and held sales and marketing leadership roles at Atari and General Electric, where he was a certified Six Sigma Black Belt and a recipient of GE's Global Marketing Excellence Award. Stephen has served on the Board of ELB Learning and the Advisory Board of the Life Sciences Trainers & Educators Network (LTEN), and was a contributing writer for the Forbes Human Resources Council for six years, sharing insights on engagement and organizational growth. The author of the book "Stickology: How to Build Unbreakable Connections with Employees and Customers for Life" and two children's books (Catastrophe in the City and The Doghouse), Stephen holds a BA from Oberlin College and an MBA from Columbia University.

For more information: https://stephenbaer.com/
Get the book: https://www.amazon.ca/Stickology-Unbreakable-Connections-Employees-Customers/dp/9699592532
Learn more about your ad choices. Visit megaphone.fm/adchoices

Leading The Way with Dr Michael Youssef
When the Dashboard Says ‘Engine Trouble' - 26 February 2026

Leading The Way with Dr Michael Youssef

Feb 26, 2026 · 22:44


Listen to the next LEADING THE WAY when Dr. Michael Youssef offers FREEDOM FROM FEAR: biblical teaching that will distance you from the bondage that life brings when we allow fear to overtake us. Join him for LEADING THE WAY! (Matthew 8) Support the show: https://au.ltw.org/ See omnystudio.com/listener for privacy information.

Content Amplified
How Do You Build a Content Engine That Actually Drives Revenue?

Content Amplified

Feb 26, 2026 · 13:37


Most teams treat content like a marketing task. Pritesh Vora treats it like a revenue engine.

In this episode, Pritesh breaks down a refreshingly disciplined approach to building a content engine that connects strategy, quality, and real business outcomes. Drawing from his experience scaling B2B companies 8x and 30x—and building revenue systems from the ground up—he shares how content can become the central fabric across marketing, sales, product, and customer success. His core belief is simple but demanding: content must make a real human's life better. If it doesn't, it won't compound. We explore how to plan quarterly content with clarity, how to tie every content promise to a revenue moment, and why defining what you won't do might be the most strategic move your team makes. If your content calendar feels busy but disconnected from results, this conversation will help you rebuild it with intent.

What you'll learn in this episode:
• Why content should function as a company-wide revenue lever—not just a marketing channel
• The concept of a “moral compass” for content—and how to use it to guide decisions
• How to define reader-centered promises instead of chasing keywords
• How to connect each content theme to a specific revenue moment (pipeline, win rate, acceleration)
• Why audience growth and reader gratitude are powerful quality signals
• The practical value of creating a “no-go list” for your quarterly plan
• How to prevent content teams from becoming reactive request machines
• How alignment documents and clear agreements protect focus

About Pritesh Vora: Pritesh Vora is a B2B growth leader and self-described revenue engine builder. With over 13 years in the startup ecosystem, he has led business development, growth, and marketing teams while helping companies scale 8x and 30x across multiple stints. An engineer by training, Pritesh transitioned into growth and revenue leadership, eventually founding and exiting his own company. Most recently, he helped scale Sprinto from early-stage traction to thousands of customers, growing revenue from under $300K to multi-million dollars while building a 50+ person cross-functional team spanning marketing, BDR, partnerships, and operations. His approach blends systems thinking with strategic clarity—tying content directly to revenue outcomes while keeping the reader at the center.

Connect with Pritesh: Pritesh's LinkedIn Profile

If you're building content but want it to move pipeline, close deals, and strengthen positioning, this episode offers a structured way to think—and execute.

Text us what you think about this episode!

Keen On Democracy
Stuck, Stuck, Stuck, Stuck: Maya Kornberg on Congress as a Four-Alarm Fire

Keen On Democracy

Feb 25, 2026 · 41:32


"The House hasn't reorganized committee jurisdictions since the early 70s—before the internet existed." — Maya KornbergAmerica is stuck stuck stuck stuck. Almost exactly a year ago, I interviewed the Atlantic's Yoni Applebaum about Stuck, his influential critique of the housing crisis. Now we have another Stuck—this one by Maya Kornberg, a senior fellow at the Brennan Center for Justice. Only her subtitle is about Congress, not housing: How Money, Media, and Violence Prevent Change in Congress.This is, Kornberg argues, one of the toughest times in modern American history to sit in Congress. Members are forced to spend most of their time making fundraising calls. They face record-high threats against themselves and their families. And the media incentivizes spectacle over policymaking—what she describes as "Kings and Prophets"—where members have the power of the megaphone but not the power to drive legislation.One fact captures Congressional stuckness: The House hasn't reorganized its committee jurisdictions since the early 1970s—before the internet existed. Half the Senate, then, questioned Mark Zuckerberg because no single committee is responsible for tech. Not even mad libertarians like Elon Musk could make that one up.Kornberg recently ran for New York City Council in Park Slope and, as a friend of Israel, discovered firsthand how media latches onto the most salacious angle. That said, she's not giving up on Congress. Kornberg is hopeful that a fresh wave of reformers, like the Watergate babies of '74 or the class of 2018, can unstick it. But she is, nonetheless, clear-eyed about what we're facing: a four-alarm fire for our democracy. Five Takeaways●      This Is the Hardest Moment in Modern History to Be in Congress: Members face astronomical campaign costs, record-high threats and violence against themselves and their families, and a leadership-driven system that has stripped rank-and-file members of real power to drive legislation.●      Money, Media, and Violence Keep Congress Stuck: Members spend every mealtime making fundraising calls. They pay "dues" to the party just to get on good committees. Media incentivizes spectacle over policymaking. And threats against members have risen year after year.●      Congress Hasn't Reorganized Since Before the Internet: The House hasn't reorganized committee jurisdictions since the early 1970s. Half the Senate questions Mark Zuckerberg because no single committee is responsible for tech. When everyone's responsible, no one is.●      More Chairmen Named Mike Than Women Committee Leaders: The pay-to-play system in Congress disadvantages women, communities of color, working-class Americans, and young Americans—anyone who faces greater barriers to fundraising faces greater barriers to power.●      Waves of Reformers Can Unstick Congress: The Watergate babies of '74, the Republican Revolution of '94, the class of 2018—frustrated reformers have reshaped Congress before. The midterms could bring another wave, if the public frustration is deep enough. About the GuestMaya Kornberg is a senior fellow at the Brennan Center for Justice. She holds a PhD from Oxford and is the author of Inside Congressional Committees. 
She recently ran for New York City Council in Brooklyn's Park Slope.ReferencesBooks mentioned:●      Stuck: How Money, Media, and Violence Prevent Change in Congress by Maya Kornberg — her new book on why Congress is stuck and how to unstick it.●      Stuck: How the Privileged and the Propertied Broke the Engine of American Opportunity by Yoni Applebaum — on the housing crisis, interviewed on this show a year ago.●      Why Nothing Works by Marc Dunkelman — on who killed progress and how to bring it back.People mentioned:●      Henry Waxman served four decades in Congress and passed landmark health and environmental legislation even under Reagan.●      Lauren Underwood came to Congress in 2018 and co-founded the Black Maternal Health Caucus after losing a friend who died after childbirth.●      Hélène Landemore is a Yale political theorist who advocates for citizen assemblies as an alternative to representative democracy.About Keen On AmericaNobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.WebsiteSubstackYouTubeApple PodcastsSpotify Chapters:(00:00) - Introduction: America is stuck (02:04) - Why everyone woke up to this problem at once (03:49) - Why study Congress? Is it boring? (06:33) - Money, media, and violence (07:11) - Congressional chameleons: Waxman, Underwood, Andy Kim (10:24) - Is this bipartisan? (12:37) - The crummiest job in Washington (15:53) - Money: 'I spend every mealtime making fundraising calls' (17:29) - Should Congress get a pay raise? (19:53) - Media and the Gaza third rail (23:14) - Kings and Prophets: Spectacle over policy (25:32) - Can Congress stand up to Trump? (27:43) - Congress is woefully unprepared to regulate tech (31:54) - Gerontocracy: More Mikes than women (37:34) - Can citiz...

UnX News Podcast with Margie Kay
Un-X News - Serial Killer Victim Sites with Paul Dale Roberts!

UnX News Podcast with Margie Kay

Feb 25, 2026 · 57:32 · Transcription Available


Paul Dale Roberts personally investigated several serial killer victim sites. Watch or listen to find out what he discovered!

Roberts was born on January 17, 1955 in Fresno, California. He has an Associate Degree in Criminology. In 1977, Roberts was a firefighter with the California Division of Forestry for one year; firefighting was not his cup of tea.

Military career: From 1973 to 1976, Roberts served with the US Army's D.S.T. (Drug Suppression Team) C.I.D. (Criminal Investigation Division) in Germany, working undercover narcotics. From 1979 to 1986, Roberts served in US Army Military Intelligence, working at PIC-K (Photo Interpretation Center, Korea). Roberts held a Top Secret S.B.I. (Special Background Investigation) clearance as an Intelligence Analyst, later receiving an H-Identifier with OPFOR (Opposing Forces), where he wore a Soviet uniform and ski mask and trained elite troops (US Army Special Forces, 101st Airborne, Air Force Special Operations, Delta Force, 82nd Airborne, Marine Recon) on the Soviet threat and on W.E.F.T. (Wings, Engine, Fuselage, Tail section) identification of Soviet aircraft.

In 2004, Roberts became a paranormal investigator; with 750 investigations under his belt and 750 paranormal articles written, he has appeared in documentaries ranging from 3 episodes of My Ghost Story (Biography Channel) to History Channel's Monsterquest (Mothman episode), Conversations of a Serial Killer by Two Four Productions, Showtime's Penn & Teller Bullshxt (Mayan Prophesy of 2012), Mysteries of Angels and Demons by Ives Street Entertainment, and Michael Jackson: You Are Not Alone/In Search of His Spirit. Roberts is also a Fortean investigator who investigates ALL things paranormal, from Mothman, Chupacabra, UFOs, Crop Circles, Ghosts, Poltergeists, Demons and more. Roberts is the owner of HPI (Hegelianism Paranormal Intelligence - International). Significant investigations by HPI include the Skinwalker Ranch in Utah, looking for Natalee Holloway's ghost in Aruba, UFOs and Bigfoot at Mount Shasta, UFOs and USOs at Monterey Bay, Area 51, and Guatemala City, Guatemala.

Writing career: Roberts writes community stories and is a former columnist for the Sacramento Press and Haunted Times Magazine, and has written small blurbs for Newsweek, Time, National Geographic Traveler, and People Magazine. Roberts is a former columnist for Vamperotica (www.vamperatica.com/Brainstorm Comics), Writer's Digest, WebBound, and Just Comics and More by Genesis Publications. Roberts now writes for online magazines such as Chatterbrew Magazine (www.chatterbrew.com), Lorena's Angels (http://www.lorenasangels.com/), and Ceri Clark's All Destiny Magazine. Roberts was recently picked up by Paranormal Magazine UK and works for the online national news site Before It's News. Roberts's articles are featured in legendary Brad Steiger's and Timothy Green Beckley's books. Roberts has now published 4 books in the HPI Chronicles series with Lulu.

Join the X to get our newsletter, show listings, and more perks! www.unxnetwork.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/unx-news-podcast-with-margie-kay--5231151/support
Un-X News is broadcast on the UnXplained Network weekly. Check out all of our great shows on Spreaker! Join the X at www.unxnetwork.com to get our newsletter and more perks!
The X offers more - On-Demand workshops on a variety of subjects, a bi-monthly magazine, our news blog, and the X Club group. Join the X family!

CanadianSME Small Business Podcast
The Visibility Engine: Why One-Time Valuations Don't Work

CanadianSME Small Business Podcast

Feb 25, 2026 · 14:33


Welcome to the CanadianSME Small Business Podcast, hosted by Kripa Anand. In this episode, we examine why an entrepreneur's largest asset, their business, is often overlooked in financial planning until it is too late to influence the outcome. Our guest is Trevor Greenway, Co-Founder and CEO of interVal. With a decade of experience in M&A, Trevor shares how turning financial data into a real-time visibility engine helps advisors and owners understand value, manage risk, and make smarter decisions long before an exit.

Key Highlights:
• The Owner's Blind Spot: Why business value is often ignored, and the financial risks that creates.
• Beyond Static Valuations: Why one-time reports fail to guide meaningful strategic decisions.
• Redefining Advisor Relationships: How continuous valuation data strengthens trust and long-term planning.
• Technology with Purpose: Where software enhances advisory insight without replacing human judgment.
• Actionable Risk Intelligence: How dashboards deliver timely, differentiated insights for better outcomes.

Special Thanks to Our Partners:
UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
Google: https://www.google.ca/
A1 Global College: https://a1globalcollege.ca/
ADP Canada: https://www.adp.ca/en.aspx

For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!

Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.

The John Batchelor Show
S8 Ep506: General Blaine Holt analyzes China's J-35, noting it uses stolen F-35 designs but suffers from engine unreliability and systemic corruption within Chinese military procurement systems.

The John Batchelor Show

Feb 24, 2026 · 7:17


General Blaine Holt analyzes China's J-35, noting it uses stolen F-35 designs but suffers from engine unreliability and systemic corruption within Chinese military procurement systems.

Heal The Hurt
The Shame Engine Under Your Success

Heal The Hurt

Feb 24, 2026 · 14:10


The Michael Jordan Tragedy (The High Achiever's Curse: Part 3)

That voice in your head that says, "You're slipping," or "It's not enough"—that is not your motivation. That is your abuser.

Welcome to Part 3 of "The High Achiever's Curse: Healing The Void." In this lesson, we are dismantling the "Shame Engine." You've been told that you need to be hard on yourself to achieve greatness, but the story of Michael Jordan proves otherwise. In this lesson, you will learn why shame didn't create your talent—it hijacked it. We are going to separate who you really are from the abusive inner coach that has been driving your success at the cost of your happiness.

IN THIS EPISODE:
1. The "Michael Jordan" Trap: Why using shame as fuel eventually destroys you.
2. The difference between "Healthy Drive" (Expansive) and "Shame Drive" (Contracted).
3. Why sarcasm is actually "veiled anger" and violence against the self.

Ship Full of Bombs
Junkshop Jukebox #134: A Scattershot Selection of Songs & Tunes to Tickle, Tantalise and Tease Your Eardrums for a Feverish February (24/02/2026)

Ship Full of Bombs

Feb 24, 2026 · 121:34


Intro: One More Night – Can
Saturday Night Special – The Sundown Playboys (2:10)
Two Step de Prairie Soileau – Savoy-Smith Cajun Band (4:42)
Slow Down – Larry Williams, with his Band (2:43)
I Can Only Give You Everything – Them (2:39)
Perversion – Stereolab (4:59)
Shifting Sands – West Coast Pop Art Experimental Band (3:53)
Engine 54 – The Ethiopians (2:37)
In the Rain – Keith Hudson (IInd Street Dreads) (3:14)
Toc – Tom Zé (2:59)
Syren – Syrinx (6:00)
Dark Star – The Grateful Dead (2:41)
Malaguena – Snooks Eaglin (3:39)
The Morning After – Hank Mobley (9:38)
Night Before – The Mar-Keys (2:08)
The Old Man's Back Again – Scott Walker (3:40)
Srinivas – Marc Ribot, with Steve Earle (6:09)
Youth Against Fascism – Sonic Youth (3:38)
Bella Ciao – Marc Ribot, with Tom Waits (3:36)
The Ballad of the Fallen – Charlie Haden & Carla Bley (4:19)
You Fascists Bound To Lose – Resistance Revival Chorus, with Rhiannon Giddens (3:21)
Power Show! – Fela Anikulapo Kuti & Egypt 80 (14:49)
Goin' Down South – Bobby Hutcherson (7:05)
Egon and Gertie – Rachel's (3:02)
Cavatina (V) from String Quartet No.13 Op.130 – Beethoven, Amadeus-Quartett (6:32)
Outro: Pogles Walk – Vernon Elliott Ensemble

The Food Institute Podcast
Clean Data, Clear Direction: Building a Foodservice Marketing Engine for Sales

The Food Institute Podcast

Feb 24, 2026 · 38:25


This episode is sponsored by Tibersoft.

Foodservice manufacturers might develop option paralysis with all the data available today, but what kind of focus can really help drive marketing returns? Suzanne Cwik of Tibersoft and Eric Anderson of Conagra help break down data best practices to develop a foodservice marketing engine for food-away-from-home manufacturers.

About Suzanne Cwik: Suzanne Cwik is the Vice President of Commercial & Client Services at Tibersoft. With over two decades of foodservice experience, Suzanne understands the friction between data complexity and sales execution. She is passionate about helping organizations move from reactive reporting to proactive strategy, empowering teams to transform fragmented supply chain data into clear, actionable growth plans.

About Tibersoft: Tibersoft delivers trusted go-to-market intelligence for food and packaging manufacturers navigating the complexity of Food Away From Home. Our platform empowers Sales, Finance, Marketing, and IT to act faster, recover trade spend, and grow smarter. By bringing transaction-verified accuracy and clarity to operator-level performance, we align manufacturers with their partners, turning data complexity into shared confidence. To learn more, visit: tibersoft.com.

More about Eric Anderson: Eric Anderson is a Senior Director of Category Marketing at Conagra Foodservice, leading both the shelf-stable product portfolio and the marketing activation team. With 30+ years of foodservice experience—from marketing pizza in K-12 to leading category strategy today—Eric believes data is most powerful when it answers specific questions and supports clear, compelling storytelling rather than chasing perfection. Known for his practical, operator-grounded mindset, he enjoys helping teams translate insights into action while fostering a culture of learning and continuous improvement.

More about Conagra Foodservice: Conagra Foodservice is an innovative, leading supplier to the foodservice industry, offering a broad range of trusted brands. Conagra brings a rich heritage of making great food to satisfy consumers' ever-changing food preferences. Operators have come to depend on brands such as Hebrew National®, Healthy Choice®, Angela Mia®, PAM®, Gilardi®, Slim Jim®, and Reddi-wip® to stay on-trend and provide the best products and service to their patrons. Learn more at: https://www.conagrafoodservice.com/.

The Beerists Craft Beer Podcast
697 - Red Engine Brewing Co

The Beerists Craft Beer Podcast

Feb 23, 2026 · 47:48


Red Engine Brewing Co is a new one to us, and we're trying 4 of their beers to get acquainted. Pairs with skate parks, moaning in Mexican, and Grant's podcast fans.

Beers: El Jefe, Probie, Java Bump, Big Red

Theme Music by Adrian Quesada of Black Pumas. End Credits Music: GITM by Heyson. Additional music licensed through Epidemic Sound. And we have shirts! Get them at the Hello Crawlers store!

The Beerists are John Rubio, Grant Davis, Pam Catoe, and Mark Raup. Subscribe on Apple Podcasts, Google Podcasts, or point your podcatcher to our RSS feed. You should also subscribe to our YouTube Channel. Support us by making a per-episode pledge at patreon.com/thebeerists and get some sweet rewards! Follow us on Twitter, Facebook, and Instagram. Want to send us beer? Check our beer donation guidelines, and then shoot us an email at info@thebeerists.com

Church Planter Podcast
CPP #629 – Larry Walkemeyer on Disciple Making as the Engine for Multiplication

Church Planter Podcast

Feb 23, 2026 · 35:58


In this powerful conversation, Peyton sits down with his longtime mentor and friend, Larry Walkemeyer, to unpack why disciple-making must come before church multiplication, and why so many movements stall when they skip that step. Drawing from themes in Discipology and Larry's forthcoming book The Making of a Multiplier, this episode explores the deep connection between time, teaching, and tactics — the three rhythms of Jesus' disciple-making strategy that ultimately led to explosive Kingdom impact.

Larry shares:
• Why the priesthood of all believers is the theological foundation of mobilization
• How relational disciple-making fuels true multiplication
• A powerful personal vision that reshaped his ministry philosophy
• Why you can't teach multiplication into existence
• The difference between a “lake mentality” and a “river mentality” church

You'll also hear stories of everyday believers who became disciple-makers simply because someone walked closely with them long enough for the fire to catch. If you're passionate about church planting, leadership development, or seeing movements multiply, this episode will challenge you to slow down, go deep, and mobilize before you multiply.

Resources and Links Mentioned in this Episode:
• DiscipologyBook.com
• Reliant Mission: reliant.org/cpp
• NewBreed Training

Thanks for listening to the Church Planter Podcast. We're here to help you go where no one else is going and do what no one else is doing to reach people no one else is reaching. Make sure to review and subscribe to the show on your favorite podcast service to help us connect with more church planters.

Lovecraft ASMR
Human.exe Demo | Retro Text‑Based Psychological Horror (Choices That Hurt)

Lovecraft ASMR

Feb 22, 2026 · 20:07


This is my quiet playthrough of the Human.exe demo — a retro, text-based psychological horror experience that feels like a modern cousin to Zork and Majestic. You're dropped into a cold terminal interface and asked to make decisions that seem simple… until they aren't. In my run, the moral weight of each yes/no choice really hit me — especially when the subject's fate took a turn I wasn't expecting. It's short, eerie, and surprisingly emotional for a terminal-style demo. If you enjoy cozy narration over unsettling games, you might like this one. This demo was originally played live on Twitch during a “let's try a few horror indies” stream.

Developer: @Weird.Engine
Genre: Text-based horror, psychological, retro terminal
Vibe: Horror-Zork, moral choices, eerie system messages

Tags: human.exe, human exe, human.exe demo, human exe demo, weird engine, weird engine games, weird engine human.exe, text based horror, text adventure horror, retro horror game, terminal horror game, psychological horror game, indie horror demo, horror zork, zork style horror, interactive fiction horror, cozy horror gameplay, asmr gaming, quiet gameplay, narrative horror game, moral choice horror, retro terminal game, indie game showcase, indie game playthrough, horror game demo, itch.io horror, atmospheric horror game, cozy narrator gameplay, soft spoken gameplay, asmr lets play, horror lets play, small indie horror, psychological text adventure, zork adventure

The John Batchelor Show
S8 Ep488: Bob Zimmerman reports Japanese private space startup ispace is struggling with severe engine development problems for its lunar landers, while archival images from New Horizons reveal Pluto's bizarre splotched surface and floating ice mountains

The John Batchelor Show

Feb 20, 2026 · 6:27


Bob Zimmerman reports Japanese private space startup ispace is struggling with severe engine development problems for its lunar landers, while archival images from New Horizons reveal Pluto's bizarre splotched surface and floating ice mountains, and a newly discovered dim galaxy hints at dark matter's vastness.

Deconstructor of Fun
UA Monthly #1: Meta's New Playbook, Reddit Ads, & the State of UA

Deconstructor of Fun

Feb 20, 2026 · 49:45


Meta's apparent comeback runs headfirst into shifting UA economics, rising creative costs, and new pressure from platforms like Reddit, forcing marketers to rethink what “working” actually means. We unpack whether Meta is truly back or just delivering short-term dopamine, why in-app ads could reshape ad-monetized LTV, and how CPMs, payback windows, and creative volume are redefining the hyper-casual and hybrid playbooks. Cihan and Josh join to break down the latest Appsflyer data, Reddit's Max campaigns, China's UA surge, and Liftoff's IPO, and to debate whether AI is leveling the field or quietly squeezing the middle out of mobile marketing.

Chapters:
00:00 Welcome to Deconstructor of Funds + UA Monthly kickoff
00:17 Meet the guests: Jihan (Scaling.Games) & Josh (Wildcard Games)
01:06 Today's agenda: Meta's return, Appsflyer report, Reddit AI ads, Liftoff IPO
01:29 Is “Meta back” real? The 2.5 Gamers breakdown & the dopamine-hit spike
02:52 What Meta's actually doing: rollout strategy, templates, and market impact
06:05 In-app ads explained: why Meta buying inventory could boost ad-monetized LTV
07:48 Ad quality debate: intrusive formats, churn-per-impression, and broken incentives
12:58 Can hyper-casual come back? CPMs, payback windows, and hybrid monetization
18:38 State of Game Marketing report: shrinking US spend, growth in Turkey/India
20:16 The creative arms race: AI variations, the ‘middle class' squeeze, and rising noise
23:25 AI Shrinks the Creative Gap: Small Teams Catch Up, Mid-Tier Stalls
24:41 China's UA Surge + iOS Outspending Android: Where the Scale Is Coming From
25:38 30 Creatives a Day: The New ‘Tax' of Competing in Mobile UA
26:07 Ripoffs, Ethics, and Beating the Filters: The Dark Side of Creative Volume
28:00 Hero Creatives Aren't Dead—But Copy Speed Forces Smarter Variations
30:16 Copying vs. Trends: When ‘Stealing' Is Real (and When It's Just the Market)
31:40 Is the Market Really an Iceberg? US Spend Down, Web Shops, and the ‘Hidden' Picture
34:27 Reddit ‘Max' Campaigns: Advantage+ for Reddit with a Promise of Transparency
37:19 Top Audience Personas: Useful Insight or Just a Fancy Dashboard?
39:34 How to Test Reddit Max: Onboarding Friction, Learning Periods, and Scalability Unknowns
40:58 Liftoff Files to Go Public: Valuation, Margins, Debt, and the AI Black-Box Race
45:44 What's Liftoff's Moat? Engine vs. Fuel, Data Advantages, and the AppLovin Comparison
48:35 Wrap-Up: UA Monthly Feedback, What to Cover Next

Search Buzz Video Roundup
Search News Buzz Video Recap: Google Volatility Heated All Week, Google Reviews Vanishing, AI Overview & AI Mode Links Updated, Google Ads News and more

Search Buzz Video Roundup

Feb 20, 2026


This week...

Spoken Word with Electronics
# 94-B: "Strange Engine" (Check out r/CharliePickle on Reddit)

Spoken Word with Electronics

Feb 20, 2026 · 13:53


Ahoy! Check out Charlie Pickle on reddit at: https://www.reddit.com/r/charliepickle — New posts twice a week and a growing community. More sound soon!

The Sorare Ramble
MFL Ramble ep51 - Tactics & the Engine with Diamond managers

The Sorare Ramble

Feb 20, 2026 · 87:54


The MFL Ramble is a podcast series brought to you by hosts David (SRMonkey), Hoodwink, Kev & Hatton. On this week's special episode Monkey & Kev were joined by Bob and Sid to chat about Season 11 from a Diamond point of view.

** To sign up and play MFL, please click here. **

Feel free to contact us on Twitter @soraremonkey, @Hoodwink1983, @Smoggypedro & @C_Hatton90.

The show's music has been kindly supplied by Stish, who you can follow on Twitter @Plastician and on the excellent 'End Product' podcast. The artwork and new logo was kindly created by McGettigan.eth

B2B Go-To-Market Leaders
Inside the Mind of a Chief Growth Officer: Building a Bowtie GTM Engine

B2B Go-To-Market Leaders

Feb 19, 2026 · 49:24


In this episode of the B2B Go-To-Market Leaders Podcast, Vijay Damojipurapu sits down with AJ Gandhi, Chief Growth Officer and Go-To-Market Operating Partner, to unpack what it really takes to build a high-performing, holistic GTM engine. With a career spanning Bain, McKinsey, venture-backed startups, Salesforce, RingCentral, and private equity, AJ brings a rare 360-degree perspective on strategy, sales, marketing, partner ecosystems, and post-sales execution. AJ defines go-to-market as the entire lifecycle journey of a customer — not just sales — and explains why most companies underperform because they fail to integrate product, marketing, sales, partners, and customer success into a unified system.

They dive into:
• Why GTM must be holistic across the full “bow tie,” from acquisition to expansion and advocacy.
• The diagnostic framework AJ uses to assess strategy, talent, execution, and performance in portfolio companies.
• How to identify waste in sales coverage, geography expansion, marketing spend, and organizational design.
• Why partner ecosystems follow the 80/20 rule — and how doubling down on top partners drives disproportionate returns.
• The importance of measuring value realization, not just selling ROI promises.
• How to elevate mid-level business problems to CFO-level strategic priorities through economic impact framing.
• Lessons from scaling enterprise and mid-market GTM motions — and the danger of straying from your ICP.
• Why pricing optimization and expansion within existing customers often deliver faster impact than new logo acquisition.
• The leadership discipline required in the first 100 days of a transformation.
• AJ's advice to rising GTM professionals: master the fundamentals, focus on the 80/20, and develop influence without authority.

This episode is a masterclass in combining strategic rigor with execution discipline — and a reminder that sustainable growth comes from fundamentals done exceptionally well.

Connect with Vijay Damojipurapu on LinkedIn
Brought to you by: stratyve.com

77 WABC Early News
Cops take Prince Andrew into custody on his birthday. A JetBlue flight loses an engine midflight.

77 WABC Early News

Feb 19, 2026 · 43:34


Cops take Prince Andrew into custody on his birthday. A JetBlue flight loses an engine midflight. Learn more about your ad choices. Visit megaphone.fm/adchoices

Under The Hood show
Auto Repair Tips Like Selling a Car After Engine Replacement and More

Under The Hood show

Feb 18, 2026 · 65:27


Under The Hood takes calls about car repair and gives answers over the air. You can also get automotive advice on our YouTube channel by asking questions in the comments on any of our videos: youtube.com/@underthehoodshow

Here are today's callers:
1. '22 Tundra turbo recall for engine fail
2. Lawn mower won't start, no fuel
3. '08 F250 diesel, fuel in oil and limp mode
4. '11 Suburban transfer case programming
5. '14 Sierra transmission smell when towing
6. '18 Camry losing fuel mileage (maybe)
7. '23 F-150 low-mileage oil changes
8. '11 Mustang GT misfire
9. '08 F-150 additive for engine oil
10. '14 Equinox cam/crank correlation codes

CFO Thought Leader
1164: From Boardroom Lens to Operator Reality | Alex Melamud, CFO, Engine

CFO Thought Leader

Feb 18, 2026 · 56:25


Before his first cup of coffee, Alex Melamud opens Slack—not to scan revenue charts first, but to read customer feedback. “The first one that may surprise you as a CFO that I look at is actually NPS,” he tells us. At Engine, every survey drops into a shared channel so “every executive can see” what customers said, he tells us.

That habit fits a finance leader who didn't grow up in the CFO seat. Melamud started in investment banking and then spent 16 years in private equity, learning to build theses, chase signal, and “sell… the product of private equity,” he tells us. Sitting on boards, he watched the CFO role evolve from “corporate governance accounting” into “executive first and maybe CFO second,” he tells us—someone who can talk like product, sales, or operations and earn board trust.

Engine became the moment he stepped inside. After leading the company's round “18 months ago,” joining the board, and helping with a CFO search, he looked at founder “Eli” and asked, “what if I joined you as CFO?” he tells us. The draw was a focused mission: serving SMB travel, where customers book “like a consumer” and lose corporate rates and visibility, he tells us.

Now his investor lens shows up in the unglamorous work. During annual planning, he dug into the “top 50 costs” outside headcount and pushed leaders to treat each contract “as a brand new relationship,” he tells us—an inspection that produced “10, 15%” savings and “tens of millions of dollars,” he tells us.

Wanderer's Journal
Presenting: The Mechadova Engine

Wanderer's Journal

Feb 18, 2026 · 22:47


Lumi Oakes, co-creator of Wanderer's Journal, presents their upcoming show The Mechadova Engine! Log, an excitable and curious inhabitant of a lonely library, ends up on the adventure of a lifetime around a world entirely confined to steam trains in the sky. It's a story about meaningless structures, the power of community, and how knowledge is preserved and retold.

Check out our Kickstarter: https://www.kickstarter.com/projects/lumioakes/the-mechadova-engine-steampunk-audio-drama
Learn more on the website: mechadovapod.com
Transcript: https://docs.google.com/document/d/1AwNBC1ywrkUavWlB_P_VcFlcM-H6Ij4ebSoXYboqpDc/edit?usp=sharing

Hosted on Acast. See acast.com/privacy for more information.

Astronomy Daily - The Podcast
Moon Hides Mercury Tonight, Artemis II Tests Tomorrow, Saturn Ring Origin Revealed

Astronomy Daily - The Podcast

Feb 18, 2026 · 17:51 · Transcription Available


Episode: S05E42 — Wednesday, February 18, 2026
Hosts: Anna & Avery
Network: Bitesz.com Podcast Network

In today's episode of Astronomy Daily, Anna and Avery cover six unmissable stories from across the cosmos. Here's what we're talking about in S05E42:

1. Artemis II Wet Dress Rehearsal Round Two
NASA begins fuelling the SLS moon rocket tomorrow (Feb 19) for a second critical practice countdown. Engineers have replaced two seals and a filter after hydrogen leaks forced the February launch window to be abandoned. A clean test is required before NASA will commit to a launch date — currently no earlier than March 6. The four-person crew includes Victor Glover, Christina Koch, and Jeremy Hansen, each of whom will make history on the flight.

2. Moon Occults Mercury Tonight — Plus a Ganymede Transit
Tonight, February 18, a thin crescent Moon passes so close to Mercury that observers in Arizona, New Mexico, Texas, Louisiana, Mississippi, and Georgia will see the Moon hide Mercury in a rare occultation. For everyone else, a stunning close conjunction is visible in the western sky just after sunset. Simultaneously, Jupiter's moon Ganymede transits the gas giant's face through the night. Two events, one evening.

3. Ariane 6 Launches Amazon Kuiper Satellites
Europe's most powerful Ariane 6 configuration successfully launched 32 satellites for Amazon's Project Kuiper broadband constellation today — a direct competitor to SpaceX's Starlink. The launch highlights both the commercial ambitions of Amazon's internet satellite programme and the ongoing resurgence of European launch capability.

4. 3I/ATLAS Update: JUICE Data Downlinking Now
ESA's JUICE spacecraft is currently transmitting data it collected on interstellar comet 3I/ATLAS back to Earth — the downlink window runs February 18–20. If successful, this would be the closest-ever spacecraft observations of an interstellar object. Meanwhile, 3I/ATLAS heads toward a close Jupiter flyby in March that may trigger fresh outbursts.

5. How Titan Formed — And Why Saturn Has Rings
New research from the SETI Institute proposes a single ancient catastrophe that explains multiple Saturn mysteries at once: a moon called proto-Hyperion collided with proto-Titan about 400 million years ago. The merger debris re-accreted into Saturn's inner moons and left behind the iconic ring system. The hypothesis also explains Saturn's unusual axial tilt, Iapetus's orbital inclination, and the surprising youth of Titan's surface.

6. Russia's 30-Day Mars Engine
Rosatom's Troitsk Institute is ground-testing a nuclear-powered magnetoplasma engine that its developers claim could reach Mars in 30 days — compared to 8 months for chemical rockets. With a plasma exhaust velocity of 100 km/s, the system is part of a global race toward deep-space plasma propulsion also being pursued by NASA's VASIMR programme and Chinese researchers. A flight prototype is targeted for 2030.

Irish Tech News Audio Articles
The Unseen Engine: How Enterprise Storage Is Powering Business Innovation in Ireland

Irish Tech News Audio Articles

Feb 18, 2026 · 8:47


In the pursuit of digital transformation, businesses often spotlight their cutting-edge applications, their multicloud strategies, or their latest AI models. Yet behind each of these advancements lies a powerful, unseen engine: the enterprise storage platform. Once regarded as a back-end system, enterprise storage has become a strategic platform that underpins innovation. As Irish organisations race to modernise services, comply with regulation and compete internationally, the way they store, protect, and govern data is turning into a fundamental differentiator.

Today's IT leaders face a significant challenge. They must support an ever-expanding portfolio of workloads, from critical business databases to cloud-native applications and data-intensive AI projects. All this must be achieved within the constraints of tight budgets and limited staffing. The sheer volume of data being created and managed is staggering; global data generation is expected to reach 393.9 ZB by 2028, according to IDC. This explosion of information puts immense pressure on infrastructure that was not designed for this scale or complexity, resulting in data foundations under strain. According to the latest Dell Innovation Catalyst Study, 48% of Irish organisations are prioritising data readiness for AI-related workloads, while 66% say they are still in the early or middle stages of their AI/GenAI journey. This underscores a reality: organisations want to innovate, but their data foundations and current storage systems are not fully equipped.

From Data Silo to Intelligent Hub

The perception of enterprise storage as a mere commodity is outdated. Modern platforms have become intelligent hubs that automate complex tasks and unlock new efficiencies. By integrating machine learning and advanced analytics, today's storage systems can proactively optimise workload placement, predict performance bottlenecks before they occur, and simplify management tasks that once consumed countless hours. This shift is relevant in Ireland, where businesses from multinationals to SMEs are accelerating digital transformation under the National AI Strategy. A study Dell undertook found that 96% of Irish organisations face challenges when it comes to identifying, preparing, and using data for AI/GenAI use cases, and 40% struggle to integrate AI systems with existing IT infrastructure. Intelligent storage platforms directly address these pain points by reducing complexity and improving data accessibility without creating new data silos.

For Irish businesses planning to expand their e-commerce operations and presence, a modern storage platform can intelligently prioritise these diverse workloads, ensuring that customer-facing applications remain responsive while AI teams retain the high-speed access they need to train their models, sustaining the strategic initiatives that drive business growth.

Bridging Private Cloud and Multicloud for Seamless Innovation

In today's digital landscape, businesses are increasingly faced with the decision to operate within a private cloud, adopt a multicloud environment, or find a balance between the two. Enterprise storage serves as the reliable backbone for these evolving strategies, delivering the infrastructure needed to provide both security and agility at scale. For Irish businesses relying on private cloud infrastructure, enterprise storage provides robust data protection, predictable performance, and the confidence that sensitive information remains under their control.
As organisations here in Ireland expand further into multicloud setups, seamless data mobility becomes essential not just for storing data but also for making it accessible and secure wherever it resides. According to the Dell study, 46% of local organisations plan to modernise their IT with intelligent infrastructure, and another 46% aim to optimise workload placement across edge, core, and cloud environments. The right storage platform is central to both goals: it can synchronise data ac...

The Spin Sucks Podcast with Gini Dietrich
The Visibility Engine: Owned and Earned Media

The Spin Sucks Podcast with Gini Dietrich

Feb 17, 2026 · 13:46


In this episode, Gini Dietrich unpacks the visibility engine: Owned and Earned media working together. Owned is your home base—structured expertise built on defendable themes, repeatable authority anchors, and proof. Earned is credibility transfer—validation you can't buy that makes your expertise easier to trust, repeat, and surface across the web (and in AI answers).

The Digital Marketing Podcast
The Hubspot CMO Interview - Kipp Bodnar on AI, Answer Engine Optimisation and the Future of Marketing

The Digital Marketing Podcast

Feb 17, 2026 · 20:34


In this episode of The Digital Marketing Podcast, Daniel Rowles sits down with Kipp Bodnar, CMO of HubSpot, to discuss what may be the most disruptive year in marketing history. Kipp believes that 2026 could represent the biggest single wave of change our industry has ever seen. Weeks feel like months. Channels are fragmenting. Discovery is shifting. AI agents are entering workflows. And traditional attribution models are starting to break down. From Answer Engine Optimisation to AI agents, rising ad costs to workflow automation, this conversation explores how marketers can stay ahead when the pace of change is accelerating.

In This Episode:
• Why 2026 may be the biggest year of change in marketing history: Kipp explains why discovery, personalisation and team workflows are being reshaped simultaneously.
• Answer Engine Optimisation vs traditional SEO: The shift from short keyword queries to ultra long-tail, conversational prompts of 40 to 60 words changes everything.
• Mentions vs citations in AI search: Why brand visibility in ChatGPT, Gemini and Claude is more complex than link-based SEO ever was.
• The first-mover advantage in AI discovery: Early adopters can make exponential gains because competition is still low and optimisation is immature.
• Why AI agents are thriving in customer service but lagging in marketing: Marketing problems are less formulaic and more complex, making agent adoption slower but highly promising.
• The practical AI workflow hack every marketer should try: Record yourself completing a repetitive task, upload it to Google Gemini, and ask how to automate it. A simple but powerful shortcut to AI adoption.
• Why attribution is becoming harder again: The "golden age" of clean click-to-conversion tracking is fading as AI intermediates discovery.
• Rising ad costs and the need for new growth channels: With paid media inflation increasing, marketers must adopt emerging channels such as AEO and AI-enabled creative optimisation.
• The importance of strategic conviction: AEO cannot be treated as a side project. It must be embedded as a core capability.
• HubSpot's approach to AI and context: Positioning HubSpot as the context layer for AI, enabling agents and assistants to work from real customer data.

Key Takeaways:
• Discovery is changing faster than most organisations are adapting.
• Answer Engine Optimisation requires different content structures, including FAQs and machine-friendly formatting.
• Early adoption in AI search offers outsized returns.
• AI-assisted workflows are often more impactful than fully autonomous agents in marketing today.
• Marketing teams must bake experimentation and innovation into daily operations.
• The biggest risk is not AI itself, but failing to evolve working practices alongside it.

Two Dudes Talking Motorcycles
Episode 68 - Discussion: Bike and Engine Sizes for Beginner Riders

Two Dudes Talking Motorcycles

Feb 17, 2026 · 52:15


In this episode we hold a broad discussion about the weight and size of motorcycles and what a new rider should consider when looking for their first bike.

Buying riding gear? Use our affiliate link and help out the podcast: https://imp.i104546.net/3eZdXd
Help us support the pod or buy us a coffee! https://www.buymeacoffee.com/tdtmotorcycles

Special thanks to Derek Brown for our new song and logo! Check out his stuff below.
db SPL links:
https://www.dbspl.studio/
https://www.instagram.com/db_spl_/
Glenarvon:
https://www.glenarvonmusic.com
https://www.instagram.com/glenarvonmusic/
------------------
Send us your questions and comments to tdtmotorcycles@gmail.com
Follow Us: Instagram: @gleblapham @meech2dbeech YouTube: @gleblapham

Fantha Tracks Radio: A Star Wars Podcast
Making Tracks Episode 251: Inner Engine: With guests Marti Matulis and Hugh Spight

Fantha Tracks Radio: A Star Wars Podcast

Play Episode Listen Later Feb 17, 2026 50:00


Join the Marks on episode 251 of Fantha Tracks Radio's Making Tracks as they hop on their brand new Razor Crest and go off in search of blue cookies. This week they turn the page on the latest rumour that sees a possible Marvel vs Star Wars comic series come to life, get revved up for the new Galactic Racer trailer, discuss the possibility of Jon Favreau moving on from the GFFA after The Mandalorian and Grogu, look at the newly opened Skywalker Ranch store at the Presidio, and check out some of the Mando and Grogu related reveals from New York Toy Fair. That and guests Hugh Spight and Marti Matulis from Surrey Star Wars Weekend on episode 251 of Making Tracks.

Remember to tune in to Good Morning Tatooine, LIVE Sunday evenings at 9.00pm UK, 4.00pm Eastern and 1.00pm Pacific on Facebook, YouTube, X, Instagram and Twitch, and check out our Fantha Tracks Radio Friday Night Rotation every Friday at 7.00pm UK for new episodes of The Fantha From Down Under, Planet Leia, Desert Planet Discs, Start Your Engines, Collecting Tracks, Canon Fodder and special episodes of Making Tracks, and every Tuesday at 7.00pm UK time for your weekly episode of Making Tracks.

Thanks to James Semple for the Fantha Tracks intro, Blues Harvest for our Making Tracks opening music, and Mark Daniel and Vanessa Marshall for our voiceovers.

Subscribe and tune in to all of our shows at https://radio.fanthatracks.com

And of course for all your Lucasfilm and Star Wars news 24/7, 365 days a year head on over to https://www.fanthatracks.com

You can contact our shows and send in your listeners' questions by emailing radio@fanthatracks.com or by leaving a comment on our social media feeds:
https://www.instagram.com/fanthatracks
https://www.facebook.com/FanthaTracks
https://www.x.com/FanthaTracks
https://www.threads.net/@FanthaTracks
https://www.reddit.com/r/fanthatracks/
https://mastodon.social/@fanthatracks
https://bsky.app/profile/fanthatracks.com
https://www.pinterest.co.uk/fanthatracks/
https://fanthatracks.tumblr.com/

And be sure to check out our live streams and video content at:
https://www.youtube.com/@FanthaTracksTV/
https://www.tiktok.com/@fanthatracks
https://www.twitch.com/fanthatrackstv

All of our links can be found at https://links.fanthatracks.com/

Wedding Business Solutions
Brian Lawrence - Answer Engine Optimization

Wedding Business Solutions

Play Episode Listen Later Feb 16, 2026 34:40 Transcription Available


Is your website truly speaking the language of AI, or are you getting lost in translation? What happens when couples use AI to search for wedding pros: are you coming up as the answer, or is your competitor? In this episode, I dive into how the way we search has changed, why it's not just about traditional SEO anymore, and what practical steps you can take to help AI understand your business accurately. Learn how a simple page hidden from your navigation could be the game-changer in your online visibility, and why regularly updating your reviews and fresh content matters more than ever.

Listen to this new episode for practical tips on making your website AI-friendly, creating an AI-focused resume page, and boosting your findability as search keeps evolving.

About Brian: Brian Lawrence is the co-author with me of "From Browsing To Booking" and producer of the Inclusive Wedding Summit. His agency is the go-to web design and SEO resource for numerous wedding industry authorities, and he also consults with many businesses on website and SEO strategies. He was homegrown in the industry, owning numerous wedding businesses and serving as VP of a national wedding brand.

Contact Brian:
https://www.brianlawrence.com/meeting-with-brian-lawrence/
https://www.brianlawrence.com/

If you have any questions about anything in this, or any of my podcasts, or have a suggestion for a topic or guest, please reach out directly to me at Alan@WeddingBusinessSolutions.com or visit my website Podcast.AlanBerg.com

Please be sure to subscribe to this podcast and leave a review (thanks, it really does make a difference). If you want to get notifications of new episodes and upcoming workshops and webinars, you can sign up at www.ConnectWithAlanBerg.com

View the full transcript on Alan's site: https://alanberg.com/blog/

Want to see how I can come and speak for your local association... for free? Reach out to me at Alan@WeddingBusinessSolutions.com or text or call +1.732.422.6362

I'm Alan Berg. Thanks for listening. If you have any questions about this or if you'd like to suggest other topics for "The Wedding Business Solutions Podcast" please let me know. My email is Alan@WeddingBusinessSolutions.com. Look forward to seeing you on the next episode. Thanks.

Listen to this and all episodes on Apple Podcast, YouTube or your favorite app/site:
Apple Podcast: http://bit.ly/weddingbusinesssolutions
YouTube: www.WeddingBusinessSolutionsPodcast.tv
Spotify: https://spoti.fi/3sGsuB8
Stitcher: http://bit.ly/wbsstitcher
Google Podcast: http://bit.ly/wbsgoogle
iHeart Radio: https://ihr.fm/31C9Mic
Pandora: http://bit.ly/wbspandora

©2025 Wedding Business Solutions LLC & AlanBerg.com

Student Of The Game Fire Podcast

18 years of career experience. Captain with Eastside Fire & Rescue, serving the City of Issaquah east of Seattle, on Engine 178. The fire service wasn't on Berlin's radar. His parents thought he would end up in the tech industry, but everything in life occurs for a reason, and we all serve a purpose regardless of what industry we are in. Once Berlin got on the job, the key to his success was founded on two things: betting on himself, and having mentors who push and guide you toward things you would have never thought were possible. Berlin laid down the gospel on the various topics we discussed, especially when it came to Company Officer and leadership development. Grab you a notepad and get ready to jot a few things down with this one.

Bitcoin for Millennials
AI Will Build Anything, That's Why You Need Bitcoin | Jesse Tevelow | BFM232

Bitcoin for Millennials

Play Episode Listen Later Feb 16, 2026 40:40


Jesse Tevelow is an entrepreneur and the best-selling author of The Connection Algorithm and Life After Bitcoin. He explores the intersection of high-performance living, sovereign technology, and the civilizational shift toward a post-fiat "Creation Age."
https://x.com/jtevelow
https://mylaunchteam.com/life-after-bitcoin

Gettin' Salty Experience Firefighter Podcast
GETTIN' SALTY EXPERIENCE PODCAST Ep.280: FDNY | BATTALION CHIEF JACK SPILLANE

Gettin' Salty Experience Firefighter Podcast

Play Episode Listen Later Feb 13, 2026 129:48 Transcription Available


Be sure and join us live Thursday night, February 12th at 8pm on our YouTube channel with our special guest, 36-year FDNY veteran, Battalion Chief Jack Spillane. Chief Spillane was appointed to the FDNY in 1982 and retired in 2018. With three and a half decades of service to the FDNY, he has had an outstanding career. Once appointed to the FDNY he went on to:
- Assigned Engine 14
- Onion skin to Ladder 3 in 1986
- August 1987 promoted to Lieut., Div. 3
- August 1988 transferred to Div. 5
- Jan. 1990 transferred to Engine 80
- Sept. 1993 promoted to Captain, Div. 6
- Sept. 25, 1993 to March 1994 UFO Engine 61
- May 1994 transferred to Squad 41
- July 1997 promoted to Battalion Chief, Div. 3
- Sept. 1998 detailed to SOC Battalion
- Nov. 2018 retired
Going to be another great show. We will get the whole skinny. You don't want to miss this one. Join us at the kitchen table on the BEST FIREFIGHTER PODCAST ON THE INTERNET! You can also listen to our podcast... we are on all the players. #lovethisjob #GiveBackMoreThanYouTake #Oldschool #Tradition #FDNY
Become a supporter of this podcast: https://www.spreaker.com/podcast/gettin-salty-experience-firefighter-podcast--4218265/support.

The Marshall Pruett Podcast
MP 1672: The Week In IndyCar Engine Deal and Manufacturer Charters Insights

The Marshall Pruett Podcast

Play Episode Listen Later Feb 13, 2026 22:05


It's audio from The Week In IndyCar YouTube show! TOPICS: The new engine supply extensions for Chevy and Honda plus details on their manufacturer charters. NEW show stickers and retro racing memorabilia: ThePruettStore.com EVERY episode is graciously supported by the Justice Brothers and TorontoMotorsports.com. If you'd like to join the PrueDay podcast listener group, send an email to pruedayrocks@gmail.com and you'll be invited to participate in the Discord chat that takes place every day and meet up with your new family at IndyCar events. Play on Podbean.com: https://marshallpruett.podbean.com/ Subscribe: https://marshallpruettpodcast.com/subscribe Join our Facebook Group: https://www.facebook.com/MarshallPruettPodcast [WTI]

skucast
Episode 358: Imprint Engine's Global Playbook for Promo

skucast

Play Episode Listen Later Feb 13, 2026 48:42


On skucast's latest episode, Caleb Gilbertson and Colin Loughran share Imprint Engine's playbook for global scale, plus AI champions and the leadership split.

The Family Biz Show
How Family Business Governance Helps You Grow Without Breaking the Family | The Family Biz Show Ep. 127

The Family Biz Show

Play Episode Listen Later Feb 13, 2026 56:14


Growth is hard. Growth inside a family enterprise is harder. Because in a family business, every strategic decision carries emotional weight. Every acquisition, every hiring choice, every leadership disagreement touches not just the company — but the relationships that built it. That's where family business governance becomes the difference between sustainable growth and generational fracture. In Episode 127 of The Family Biz Show, Christina Armentano, third-generation leader of Paraco Gas Corporation, shares what it really takes to grow a multi-location energy company without breaking the family behind it. Her insights reveal that family business governance isn't theory. It's daily discipline.

The Founder's Grit Is Not a Governance Strategy
Christina's grandfather was born in 1929, the year of the Great Depression. He didn't finish grade school. He started working young. He built the company through charisma, salesmanship, and relentless drive. That founder grit built the foundation. But grit alone doesn't sustain three generations. As family enterprises mature, family business governance must evolve beyond personality and instinct. What works for a founder rarely scales to siblings, cousins, and future generations. Growth demands structure.

Why Outside Experience Strengthens Family Business Governance
Christina didn't step directly into the family company. She spent nearly a decade outside the business:
- Executive search
- MBA
- Internship at the largest propane company in the U.S.
- Turned down multiple early opportunities to join
Why? Because strong family business governance requires competence, not entitlement. When next-generation leaders build experience elsewhere, they return with:
- Credibility
- Financial discipline
- Confidence
- Perspective
Governance begins with earned authority.

Two Roles. One Discipline.
One of the most powerful lessons in this episode: "You have your shareholder role and then you have your employee role. Those are two very separate roles." This distinction is the heart of effective family business governance. Ownership thinks long-term. Employees execute short-term. Shareholders protect capital. Employees protect performance. When these roles blur, conflict accelerates. When they're clearly defined, growth stabilizes.

Communication Is the Engine of Family Business Governance
Christina shares her grandfather's advice: "Do right by the business and the business will do right by you." That statement reflects mature family business governance thinking. Open lines of communication. Business lens over personal lens. Disagreements that are never personal. Clear separation between family emotion and enterprise decision-making. Without disciplined communication, growth becomes personal. With governance, growth becomes strategic.

Acquisition Growth Without Governance Is Dangerous
Paraco has completed more than 60 acquisitions. That kind of expansion requires structured family business governance. Christina breaks acquisitions into two stages:
- Due diligence
- Transition
Strong governance means:
- Written checklists
- Clear deal leadership
- Objective financial review
- Emotional detachment from transactions
- Written transition plans
- Ego left at the door
One critical lesson: retain what you have first. Retention is governance. Foundation is governance. Infrastructure before scale is governance. Without disciplined family business governance, acquisition momentum becomes chaos.
Selling a Business Requires Governance Discipline Too
Christina emphasizes something most owners overlook: "The deal is never done until the deal is done." During a sale process, owners must continue running the business as if no deal exists. Why? Because strong family business governance protects optionality. If performance slips, leverage disappears. If emotion rises, valuation suffers. If the owner becomes dependent on the deal, negotiating power evaporates. Governance protects freedom.

Industry Leadership as Governance Maturity
Christina serves as President of the New York State Propane Gas Association. When propane faced regulatory bans in New York, competitors collaborated to protect the industry. This reflects expanded family business governance thinking. Governance is not just internal. It's external influence. It's political awareness. It's industry collaboration. Mature family enterprises understand they are stewards of an ecosystem, not just operators of a company.

Coaching, Peer Groups, and Governance Accountability
Christina credits her Vistage experience for sharpening her leadership. Peer groups:
- Call out blind spots
- Pressure-test strategy
- Provide emotional separation
- Create accountability
Outside perspective strengthens family business governance by preventing insularity. Family enterprises that refuse external input often stagnate.

The Three Rules That Protect Growth
Christina's closing advice distills governance into three principles:
1. Family members must want to be there.
2. Separate personal from business.
3. Give yourself grace — but earn your seat.
Each one reinforces family business governance at a human level. Engagement. Clarity. Discipline. Without those, growth fractures relationships. With them, growth strengthens legacy.

The Real Purpose of Family Business Governance
Family enterprises are uniquely powerful because they combine trust and long-term thinking. But that same proximity creates risk. The purpose of family business governance is not control. It is alignment. Alignment between:
- Ownership and leadership
- Growth and stability
- Family values and enterprise vision
When governance is intentional, growth compounds. When governance is ignored, conflict compounds. Episode 127 is a masterclass in how disciplined family business governance allows you to scale acquisitions, navigate succession, develop next-generation leaders, and protect the family behind the enterprise.

Search Buzz Video Roundup
Search News Buzz Video Recap: Google Volatility, Bing AI Performance Reports, New AI Mode Retail Ads, UCP Checkout & ChatGPT Ads Go Live

Search Buzz Video Roundup

Play Episode Listen Later Feb 13, 2026


This week in search we have more ongoing Google search ranking volatility. Bing Webmaster Tools rolled out new AI Performance reports with a new design. Google AI Overviews tests new overlay cards. Grokipedia is seeing a decline in visibility in Google Search and ChatGPT...

Scuderia F1: Formula 1 podcast
Ep. 661 - EARLY REACTIONS FROM BAHRAIN

Scuderia F1: Formula 1 podcast

Play Episode Listen Later Feb 12, 2026 48:11


Mark Hamilton sits down to recap the start of pre-season testing in Bahrain and talk about the latest news in the world of F1. Hit that subscribe button and tune in for the full, unfiltered breakdown! You don't wanna miss this!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code. Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier. Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there. Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together. Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah. Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it? Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah. Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with the solution in 2014. Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah. Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah. Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model. Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it. Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
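A minimal sketch of what "getting the logits from the much larger model" looks like in code, assuming a PyTorch-style setup; the temperature, loss weighting, and function shape here are illustrative assumptions, not Gemini's actual recipe:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    # Soften both distributions; T > 1 exposes the teacher's full
    # distribution over wrong-but-plausible classes/tokens, which
    # carries far more signal per example than a one-hot label.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # as the temperature changes (Hinton et al., 2015).
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the hard labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

The soft targets carry the teacher's relative preferences across every output, which is why many passes over the same data keep paying off for a small student.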
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. 
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
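A toy sketch of the funnel Jeff describes, narrowing a huge corpus with progressively more expensive scorers; the three scoring callables are hypothetical stand-ins, and the cutoffs just echo the numbers from the conversation:

def retrieval_funnel(query, corpus, cheap_score, mid_rerank, llm_answer):
    # Stage 1: lightweight lexical/embedding scoring over everything,
    # keeping ~30k candidates (cheap enough to run at full-corpus scale).
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d),
                        reverse=True)[:30_000]
    # Stage 2: a heavier, more sophisticated reranker over the survivors,
    # keeping the ~117 documents really worth reading.
    shortlist = sorted(candidates, key=lambda d: mid_rerank(query, d),
                       reverse=True)[:117]
    # Stage 3: the most capable model attends only to the final set,
    # giving the illusion of attending to the whole corpus.
    return llm_answer(query, shortlist)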
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
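As a concrete illustration of that softening: once the index lives in memory, a short query can cheaply fan out to many related terms. A toy version in Python, with a made-up synonym table:

SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "diner"],
}

def soften_query(query: str) -> set[str]:
    # Expand each user term with related terms so matching is by
    # meaning rather than by the exact words typed.
    terms = set(query.lower().split())
    for term in list(terms):
        terms.update(SYNONYMS.get(term, ()))
    return terms

def score_doc(doc_terms: set[str], softened: set[str]) -> int:
    # With the whole index in RAM, probing ~50 expanded terms is cheap;
    # on a disk-based sharded index, every extra term cost a seek per shard.
    return len(doc_terms & softened)

# soften_query("best restaurant nearby")
# -> {'best', 'nearby', 'restaurant', 'restaurants', 'cafe', 'bistro', 'diner'}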
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah. Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high. Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is Latency Numbers Every Programmer Should Know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down? Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the U.S. to the Netherlands or something? Um, Shawn Wang [00:30:21]: Why Netherlands, by the way, or is it, is that because of Chrome? Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of... Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your... Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference. Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule. Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah. Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that, that thing that you moved, many, many times.
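The arithmetic behind that amortization, written out with the illustrative figures from this conversation (roughly 1000 pJ to move a weight from on-chip SRAM to the multiplier, about 1 pJ for the multiply itself), for a batch of size $B$:

$$E_{\text{per multiply}}(B) \approx \frac{1000\ \text{pJ}}{B} + 1\ \text{pJ}$$

So batch size 1 costs about 1001 pJ per multiply, while batch size 256 costs about 4.9 pJ: the cost of moving the weight is amortized roughly 200-fold.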
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. 
Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves betting very hard on SRAM. I wonder if that's something you already saw with the TPUs; to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a much higher cost, in time and latency, bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism: spread it out over lots of chips, and you get quite good throughput and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
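A toy feasibility check of that idea. The 128 MB of on-chip SRAM per chip and the int8 weights are illustrative assumptions, not TPU specifications:

import math

SRAM_PER_CHIP_MB = 128  # assumed on-chip SRAM per chip (illustrative only)

def chips_to_fit_in_sram(params_billions, bytes_per_param=1):
    """How many chips to shard over before every weight lives in SRAM
    (int8 weights assumed by default)."""
    model_mb = params_billions * 1e9 * bytes_per_param / 1e6
    return math.ceil(model_mb / SRAM_PER_CHIP_MB)

for b in (1, 8, 70):
    print(f"{b:>2}B params (int8): {chips_to_fit_in_sram(b):>3} chips")
#  1B params (int8):   8 chips  -> plausible to serve entirely from SRAM
#  8B params (int8):  63 chips  -> the "16 or 64 chips" regime
# 70B params (int8): 547 chips  -> HBM starts to look more sensible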
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC. How much is it worth doing in hardware when things change so quickly? What was the internal discussion?
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like, based on where we think the ML research puck is going, in some sense. Because as a hardware designer, for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, which takes you three, four, five years out. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly.
Shawn Wang: Wow.
Jeff Dean: Sometimes you can squeeze some changes into N+1, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.
Alessio Fanelli [00:37:58]: Is there a reverse of that: we've already committed to this chip design, so we can't take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. Sometimes you can take advantage of lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that.
Shawn Wang [00:38:40]: Mm. Yeah. How low can we go in precision? Because people are saying ternary, like...
Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount, right? It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very-low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Interesting. So: low precision, but scaled weights. Huh. Never considered that. Interesting.
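That "low bits plus per-group scaling factors" idea is essentially block-wise quantization. A minimal sketch; the 4-bit signed width and the group size of 32 are arbitrary illustrative choices, not anything quoted in the episode:

import numpy as np

def quantize(weights, group=32, bits=4):
    """Store each weight in `bits` bits, with one float scale per group."""
    qmax = 2 ** (bits - 1) - 1                    # 7 for 4-bit signed
    w = weights.reshape(-1, group)                # assumes size % group == 0
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.round(w / scales).astype(np.int8)      # values in [-qmax, qmax]
    return q, scales

def dequantize(q, scales):
    return (q * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=1024).astype(np.float32)
q, s = quantize(w)
print(f"max abs error: {np.abs(dequantize(q, s) - w).max():.5f}")
# small relative to the 0.02-scale weights, at roughly 4 bits per weight
# plus one shared scale per 32 weights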
Shawn Wang: While we're on this topic: the concept of precision at all is weird when we're sampling, you know? At the end of all this, we're going to have these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm just curious what your commentary is.
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, are another. Speculative decoding is a way that you can get sort of an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor: you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models) and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
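The speculative-decoding amortization Jeff describes is the same movement-versus-compute arithmetic as before. A sketch; the parameter count is an invented round number, and the energy constants reuse the earlier rough picojoule figures:

MOVE_PJ, MAC_PJ = 1000.0, 1.0   # move one weight vs. one multiply (rough)
PARAMS = 10e9                   # assumed target-model parameter count

def energy_per_token_joules(tokens_per_pass):
    """One forward pass moves each weight once; speculative decoding
    amortizes that movement over every accepted token in the pass."""
    picojoules = PARAMS * (MOVE_PJ / tokens_per_pass + MAC_PJ)
    return picojoules * 1e-12

plain = energy_per_token_joules(1.0)  # decode one token per pass
spec = energy_per_token_joules(5.0)   # ~5 of 8 drafted tokens accepted
print(f"plain: {plain:.1f} J/token, speculative: {spec:.1f} J/token, "
      f"{plain / spec:.1f}x better")
# plain: 10.0 J/token, speculative: 2.0 J/token, 5.0x better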
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.
Jeff Dean [00:41:23]: Yeah, yeah. I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be low power, but you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? That's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they'd already proved you can do it with deep research. And you kind of have it with AI Mode, in a way; it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are, like, information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system.
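A minimal sketch of that critic pattern. Here generate() is a hypothetical stand-in for whatever model call is available, and the prompt wording and the 0-to-10 scale are invented for illustration:

from typing import Callable, List, Tuple

def rerank(question: str, docs: List[str],
           generate: Callable[[str], str], top_k: int = 50) -> List[str]:
    """Have the model rate each retrieved doc, then keep the best top_k."""
    scored: List[Tuple[float, str]] = []
    for doc in docs:
        prompt = (f"Question: {question}\nDocument: {doc}\n"
                  "Rate this document's relevance from 0 to 10. "
                  "Answer with only the number.")
        try:
            score = float(generate(prompt).strip())
        except ValueError:
            score = 0.0   # an unparseable critique counts as irrelevant
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

The same model can play both roles; only the prompt changes, which is the point Jeff is making.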
Shawn Wang [00:44:28]: I do think there's that weird cliff, where it feels like we've done the easy stuff and the next part is super hard. But it always feels like that every year: we know this part, and the next part is super hard and nobody's figured it out. And exactly with this RLVR thing, everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like: I don't know, LLM judge?
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM-8K problems, right? Like: Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, doing IMO and Erdős problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.
Shawn Wang [00:46:20]: That would be, as far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does. Yeah. People do judge books by their covers, as it turns out. Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. That enables us to reason and plan, do chains of thought, and roll them back: "that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one." In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them.
Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.
Jeff Dean [00:48:06]: I mean, I do think that progression (the IMO effort that translated to Lean and used Lean, plus a specialized geometry model, and then the next year switching to a single unified model that is roughly the production model with a little more inference budget) is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is very similar to the 2013-to-2016 era of machine learning: it used to be that people would train a separate model for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech recognition model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know its rules; I just trained the models. It's kind of interesting that people with this universal machine-learning skill set, you just give them data and enough compute, and they can tackle any task. Which is the bitter lesson, I guess.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is one to ten trillion parameters; we don't know. But take the Gemma models: a lot of people want open-source local models like that, and those have some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're actually memorizing things that are not useful. So I guess: do we want to extract that? Can we divorce knowledge from reasoning?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than that obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini...
Shawn Wang [00:51:49]: Yeah, right?
Jeff Dean [00:52:01]: ...we're not going to train Gemini on my email, probably. We'd rather have a single model that we can use, with retrieval from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that and have multiple stages of interaction.
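A sketch of that multi-stage retrieval-plus-reasoning pattern as a plain tool-use loop. generate() and search_email() are hypothetical placeholders, not a real Gemini API, and the SEARCH/ANSWER protocol is invented for illustration:

from typing import Callable, List

def answer_with_retrieval(question: str,
                          generate: Callable[[str], str],
                          search_email: Callable[[str], List[str]],
                          max_rounds: int = 4) -> str:
    """The model decides what to look up next; results feed the next round."""
    context: List[str] = []
    for _ in range(max_rounds):
        prompt = (f"Question: {question}\n"
                  f"Retrieved so far: {context}\n"
                  "Reply 'SEARCH: <query>' to look something up in my email, "
                  "or 'ANSWER: <final answer>' when you know enough.")
        reply = generate(prompt).strip()
        if reply.startswith("SEARCH:"):
            context.extend(search_email(reply[len("SEARCH:"):].strip()))
        else:
            return reply.removeprefix("ANSWER:").strip()
    return "no answer within max_rounds"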
Alessio Fanelli [00:52:24]: That makes sense. Do you think the vertical models are an interesting pursuit? When people say, "we're building the best healthcare LLM," "we're building the best law LLM," are those kind of short-term stopgaps, or...?
Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making those kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, with as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or the multimodal reasoning capabilities may suffer because we didn't expose the model to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.
Shawn Wang [00:54:36]: Installable knowledge.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download it as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is: how many billions of tokens do you need to outpace the frontier-model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? And if I need a trillion healthcare tokens, they're probably not out there.
Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we appropriately don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on, say, public data.
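The displacement trade-off Jeff describes follows directly from a fixed training-token budget: upweighting one domain necessarily shrinks the others. The mixture names, weights, and budget below are invented for illustration:

BUDGET_B = 1_000   # assumed total training budget, in billions of tokens

mix = {"web": 0.55, "code": 0.25, "multilingual": 0.15, "robotics": 0.05}

def upweight(mix, domain, new_share):
    """Give `domain` a bigger share and renormalize everything else."""
    others = {k: v for k, v in mix.items() if k != domain}
    scale = (1.0 - new_share) / sum(others.values())
    out = {k: v * scale for k, v in others.items()}
    out[domain] = new_share
    return out

new_mix = upweight(mix, "robotics", 0.20)
for k in mix:
    print(f"{k:>12}: {mix[k] * BUDGET_B:4.0f}B -> {new_mix[k] * BUDGET_B:4.0f}B tokens")
#          web:  550B ->  463B tokens
#         code:  250B ->  211B tokens   (robotics' gains come out of
# multilingual:  150B ->  126B tokens    every other domain's budget)
#     robotics:   50B ->  200B tokens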
Shawn Wang [00:55:58]: Yeah. I believe, by the way (this is somewhat related to the language conversation), one of your favorite examples was that you can put a low-resource language in the context and it just learns it.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.
Shawn Wang [00:56:20]: So you can just do it that way: put it in the context. But you put, I think, your whole data set in the context, right?
Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world in those languages. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the capabilities of those models.
Shawn Wang [00:56:49]: Yeah.
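A sketch of what "put the language in the context" can look like in practice: one long prompt assembled from reference materials. The file names, their contents, and the generate() call are hypothetical placeholders:

from pathlib import Path
from typing import Callable

def translate_low_resource(sentence: str,
                           generate: Callable[[str], str]) -> str:
    """Long-context translation from reference materials alone."""
    grammar = Path("kalamang_grammar.txt").read_text()    # field grammar notes
    wordlist = Path("kalamang_wordlist.txt").read_text()  # bilingual word pairs
    prompt = (
        "Using only the grammar notes and word list below, translate the "
        "final sentence into English.\n\n"
        f"--- GRAMMAR ---\n{grammar}\n\n"
        f"--- WORD LIST ---\n{wordlist}\n\n"
        f"--- SENTENCE ---\n{sentence}\nEnglish translation:"
    )
    return generate(prompt)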

Brute Strength Podcast
The Engine Problem Strength Athletes Face With Fabian Johnson | Brute Training Podcast

Brute Strength Podcast

Play Episode Listen Later Feb 12, 2026 70:30


In this Brute Strength episode, we sit down with Brute Hyrox athlete Fabian Johnson to break down the biggest conditioning mistakes CrossFitters make when transitioning into Hyrox-style racing. A lot of athletes think they just need more intensity, but intensity without an aerobic base only gets you so far.
We cover:
• Why Zone 2 training matters
• How to build endurance without losing strength
• Running volume for strength athletes
• Structuring training with limited time
• Avoiding injury when adding running
• The difference between treadmill and outdoor running
• Why pacing and heart rate control matter
If you want to perform better, deeper into workouts, and stop blowing up late, this conversation will change how you train.

Vitality Explorer News Podcast
Vital Mindset, Discipline and Coffee for Your Brain

Vitality Explorer News Podcast

Play Episode Listen Later Feb 12, 2026 23:54


Optimize Closeness and Purpose Podcast
FIVE PRIMARY POINTS of the PODCAST
1. Anchor Your Mind in the Present: Research shows that mind wandering, especially toward unpleasant or neutral thoughts, reduces happiness, while staying focused in the present increases well-being. Vital people cultivate awareness and deliberately protect their attention.
2. Master the Dual Skill of Focus and Intentional Wandering: Continuous distraction fractures well-being, yet excessive hyper-focus can suffocate creativity. The goal is to train yourself to focus like a laser most of the time while allowing intentional, non-judgmental mind wandering to spark learning, planning, and breakthroughs.
3. Use Cognitive Drills to Strengthen Attention: Practical tools include 20-minute "cognitive sprints" on a single demanding task, quick awareness check-ins ("What am I doing? Where is my attention?"), and phone-free walks to encourage creative thought. These practices build neural endurance and improve performance.
4. Discipline Is the Engine of Vitality: Discipline helps control impulses, build resilience, and prevent future regret. Embracing discomfort, through exercise, challenges, or habit formation, develops grit, which research suggests can outperform talent or IQ in predicting success.
5. Moderate Caffeine Supports Brain Health and Longevity: Large studies show that drinking about two to three cups of caffeinated coffee daily is associated with lower dementia risk, stronger cognitive performance, reduced psychiatric disorders, and decreased inflammatory disease risk, making it a simple strategy to enhance vitality.
Copyright VyVerse, LLC. All Rights Reserved. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit vitalityexplorers.substack.com/subscribe

Bitcoin Magazine
Upgrading Bitcoin's Consensus Engine: Bitcoin Kernel Explained w/ Core Dev Sedited

Bitcoin Magazine

Play Episode Listen Later Feb 11, 2026 25:50


The Bitcoin Kernel project is one of the most misunderstood developments in Bitcoin Core. In this conversation, Shinobi and Sedited explain how isolating validation logic increases flexibility, improves security, and enables alternative node implementations. From multi-process architecture to formal protocol specifications, this episode covers why kernel development matters now.
#Bitcoin #BitcoinDevelopment #BitcoinCore
⭐️⚔: SIGN UP WITH DUELBITS TODAY FOR A CHANCE TO WIN UP TO 2 BTC:

The John Batchelor Show
S8 Ep434: HEADLINE: Arrival: Entering Lunar Orbit and the Grey World. GUEST AUTHOR: Bob Zimmerman. SUMMARY: Apollo 8 successfully enters lunar orbit using the SPS engine, allowing the crew to witness the moon's desolate, cratered surface and confirm its

The John Batchelor Show

Play Episode Listen Later Feb 9, 2026 9:33


HEADLINE: Arrival: Entering Lunar Orbit and the Grey World. GUEST AUTHOR: Bob Zimmerman. SUMMARY: Apollo 8 successfully enters lunar orbit using the SPS engine, allowing the crew to witness the moon's desolate, cratered surface and confirm its impact origins.

The John Batchelor Show
S8 Ep434: HEADLINE: The Return: "There is a Santa Claus." GUEST AUTHOR: Bob Zimmerman. SUMMARY: After a successful engine burn to leave lunar orbit, the crew navigates home using stars and a sextant, splashing down safely to conclude the missi

The John Batchelor Show

Play Episode Listen Later Feb 9, 2026 9:46


HEADLINE: The Return: "There is a Santa Claus." GUEST AUTHOR: Bob Zimmerman. SUMMARY: After a successful engine burn to leave lunar orbit, the crew navigates home using stars and a sextant, splashing down safely to conclude the mission. 1968 ARRIVING YORKTOWN

The Late Braking F1 Podcast
Toto Wolff blasts F1 rivals over 2026 engine controversy

The Late Braking F1 Podcast

Play Episode Listen Later Feb 9, 2026 75:30


“Get your sh*t together”: Ben, Sam & Harry react to Toto Wolff's fiery message to rival F1 teams over 2026 engine complaints. They also cover Lando's 2026 mindset shift, a possible Alonso-McLaren Indy 500 reunion, Doohan's new F1 role, and wrap up with some F1: Order Please.
Want more Late Braking? Support the show on Patreon and get:
• Ad-free listening
• Full-length bonus episodes
• Power Rankings after every race
• Historical race reviews & more exclusive extras!
Don't forget! You can also gift a Late Braking Patreon subscription, perfect for loved ones or your own wish list. Choose anything from 1 month up to a full year of top-notch F1 content: https://www.patreon.com/latebrakingf1/gift
Connect with Late Braking: You can find us on YouTube, Instagram, X (Twitter) and TikTok. Come hang out with us and thousands of fellow F1 fans in our Discord server and get involved in lively everyday & race weekend chats!
Get in touch any time at podcast@latebraking.co.uk
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Behind the Prop
E187 - Paul Craig, Killing Zone 3rd Edition & Story Time

Behind the Prop

Play Episode Listen Later Feb 9, 2026 49:09


Buy the 3rd edition here: https://asa2fly.com/the-killing-zone/
This episode of Behind the Prop takes a deep, practical look at aviation safety culture, pilot decision-making, and the human factors that continue to drive accidents across all experience levels. Bobby Doss and Wally Mulhern are joined by Paul Craig, author of The Killing Zone, to discuss why judgment, not just skill or legal minimums, is the foundation of safe flying.
The conversation begins with real-world examples of pilots choosing to delay or cancel flights despite external pressure, reinforcing that many of the best safety decisions never show up in accident statistics because nothing went wrong. Paul Craig shares data showing that from 2012 to 2023, approximately 82% of aviation accidents were survivable, shifting the focus toward preventing all accidents, not just fatal ones. Survivable accidents still represent breakdowns in judgment, awareness, or risk management, and often occur when pilots adopt an "it won't happen to me" mindset.
A major theme of the episode is complacency, particularly as pilots gain experience. Wally and Bobby discuss how overconfidence can peak around key experience milestones, such as the first several hundred flight hours for pilots and around 1,000 hours for instructors. This complacency can quietly erode discipline in areas like preflight planning, fuel management, and risk assessment. The hosts emphasize that vigilance must be continuous, regardless of total time or aircraft type.
The discussion also explores the evolution of The Killing Zone and the decision to move its third edition to an aviation-focused publisher. The book's continued relevance lies in its ability to wake pilots up to the statistically dangerous transition periods in their flying careers and encourage humility, preparation, and sound decision-making.
Throughout the episode, the group stresses the importance of practical understanding over memorization. Real safety comes from applying knowledge in dynamic, imperfect situations, whether navigating unusual airspace, managing fatigue, or making conservative go/no-go decisions. The episode closes with a strong reminder that aviation safety is a shared responsibility built through mentorship, education, and a commitment to putting life ahead of ego, schedule, or expectation.