Podcasts about realistic

  • 5,956 PODCASTS
  • 8,557 EPISODES
  • 36m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Mar 2, 2026 LATEST

POPULARITY (trend chart, 2019–2026)

Latest podcast episodes about realistic

STRONG MAMA PODCAST - Health & fitness for an empowered pregnancy, confident birth, and faster postpartum recovery

Feel like there's never enough time to work out? You're definitely not alone — and this episode will show you a better way. I'm sharing a realistic look at what consistent movement can actually look like for busy moms, with real client examples and a simple “minimum effective week” that fits around kids, work, and real life. You'll learn the mindset shift that makes consistency possible and how even 10–20 minutes at a time can help you build strength during pregnancy or postpartum. Because you don't need more time — you just need a plan that works with your life.
Get personalized prenatal or postpartum fitness support by working with me 1:1: Learn more and apply for coaching.
All self-paced Pregnancy and Postpartum fitness programs can be found on my website, here.
Say hello over on Instagram! @strongmamawellness
Just Ingredients 10% off: Click Here and use code STRONGMAMA

Zolak & Bertrand
Patriots Team Report Card // Travel Still A Problem // Andrew Callahan Says A.J. Brown Trade Is Realistic - 2/27 (Hour 1)

Zolak & Bertrand

Play Episode Listen Later Feb 27, 2026 39:54


(00:00) Zolak & Bertrand start the show reacting to the Patriots team report card.
(15:30) We continue to discuss the team report cards and talk about what needs changing from a player's perspective in New England.
(25:58) The guys discuss the worst teams in terms of travel from the report cards, and break down the NFL's response to the report cards being leaked.
(36:29) Zolak & Bertrand close the hour talking about Andrew Callahan's take on an A.J. Brown trade for the Patriots being realistic.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Kevin & Query Podcast
Friday 2/27: AR requests a trade, what's a realistic return? + ESPN's Field Yates & more!

Kevin & Query Podcast

Play Episode Listen Later Feb 27, 2026 134:05 Transcription Available


00:00 – 13:45 – It's our final day at the NFL Combine and Kevin immediately strip-teases into a Hammond Bears shirt, Fernando Mendoza will speak at the Combine later this morning, Anthony Richardson requests a trade from the Colts, what is a realistic return?
13:46 – 19:40 – Morning Checkdown
19:41 – 44:52 – Anthony Richardson requests a trade, the timeline of Anthony Richardson and the Colts and where it went wrong, who gets the most blame for things going south so quickly, how would Richardson have fared this season if he was healthy?, IndyCar Radio's Mark Jaynes joins us to preview the IndyCar season opener in St. Pete
44:53 – 1:06:49 – ESPN's Field Yates joins us and discusses his love for Indy at the Combine, his thoughts on Anthony Richardson's trade request and what compensation he thinks they'll receive, what extensions for Daniel Jones and Alec Pierce look like, Fernando Mendoza's meteoric rise, Morning Checkdown
1:06:50 – 1:20:19 – What should the Colts do with the QB position if Daniel Jones still isn't healthy come training camp?, Visit Indy's Chris Gahl joins us and discusses the origins of the Indiana Convention Center and how it was built with the NFL Combine in mind, how they keep the combine in Indy, trying to get the NFL Draft to town, the next big event he wants to come to town
1:20:20 – 1:31:41 – Fernando Mendoza meets the media at the combine, Kevin sings Happy Birthday to Mike Chappell
1:31:42 – 1:53:29 – The most/least likely Colts to return next season, the future of Zaire Franklin, what Ballard had to say about Franklin earlier this week, Morning Checkdown
1:53:30 – 2:04:14 – Combine thoughts, Fernando Mendoza hit the podium and discussed hanging out with Peyton and Eli Manning and Daniel Jones at the Super Bowl, revisiting our most/least likely to return Colts lists
2:04:15 – 2:14:04 – Wrapping up our last show at the Combine – Chris Ballard's biggest miss in his tenure looks to be the drafting and evaluation of Anthony Richardson, Chris Ballard's tenure
Support the show: https://1075thefan.com/the-wake-up-call-1075-the-fan/
See omnystudio.com/listener for privacy information.

ProjectME with Tiffany Carter – Entrepreneurship & Millionaire Mindset
Stop the Grind: Science-Backed Ways to Work Less and Make More w/ Dr. Guy Winch, Psychologist & Author

ProjectME with Tiffany Carter – Entrepreneurship & Millionaire Mindset

Play Episode Listen Later Feb 26, 2026 63:41


ONLY TIME THIS YEAR > Live Training Series: 3 Days to Make Bank Online. Register for FREE HERE (starts next week, March 3rd).
Feeling like work owns your life? You're not alone. If you're struggling with burnout, feeling overwhelmed at work, or wondering how to actually achieve work-life balance in a world that demands more and more from us, this episode is for you. I sit down with world-renowned psychologist Dr. Guy Winch, author of Mind Over Grind, to discuss practical, science-backed strategies for burnout recovery and how to reclaim your time, energy, and mental health.
RESOURCES MENTIONED:
**ONLY TIME THIS YEAR** > 3 Days to Make Bank Online Live Training Series. Register for FREE HERE (hosted on a private YouTube Live!). VIP tickets also available!
**Abundance Sale Ending** Make More Work Less: The Money Relationship Healing & Manifestation Program. GET THIS LIMITED TIME OFFER HERE
>> Join the famous ProjectME Posse Business & Money Coaching Membership HERE
CONNECT WITH TIFF:
Tiffany on Instagram @projectme_with_tiffany
Tiffany on TikTok @projectme_with_tiffany
Tiffany on YouTube: ProjectME TV
Tiffany's FREE Abundance Email Community: JOIN HERE > The Secret Posse Digest
CONNECT WITH DR. GUY WINCH:
Psychologist, TED speaker, and author of Mind Over Grind
BOOK: GET IT HERE "MIND OVER GRIND"
INSTAGRAM: @guywinch
WEBSITE: guywinch.com
We're covering:
> Why burnout is a systemic issue, not a personal failure — and how entrepreneur burnout differs from corporate burnout
> Realistic boundaries at work that actually protect your time and mental health
> How to disconnect from work without guilt or anxiety (especially if you run an online business)
> The psychology behind why we feel so distracted, stressed, and overwhelmed
> Sustainable business models that let you work less and make more money without sacrificing your well-being
This isn't about toxic positivity or grinding harder. This is about time management strategies and stress management techniques backed by research — tools you can implement today to stop feeling overwhelmed and start prioritizing your life. Whether you're dealing with work anxiety, trying to build passive income streams, or just exhausted from working too much, this conversation will give you a roadmap to build a more sustainable business and life.

Intermediate Spanish Podcast - Español Intermedio
E243 ¿Qué hace a una persona? - Español Intermedio

Intermediate Spanish Podcast - Español Intermedio

Play Episode Listen Later Feb 26, 2026 17:10 Transcription Available


In this episode of the Spanish Language Coach intermediate Spanish podcast, we tackle a fascinating question: what makes a person? We talk not just about your name or your profession, but about your identity, your memory, your consciousness, and your culture. Are we just biology? Are we our memories? What happens if we change country, job, or even language? Through clear examples and simple reflections, we explore concepts such as identity, consciousness, responsibility, freedom, emotions, and personal narrative. We also discuss the role of language and how learning Spanish can change the way you see the world and broaden your identity. This episode is perfect for intermediate (B1) Spanish students who want to improve their listening comprehension while reflecting on big topics in philosophy and personal development. We use practical vocabulary, explain complex words, and connect the theme to real experiences. Remember that at spanishlanguagecoach.com you have free resources to go with this episode: the Spanish transcript, the English translation, vocabulary flashcards, and a comprehension exercise. And if you want to improve your Spanish in a structured way, you can join the waiting list for the online courses. Listen to this episode if you want to practice your Spanish with an interesting topic, expand your vocabulary, and think about who you really are.
Free eBooks: Habla español con AI & La guía del estudiante de español
My online courses: Español Camaleón - A REALISTIC pronunciation course; Español Ágil - Intermediate Spanish; Español PRO - Advanced Spanish; Español Claro - Upper-beginner Spanish. If you're not sure which one is best for you, take the TEST.
Intermediate Spanish Podcast with Free Transcript & Vocabulary Flashcards
www.spanishlanguagecoach.com - Learn Spanish by listening to natural content adapted for intermediate-level learners. If this is your first time listening to this podcast, you can use it as a daily podcast for learning Spanish - Learn Spanish Daily Podcast with Spanish Language Coach
Social media: YouTube, Instagram...

Talking Manhattan
Resilience, Pricing, and Strategy with Michelle Griffith

Talking Manhattan

Play Episode Listen Later Feb 26, 2026 24:01


Today, Noah and John sit down with luxury powerhouse Michelle Griffith of Douglas Elliman to unpack what's really happening in the Manhattan market—and what it takes to thrive in it. With over $1.5B in career sales, Michelle breaks down why the word of the moment is resilient, pointing to rising contract activity, increased negotiability, and tight inventory in family-focused neighborhoods. She dives into strategic pricing, the art of over-communication, and why 2025 became a “second broker” market. Beyond tactics, Michelle shares powerful lessons on mindset, resilience, outsourcing for scale, and balancing a high-octane career with motherhood. This one's about playing the long game, evolving your brand, and never waiting for perfect. Top Tips from the tip top! =============== ✅ Michelle's Page at Douglas Elliman https://www.elliman.com/agent/michelle-griffith/1029349 ✅ Connect with Michelle on LinkedIn: https://www.linkedin.com/in/michelle-griffith-8a8330/ ✅ Follow Michelle on Instagram: https://www.instagram.com/michellegriffithnyc/ =============== ✅ Stay Connected With Us:

The Egg Whisperer Show
Simple Tests That Reveal Your Fertility Future (The TUSHY Method) with Dr. Aimee, hosted by Whitney Hall

The Egg Whisperer Show

Play Episode Listen Later Feb 25, 2026 31:24


I'm thrilled to join Whitney Hall on the Create a Happy Family podcast to share everything you need to know about taking control of your fertility journey before you even step into a doctor's office. I loved this conversation with Whitney so much that I'm sharing it with you on The Egg Whisperer Show, too! As a fertility specialist with over 16 years of experience, I've seen too many people start their family-building journey without understanding their baseline fertility health, and I'm here to change that. In this conversation, I break down my TUSHY Method, a simple five-test framework that gives you clarity about your reproductive health, whether you're 25 or 45, single or partnered, just starting to think about kids or already deep in treatment. Read the full show notes on my website.
In this episode, we cover:
The TUSHY Method: Five essential fertility tests everyone should know about (tubes, ultrasound, sperm, hormones, and genetic carrier screening)
Why proactive fertility testing matters, even if you're not ready to start trying yet
Realistic expectations for egg quality, embryo banking, and success rates at different ages
Egg freezing vs. embryo freezing: How to create a strategic plan based on your goals
Breaking down egg donation myths and why my patients' only regret is not doing it sooner
How to advocate for yourself in fertility appointments by asking one simple question: "Why?"
The red flags you should never ignore—from painful periods to relationship issues
Resources:
Create a Happy Family website: createahappyfamily.com/thepodcast
Egg Donor and Surrogate Solutions: createahappyfamily.com
The Egg Whisperer Show podcast on Spotify and Apple Podcasts
Dr. Aimee's Supplement Stack information
Do you have questions about IVF and what to expect? Click here to join Dr. Aimee for The IVF Class. The next live class call is on Monday, March 9th, 2026 at 4pm PST, where Dr. Aimee will explain IVF and there will be time to ask her your questions live on Zoom. Click to find The Egg Whisperer Show podcast on your favorite podcasting app. Watch videos of Dr. Aimee answer Ask the Egg Whisperer questions on YouTube. Sign up for The Egg Whisperer newsletter to get updates.
Dr. Aimee Eyvazzadeh is one of America's most well-known fertility doctors. Her success rate at baby-making is what gives future parents hope when all hope is lost. She pioneered the TUSHY Method and BALLS Method to decrease your time to pregnancy. Learn more about the TUSHY Method and find a wealth of fertility resources at www.draimee.org.
Keywords: fertility testing, TUSHY method, egg freezing, egg donation, AMH test, fertility health, IVF, embryo freezing, fertility over 40, genetic carrier screening, semen analysis, fertility doctor, reproductive health, egg quality, donor eggs, fertility journey, infertility, family building, fertility preservation, proactive fertility care, fertility myths, patient advocacy, fertility consultation, endometriosis, PCOS, fertility red flags, independent motherhood, donor conception

On Texas Football
State of the Program: Here's What's Actually Realistic for Arch Manning

On Texas Football

Play Episode Listen Later Feb 25, 2026 23:04


Bobby Burton and Rod Babers discuss the expectations for Arch Manning in 2026, what's realistic, how well he has to play for Texas to make a deep playoff run and more on this week's State of the Program!    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Safe Space ASMR
ASMR Realistic Spa Treatment For Sleep ☁️✨ (personal attention, skincare)

Safe Space ASMR

Play Episode Listen Later Feb 25, 2026 54:32


YouTube video linked below: https://www.youtube.com/watch?v=HEB44D2_AG0
Links & Socials here: https://linktr.ee/haleygutz

The Lowe Down with Kevin Lowe
434: The Moment Everything Changes: The Day You Stop Being Realistic & Start Living with Reckless Optimism

The Lowe Down with Kevin Lowe

Play Episode Listen Later Feb 24, 2026 20:31 Transcription Available


What if the advice to “be realistic” is the very thing holding your life back? In this transformative episode, Kevin Lowe explores the moment everything changes — the day you stop shrinking your dreams and start living with reckless optimism. If you've ever lowered your expectations to avoid disappointment or convinced yourself that it's too late to chase something bigger, this conversation speaks directly to you.
Why Must You Listen?
Discover how shifting from guarded expectations to bold belief can transform the way you approach opportunities, risk, and personal growth.
Learn why reckless optimism isn't naïve or toxic positivity — it's a powerful decision to act with hope even before certainty shows up.
Walk away with practical mindset strategies you can apply immediately to think bigger, take action faster, and show up differently in your everyday life.
What's It All About?
In this powerful solo episode of Grit, Grace, & Inspiration, Kevin challenges one of the most common pieces of advice we've all heard: “be realistic.” He explores how that mindset can quietly limit dreams, confidence, and momentum — and why choosing reckless optimism may be the true turning point that changes everything. Through personal reflection, powerful storytelling, and relatable analogies, Kevin explains how belief without guarantees can reshape the way you see setbacks, opportunities, and your own potential. From redefining what optimism really means to sharing lessons from his own journey of adversity and growth, this episode invites you to stop waiting for proof and start living as if things can work out.
Looking for the Links?
Visit https://kevinspeaks.org
Press play now and experience the moment everything changes — when you stop playing it safe and start showing up to life with reckless optimism!
Your Host: Kevin Lowe
Kevin Lowe is the creator & host of Grit, Grace, & Inspiration, an inspirational speaker, blind visionary, and genuine creative soul on a mission to inspire the world! Through honest conversations and practical mindset shifts, he encourages listeners to think bigger, take bold action, and live with courage, resilience, and intention. Do not hesitate to reach out to Kevin using any of the links listed below...
Hey, it's Kevin! I hope you enjoyed today's episode! If there is ever anything I can do for you, please don't hesitate to reach out. Below, you will find ALL the places and ALL the ways to connect! I would LOVE to hear from you!
Send me a Voice Message
Want to be a guest on GRIT, GRACE, & INSPIRATION? Send Kevin Lowe a message on PodMatch!
Book Kevin to Speak at Your Next Event: CLICK to Learn More + Get In Touch
Hire Kevin to Create Your Own Custom Soundtrack!
Or for 1 Place for Everything, CLICK to visit the website!
Stay Awesome! Live Inspired!
© 2026 Grit, Grace, & Inspiration
This podcast is designed specifically for those seeking healing from trauma,...

The Scoot Show with Scoot
How will hyper-realistic AI affect the future of Hollywood?

The Scoot Show with Scoot

Play Episode Listen Later Feb 23, 2026 16:20


Hollywood is freaking out after a hyper-realistic AI video of Brad Pitt and Tom Cruise fighting. It was created with a two-line prompt using a new tool from TikTok's parent company. Studios and actors' unions are calling it copyright theft and an existential threat to the entire film industry. If one person with a laptop can generate a blockbuster-level action scene in minutes… what happens to writers, actors, and the thousands of people who make movies for a living? Are we watching the future of entertainment or the beginning of the end for Hollywood as we know it?

Safe Space ASMR
ASMR Straightening Your Hair

Safe Space ASMR

Play Episode Listen Later Feb 21, 2026 18:09


YouTube video linked below: https://www.youtube.com/watch?v=h_T_hakJCJo
Links & Socials here: https://linktr.ee/haleygutz

The Fine Homebuilding Podcast
#725: LIVE From the 2026 International Builders' Show

The Fine Homebuilding Podcast

Play Episode Listen Later Feb 20, 2026 37:07


This week we're excited to bring you a very special episode from the floor of the 2026 International Builders' Show in Orlando, FL! FHB editorial director Brian Pontolilo, GBA editor Randy Williams, and FHB technical editor Mark Petersen are joined by Daniel Sutton from VERSATEX Building Products to discuss strategies for low-maintenance exteriors.
Tune in to Episode 725 of the Fine Homebuilding Podcast to learn more about:
Realistic expectations for low-maintenance exteriors
The pros and cons of PVC siding and trim materials
Common siding mistakes and how to avoid them
Have a question or topic you want us to talk about on the show? Email us at fhbpodcast@taunton.com.
➡️ Check Out the Full Show Notes: FHB Podcast 725
➡️ Sign up for an FHB All-Access Membership
➡️ Follow Fine Homebuilding on Social Media: Instagram • Facebook • TikTok • Pinterest • YouTube
⭐⭐⭐⭐⭐ If you enjoy the show, please subscribe and rate us on iTunes, Spotify, YouTube Music, or wherever you prefer to listen.

Sportsmen's Nation - Whitetail Hunting
N.F.C. - Water Quality & Wildlife Habitat

Sportsmen's Nation - Whitetail Hunting

Play Episode Listen Later Feb 20, 2026 70:39


In this episode of the Nine Finger Chronicles podcast, host Dan Johnson speaks with Zach Haas, a habitat management specialist and former aquatic biologist. They discuss various topics including the importance of water quality, the impact of agriculture on ecosystems, and the challenges of managing wildlife habitats. Zach shares insights from his extensive experience in habitat management, emphasizing the need for realistic goals and practical strategies for landowners. The conversation also touches on parenting humor and the balance of work and family life.
Takeaways:
Zach Haas is a habitat management specialist with a background in aquatic biology.
Water quality is crucial for wildlife health and habitat management.
Agricultural practices have significantly impacted water ecosystems.
Eutrophication accelerates the aging of water bodies, harming aquatic life.
Dead zones in water bodies can lead to mass fish die-offs.
Wildlife, including deer, are affected by poor water quality and toxins.
Habitat management requires a balance of invasive species control and natural growth.
Realistic goals are essential for effective habitat management.
Small properties can be managed effectively with the right strategies.
Taking gradual steps in habitat management is key to success.
Learn more about your ad choices. Visit megaphone.fm/adchoices

Nine Finger Chronicles - Sportsmen's Nation
Water Quality & Wildlife Habitat

Nine Finger Chronicles - Sportsmen's Nation

Play Episode Listen Later Feb 20, 2026 70:39


In this episode of the Nine Finger Chronicles podcast, host Dan Johnson speaks with Zach Haas, a habitat management specialist and former aquatic biologist. They discuss various topics including the importance of water quality, the impact of agriculture on ecosystems, and the challenges of managing wildlife habitats. Zach shares insights from his extensive experience in habitat management, emphasizing the need for realistic goals and practical strategies for landowners. The conversation also touches on parenting humor and the balance of work and family life.
Takeaways:
Zach Haas is a habitat management specialist with a background in aquatic biology.
Water quality is crucial for wildlife health and habitat management.
Agricultural practices have significantly impacted water ecosystems.
Eutrophication accelerates the aging of water bodies, harming aquatic life.
Dead zones in water bodies can lead to mass fish die-offs.
Wildlife, including deer, are affected by poor water quality and toxins.
Habitat management requires a balance of invasive species control and natural growth.
Realistic goals are essential for effective habitat management.
Small properties can be managed effectively with the right strategies.
Taking gradual steps in habitat management is key to success.
Learn more about your ad choices. Visit megaphone.fm/adchoices

Thoughts On Money [TOM]
Are Your Market Expectations Realistic?

Thoughts On Money [TOM]

Play Episode Listen Later Feb 20, 2026 42:10


This week's blogpost - https://bahnsen.co/4tQ4xaA
Trevor Cummings hosts the Thoughts on Money Podcast with article author Blaine Carver and Brett Bonecutter, discussing Carver's piece “Stock Market Expectations.” Using examples from relationships, premarital counseling, and sports fandom, they emphasize that expectations must be communicated early, clearly, and realistically to avoid disappointment, resentment, and poor decisions. They connect this to investing by explaining how stocks can fall even on good results when expectations are “priced to perfection,” why unrealistic return targets (e.g., 20–25% annually) break financial plans, and how compounding magnifies small percentage differences (a worked example follows below).
00:00 Welcome to the Thoughts on Money Podcast + Introducing Blaine & Brett
00:21 Under-Promise, Over-Deliver: Why Expectations Drive Everything
00:59 Vikings Season Story: Rock-Bottom Expectations → “Best” Year
02:05 From Football to Finance: Priced to Perfection & Pleasant Surprises
04:34 Expectations in Marriage (and Advisor-Client Relationships)
06:55 Unrealistic Return Targets: The 20% Conversation & Compounding Reality Check
10:39 Long-Run vs One-Year Thinking: Annual vs Annualized + Attribution
12:11 Strategy Whiplash: 2025 vs 2026 Reversal & Staying the Course
14:35 The Expectations Gap: Investors Want 12.6%, Advisors Model 7.1%
17:27 Why the Gap Exists: Valuations, History, and Risk Accountability (Bitcoin Example)
20:19 The “Road” Matters: Normal Drawdowns, Slow Recoveries, and the Bumpy Path
26:17 Coping Tools: Dividends, Business Fundamentals, and the 14% Intra-Year Drawdown
29:52 Optimists vs Pessimists: Experience, Confirmation Bias, and Fear of Running Out
37:37 Closing Reflections: Gratitude vs Grumbling + Final Thoughts & How to Reach Us
Links mentioned in this episode:
http://thoughtsonmoney.com
http://thebahnsengroup.com
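To see how the expectations gap described above compounds, here is a quick back-of-the-envelope sketch using the two return figures from the episode's chapter list (investors expect 12.6%, advisors model 7.1%). The $100,000 starting balance and 25-year horizon are illustrative assumptions, not figures from the episode:

```python
# Growth of an illustrative $100,000 over 25 years at the two
# expected-return figures cited in the episode's show notes.
principal = 100_000
years = 25
for label, rate in [("advisors model", 0.071), ("investors want", 0.126)]:
    value = principal * (1 + rate) ** years
    print(f"{label} ({rate:.1%}): ${value:,.0f}")
# advisors model (7.1%): ~$555,000
# investors want (12.6%): ~$1,943,000
# A 5.5-point gap in assumed returns becomes a roughly 3.5x gap in
# outcomes, which is why plans built on unrealistic targets break.
```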

Rhythm and News
Are media consumers ready for realistic characters?

Rhythm and News

Play Episode Listen Later Feb 20, 2026 23:38


“Rhythm and News" hosts Haydée Cepeda and Genna Edelstein discuss realistic and complicated characters from movies and TV shows, aiming to answer why audiences don't seem to understand them. This episode was written and hosted by Genna Edelstein and Haydée Cepeda; edited by Haydée Cepeda; produced by Kaylee Eiber, Nathan Elias and Zachary Whalen. “Rhythm and News” is one of three shows on the Daily Trojan podcast network. You can find more episodes anywhere you listen to podcasts, as well as on our website, dailytrojan.com.

Optimal Health Daily
3299: [Part 2] Lost in a Labyrinth: Getting Healthy Isn't a Straight Shot by Steve Kamb of Nerd Fitness on Realistic Fitness Strategy

Optimal Health Daily

Play Episode Listen Later Feb 19, 2026 9:16


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3299: Steve Kamb offers a powerful reminder that progress in health and life often means navigating uncertainty with action, not perfection. Instead of getting stuck analyzing every option, he urges us to make a choice, test it, and adjust, because momentum, not mastery, is what ultimately leads us out of the maze. Read along with the original article(s) here: https://www.nerdfitness.com/blog/lost-in-a-labyrinth-getting-healthy-isnt-a-straight-shot/ Quotes to ponder: "Sometimes, we're going to be at a point in our lives where there are many options laid out before us." "It's tough to solve a maze when you are sitting still. It's time to pick a path." "With enough perseverance, crossing off enough possibilities, and continuing to push ahead…you will find the center of the maze." Episode references: The Fellowship of the Ring: https://www.amazon.com/Fellowship-Ring-Being-First-Rings/dp/0547928211 Learn more about your ad choices. Visit megaphone.fm/adchoices

Imperfect Marketing
What Does It Really Cost to Self-Publish a Quality Book?

Imperfect Marketing

Play Episode Listen Later Feb 19, 2026 23:54 Transcription Available


In this episode of Imperfect Marketing, I'm joined by Michele DeFilippo, founder of 1106 Design and a publishing industry veteran with more than 50 years of experience. We dive into the realities of book publishing—what most people get wrong, what actually matters, and how a well-done book can completely transform your business. From traditional publishing to self-publishing and hybrid models, Michele breaks down the landscape with clarity and candor, helping authors avoid costly mistakes and disappointing outcomes.
Why Most Self-Published Books Miss the Mark
Why skipping market research is one of the biggest (and most expensive) mistakes authors make
How treating a book like a passion project instead of a business asset leads to poor results
The danger of cutting corners on editing, design, and production quality
Traditional Publishing vs. Self-Publishing vs. Hybrid Models
Why landing a traditional publishing deal is harder than ever (and what publishers really want now)
How Amazon changed publishing forever—and where things went sideways
The hidden downside of hybrid publishers that charge upfront and take royalties
What “true” self-publishing was originally meant to be
What It Actually Costs to Publish a Professional Book
The difference between real editing and running a manuscript through Grammarly
Why nonfiction books cost more to produce than fiction
What goes into professional cover design (and why it's never a “5-minute job”)
Realistic investment ranges for publishing to traditional industry standards
One-Stop Shop vs. Piecing It Together Yourself
Why project management matters just as much as creative talent
The risks of hiring freelancers without knowing what to look for
Common problems authors face after using bargain services or template-based designs
How working with an experienced team protects both your book and your sanity
How a Small Book Can Create Big Business Results
How Michele's 88-page guide became a powerful lead generator—unexpectedly
Why books make exceptional lead magnets in an overwhelmed digital world
How publishing can lead to speaking opportunities, authority positioning, and new revenue streams
Why a book doesn't need to be long to be impactful
A Marketing Lesson That Still Holds True
Michele closes the episode with a timeless reminder that applies to publishing, marketing, and business as a whole: listen to your customers. Even when the feedback is uncomfortable—especially then. It's a lesson that has guided her work for decades and continues to pay dividends today.
If you've been sitting on a book idea, a half-finished manuscript, or the sense that a book could open doors for you—this episode will help you think about publishing strategically, not emotionally.

Optimal Health Daily - ARCHIVE 1 - Episodes 1-300 ONLY
3299: [Part 2] Lost in a Labyrinth: Getting Healthy Isn't a Straight Shot by Steve Kamb of Nerd Fitness on Realistic Fitness Strategy

Optimal Health Daily - ARCHIVE 1 - Episodes 1-300 ONLY

Play Episode Listen Later Feb 19, 2026 9:16


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3299: Steve Kamb offers a powerful reminder that progress in health and life often means navigating uncertainty with action, not perfection. Instead of getting stuck analyzing every option, he urges us to make a choice, test it, and adjust, because momentum, not mastery, is what ultimately leads us out of the maze. Read along with the original article(s) here: https://www.nerdfitness.com/blog/lost-in-a-labyrinth-getting-healthy-isnt-a-straight-shot/ Quotes to ponder: "Sometimes, we're going to be at a point in our lives where there are many options laid out before us." "It's tough to solve a maze when you are sitting still. It's time to pick a path." "With enough perseverance, crossing off enough possibilities, and continuing to push ahead…you will find the center of the maze." Episode references: The Fellowship of the Ring: https://www.amazon.com/Fellowship-Ring-Being-First-Rings/dp/0547928211 Learn more about your ad choices. Visit megaphone.fm/adchoices

Geek Freaks Headlines
Toy Story 5's Most Realistic Villain Yet: Screen Time

Geek Freaks Headlines

Play Episode Listen Later Feb 19, 2026 1:36


This week on Geek Freaks Headlines, we break down the new Toy Story 5 trailer and why its big threat is less “new toy in town” and more “toys getting left behind.” The episode digs into Lilypad, a LeapFrog-style tablet that pulls kids away from playtime, and how the movie seems to shift its emotional focus toward parenthood, modern childhood, and the struggle to keep connection alive in a screen-first world. We also talk about Jessie stepping into leadership at Bonnie's house, where Woody fits now, and why the story may be aiming for balance instead of a simple defeat.
Timestamps and Topics
00:00 Toy Story 5 trailer headline and the Lilypad setup
00:14 Why this is different from “toys replaced by toys”
00:31 Toy Story growing with its audience, now with a parent lens
00:40 Jessie takes the lead at Bonnie's house
00:58 Where Woody is now and why he may return to the core crew
01:14 The likely message: balance with screens, not total rejection
01:27 Release date and sign-off
Key Takeaways
Lilypad is framed as a villain because she competes for attention all day, not just during playtime
The conflict is about toys becoming irrelevant, not being replaced by a newer toy
The trailer's theme feels aimed at parents trying to protect “real” childhood moments
Jessie appears positioned as the on-site leader for Bonnie's toys, with confidence and legacy pressure
Woody's role sounds more like a returning guide who remembers what being a toy is supposed to mean
The episode predicts the story resolves with balance, not a total “tech is bad” message
The film is set for June 19, 2026
Memorable Quotes
“This is about toys becoming irrelevant.”
“Not just for playtime, it's for all the time.”
“We're not going to see Lilypad defeated, but embraced… finding that balance.”
Call to Action
If you enjoyed this breakdown, subscribe to Geek Freaks Headlines, leave a review, and share the episode with #GeekFreaksHeadlines. It helps more people find the show, and it tells us what you want us covering next.
Links and Resources
News source for everything we cover: GeekFreaksPodcast.com
Follow Us
Twitter: @GeekFreaksPod
Instagram: @GeekFreaksPodcast
Threads: @GeekFreaksPodcast
Facebook: Geek Freaks Podcast
Listener Questions
Send us your questions and topic requests for the next episode, especially if you want us to compare Toy Story 5's themes to earlier films or talk more about how Pixar handles growing up.

Optimal Health Daily
3298: [Part 1] Lost in a Labyrinth: Getting Healthy Isn't a Straight Shot by Steve Kamb of Nerd Fitness on Realistic Fitness Strategy

Optimal Health Daily

Play Episode Listen Later Feb 18, 2026 9:55


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3298: Steve Kamb reframes the journey to better health as navigating a labyrinth rather than running a marathon, emphasizing the need to embrace detours, dead ends, and course corrections along the way. Drawing inspiration from video games, pop culture, and a powerful analogy by Oliver Emberton, he encourages readers to see failure not as defeat but as data, guiding them toward smarter, more personalized strategies for growth. Read along with the original article(s) here: https://www.nerdfitness.com/blog/lost-in-a-labyrinth-getting-healthy-isnt-a-straight-shot/ Quotes to ponder: "It turns out, life, and your quest for a healthier lifestyle, has a lot more twists and turns than expected." "Sometimes, heading in a new direction is the best thing you can do." "We oftentimes let our stubbornness force us down a losing path because we've already started it." Episode references: Life is a Maze, not a Marathon: https://oliveremberton.com/2014/life-is-a-maze-not-a-marathon Learn more about your ad choices. Visit megaphone.fm/adchoices

The Cook & Joe Show
12PM - Five realistic free agent targets for the Steelers; The Limitless Express - Crosby injured in Olympics, NFL international games

The Cook & Joe Show

Play Episode Listen Later Feb 18, 2026 35:37


Hour 3 with Donny Football: Who are five realistic free agent targets for the Steelers in free agency? ESPN predicted safety Jaquon Brisker to the Steelers to join safety DeShon Elliott in the secondary. Running back Rico Dowdle could make sense if the Steelers don't re-sign Kenneth Gainwell. Was it a mistake for Crosby to play in the Olympics? Konnor Griffin continues to excel in the early portion of spring training.

The Cook & Joe Show
Five realistic free agent targets for the Steelers

The Cook & Joe Show

Play Episode Listen Later Feb 18, 2026 19:51


Sidney Crosby is in the locker room and has not returned after taking a big hit to the ice. Who are five realistic free agent targets for the Steelers in free agency? ESPN predicted safety Jaquon Brisker to the Steelers to join safety DeShon Elliott in the secondary. Running back Rico Dowdle could make sense if the Steelers don't re-sign Kenneth Gainwell.

Optimal Health Daily - ARCHIVE 1 - Episodes 1-300 ONLY
3298: [Part 1] Lost in a Labyrinth: Getting Healthy Isn't a Straight Shot by Steve Kamb of Nerd Fitness on Realistic Fitness Strategy

Optimal Health Daily - ARCHIVE 1 - Episodes 1-300 ONLY

Play Episode Listen Later Feb 18, 2026 9:55


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3298: Steve Kamb reframes the journey to better health as navigating a labyrinth rather than running a marathon, emphasizing the need to embrace detours, dead ends, and course corrections along the way. Drawing inspiration from video games, pop culture, and a powerful analogy by Oliver Emberton, he encourages readers to see failure not as defeat but as data, guiding them toward smarter, more personalized strategies for growth. Read along with the original article(s) here: https://www.nerdfitness.com/blog/lost-in-a-labyrinth-getting-healthy-isnt-a-straight-shot/ Quotes to ponder: "It turns out, life, and your quest for a healthier lifestyle, has a lot more twists and turns than expected." "Sometimes, heading in a new direction is the best thing you can do." "We oftentimes let our stubbornness force us down a losing path because we've already started it." Episode references: Life is a Maze, not a Marathon: https://oliveremberton.com/2014/life-is-a-maze-not-a-marathon Learn more about your ad choices. Visit megaphone.fm/adchoices

Gym Marketing Made Simple
The Real Math Behind Gym Lead Conversion Rates | Episode 117.

Gym Marketing Made Simple

Play Episode Listen Later Feb 18, 2026 9:38


Bad math is quietly draining gym revenue. When assumptions are off, even strong marketing can miss the mark. Welcome to Gym Marketing Made Simple, the show focused on cutting through the noise around gym growth. Each episode centers on practical marketing, sales, and leadership systems that help boutique gyms build steady momentum without guesswork or constant outreach.
Episode Highlights
In this episode, the focus is on the real cost of bad marketing advice in the gym space. Tommy Allen breaks down why common mentorship claims about lead-to-conversion rates often fail in real-world conditions. Using a sample gym with 125 members and a 4% churn rate, the discussion shows how unrealistic expectations—like assuming a 50% close rate—can distort planning and lead to wasted budget and time (the sketch after this listing works through the numbers). Realistic benchmarks and the importance of broad, reliable data are emphasized throughout.
Episode Outline
The hidden cost of bad advice in gym marketing
Why some mentorship guidance creates unrealistic expectations
Example breakdown: 125-member gym with 4% churn
Problems with assuming a 50% lead-to-conversion rate
Real-world conversion benchmarks of 20–30%
How cherry-picked data skews decision-making
The financial impact of inaccurate projections
Why gym owners must demand larger, credible datasets
Episode Chapters
00:00 Intro
00:27 Today's topic: cost of bad advice
01:20 Baseline gym: 125 members, 4% churn
02:25 Lead needs vs. lead-to-conversion claims
04:05 Realistic conversion rates from data
05:40 What wrong math costs gym owners
07:10 Cherry-picked data & small samples
08:20 Call to action: demand real data
09:08 Outro & free call invitation
Action Taken
Share the screen at the start to present the simple gym math scenario
Follow up with Best Hour to schedule the lead conversion discussion
Request that mentorship companies provide datasets from 100+ gyms
Conclusion
Accurate math drives better decisions. When gyms rely on inflated conversion assumptions, marketing plans become fragile and costly. Grounding strategy in realistic data protects both time and revenue.
CTA
Listen to the full episode and follow the show for more gym marketing clarity.
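The close-rate math in this episode is simple enough to check yourself. A minimal sketch using only the numbers quoted above (125 members, 4% monthly churn, a claimed 50% close rate versus the 20–30% the episode calls realistic); the goal of merely holding membership flat is an illustrative assumption:

```python
# Leads a 125-member gym needs each month just to stay flat,
# under different lead-to-member close-rate assumptions.
members = 125
monthly_churn = 0.04
lost_per_month = members * monthly_churn  # 5 members to replace

for close_rate in (0.50, 0.30, 0.20):
    leads_needed = lost_per_month / close_rate
    print(f"close rate {close_rate:.0%}: {leads_needed:.0f} leads/month")
# close rate 50%: 10 leads/month  (the optimistic mentorship claim)
# close rate 30%: 17 leads/month
# close rate 20%: 25 leads/month
# Planning for 10 leads when you actually need 25 understates the
# required lead flow, and the marketing budget behind it, by 2.5x.
```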

Mully & Haugh Show on 670 The Score
What's a realistic trade that could land Maxx Crosby in Chicago?

Mully & Haugh Show on 670 The Score

Play Episode Listen Later Feb 17, 2026 7:19


David Haugh and Gabe Ramirez discussed what the Bears could offer in a trade for Raiders star pass rusher Maxx Crosby.

Physical Therapy Owners Club
Navigating The Possibilities Of AI In 2026: Realistic Implementations For Private Practice Owners, With Sharif Zeid Of Empower EMR

Physical Therapy Owners Club

Play Episode Listen Later Feb 17, 2026 43:22


Most practice owners feel the pressure to “keep up with AI” — but few have real clarity on what actually works, what's hype, and what could quietly overwhelm their team.
In this episode of the Private Practice Owners Club Podcast, host Nathan Shields sits down with Sharif Zeid, longtime EMR leader and representative of Empower EMR, for a grounded, practical conversation about where AI is truly delivering value in private practice — and where expectations need a serious reset.
Drawing on years of experience working with hundreds of practices, Sharif breaks down how AI adoption is accelerating faster than any technology wave we've seen before — and why documentation, scheduling, compliance, and phone systems are at the center of that shift. They also unpack the hidden risks of chasing tools without systems, and why “AI as the solution” fails without strong operational foundations.
Together, they explore:
Why documentation is still AI's biggest and safest win for practices
How generative AI (scribes, summaries, chart review) is actually being used in real clinics
Why “perfect” AI is the wrong benchmark — and how partial wins still create massive ROI
The growing AI arms race between providers and insurance companies
Where AI helps with compliance — and why trust-but-verify still matters
Why billing automation is over-promised and under-delivered (for now)
The real cost of stacking tools — and how to evaluate ROI per provider
Why team overwhelm is the biggest risk of fast AI adoption
The rise of AI in phone systems, scheduling, and patient self-service
Why patient portals and foundational systems must come before automation
How AI should support decision-making, not replace leadership
If you're a practice owner trying to decide where AI actually belongs in your clinic — and how to adopt it without breaking your team, your culture, or your systems — this episode offers clarity without hype.

Father and Joe
Father and Joe E449: Shrove Tuesday to Ash Wednesday — A Plan, Realistic Penances, and God's Help

Father and Joe

Play Episode Listen Later Feb 17, 2026 20:05


Lent isn't just “trying harder.” It's a Church-wide reset—entered intentionally, with a plan, and with God's help. As this episode releases on Shrove Tuesday, Joe Rockey and Father Boniface Hicks explain why today (and Ash Wednesday) matters, how confession and a concrete Lenten plan set you up for real change, and why the goal isn't perfection—it's growth in virtue and deeper communion with God.
Through the lens of relationships—self, others, and God—they contrast two approaches: “Fat Tuesday” as last-chance indulgence versus Shrove Tuesday as spiritual preparation. They also explore how shared momentum (everyone doing Lent together) makes lasting habit-change more achievable, and why a meaningful, realistic step sustained for 40 days can reshape your life long after Easter.
Key Ideas
Shrove Tuesday is historically tied to shriving: preparing for Lent through confession and renewed intention.
Lent works best with a plan: pick a meaningful step that's realistic enough to sustain for 40 days.
Virtue grows like training: discipline isn't the goal—holiness is, and virtue is the habit of choosing the good.
Avoid “outside pressure” spirituality; listen for what God is already stirring inside you (desire, conviction, readiness).
Lent isn't a solo project: we lean on God's help and the reinforcement of the whole Church moving together.
Links & References (official/source only)
None referenced with clear official/source URLs in this episode.
CTA: If this helped, please leave a review or share this episode with a friend. Questions or thoughts? Email FatherAndJoe@gmail.com.
Tags: Father and Joe, Joe Rockey, Father Boniface Hicks, Shrove Tuesday, Ash Wednesday, Lent, confession, penance, fasting, abstinence, virtue, holiness, sanctity, spiritual discipline, habits, self-control, temptation, renewal, Easter preparation, liturgical season, Rule of St Benedict, Christian perfection, realistic goals, spiritual growth, prayer plan, spiritual reading, daily Mass, phone usage, algorithms, community support, accountability, fatherhood, being present, playing with children, patience, training, athletes and virtue, interior freedom, gratitude

Behind The Thread
Timothy Armoo: The Realistic Way To Make Your First Million (Just Copy Me) | Timothy Armoo

Behind The Thread

Play Episode Listen Later Feb 16, 2026 77:39


Timo Armoo built and sold a global marketing company (Fanbytes) for $30M+ at age 27. He started as an anxious, self-doubting kid on a council estate in East London. He had no money or connections. His story is proof you don't need luck to succeed as an entrepreneur. In this conversation, Timo shares the EXACT cheat codes you can use to start your first $1M business.
Our Sponsors:
Grab the 2026 State of Marketing Report by Hubspot: https://clickhubspot.com/42cf89
Start using @replit today: https://replit.com
Follow Us!
https://www.instagram.com/calumjohnson1/
https://x.com/calum_johnson9
Timo: https://www.instagram.com/timarmoo/?hl=en
Timestamps
00:00 Intro
03:40 Watch this if you feel misunderstood
08:00 The cage story (I was scared for my life!)
12:29 How a gangster saved my life...
23:02 The life-changing business I started at 14 (copy this business model!)
28:46 Why Fanbytes worked (secrets behind the $30M business)
34:17 This visualization practice made me a multi-millionaire
37:20 The alter-ego cheat code (use this when you're feeling anxious)
40:25 You'll never build a successful business without this...
44:59 The expertise gap (how to find $1M business ideas)
49:15 The most underrated hack for making your first $1M
53:44 3 ways to copy business ideas
57:59 3 business myths that keep you broke
01:06:03 Follow these 2 rules to hit your first $10k/month
01:09:38 The easiest path to $1M/year
01:16:20 Watch this if you doubt yourself (best advice you'll hear today)

The Unforget Yourself Show
The Recovering Procrastinators Realistic Guide to Doing It All with Jackie Murakami

The Unforget Yourself Show

Play Episode Listen Later Feb 14, 2026 27:22


Jackie Murakami, founder of Vintage Isle Digital, a systems strategy and productivity consultancy that helps solopreneurs reclaim their time, energy, and focus without burning out in the process. Through her personalised productivity method and ongoing accountability support, Jackie guides clients who identify as “recovering procrastinators and perfectionists” to create structure and balance that actually works for their real lives.
Now, Jackie's journey from Ohio to Tokyo to Hawaii, all while raising twins and running her business remotely, demonstrates how resilience and intentional systems can turn chaos into calm. And while living part of the year in Japan and part in Hawaii, she continues to help entrepreneurs find that elusive middle ground between ambition and rest - proving you can do it all without losing yourself.
Here's where to find more:
https://vintageisledigital.com
https://www.facebook.com/share/1Jtgi4XyF7/?mibextid=wwXIfr
https://www.linkedin.com/in/jackie-murakami-96a7778b?utm_sourc…
________________________________________________
Welcome to The Unforget Yourself Show where we use the power of woo and the proof of science to help you identify your blind spots, and get over your own bullshit so that you can do the fucking thing you ACTUALLY want to do!
We're Mark and Katie, the founders of Unforget Yourself and the creators of the Unforget Yourself System and on this podcast, we're here to share REAL conversations about what goes on inside the heart and minds of those brave and crazy enough to start their own business. From the accidental entrepreneur to the laser-focused CEO, we find out how they got to where they are today, not by hearing the go-to story of their success, but talking about how we all have our own BS to deal with and it's through facing ourselves that we find a way to do the fucking thing. Along the way, we hope to show you that YOU are the most important asset in your business (and your life - duh!). Being a business owner is tough! With vulnerability and humor, we get to the real story behind their success and show you that you're not alone.
_____________________
Find all our links to all the things like the socials, how to work with us and how to apply to be on the podcast here: https://linktr.ee/unforgetyourself

Fringe Radio Network
Bruce Collins - Dennis Kneale, Former CNBC and Fox Business Host - Encouraging Investment Advice

Fringe Radio Network

Play Episode Listen Later Feb 14, 2026 79:44 Transcription Available


Encouraging and realistic investment talk with Dennis Kneale, former CNBC and Fox Business host, managing editor at Forbes, and writer at the Wall Street Journal. Dennis presents a look inside the world of financial journalism as only he can! He also gives us a sneak peek at his upcoming book, Ore-goners! This is a must-listen for anyone interested in investing and the American economy. A refreshing alternative viewpoint on the investing business in America.

Sharp & Benning
Realistic Nebrasketball Expectations - 9

Sharp & Benning

Play Episode Listen Later Feb 13, 2026 11:13


More discussion on Husker hoops priorities.

Zolak & Bertrand
Is AJ Brown To The Pats Realistic? // Kyle Boddy Helping Red Sox Win Games // The Rafael Devers Salary Dump - 2/13 (Hour 2)

Zolak & Bertrand

Play Episode Listen Later Feb 13, 2026 37:56


(00:00) Zolak & Bertrand start the hour discussing the potential of the Patriots trading for Eagles WR AJ Brown.
(13:15) The crew discusses DriveLine's Kyle Boddy and whether his tactics help the Red Sox win games by teaching hitters different techniques.
(24:09) We have a heated discussion about Red Sox GM Craig Breslow and the Rafael Devers trade.
(31:23) We talk about what the Boston sports scene looks like after the Super Bowl.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Joe Giglio Show
Is it Realistic for the Eagles to Draft a Quarterback?!

Joe Giglio Show

Play Episode Listen Later Feb 13, 2026 23:35


Joe Giglio and Hugh Douglas react to the possibility of the Eagles drafting a quarterback! The two continue to talk about the pros and cons this move would have for the Eagles as a whole.

Intermediate Spanish Podcast - Español Intermedio
E242 La carga cognitiva: cerebro lleno, mente lenta - Intermediate Spanish

Intermediate Spanish Podcast - Español Intermedio

Play Episode Listen Later Feb 12, 2026 19:11 Transcription Available


Do you feel like you have too many things in your head at the same time? As if your mind were a browser with 48 tabs open... In this episode we talk about cognitive load, working memory, why multitasking is a myth, and how an excess of stimuli affects your concentration, your energy, and your Spanish learning. I also share simple strategies for reducing that mental load and studying with more clarity and less frustration. What do you do to reduce your mental load from day to day?
Free eBooks: Habla español con AI & La guía del estudiante de español
My online courses: Español Camaleón - A REALISTIC pronunciation course; Español Ágil - Intermediate Spanish; Español PRO - Advanced Spanish; Español Claro - Upper-beginner Spanish. If you're not sure which one is best for you, take the TEST.
Intermediate Spanish Podcast with Free Transcript & Vocabulary Flashcards
www.spanishlanguagecoach.com - Learn Spanish by listening to natural content adapted for intermediate-level learners. If this is your first time listening to this podcast, you can use it as a daily podcast for learning Spanish - Learn Spanish Daily Podcast with Spanish Language Coach
Social media: YouTube, Instagram...

Cops and Writers Podcast
Police Stories: The Rookie Years. Wow! That's Realistic!

Cops and Writers Podcast

Play Episode Listen Later Feb 12, 2026 10:17


In this episode of the Cops and Writers Podcast bonus series, retired Milwaukee Police Sergeant Patrick O'Donnell reads Chapter 28, "Wow! That's Realistic!" from his upcoming book: Police Stories: The Rookie Years - True Crime, Chaos, & Life as a Big City Cop.
It's 2:00 AM at District Five. Patrick and his partner Rachel are processing an arrest when a bloodcurdling scream echoes from the front lobby: "Help! He's been shot!" What they find is a dead body with a gunshot wound to the head, a grieving girlfriend, and—behind the victim—District Five's crime prevention display: a casket surrounded by yellow crime scene tape for Police Week. Hours later, after the body is removed but the blood remains, the captain walks through and delivers the perfect line: "Damn, that is one realistic crime scene display!"
All stories are real. Names and locations have been changed where necessary.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.

Jeff Dean: Thanks for having me.

Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this really advanced way.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that end up being influential, but in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that: RL basically spikes models in a certain part of the distribution. And then, well, you can spike models, but it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And I feel like some part of that whole capability merging without loss should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is you can get very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
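A minimal sketch of the soft-label distillation loss Jeff is describing, where the student is trained against the teacher's temperature-softened logits in addition to the hard labels. The temperature and mixing weight here are illustrative assumptions, not Gemini settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix a soft-label KL term (teacher logits at temperature T) with
    ordinary cross-entropy on the hard labels. T and alpha are assumptions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # Scale the KL term by T^2 so soft- and hard-label gradients stay comparable.
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-way vocabulary.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
distillation_loss(student, teacher, labels).backward()
```

The teacher's full distribution over the vocabulary carries far more signal per example than a one-hot label, which is why many passes over the same data keep paying off.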
Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, AI Mode, AI Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kinds of attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it as capability saturation: on certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas, in order to sort of make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally? Like, this is what we're building towards. Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on that are more specialized for this particular kind of task? Do we need a bunch of architectural improvements or some sort of model capability improvements? What would help make that better?
Shawn Wang [00:12:53]: Is there such an example of a benchmark that inspired an architectural improvement? I'm just kind of jumping on that because you just...

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models, that came, I guess, first in 1.5, really were about looking at, okay, we want to have, um, you know...

Shawn Wang [00:13:15]: Immediately everyone jumped to like completely green charts. Everyone had them. I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. Models don't actually have, you know, much larger than 128K context these days or so. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where putting a thousand pages of text, or putting multiple hour-long videos, in the context and then actually being able to make use of that is useful. The use cases we're trying to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take-all-this-content-and-produce-this-kind-of-answer benchmarks that better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say. Yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen just by scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.
Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of shows the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example is vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also like a vision-capable thing. So maybe vision is just the king modality? Yeah.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do, interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people kind of are not necessarily aware of what the Gemini models can actually do. Like I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, when they happened, and a short description. And so you get now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, turn a video into a SQL-like table.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents that have the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the task that the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, you are searching the internet, but you're finding a very small subset of things that are relevant.
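The funnel Jeff describes maps onto a simple cascade: a very cheap scorer over everything, a better scorer over the survivors, and the most capable model only over the final shortlist. A toy Python version, with stand-in scoring functions in place of real retrieval models:

```python
# Toy corpus; in Jeff's framing this would be trillions of tokens.
CORPUS = [(i, f"doc {i} about solar panel deployment" if i % 7 == 0
              else f"doc {i} about something else") for i in range(100_000)]

def cheap_score(query, text):
    # Stage 1: crude token overlap, cheap enough to run everywhere.
    return len(set(query.split()) & set(text.split()))

def mid_score(query, text):
    # Stage 2: pretend this is a small neural reranker.
    return cheap_score(query, text) + text.count("solar")

def funnel(query, corpus, k1=30_000, k2=117):
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d[1]), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: mid_score(query, d[1]), reverse=True)[:k2]
    return stage2  # only these ~117 docs reach the most capable model

shortlist = funnel("solar panel deployment report", CORPUS)
print(len(shortlist), "documents for the frontier model to actually read")
```

The 30,000 and 117 are Jeff's illustrative numbers; the point is that each stage spends more compute per candidate on fewer candidates.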
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really getting at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like it's Google, it's YouTube. YouTube has this semantic ID thing where every token or every item in the vocab is a YouTube video, or something that predicts the video using a code book, which is absurd to me at YouTube size. And then most recently Grok also, for xAI, which is like, yeah.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have like a history of, what's the progression? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through sort of four or five or six generations of redesigning of the search and retrieval system from about 1999 through 2004 or five, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows. You have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
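Jeff's shard math is easy to replay. The shard and replica counts are his; the index size and per-machine RAM below are invented assumptions to make the arithmetic concrete:

```python
shards, replicas = 60, 20
machines = shards * replicas                 # 1200 machines, each with disks
ram_per_machine_gb = 2                       # assumed for a ~2001-era server
index_size_gb = 1500                         # hypothetical total index size

fleet_ram_gb = machines * ram_per_machine_gb
print(f"{machines} machines x {ram_per_machine_gb} GB = {fleet_ram_gb} GB of RAM")
print("one full index copy fits in memory:", fleet_ram_gb >= index_size_gb)
# Once the index is in RAM, a 4-word query expanded to ~50 terms no longer
# costs ~50 disk seeks on each of 60 shards, so synonym expansion becomes cheap.
```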
Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001, the internet is doubling, tripling every year in size. And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing it, you know? So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that. Because often what happens is, if you design a system for X and something suddenly becomes a hundred X, there's a very different point in the design space that would not make sense at X, but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news related queries, you know, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... like you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify... you have to decide which pages should be updated and at what frequency.

Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
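That scheduling decision can be phrased as an expected-value score: the probability the page changed, times how much a stale copy costs. A toy version follows; the scoring rule and the numbers are illustrative, not Google's actual crawl policy:

```python
def recrawl_priority(p_changed_today: float, importance: float) -> float:
    """Expected value of recrawling now: chance our copy is stale,
    weighted by how much we care about this page being fresh."""
    return p_changed_today * importance

pages = {                        # (change probability, importance weight)
    "news_homepage":     (0.99, 1000.0),
    "popular_reference": (0.02, 800.0),   # rarely changes, costly when stale
    "obscure_blog":      (0.30, 1.0),
}
ranked = sorted(pages.items(), key=lambda kv: -recrawl_priority(*kv[1]))
for name, (p, w) in ranked:
    print(f"{name:18s} priority = {recrawl_priority(p, w):7.1f}")
```

Even with a 2% daily change rate, the popular reference page outranks the frequently changing but unimportant blog, which is exactly the trade-off Jeff describes.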
Shawn Wang [00:29:50]: Yeah, yeah. Uh, well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Was there a story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has sort of eight or 10 different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Or is that because of Chrome?

Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips. And then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to look up something in this particular kind of data structure.
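Here is that thumbnail envelope worked through. Every number (image sizes, seek time, bandwidth) is an assumption chosen to be plausible; the point is that the comparison takes seconds once you know the raw latency figures:

```python
results_per_page = 20
thumb_kb, full_image_kb = 10, 200      # assumed sizes
disk_seek_ms = 10                       # classic latency-numbers figure
disk_read_mb_per_s = 100                # assumed sequential bandwidth

def read_ms(kb):                        # seek once, then stream the bytes
    return disk_seek_ms + kb / 1024 / disk_read_mb_per_s * 1000

# On the fly: fetch 20 full images (a seek each), resize in memory.
on_the_fly = results_per_page * read_ms(full_image_kb)
# Precomputed: thumbnails for a page stored contiguously, roughly one seek.
precomputed = read_ms(results_per_page * thumb_kb)

print(f"on the fly:  ~{on_the_fly:.0f} ms per result page")   # ~240 ms
print(f"precomputed: ~{precomputed:.0f} ms per result page")  # ~12 ms
```

A 20x difference from a one-minute calculation is the kind of answer that settles the design before any code is written.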
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any... if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
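The two figures Jeff quotes, roughly 1 pJ for a multiply and roughly 1000 pJ to move the weight to the multiplier, make the batching argument a one-liner: move each weight once, then reuse it across the whole batch.

```python
move_pj, multiply_pj = 1000, 1   # Jeff's order-of-magnitude figures

for batch in (1, 8, 64, 256):
    # One weight move amortized over `batch` multiplies with that weight.
    per_example = (move_pj + batch * multiply_pj) / batch
    print(f"batch={batch:4d}  energy/example ~ {per_example:7.1f} pJ")
# batch=1 pays ~1001 pJ per 1 pJ of useful math; batch=256 gets it to ~4.9 pJ.
```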
Shawn Wang [00:34:04]: Is there a similar trick, like the one you did with putting everything in memory? You know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably sort of saw that coming. What hardware innovations or insights were formed because of what you're seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? So, this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto the ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? What was the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, in some sense. Because, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, which might take you three, four, or five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast changing field. And so having people with interesting ML research ideas of things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to then get interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but bigger changes are going to require the chip design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure this is going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of lower precision things that are coming in a future generation, so you might train at that lower precision, even if the current generation doesn't quite do that. Mm.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary is, like...

Jeff Dean [00:38:43]: Uh, yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount of energy. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights.
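A numpy sketch of that scheme: 4-bit integers with one float scale per block of 32 weights. The block size and bit width here are illustrative; production quantization formats vary.

```python
import numpy as np

def quantize_blocks(w, block=32, bits=4):
    """Low-bit weights rescued by a per-block scale factor."""
    qmax = 2 ** (bits - 1) - 1                       # 7 for 4-bit signed
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.round(w / scales).astype(np.int8)         # the low-bit payload
    return q, scales

def dequantize(q, scales):
    return (q * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blocks(w)
print("mean abs error:", float(np.abs(dequantize(q, s) - w).mean()))
# The wire format is ~4 bits per weight plus one shared scale per 32 weights,
# so the picojoules-per-bit cost of moving the model drops several-fold.
```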
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there's also sort of the more exotic things like analog based computing substrates, as opposed to digital ones. I think those are super interesting, cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, in terms of research directions, there's a whole bunch of open problems in how do you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to sort of build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable... you know, how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. If we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way; it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is the verifiable part that you can score, or what are... yeah. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system. Yeah.
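That "same model, different prompt" critic pattern is mostly prompt plumbing. A hedged sketch of one version of it; `toy_model` is a placeholder so the code runs end to end, and in practice you would substitute a real LLM call:

```python
def toy_model(prompt: str) -> str:
    # Placeholder so the sketch executes; swap in a real LLM call in practice.
    return f"[model output for: {prompt.splitlines()[0][:40]}...]"

def generate_then_critique(task: str, model=toy_model) -> str:
    draft = model(f"Complete the task:\n{task}")
    review = model("You are a strict reviewer. List concrete flaws.\n"
                   f"Task: {task}\nAnswer: {draft}")
    return model("Revise the answer to address the review.\n"
                 f"Task: {task}\nAnswer: {draft}\nReview: {review}")

print(generate_then_critique("Summarize global solar deployment trends."))
```

The same shape works for retrieval grading: replace the reviewer prompt with "rate this document's relevance to the query from 0 to 10" and keep the top-scored candidates.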
Shawn Wang [00:44:28]: Um, I do think there is that weird cliff where it feels like we've done the easy stuff and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now: IMO and Erdos problems, in pure language. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, for other areas, it'd be great if we could make that kind of leap. And you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.

Shawn Wang [00:46:20]: That would be... as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does. Yeah. It doesn't matter. People do judge books by their covers, as it turns out. Um, just to draw a bit on the IMO gold. I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line, people just said, nope, we'll just do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation, that is neural net like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, and, you know, do chains of thought, and roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural net based models. So it never made sense to me to have completely separate, discrete symbolic things, and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I mean, I do think that IMO effort, with translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013 to 16 era of machine learning, right? It used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model. Or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. Like, so I interviewed ETA, who was on that team. And he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that people with this universal skill set of just machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here.
There's this concept of, like, maybe capacity of a model: abstractly a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to 10 trillion parameters; we don't know. But the Gemma models, for example, right? A lot of people want the open source local models like that, and they have some knowledge which is not necessary, right? Like they can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and going down to the small models, you're actually memorizing things that are not useful. And so, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning, and making the model really good at doing multiple stages of retrieval and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: We're not going to train Gemini on my email, probably. We'd rather have a single model that we can then use, being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, and then make use of that and have multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM. Are those kind of like short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or for, say, robotics: we're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, cause we'll include enough of that, but there's other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models... it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge. Yeah.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? You know, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really like the...

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately, but there's a lot of healthcare organizations that want to train models on their own data, data that is not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I believe, by the way, and this is somewhat related to the language conversation, I think one of your favorite examples was you can put a low resource language in the context and it just learns.
Shawn Wang [00:55:58]: By the way, somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.

Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put your whole data set in the context.

Jeff Dean [00:56:27]: Right. If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the model's capabilities in those languages.

Shawn Wang [00:56:49]: Yeah.

Jeff Dean [00:56:49]:
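The low-resource-language trick needs no training at all: the reference materials go straight into the prompt. A minimal sketch of that prompt assembly, loosely following the in-context translation setup described above; the placeholder material strings and the `long_context_model` call are assumptions, not a real API:

```python
# Sketch: "learning" a low-resource language in context rather than in the
# weights. The reference strings below are tiny placeholders; in practice you
# would paste an entire grammar and sentence list into a long context window.

GRAMMAR_NOTES = "(placeholder: word-order and morphology notes from a field grammar)"
PARALLEL_SENTENCES = "(placeholder: glossed English/Kalamang sentence pairs)"

def build_translation_prompt(english_sentence: str) -> str:
    return (
        "Translate English into Kalamang using only the material below.\n\n"
        f"=== GRAMMAR ===\n{GRAMMAR_NOTES}\n\n"
        f"=== EXAMPLES ===\n{PARALLEL_SENTENCES}\n\n"
        f"=== TASK ===\nEnglish: {english_sentence}\nKalamang:"
    )

prompt = build_translation_prompt("The fish is in the canoe.")
# reply = long_context_model.generate(prompt)  # hypothetical long-context model
```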

Talking Manhattan
Regulation, Supply, and the 99-Unit Rule with Robert Rahmanian & Louis Adler

Talking Manhattan

Play Episode Listen Later Feb 12, 2026 21:58


Today, Noah and John sit down with Louis Adler and Robert Rahmanian of REAL New York to break down what's really happening in the Manhattan market right now. From a rental market that's critically undersupplied to the ripple effects of 485-x replacing 421-a, the guys explain why new development pipelines are thinning — and why rents likely aren't coming down anytime soon. They dive into office-to-residential conversions, the future of Midtown and FiDi, amenity wars in luxury rentals, and the widening gap between renting and buying. Plus, they share how they built a 2,000+ unit pipeline, why conversions are the “flavor of the month,” and the one piece of advice they'd give their younger selves. If you want boots-on-the-ground intel from operators in the trenches, this is it!

The Midday Show
Hour 3 - Not all movies have to be realistic

The Midday Show

Play Episode Listen Later Feb 10, 2026 38:39


In Hour 3, Andy and Randy talk about sports movies of the '90s and Michael Penix's comeback from injury, Brandon Adams stops by the show, and the AMA.

Ordway, Merloni & Fauria
Is Raiders star Maxx Crosby a realistic trade target for the Patriots this offseason?

Ordway, Merloni & Fauria

Play Episode Listen Later Feb 10, 2026 10:41


Hart and Ted react to trade rumors involving Raiders star edge rusher Maxx Crosby, and they debate whether Crosby is a realistic trade target for the Patriots in this upcoming offseason.

Cracking the Code of Spy Movies!
SECRET MISSION (1942) - Decoded

Cracking the Code of Spy Movies!

Play Episode Listen Later Feb 10, 2026 35:17


This episode, SECRET MISSION (1942) – Decoded, explores one of the most unusual spy films ever made. We examine this wartime espionage movie, created without hindsight or a guaranteed victory.

SECRET MISSION Background: In this episode of Cracking the Code of Spy Movies, we return to Britain in 1942. World War II is still raging, Europe is occupied, and the outcome is terrifyingly uncertain. Out of that uncertainty comes SECRET MISSION (1942), a spy film that treats espionage as survival, not fantasy. Unlike later spy movies, this film offers no gadgets, swagger, or invincible heroes. Instead, it presents secrecy, fear, restraint, and the danger of being noticed. Every choice matters, and every mistake risks lives.

What we decode for SECRET MISSION (1942): The episode breaks down how SECRET MISSION functions as both cinema and wartime instruction. It was propaganda, but also a sober reflection of real intelligence work. The discussion places the film within its historical context, explaining why 1942 truly matters. We analyze performances by James Mason, Hugh Williams, Carla Lehmann, and Michael Wilding. James Mason's quiet, observant presence stands in stark contrast to later Bond-style heroes. Here, being invisible is success. The episode also explores civilian involvement and moral cost: helping a spy could destroy an entire family. We must also remember that trust is fragile, alliances are uncertain, and no one is fully safe. Spycraft takes precedence over action in this movie: code phrases, compartmentalization, and limited knowledge drive the tension. This approach connects SECRET MISSION to later realistic spy films like The Third Man and Tinker Tailor Soldier Spy. Ultimately, this episode argues that SECRET MISSION reveals the foundation beneath modern espionage cinema. Before Bond became myth, spying was quiet, dangerous, and rarely celebrated. That reality is what makes this forgotten film worth decoding today.

Episode Highlights:
- How this movie, released DURING the war, was shaped by the fact that the outcome of World War II hadn't yet been decided
- Espionage portrayed as restraint, not spectacle
- James Mason in an early, anti-Bond role
- Realistic spycraft over action and gadgets
- Wartime cinema as psychological preparation

In addition, we talk about a poster that was used in the movie. Is it a real poster? What was its purpose?

Tell us what you think about our decoding of SECRET MISSION (1942): Have you seen this movie yet? If not, did listening to this episode make you want to watch it? On the other hand, if you have seen it, where do Dan and Tom get it right, and where do they get it wrong? Let us know your thoughts, ideas for future episodes, and what you think of this episode. Just drop us a note at info@spymovienavigator.com. The more we hear from you, the better the show will be! We'll give you a shout-out in a future episode!

You can check out all our CRACKING THE CODE OF SPY MOVIES podcast episodes on your favorite podcast app or our website. In addition, you can check out our YouTube channel as well.

Episode Webpage: https://spymovienavigator.com/episode/secret-mission-1942-decoded/

#SecretMission1942 #ClassicSpyMovies #SpyFilmHistory #WartimeCinema #WWIISpyMovies #JamesMason #EspionageFilms #Spycraft #FilmHistory #OldHollywood #BritishCinema #PreJamesBond #SpyMoviePodcast

Carrots 'N' Cake Podcast
Ep324: Peptides for Perimenopause: What Actually Works, What's Safe, and What's Hype

Carrots 'N' Cake Podcast

Play Episode Listen Later Feb 10, 2026 34:34


In this episode, Tina is joined by returning guest Jennifer Woodward to cut through the noise and have an honest, experience-driven conversation about peptides in midlife. Drawing from years of hands-on use, client work, and clinical education, they unpack why peptides can be powerful tools when used strategically, how they fit alongside foundational habits like nutrition and strength training, and where unrealistic expectations often derail results.

Here's what you'll learn:
- Safety and long-term effects of peptides in perimenopause and menopause
- The best peptides for women over 40
- Realistic timelines for noticing results
- Tackling stubborn midlife belly fat and weight resistance
- Protecting muscle and metabolism with GLP-1 peptides
- Best peptides for supporting energy, brain fog, sleep, mood, and libido
- How to know which peptides are best for you and your goals

Peptides for Practitioners Certification + Live Cohort now open for enrollment!
Certification course + live cohort (50% off): https://www.carrotsncake.com/offers/Hrosw4Mc?coupon_code=MARCHTINA
Live cohort only (50% off): https://www.carrotsncake.com/offers/be72Ve2V?coupon_code=COHORTADS
Explore peptides via EllieMD: https://Elliemd.com/Carrotsncake
Peptides for Women Course: https://www.carrotsncake.com/offers/3Q7wttmr?coupon_code=PEP

Connect with Tina Haupert: https://carrotsncake.com/
Facebook: Carrots 'N' Cake https://www.facebook.com/carrotsncake
Instagram: @carrotsncake https://www.instagram.com/carrotsncake
YouTube: Tina Haupert https://www.youtube.com/user/carrotsncake

About Tina Haupert: Tina Haupert is the owner of Carrots 'N' Cake as well as a Certified Nutrition Coach and Functional Diagnostic Nutrition Practitioner (FDN-P). Tina and her team use functional testing, peptides, and a personalized approach to nutrition to help women find balance within their diets while achieving their body composition goals.

The Link Fitness Show
How Long Does Postpartum Fat Loss Really Take? A Realistic Timeline for Moms

The Link Fitness Show

Play Episode Listen Later Feb 10, 2026 13:26


Lost only 2 pounds this month while someone else dropped 15? Here's why that's actually a good thing. In this episode, I'm breaking down why sustainable fat loss isn't a race and why comparing your journey to someone else's will only hold you back. I share my own current fat loss phase, walk you through the 3-phase framework I use with my 1:1 coaching clients in Elevated Evolution (Rebuild, Reignite, Redefine), and explain why everyone's timeline looks different based on their starting point, lifestyle, and unique circumstances. If you've ever felt "behind" or frustrated by slow progress, this episode is your reminder that consistency beats speed every time. Let's ditch the all-or-nothing approach and focus on what actually works for YOUR body and YOUR life.

The Brian Kilmeade Show Free Podcast
“It's Not Realistic” – The Truth About Mass Deportations

The Brian Kilmeade Show Free Podcast

Play Episode Listen Later Feb 9, 2026 122:44


[00:00:00] Lawrence Jones   [00:18:26] Sen. Markwayne Mullin   [00:55:12] Steven Moore   [01:24:00] Chris Klomp   [01:32:00] Ian O'Connor Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Systems and Workflow Magic Podcast
A Realistic Blogging System for Busy Family Photographers

The Systems and Workflow Magic Podcast

Play Episode Listen Later Feb 9, 2026 22:56


Hey family photographer, if blogging keeps getting pushed to the bottom of your to-do list because of sessions, editing, or just life… you're not alone.

In this episode, I'm walking you through the exact blogging system I use to publish consistent, keyword-optimized blog posts for my family photography business — without staring at a blank screen or reinventing the wheel every month. Blogging used to feel heavy, confusing, and time-consuming for me, too! But once I built a repeatable system (and paired it with trained AI the right way), everything changed. Now I batch four blog posts a month with clarity, confidence, and zero dread.

In this episode, you'll learn:
- Why blogging still matters for family photographers (even with AI everywhere)
- The “master hub” I use to organize blog ideas, keywords, CTAs, and publishing rhythms
- Why ChatGPT alone is not a blogging strategy
- How I pair keyword research with AI to write blogs in my actual voice
- The exact 3-step blogging system I use every single month
- How I repurpose one blog into email marketing, Google My Business, and more
- What consistent blogging really looks like for a solo family photographer

Resources & Links Mentioned In This Episode
▸ Read the full blog post that goes with this episode (that way, you get all the links mentioned): https://systemsandworkflowmagic.com/blogging-for-photographers-a-realistic-system-for-family-photographers/
▸ My FULL Blogging & Visibility System (A ChatBot Suite): https://systemsandworkflowmagic.com/blogging-visibility-system
▸ The Family Photographer's Marketing Society: https://systemsandworkflowmagic.com/the-family-photographers-marketing-society
▸ Get 25% OFF of Flodesk with my affiliate link: https://flodesk.com/c/DOLLYDELONGEDUCATION
▸ Grab the FREE 2026 Family Photographers Marketing Trends Report: https://systemsandworkflowmagic.com/family-photography-marketing-trends

Connect with Me (Dolly DeLong Education)

The John Batchelor Show
S8 Ep422: Anatol Lieven discusses Estonia's call for dialogue with Moscow and the need for Europe to develop realistic defense and negotiation strategies regarding Russia rather than relying solely on American protection.

The John Batchelor Show

Play Episode Listen Later Feb 6, 2026 7:02


Anatol Lieven discusses Estonia's call for dialogue with Moscow and the need for Europe to develop realistic defense and negotiation strategies regarding Russia rather than relying solely on American protection.
1917 KREMLIN

The No-Till Market Garden Podcast
Is the Self-Sustaining Farm a Realistic Goal + Filtering Chlorine and Chloramine

The No-Till Market Garden Podcast

Play Episode Listen Later Feb 6, 2026 24:23


Welcome to episode 337 of Growers Daily! We cover: water filtration for municipal water, self-sustaining farms, and it's Feedback Friday! We are a Non-Profit!

The JJ Redick Podcast
James Harden on the Move Again, Giannis's Bidding War, and the Trade Deadline Outlook

The JJ Redick Podcast

Play Episode Listen Later Feb 3, 2026 79:11


Verno and Jacoby return ahead of the trade deadline to discuss the latest news and rumors around the league, beginning with the latest report that James Harden and the Clippers are working on a trade to send him out of Los Angeles. They then go through the latest surrounding Giannis Antetokounmpo and debate which of the reported teams is most likely to acquire him. Next, the guys ask six questions ahead of Thursday's trade deadline.

(00:00) Welcome to The Mismatch!
(00:46) James Harden and the Clippers are reportedly working on a trade before the deadline
(13:54) Giannis trade talks are heating up
(24:10) What would you tell Jon Horst (Bucks GM) to prioritize in any Giannis trade?
(33:11) Ja Morant's trade outlook
(51:00) LeBron James, Lakers reportedly ready to move on from one another
(46:23) Players with expiring contracts who could possibly be traded
(53:35) Which player do you not want to see traded?
(55:59) Which player do you want to see traded the most?
(58:08) Which team that has no chance of contending needs a win-now move?
(1:01:43) Best move to make for a team to contend next year
(1:07:15) Realistic but hilarious trades that could happen
(1:08:32) What non-NBA trade would you make?

Leave us a message on our Mismatch voicemail line! (323) 389-5091

Hosts: Chris Vernon and David Jacoby
Producers: Jessie Lopez, Stefan Anderson, and Jeff Shearin
Social: Keith Fujimoto

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Mitch Unfiltered
Episode 369 - Seahawks Too Good for Pats: Realistic or Overconfident?

Mitch Unfiltered

Play Episode Listen Later Feb 2, 2026 112:38


RUNDOWN

Mitch and Hotshot Scott open Super Bowl week pleading for the rarest gift in sports: a wire-to-wire Seahawks blowout with zero anxiety attached. Instead, they confront history, betting lines, and the uncomfortable reality that Seahawks–Patriots games almost never come easy, dissecting spreads, totals, MVP odds, and prop bets surrounding Sam Darnold, Kenneth Walker, and the Seattle defense. ESPN insiders Mike Reiss and Brady Henderson join Mitch to trace the improbable parallel journeys of the Patriots and Seahawks from offseason uncertainty to Super Bowl 60. Reiss details how Mike Vrabel reshaped New England's culture around connection and accountability, while Henderson explains why Mike Macdonald's Seahawks are thriving on trust, depth, and collective buy-in rather than star power. The discussion zeroes in on Drake Maye's health, New England's offensive line vulnerabilities, Seattle's defensive front, and why the Seahawks are favored — while acknowledging that Patriots fans view this matchup as dangerous, not nostalgic. Mitch and Jason Puckett wrestle with the strangest part of Super Bowl 60 week: the complete absence of a believable reason the Seahawks should lose. They debate conspiracy theories, historical heartbreak, and why this matchup feels more like a gift than a grind, with comparisons to past Seattle sports collapses adding a layer of unease. Mitch reconnects with Dave Grosby to reflect on a defining week in Seattle sports history, Grosby's decades-long presence behind the microphone, and his upcoming honor from the American Parkinson Disease Association at the March 14 Magic of Hope Gala. Grosby shares a candid, deeply personal look at living with Parkinson's, the lack of a cure despite years of advocacy and fundraising led by figures like Michael J. Fox, and why continued research is critical. Peter King joins Mitch to unpack the shock of Bill Belichick not being a first-ballot Hall of Famer, offering rare insight into how Hall of Fame voting dynamics, strategic ballots, and a flawed system can produce surprising outcomes. The conversation shifts to Super Bowl 49 memories, lingering fallout inside the Seahawks locker room, and why the Seahawks–Patriots rematch echoes past championship blind spots where favorites felt inevitable — until they weren't.

GUESTS

Brady Henderson | Seahawks Insider, ESPN
Mike Reiss | Patriots Insider, ESPN
Jason Puckett | Seattle sports radio host and founder of The Daily Puck Drop
Dave Grosby | Seattle sports broadcasting fixture and longtime radio voice, Groz with Gas "Take 5"
Peter King | Hall of Fame voter, longtime NFL writer, Football Morning in America founder

TABLE OF CONTENTS

0:00 | No Stress, No Drama? Seahawks Fans Beg for a Blowout as Super Bowl 60 Arrives
16:15 | GUEST: Seahawks v Patriots; Two Paths, Same Destination — How Seattle and New England Landed in Super Bowl 60
40:00 | GUEST: Jason Puckett; Nothing Makes Sense — And That's Why This Super Bowl Feels Inevitable
59:10 | GUEST: Dave Grosby; A Voice That's Always Been There — Dave Grosby, Parkinson's Advocacy, and a Super Bowl Run That Feels Unreal
1:17:49 | GUEST: Peter King; Peter King on Belichick, the Hall of Fame Mess, and Why This Super Bowl Feels Familiar
1:36:53 | Other Stuff Segment: Epstein file reactions and viral AI prank video, Seahawks offensive coordinator vacancy and Clint Kubiak leaving for the Raiders, skepticism about Raiders coaching stability, Pepsi Super Bowl ad parodying Coldplay concert affair, Diet Coke vs Diet Pepsi rant, NFL fine issued to Riq Woolen for NFC Championship taunting penalty, Puka Nacua publicly flirting with Sydney Sweeney on social media, athlete celebrity dating culture, Rick Rizzs announcing retirement after 2026 Mariners season, Jarrell "Big Baby" Miller's toupee flying off during boxing match, NBA suspending Paul George for violating drug policy tied to mental health medication, Lou Holtz reportedly entering hospice care, Sha'Carri Richardson arrested for excessive speeding

RIPs: Demond Wilson (Sanford and Son actor), Catherine O'Hara (actress, Schitt's Creek and Home Alone)

HEADLINES: Malaysian minister claims work stress can make people gay, man arrested for exposing himself and having sex with a vacuum, mother slaps daughter and is attacked back with a pork chop, woman gives birth and develops a third breast