Today's episode is a good one — We're joined by two very special guests, Leigh and Aly from Pure Salt Interiors, the design studio behind some of the most effortlessly beautiful and livable homes. We're diving into what it actually takes to create a dream space — not just something that looks good, but a home that feels calm, intentional, and truly yours. Leigh and Aly break down their design philosophy, practical design tips for your current season of life, and the small changes that make the biggest impact. Whether you're building, renovating, redecorating, or just dreaming, this episode is packed with design wisdom straight from the pros. Happy Wednesday!
Follow Pure Salt Interiors HERE
Shop Pure Salt Interiors HERE
- FOLLOW our new TTP Daily Instagram Account HERE
- SUBSCRIBE to our new Youtube Channel HERE
Kristin's Amazon Store Front
Jon's Amazon Store Front
Join all the fun on Patreon
Follow us on Socials: Instagram (That's The Point, Kristin, Jon), Tiktok (That's The Point), Youtube (Kristin's Channel)
Find your favorite flavor at PremierProtein.com or at Amazon, Walmart, and other major retailers.
Head over to thisisneeded.com and use code THATSTHEPOINT for 20% off your first order.
If you visit carawayhome.com/THEPOINT10 you can take an additional 10% off your next purchase.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The Patriotically Correct Radio Show with Stew Peters | #PCRadio
Gareth Icke rips into the Epstein files leak as elite mockery—Jewish-linked globalists parading taped child rape, torture, cannibalism, sacrifices, and transhuman experiments, daring us to either submit or end them for defiling our children. BitChute's Ray Vahey details dodging Jewish-led globalist assaults—debanked in Europe, slammed by regimes, NGOs, and blacklists—delivering censorship-free video since 2017 with spy-free ops, honest trending, auto-monetization for creators, and a $10K shadowban bounty untouched.
Welcome, welcome, welcome to the Distraction Pieces Podcast with Scroobius Pip! This week Pip is joined by the Traitors-starring legend JESSIE ROUX! An episode for the books right here! Those of you who love and appreciate the show Traitors will have surely also come to love and appreciate the recent breakout superstar Jessie. Jessie's awesome from the start of course, but it's a weirdly rare thing to see and hear folks with a stammer out there in the mainstream media (our very own Pip notwithstanding), so when she comes along and blesses the world with such an aura and pure vibes it is a treat to be celebrated. Naturally - as comes up in the conversation - Jessie is not her stammer, in so much as it is not her identity. So enjoy this brilliant chat which covers so much in her life, upbringing, present day and so much in her personal experience which leads to the person you're listening to in this episode. Pure greatness.
* SPOILERS LATE IN THE EP FOR ANYONE WHO HASN'T SEEN SERIES 4 OF TRAITORS AND INTENDS TO!
PIP'S PATREON PAGE if you're of a supporting nature
INSTAGRAM
TRAITORS SERIES 4
TRAITORS UNCLOAKED
STAMMA
DPP #293 • STAMMA SPECIAL
DPP #591 • JANE POWELL of STAMMA
DPP #200 • JESS THOM (Tourettes Hero)
PIP AT PRINCE CHARLES CINEMA!
SPEECH DEVELOPMENT WEBSTORE
PIP TWITCH • (music stuff)
PIP INSTAGRAM
PIP TWITTER
PIP PATREON
PIP IMDB
Hosted on Acast. See acast.com/privacy for more information.
FIERY FULL EP www.patreon.com/dopeypodcast Today on Dopey! Dave and new co-host Doug Brown kick off a banked Tuesday Patreon episode from Sayville (prepping for Florida trip, five-days-of-Dopey debate: 30 fans love it, 1 says it's too much). Dave reflects on negativity bias (2 bad comments haunt more than 98 good ones), aging roasts ("you're so old" from Ingrid Casares), sugar/carb break progress, processed food blame for modern misery (WWII San Diego streets clip), John Joseph shoutout (Cro-Mags, upcoming book, Ken Rideout hookup). Mostly mailbag: Minnesota Matt's epic Peru travel relapse tale (23yo, 30 days sober → heavy drinking on flight → cheap pure Peruvian coke from pool-hall connect, $10/gram clean lines, numb throat euphoria, dilemma on 3 leftover grams before La Paz flight to Bolivia, drug-dog risks, coke-fueled threesome tease — cliffhanger for Patreon). Matt now 8 months sober, praises Dopey tipping point, family rebuild, Chris/Todd tribute. Ends with "Good So Bad" playout and toodles for Chris. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The Patriotically Correct Radio Show with Stew Peters | #PCRadio
Jake GTV joins Stew Peters live from Puerto Vallarta at Anarchapulco ripping open the Epstein files that nail Bill Gates in direct collusion with Jeffrey Epstein—the pedo financier who bankrolled the entire COVID bioweapon operation—complete with pre-planned depopulation, white fibrous clots yanked from corpses, and skyrocketing deaths that prove this was mass murder, not a pandemic. Max Igan live from Anarchapulco Genesis in Puerto Vallarta dropping hard truth: Trump was Epstein's best buddy sharing the Talmudic, Star of Moloch child-sacrifice fetish of the elite Jewish network running blackmail-free pedo rings, while Rothschild central banks control compliant governments and depopulate the world through wars for Greater Israel.
WEEI Celtics Insider Justin Turpin joins the show // More on Red Sox President and CEO Sam Kennedy's comments from Sunday // Keefer Madness //
Get your tickets to the Philadelphia Yoga & Wellness Conference HERE. In this episode of the Inspire Create Manifest podcast, Joe sits down with sisters Karen Hepp and Lizzie Lange, founders of Pure Pony Yoga, to explore how they turned a simple idea into a thriving beach yoga community in Stone Harbor. Karen, a journalist, and Lizzie, a corporate lawyer, share how yoga has supported them through high-stress careers, how they navigated burnout, and what it really takes to build something sustainable without losing the joy that inspired it in the first place. From navigating town permits and paperwork to expanding into a 7-day-a-week beach offering, this conversation dives into entrepreneurship without hustle culture, trusting your complementary strengths, taking initiative, and leaning into awe instead of exhaustion. If you've ever wondered whether there's another way to build something meaningful without burning out, this episode is your proof.
Tayyib Living – Choosing a Pure & Wholesome Lifestyle by Radio Islam
Mike Pentecost joins us for a deep conversation on woodsmanship, patience, and what most turkey hunters still don't understand. From sitting four hours without moving to breaking down the “blob factor,” stealth, and why turkey hunting is truly a gentleman's chess match — this episode is packed with hard-earned wisdom from decades in the big woods. We dive into:
• What woodsmanship really means
• Why most hunters move too soon
• How to scout and set up on pressured gobblers
• The mindset required to consistently tag gobblers
• Lessons learned the hard way in the national forest timber
If you care about becoming a better turkey hunter — not just killing one, but understanding the game — this is an episode you need to hear. Listen in and let us know what stood out to you. Stay Southern.
Got a question for the show? Submit a listener Q&A form - https://l.linklyhq.com/l/1uMXP
Check out the Hunt Regs App - https://linkly.link/2ZuKR
Grab some Southern Outdoorsmen merch here - https://l.linklyhq.com/l/1u4aK
Join Woodsman Wire - https://l.linklyhq.com/l/1u4aR
Use the promo code “southern” for a discount on your OnX Hunt membership here - https://l.linklyhq.com/l/1tyfm
Check out Latitude Outdoors for your mobile hunting gear - https://2ly.link/1zVDI
Use code TSOP15 for a discount on Mossy Oak - https://linkly.link/2ERb8
Save 10% on your next Vortex Optics order at eurooptic.com using the Promo Code “southern10” - https://2ly.link/1wyYO
Use code SOUTHERN20 for a discount on all Vortex apparel, including eyewear
Use code “SOUTHERN25” for a discount on Houndstooth Game Calls: https://2ly.link/24tFz
Have you tagged a deer using something you heard on the show? Submit your listener success story here - Share Your Story Here
Come chat with us on our Thursday Hunter Hangouts! Join our patreon - https://l.linklyhq.com/l/1uMXU
NOTE: Not all advertisements run on this show are endorsed by The Southern Outdoorsmen Podcast unless an ad is read by one of the hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices
Send a text In this chapter, we explore humility, the limits of human understanding, and the greatness of God. Agur asks tough questions about creation, then he warns against adding words to what God says. This chapter uses very vivid comparisons which touch upon our desires, mysteries of creation, and many other things. All of these things are used to illustrate the folly of man, pride, and our corrupt nature. By living what is taught in this chapter, we will have more wisdom and reverence for our Lord. We invite you to come study God's Word with us today!
PJ talks to Gaye Shortland on her books written in the Cork vernacular describing LGBT life in the 90s, a time of change in Ireland Hosted on Acast. See acast.com/privacy for more information.
The story of the Transfiguration of Jesus is a challenge. On the mountain, three disciples witness Jesus shining with the light of heaven. It is a deeply spiritual moment, full of mystery. Maybe that is why it is a challenge. We tend to be materialistic people. If something can't be touched, measured or possessed, it is not real. Spiritual things are just wishful thinking. This story, however, declares that Jesus Christ is both spiritual and material, both divine and human. Heaven and earth are united in Christ. That is the core of the gospel. In Christ, God united all things including our bodies and souls.
The future isn't just AI that thinks—it's AI that SEES and INTERACTS with the physical world. Simon Erickson chats with Emmett Savage (MyWallStreet & Prophet founder) to break down Ouster (OUST)—the company making "seeing eyes for AI" through breakthrough solid-state LIDAR technology. No moving parts. Pure semiconductor engineering. And it's already deployed in over 100,000 sensors. This isn't a paved road—it's early and risky. But we might be looking at one of the ultimate building blocks of seeing machines.
Stocks Discussed:
Ouster (OUST) - Featured stock
Vertical Aerospace - Previous EVTOL discussion
Serv Robotics - Delivery robots
Tesla, Apple - Tier-1 customers
iRobot, Mobileye, InvenSense - Historical comparisons
Next Episode Monday (Feb 2): Simon reveals the space where his NEXT 7investing recommendation operates (Groundhog Day special!)
Next Episode Wednesday (Feb 4): Emmett returns with a THIRD off-radar stock pick
Blessed are the pure in heart, for they will see God. Many of us have mistaken pure in heart to mean perfection, or sinlessness. But to be pure in heart means something quite different. How do we see God? This week, we continue our study of the Beatitudes as we learn about what it means to be pure in heart.
Linktree: https://linktr.ee/AnalyticJoin The Normandy For Additional Bonus Audio And Visual Content For All Things Nme+! Join Here: https://ow.ly/msoH50WCu0KDive into the monumental arrival of J. Cole's purported final chapter on Notorious Mass Effect with Analytic Dreamz. This segment examines "The Fall-Off," the rapper's seventh studio album and only double album, released February 6, 2026, via Dreamville and Interscope Records.Analytic Dreamz details the project's ambitious scope: 24 tracks split across two discs ("Disc 29" and "Disc 39"), exceeding 100 minutes in runtime, with a reflective, career-summation narrative echoing mixtape-era aesthetics. Features include Future (on multiple tracks), Tems, Burna Boy, Erykah Badu, Morray, Petey Pablo, PJ, and others, backed by production from Cole himself, T-Minus, The Alchemist, Boi-1da, FnZ, and more.As of February 13, 2026, industry projections from Hits Daily Double and echoed across Complex, XXL, and HotNewHipHop position the album for a No. 1 debut on the Billboard 200 with approximately 291,000 album-equivalent units in its first week—around 115,000 in pure sales (CDs, vinyl, digital downloads) driven by the innovative "Trunk Sale" tour tactic for direct-to-consumer physicals, plus 176,000 from streaming and equivalents. This would mark Cole's seventh consecutive No. 1, surpassing his 2021 The Off-Season (282K) and far exceeding 2024's Might Delete Later (115K), amid the "final album" hype fueling media and fan amplification.Analytic Dreamz analyzes the market signals: robust physical component stands out in modern rap, long tracklist boosts streaming volume, and strategic narrative plus tour activation align for strong commercial impact. Critical reception highlights lyrical introspection and evolution, though some note self-referential elements and length. With no major contradictory forecasts, this positions "The Fall-Off" among 2026's top rap openings, confirming Cole's enduring dominance while priming for official chart confirmation.Stay tuned as Analytic Dreamz provides the comprehensive breakdown on this landmark release and its place in hip-hop legacy.Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/exclusive-contentPrivacy & Opt-Out: https://redcircle.com/privacy
In this 30-minute guided transmission, you'll discover why the heart informs the brain (not the other way around), how daily practice becomes identity, and why healing yourself creates a ripple that transforms everyone around you. Featuring heart coherence breathing, a rose-gold light visualization, and the ancient seed sound YAM to open your heart center. Your heart contains 40,000 neurons, a brain within your body that thinks, remembers, and knows before your mind ever catches up. It generates an electromagnetic field that extends six feet beyond your skin, touching everyone you encounter. And it responds to one frequency above all others: unconditional love. Pure frequency medicine for the heart. This sound therapy session features 639 Hz, the Connecting Frequency associated with heart healing and harmonious relationships, layered with 528 Hz, the frequency linked to DNA repair and deep transformation. Designed to bring your heart into coherence, calm your nervous system, and create the conditions for unconditional self-love to emerge.
Send a text
Support the show
Episode 478 – Ice Cold Facts: Olympic Hockey Dominance & NHL Trade Deadline ChaosIn this segment from Thursday Night Live, Dan K. “The Hockey Dude” delivers a comprehensive Olympic hockey breakdown and NHL trade analysis.
JOHN & TED RENT A SPECIAL TAPE + WE GET A HALLOWEEN EPISODE!! With TED Season 2 dropping March 5th, Greg & the Jo(h)ns RETURN for another TED The Series Reaction, Recap, Commentary, Breakdown, & Review! Visit https://huel.com/rejects to get 15% off your order TED Season 1, Episode 1 Reaction: • TED EPISODE 1 REACTION – HOLY S*** IS THIS... TED Season 1, Episode 2 Reaction: • TED EPISODE 2 REACTION –THIS WENT WAY TOO ... TED (2012) Movie Reaction: • TED (2012) MOVIE REACTION –WE DIDN'T EXPEC... TED 2 (2015) Movie Reaction: • TED 2 (2015) MOVIE REACTION – EVEN MORE UN... Gift Someone (Or Yourself) An RR Tee! https://shorturl.at/hekk2 Greg Alba, John Humphrey & Jon Maturan continue their Reaction & Review of Peacock's TED prequel series with Episodes 3 & 4, diving deeper into the chaotic 1990s life of everyone's favorite foul-mouthed teddy bear. Follow Jon Maturan: https://www.instagram.com/jonmaturan/?hl=en Intense Suspense by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/ Follow Us On Socials: Instagram: https://www.instagram.com/reelrejects/ Tik-Tok: https://www.tiktok.com/@reelrejects?lang=en Twitter: https://x.com/reelrejects Facebook: https://www.facebook.com/TheReelRejects/ Music Used In Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Happy Alley by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!! Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba Learn more about your ad choices. Visit megaphone.fm/adchoices
Hot flashes aren't the whole story. Perimenopause and menopause can impact your gut, hormones, and chronic illness symptoms - you're not imagining it. Listen to this episode of The Gut Show as we talk with Casey Farlow about what menopause is, how to get support, and more! In this episode, we cover: Perimenopause and menopause [3:20] Introducing our guest [4:40] What is menopause? [6:01] Changes to gut health [8:53] Other symptoms [11:14] Monitoring estrogen and progesterone [13:53] Birth control [15:41] Can you stabilize hormones? [17:54] Hormone therapy & breast cancer [20:23] Is it hopeless? [21:30] Chronic illness & things getting worse [26:35] Hormone therapy and breast cancer [30:12] Who monitors this? [32:18] Labwork [35:20] Bone density screening [38:11] Mentioned in this episode: MASTER Method Membership FREE IBS Warrior Summit Take the quiz: What's your poop personality? About our guest: Casey Farlow, MPH, RDN is a registered dietitian and nationally recognized perimenopause nutrition expert who helps women stop fighting their bodies and start working with them during the hormonal transition of perimenopause. As the founder of The Perimenopause Nutritionist, Casey supports women struggling with stubborn weight gain, fatigue, sleep disruption, mood changes, and food frustration through hormone-aware nutrition, blood sugar regulation, and nervous system support. Connect with Casey Thank you to our partners: ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
From outrageous fair food to flashing carnival rides, this episode dives into the chaos, charm, and pure Florida energy of the Florida State Fair.
Pure NFL greed (Cody & Gold, 610 Sports Radio, Fri, 13 Feb 2026). Hosts Cody Tapp & Alex Gold team up for 610 Sports Radio's newest mid-day show "Cody & Gold." Two born & raised Kansas Citians, Cody & Gold have been through all the highs and lows as KC sports fans and they know the passion Kansas City has for their sports teams. "Cody & Gold" will be a show focused on smart sports conversation with the best voices from KC and around the country. It will also feature our listeners with your calls, texts & tweets, as we want you to be a part of the show, not just a listener. Cody & Gold, weekdays 10a-2p on 610 Sports Radio.
The boys are BACK talking Beanpot aka Deanpot, Hagens + Deano, McAvoy response to the Department of Player Safety, Olympics, 1980 doc ++ PLENTY more. Make sure to follow us on twitter @OnlyBruinsPod @DowntownBoosy2 @BrettHoward_ @BobbieBrewski. Follow us on tiktok @onlybruins. Follow us on instagram @OnlyBruins_. Follow us on Youtube @Onlybruinspodcast. Make sure to check out our Pure Hockey link and get the best hockey gear out there! https://alnk.to/bisa9vc
The Go Radio Football Show: 13th of February 2026. PLAY and HIT SUBSCRIBE, and NEVER miss an episode! Paul Cooney, Andy Walker and Barry Ferguson break down one of the most unpredictable title races in years. This episode delivers fast‑paced debate, insider insight, and full‑throttle passion as the team dive into all the late drama gripping the Premiership. Motherwell's stunning rise The panel rave about Motherwell's fearless style, their late equaliser against Rangers, and the transformation under their manager. Rangers' pressure cooker James Tavernier says every game is now “must‑win”, and the team debate whether Rangers' dropped points at Fir Park could define the season. Barry and Andy break down team selection, the striker dilemma, and why Sunday's game against Hearts feels monumental. Celtic's last‑minute magic Alex Oxlade‑Chamberlain's spectacular late winner sparks a conversation on his impact, his quality, and how he could be a title‑decider. Martin O'Neill's honesty, Celtic's inconsistency, and the challenge ahead at Kilmarnock on plastic turf all take centre stage. The three‑way title chaos Hearts top the table, Rangers chasing, Celtic lurking — and all three dropping and gaining points in dramatic fashion. The team predict twists, relive past title races, and ask: who's really got the bottle? Fans call in with passion Rangers and Celtic supporters jump on the lines with heated takes on team selection, mindset, the atmosphere at Ibrox and Celtic Park, and the pressure of a season that could go down to the final kick. Predictions, banter & big laughs From score predictors to debates about Scottish quality vs the English Premier League, the episode is stacked with opinion, humour, and insider anecdotes — including nostalgic nods to legends like Davie Cooper and Fernando Ricksen. Follow us @thisisgoradio on Instagram, Facebook, LinkedIn and Tik Tok The Go Radio Football Show, weeknights from 5pm-7pm across Scotland on DAB, YouTube, Smart Speaker - launch Go Radio - and on the Go Radio App. IOS: https://apps.apple.com/gb/app/go-radio/id1510971202 Android: https://play.google.com/store/apps/details?id=uk.co.thisisgo.goradio&pcampaignid=web_share In Association with Burger King. Home of the Whopper, home delivery half time or full time, exclusively on the Burger King App https://www.burgerking.co.uk/download-bk-app. Watch the Replay on YouTube: https://www.youtube.com/live/hnoE3tJxT1E?si=WtKLPHUCSUYM6sGf For more Podcasts from Go Studios, head to: https://thisisgo.co.uk/podcasts/ Facebook: https://www.facebook.com/share/1ATeQD...
Sports Chasers Podcast Episode 478 (Thursday Night LIVE) delivers an objective, no-narrative breakdown of the biggest storylines across sports: Super Bowl fallout and the defense vs entertainment debate, Cleveland Browns “data-driven” messaging vs on-field results, NCAA eligibility disputes moving into federal court, MLB payroll imbalance fueled by the Dodgers' luxury tax spending, the NBA tanking crisis and what it says about league integrity, Olympic hockey updates, the NHL trade deadline outlook, and the ongoing WNBA labor dispute. If you're tired of recycled takes and want real sports conversation—this one's for Chasers Nation.
⭐ Highlighted Chapter (SEO Feature)
43:30 — NBA Tanking Crisis: Utah Jazz Fine, Draft Incentives & Fan Disrespect
A hard look at how tanking impacts competitive integrity, ticket buyers, and the league's credibility—plus what accountability should actually look like.
The invisible side of space has something mysterious, fascinating, at times genuinely surreal about it, with dimensions, distances and energies that go far beyond the parameters governing life on Earth, and at times even beyond the rules of physics itself as we understand them. A terra incognita – space – from which human beings have drawn inspiration for millennia in trying to give meaning and order to their own existence. The fascination of the universe probably springs from this deep interweaving of scientific curiosity, the need for meaning, and ancestral human emotions. And it is this fascination we immerse ourselves in on this "Laser", together with Valentina Tamburello, astrophysicist and researcher at the University of Zurich, involved for years in various collaborations, most recently with ESA, the European Space Agency. What is dark matter, what are black holes, and why are they such extreme phenomena? And then the Voyager probes at the edge of our galaxy, or the James Webb telescope, a true technological jewel that for three years has let us see the universe as we had never seen it before. But we also carry mysterious, unexplored corners of the universe within ourselves. Who doesn't know the feeling of having already lived through something, or the sense of belonging and universal love felt by those who have had a near-death experience, who meditate, or who take certain psychedelic substances? Mere self-suggestion and hallucinations, or real phenomena that increasingly find an explanation in quantum physics? We talk about this too with astrophysicist Valentina Tamburello of the University of Zurich.
The Rebbe discusses the importance of pure education, citing the Gemara in Shabbos 119b regarding the "breath of schoolchildren." He explains that this pure breath ascends to the highest spiritual levels and is so vital that Torah study for children is not interrupted even for the Beis Hamikdash. https://www.torahrecordings.com/rebbe/igroskodesh/016/005/6024
George Orwell spoke bluntly about the nefarious nature of advertising, calling it “the rattling of a stick inside a swill bucket.” Even Orwell, though, would've been astonished by the cacophony of swill bucket advertising currently being blasted at us by Amazon, Google, Meta, and other profiteering tech giants. What are they trying to sell? Pure hogwash. Having spent billions to develop artificial intelligence so humanoid robots can displace workers, the tech geniuses are now rushing to build thousands of vast computer data centers necessary to power their Brave New AI World. Each center will suck up local water supplies, drastically raise people's utility bills, create monstrous industrial blight and pollution, and enthrone such autocratic thugs as Bezos, Musk, and Zuckerberg as absentee bosses with domineering power over each locality. But the billionaires forgot something: You and me. “We the People” are in open rebellion against this Orwellian future, with officials in multiple states and localities “Just Saying Hell No” to the profiteers' invasive scams. Thus, the billionaire hucksters are frantically rattling their swill sticks. For example, Mark Zuckerberg – whose Meta goliath already operates 26 massive data centers and is now spending $600 billion to plop more of them in our communities – has launched a multimillion-dollar offensive to beat back local opponents. It's running BS television ads in state capital cities, financing political candidates to hype the data centers, deploying untold numbers of lobbyists to rig the rules against opponents, and hiring an army of “community affairs” agents to spread AI propaganda. The swill bucket brigade has the fat cats, but it's a groundswell of us alley cats that has them on the run. To get involved, go to mediajustice.org/tools. Do something! The Center for Media Justice has been leading the way in fighting data centers in lots of communities around the country — here's how they beat back one in Amarillo, TX, for example. Get involved at mediajustice.org! Jim Hightower's Lowdown is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit jimhightower.substack.com/subscribe
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. 
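The "Pareto frontier" framing above can be made concrete: in a well-run model family, no released model should be both more expensive and worse than another. A minimal sketch in Python, with invented model names, costs, and quality scores (not Google's actual figures), just to show what "non-dominated" means:

# Illustrative only: hypothetical models with (cost per 1M tokens, quality score).
models = {
    "frontier-pro": {"cost": 10.00, "quality": 92},
    "mid":          {"cost": 3.00,  "quality": 85},
    "flash":        {"cost": 0.30,  "quality": 80},
    "flash-lite":   {"cost": 0.10,  "quality": 70},
    "old-pro":      {"cost": 10.00, "quality": 84},  # dominated: same cost as frontier-pro, lower quality
}

def pareto_frontier(candidates):
    """Keep only models that no other model beats on both cost and quality."""
    frontier = {}
    for name, m in candidates.items():
        dominated = any(
            o["cost"] <= m["cost"] and o["quality"] >= m["quality"]
            and (o["cost"] < m["cost"] or o["quality"] > m["quality"])
            for other, o in candidates.items() if other != name
        )
        if not dominated:
            frontier[name] = m
    return frontier

print(pareto_frontier(models))  # 'old-pro' drops off the frontier

In this framing, the conversation's point is that a Flash-class model stays on the frontier by being distilled from the Pro-class model rather than trained in isolation.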
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. 
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
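A minimal sketch of the logit-based distillation discussed above (the idea from Hinton, Vinyals and Dean's "Distilling the Knowledge in a Neural Network"): the student is trained against the teacher's temperature-softened output distribution in addition to the hard labels. PyTorch is assumed; the temperature, mixing weight, and tensor shapes are illustrative rather than any production model's actual recipe:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on a comparable scale
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: random tensors standing in for a batch of 8 examples over a 32k vocabulary.
student_logits = torch.randn(8, 32000, requires_grad=True)
teacher_logits = torch.randn(8, 32000)
labels = torch.randint(0, 32000, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()

The teacher's full output distribution is the "logits from the much larger model" signal described above, carrying information a student cannot recover from hard labels alone.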
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
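The rule of thumb about benchmark lifespan (most informative when scores start around 10-30%, diminishing returns once they pass roughly 95%) reduces to a tiny triage helper; the benchmark names and scores below are invented for illustration:

def benchmark_status(score, floor=0.10, ceiling=0.95):
    """Rough triage following the 10-30% starting point / ~95% saturation heuristic."""
    if score < floor:
        return "too hard to give useful signal yet"
    if score >= ceiling:
        return "saturated: diminishing returns and likely leakage; lean on held-out internal evals"
    return "active: keep hill-climbing"

scores = {"multi_needle_long_context": 0.22, "agentic_coding_suite": 0.61, "single_needle": 0.99}
for name, s in scores.items():
    print(name, "->", benchmark_status(s))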
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. 
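A back-of-envelope calculation for why "attend to trillions of tokens" cannot be brute-forced: dense self-attention compares every token with every other token, so the score matrix grows quadratically with context length. The sketch assumes 2-byte (fp16-sized) entries for a single head of a single layer and ignores every real optimization (FlashAttention-style tiling, sparsity, KV-cache tricks); it is only meant to show the shape of the blow-up:

def attention_score_bytes(context_tokens, bytes_per_entry=2):
    """Memory to materialize one full context x context attention score matrix."""
    return context_tokens ** 2 * bytes_per_entry

for n in [128_000, 1_000_000, 1_000_000_000, 1_000_000_000_000]:
    tb = attention_score_bytes(n) / 1e12
    print(f"{n:>16,} tokens -> {tb:,.2f} TB of scores per head per layer")

# 128K tokens -> ~0.03 TB, 1M -> 2 TB, 1B -> 2,000,000 TB, 1T -> 2e12 TB.
# The quadratic blow-up is why trillion-token "attention" has to be an illusion built from
# retrieval-style narrowing (find the relevant few thousand documents first), not from
# literally materializing the matrix.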
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
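The funnel described above (trillions of candidate tokens narrowed to roughly 30,000 plausible documents by lightweight signals, then to about 117 by a heavier reranker, with only that shortlist reaching the most capable model) has the same shape as a classic search stack. A schematic Python sketch; the scoring functions are toy placeholders, not Google's ranking signals:

from typing import List

def lexical_overlap(query: str, doc: str) -> float:
    """Stage-1 signal: fraction of query terms present in the document (stand-in for cheap signals)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank_score(query: str, doc: str) -> float:
    """Stage-2 signal: placeholder for a small cross-encoder or lightweight LLM score."""
    return lexical_overlap(query, doc) + 0.01 * len(doc)  # toy tie-breaker

def retrieval_funnel(query: str, corpus: List[str], stage1_keep: int = 30_000, stage2_keep: int = 117) -> List[str]:
    """Narrow a huge corpus in stages so only a short list ever reaches the expensive model."""
    stage1 = sorted(corpus, key=lambda doc: lexical_overlap(query, doc), reverse=True)[:stage1_keep]
    stage2 = sorted(stage1, key=lambda doc: rerank_score(query, doc), reverse=True)[:stage2_keep]
    return stage2  # in a real system this shortlist becomes the frontier model's context

docs = ["solar panel deployment report", "cafe reviews", "solar energy statistics 2024", "jazz history"]
print(retrieval_funnel("solar deployment", docs, stage1_keep=3, stage2_keep=2))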
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys do, those are obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, but really getting at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like it's Google, it's YouTube. YouTube has this semantic ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube's size.Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk at, uh, I guess, uh, the web search and data mining conference in 2009, uh, where, we never actually published any papers about the origins of Google search, uh, sort of, but we went through four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things.
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to design these systems, especially when, I mean, in 2001, the internet is like doubling, tripling every year in size, and I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any, you know, principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that, because often what happens is if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold, uh, you know, a full copy of the, uh, index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprisingly. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to, like, classify whether the page is, you have to decide which pages should be updated and at what frequency.
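A toy version of the 2001-era shift Jeff describes: once the index lives in RAM, "softening" a three-word query into dozens of terms is cheap, because each extra term is a hash lookup instead of a disk seek on every shard. The documents and the synonym table below are made up purely for illustration.

```python
# Toy in-memory inverted index with query "softening" (synonym expansion).
# With an on-disk index, every extra term costs a seek on every shard;
# in memory, throwing 50 terms at the index instead of 3 is cheap.
from collections import defaultdict

DOCS = {
    1: "best cafe near the station",
    2: "cheap restaurants downtown",
    3: "quiet bistro with outdoor seating",
}

# Hypothetical synonym table; a real system would learn these relationships.
SYNONYMS = {"restaurant": ["restaurants", "cafe", "bistro"]}

index = defaultdict(set)                      # term -> set of doc ids
for doc_id, text in DOCS.items():
    for term in text.split():
        index[term].add(doc_id)

def soften(query: str) -> list:
    terms = query.split()
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

def search(query: str) -> list:
    hits = defaultdict(int)                   # doc id -> number of matching terms
    for term in soften(query):
        for doc_id in index.get(term, set()):
            hits[doc_id] += 1
    return sorted(hits, key=hits.get, reverse=True)

print(search("restaurant downtown"))          # matches docs 1, 2 and 3, not just 2
```

The back-of-the-envelope that justified the switch is right there in the transcript: roughly 60 shards times 20 replicas is about 1,200 machines, and the aggregate RAM of those machines was enough to hold one full copy of the index.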
Oh yeah.Jeff Dean [00:29:30]: There's a whole, like, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why the Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, what would I do? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times.
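In the spirit of the latency-numbers list and the picojoule arithmetic above, here is a back-of-the-envelope sketch. The latency figures are the commonly cited rough orders of magnitude, and the energy figures simply echo the illustrative numbers from the conversation (roughly 1 pJ for a low-precision multiply versus roughly 1,000 pJ to move a weight in from distant on-chip SRAM); none of them are measurements of any specific chip.

```python
# Back-of-the-envelope numbers (rough orders of magnitude, not measurements).
NS = 1e-9
LATENCY = {
    "L1 cache reference": 0.5 * NS,
    "branch mispredict": 5 * NS,
    "main memory reference": 100 * NS,
    "disk seek": 10e-3,                       # ~10 ms
    "round trip US <-> Netherlands": 150e-3,  # ~150 ms
}

# Energy figures echoing the conversation: ~1 pJ per low-precision multiply,
# ~1,000 pJ to move one weight from distant on-chip SRAM to the multiplier.
PJ = 1e-12
E_MULTIPLY = 1 * PJ
E_MOVE_WEIGHT = 1000 * PJ

def energy_per_mac(batch_size: int) -> float:
    """Energy per multiply-accumulate once the weight movement is amortized
    over a batch: move the weight once, reuse it batch_size times."""
    return E_MULTIPLY + E_MOVE_WEIGHT / batch_size

for b in (1, 8, 256):
    print(f"batch={b:3d}: {energy_per_mac(b) / PJ:7.1f} pJ per MAC")
# batch=1 costs ~1001 pJ per MAC; batch=256 costs ~4.9 pJ -- hence batching.
```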
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if, like, that's something that you already saw with the TPUs, right? Like that, to serve at your scale, uh, you probably sort of saw that coming. Like what hardware, uh, innovations or insights were formed because of what you were seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you pay a lot higher cost and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, and if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model onto the ASIC, and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense. Because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime as a chip to take you three, four or five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast changing field.
And so having people with interesting ML research ideas of things we think will start to work in that timeframe, or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do lots of careful ML experimentation to show us, uh, this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. So low precision, but scaled-up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know. Uh, at the end of this, we're going to have all these, like, chips that'll do like very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends.
Energy based models is one, you know; diffusion based models, which don't sort of sequentially decode tokens, is another; um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, where like you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's also sort of the more exotic things like analog based, uh, computing substrates as opposed to digital ones. Uh, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more significant pieces of work, uh, collectively, than you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that.
Uh, effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI Mode in a way, and it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if the retrieval is like the verifiable part that you can score, or what are, like, yeah. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be, you know, a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think there is that weird cliff where it feels like we've done the easy stuff, and now it's, but it always feels like that every year. It's like, oh, we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing, where everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things, and they fall down around the edges of those things and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM-8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now, and now you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It does, it does matter. People do judge books by their covers, as it turns out.
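A minimal sketch of the "same model, prompted as a critic" idea Jeff describes: one call generates, a second call rates the result, and the parsed rating can serve as a noisy reward for RL on non-verifiable tasks. The llm callable is a stand-in for whatever model API you use, and the prompts and 0-to-10 scale are invented for illustration.

```python
# Using one model as both generator and critic: the critic call is just the
# same model with a different prompt, and its numeric rating can be used as
# a (noisy) reward signal for tasks that have no exact verifier.
import re
from typing import Callable, Tuple

def generate_and_score(task: str, llm: Callable[[str], str]) -> Tuple[str, float]:
    answer = llm(f"Complete the following task:\n{task}")

    critique_prompt = (
        "You are a strict reviewer. Rate the following answer to the task "
        "on a scale of 0 to 10 and reply with just the number.\n\n"
        f"Task: {task}\n\nAnswer: {answer}"
    )
    raw = llm(critique_prompt)

    match = re.search(r"\d+(\.\d+)?", raw)    # tolerate extra words around the number
    score = float(match.group()) if match else 0.0
    return answer, min(max(score, 0.0), 10.0)

# Usage: plug in any chat/completion function, e.g.
# answer, reward = generate_and_score("Summarize this meeting transcript...", my_llm)
```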
Um, uh, just to draw a bit on the IMO gold. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of symbolic systems and LLMs was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have, like, completely separate, discrete, symbolic things, and then a completely different way of, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think, like, that IMO effort, with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs in something, so I train a street sign recognition model. Or I want to, you know, do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like, one of my, uh, so I interviewed ETA, who was on that team. Uh, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that, like, people with this universal skill set of just, like, machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here.
There's this concept of, like, maybe the capacity of a model; like, abstractly, a model can only contain the number of bits that it has. And so, you know, God knows, like, Gemini Pro is like one to 10 trillion parameters, we don't know. But the Gemma models, for example, right? Like, a lot of people want the open source local models that are like that, and, uh, they have some knowledge which is not necessary, right? Like, they can't know everything. Like, you have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so, like, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like, you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, uh, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like, you want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Um, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still be good at Python programming, cause we'll include enough of that, but there's other long tail computer languages or coding capabilities that it may suffer on, or multimodal reasoning capabilities may suffer, cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is, like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, it's like, they're probably not out there, you know. I think that's really the question.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, that is, not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I believe, uh, by the way, also, this is somewhat related to the language conversation. Uh, I think one of your favorite examples was you can put a low resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world, and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. You can fit your whole data set in the context, right.Jeff Dean [00:56:27]: If you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something. Um, you know, we probably. Yeah. Are not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
The quality of Swadisthan on the right side is creativity, i.e. truly inspired thoughts, ideas and actions. The quality of Swadisthan on the left side is pure knowledge, i.e. the truly discerning and discriminating power to see the innate nature of things at a new stage in our awareness called vibrational awareness.
In this episode of The Pure Property Podcast, co-hosts Paul Glossop and Phil Tarrant discuss the economic forces shaping Australia's property market and what they mean for investors. Glossop outlines how unexpected inflation data has prompted the Reserve Bank of Australia to reconsider its rate path, fuelling speculation about future interest rate movements. The hosts note a divide among major banks: some forecast stability, while Westpac anticipates further hikes, adding to market uncertainty. Drawing on insights from Chris Joye of Coolabah Capital, the episode highlights how shifting economic data has challenged earlier forecasts and reinforced the need for investors to remain adaptable. The conversation also examines debates about persistent inflation, including criticisms that government spending and subsidies contribute to it. Glossop stresses that investors should focus on fundamentals and adopt disciplined strategies to navigate these headwinds. Potential policy changes, such as adjustments to the capital gains tax (CGT) discount, are flagged as risks that could dampen market liquidity by encouraging investors to hold properties longer. Despite these pressures, strong housing demand, structural undersupply, and strategic planning continue to support long-term opportunities for property investors.
www.longviewbaptistchurch.org I Timothy 6:11-12 Wednesday, February 11, 2026 1. Choose your passion wisely and flee worthless idols. 2. Run after righteousness, godliness, faith, love, gentleness and endurance. 3. The fight of faith is the path to eternal life.
What God Calls Pure Christianity (And Why It's So Uncomfortable) What if much of what we call Christianity… God wouldn't call pure? In this powerful episode of The Rob Skinner Podcast, Rob explores the simple yet challenging definition of real faith found in James 1:27. When church culture, traditions, and labels are stripped away, Scripture reveals a clear picture of what God considers pure and faultless religion: Loving the vulnerable and living with a clean heart. Rob unpacks how early Christians transformed the world through radical compassion—serving the sick, the poor, and the forgotten—even at great personal cost. But James' message doesn't stop with outward action. True faith also requires guarding our inner life from the quiet spiritual pollution that can dull our devotion. This episode is a call to live with: Radical compassion toward those in need Personal purity in a culture that pulls us away from God Balanced faith that is both outwardly loving and inwardly clean Through personal stories, ministry reflections, and practical challenges, Rob invites you to examine: Who around you needs protection, encouragement, or help What habits or influences may be quietly polluting your heart How to live the kind of faith that God calls pure and faultless If you want a simple definition of a 10X Christian, this might be it: Love radically. Live purely. And that kind of faith doesn't just change you— it changes the world.
Send a textSinger, songwriter and producer of R&B and Funk. It's Curt Jones!
Send a text→ Stay Connected Instagram: https://www.instagram.com/lifechurchuk/Facebook: https://www.facebook.com/lifechurchfolkestoneYoutube: https://www.youtube.com/@lifechurchuk1Instagram: https://www.instagram.com/robertmaasbach/Facebook: https://www.facebook.com/robertmaasbach/→ Give It's the generosity of many that enable Life Church to fulfil all that God has called us to do https://www.lifechurchuk.org/give/→ New to Life Church?If you're new we would love to get in touch and connect with youhttps://lifechurchuk.org/new-to-life-church/
Is "entertainment-first" content killing your Pinterest growth? Fun videos and memes might get views, but do they drive saves, clicks, and sales for busy mom entrepreneurs? We'll discuss that in this video as well as what works best for Pinterest. Pinterest Marketing for Beginners. Pinterest strategy. FACEBOOK GROUPAUDIT PAGE
9th Oct 2021(Satsangs from the Archives) These are teachings and pointers from ongoing NDA(Non-duality awareness)/Advaitic Satsangs held at Bhagavan Ramana Maharshi Centre in Melbourne, Australia. Om Namo Bhagavate Sri Arunachala Ramanaya !
Czabe delivers a massive triple-header today! First, he weighs in on the pros/cons of Olympic legend Lindsey Vonn deciding to ski on a torn ACL, and then breaking her leg in a crash while trying. Also, we get a consult from Dr. BRIAN KOCH, an actual orthopedic surgeon, on the situation. Czabe shares a snippet of his weekly Monday chit-chat with Scott and Solly on *their* pod, then MATT MUELLER swings by to discuss all the good, bad, and the whaaaaa? of the Super Bowl ad slate.Advertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
The new Babolat Pure Aero is here, and after going beyond the initial playtest, we have a lot more to say. In this episode, we dive deep into what makes the latest Pure Aero update such a positive step forward and why it's winning us over on court. If you've ever loved the Pure Aero but wanted more feel, confidence, or control without losing that signature spin, this conversation is for you. We break down how the updates actually translate to real match play—and who we think will benefit most from this generation. As tennis gear specialists who hit with racquets daily, we move past spec sheets and marketing claims to talk honestly about what stood out once the racquet settled in. In this episode, we cover: What changed in the new Babolat Pure Aero and why it matters How the racquet feels after extended hitting (not just day one) Spin potential, comfort, and confidence from the baseline Who this Pure Aero is best suited for compared to past versions
When it comes to love, the secular world has plenty to say. Turn on the television, radio, or explore the internet, and you'll find advice on every type of relationship, from friendships to getting along with your parents, and of course, advice on romantic relationships. People are clamoring to figure love out…but what does the Bible say about love?1 Corinthians 13:4–5 says, “Love is patient, love is kind. It does not envy, it does not boast, it is not proud. It does not dishonor others, it is not self-seeking, it is not easily angered, it keeps no record of wrongs.”Love isn't about control or about protecting our own interests. It's not about winning, because true love doesn't keep score. God's Word gives us a beautiful picture of what love can be when we seek to follow the Lord: patient, kind, selfless, humble, and forgiving. That's the kind of love we should strive to build with family, friends, and a potential mate.If we want to take Paul's wisdom in Corinthians to heart, we must first think about what the other person needs in our relationships. And you know, if you practice this model, you'll find that soon enough, your own needs will be met, too! Pure love has a ripple effect.Let it wash over you!Let's pray. Lord, your relationship to us is perfect. Help us love as you love. In Jesus' name, amen. Change your shirt, and you can change the world! Save 15% Off your entire purchase of faith-based apparel + gifts at Kerusso.com with code KDD15.
It's the Pure Report annual predictions episode! We welcome Shawn Rosemarin to dive deep into the world of tech in 2026, including a look back at 2025 predictions on AI becoming a strategist, Multi-Cloud 2.0 requiring a unified data platform, and end-to-end security ramping up. Shawn holds himself accountable for last year's bets, particularly noting that the expected "operating model transformation" driven by AI has yet to fully materialize, arguing that many organizations are still grappling with the hard changes to people, process, and technology required for true transformation. Our conversation pivots to what's next, starting with the evolution of AI from simple co-pilots to autonomous agents that will soon become mature process owners capable of completing end-to-end workflows. This shift will require a greater emphasis on verification, changing the industry's focus from time to answer to time to trust (or time to truth) as enterprises build verification stacks to ensure AI accuracy, recognizing that every mistake costs money and customer satisfaction. Finally, Rosemarin forecasts that growing energy scarcity will drive new AI economics, forcing serious programs to run AI like a business system by routing queries to the most efficient models. Furthermore, he predicts that data stops being an asset and evolves to a supply chain, necessitating a manufacturing-like process to refine structured, semi-structured, and unstructured data for uniform consumption by training systems. This new landscape will ultimately punish infrastructure complexity and reward the platform mindset that simplifies operations and removes friction through automation and orchestration. To learn more, visit https://blog.purestorage.com/perspectives/2026-ai-predictions-data-storage/ Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/ 00:00 Intro and Welcome 09:30 Look back at 2025 Predictions 17:33 William Gibson Quote on the Future 22:20 2026 Predictions - Copilots Become Agents 26:48 Verification and Time to Trust 30:30 Energy Scarcity and AI Economics 34:13 Data as a Supply Chain 38:50 Relevance Engines 42:10 Platform Mindset 45:43 Content Authenticity 49:37 Cyber as an Executive Imperative 52:35 Workforce Productivity 55:21 Summary of 2026 Predictions
Super Bowl 60 reignited the debate: boring game or defensive brilliance?On this episode of Sports Chasers Podcast – Monday Night Football Blitz, Kevin L. Warren and D-Dubbz Warren analyze how Seattle's defense controlled the game, why mistake-free quarterback play still wins championships, and what this Super Bowl says about modern NFL expectations.
While most fans called it boring, this was a perfect night for anyone who enjoys watching the New England Patriots fall flat on the biggest stage. We break down why Super Bowl 60 was far more entertaining than people want to admit, especially if you are a Jets fan or anyone tired of Patriots success. From Seattle Seahawks controlling the game with defense and the run, to New England Patriots looking overwhelmed offensively, this episode dives into why the outcome felt inevitable early. We talk about Drake Maye struggling under playoff pressure, why the Patriots' easy path finally caught up to them, and how Seattle won without needing a heroic performance from Sam Darnold. Was the game ugly? Absolutely. Was it satisfying? For at least one bitter Jets fan, it was four hours of joy. We also look ahead to what this loss means for the Patriots, why getting close does not guarantee you will ever get back, and whether Seattle can realistically repeat this formula next season.
Environmental Series. Episode #1 of 4. In 1851, a journalist named Henry Mayhew set out to document the lives of London's working poor. What he found was astonishing. In the richest city in the world, thousands of people made their living by picking through other people's trash. There were the bone-grubbers, who scavenged bones from gutters to sell to soap manufacturers. There were the mudlarks, mostly children, who waded through the filthy banks of the Thames searching for coal, rope, and bits of metal. And then there were the pure-finders. What's “pure” you ask? Well, "pure" was a Victorian euphemism for dog excrement. Pure-finders, mostly elderly women, spent their days scouring the streets of London for dog droppings, which they then sold by the pailful to tanneries in Bermondsey. The tanners used it to purify leather. Hence the name. We tend to think of recycling as a modern invention, something that started with the environmental movement of the 1970s. Blue bins, sorting instructions, that kind of thing. But as brilliant historians have uncovered, the story of how humans have dealt with their discarded materials stretches back millennia. For most of human history, the concept of "throwing something away" barely existed. To begin our series on environmental history, we're tackling the premodern history of recycling. Or as pre-WWII people would have called it: reclamation, salvage, scrapping, repair, and reuse. We'll meet rag-and-bone men and dustmen, shoddy masters and mudlarks. We'll discover how rags became paper, how old wool became new cloth, and how virtually nothing in the premodern world was ever truly waste. Find transcripts and show notes at www.digpodcast.org Learn more about your ad choices. Visit podcastchoices.com/adchoices
The Epstein Debacle Unfolds To support this ministry financially, visit: https://www.oneplace.com/donate/549/29?v=20251111
CA-CHOW!! Cars Full Reaction Watch Along: / thereelrejects Visit https://huel.com/rejects to get 15% off your order RATATOUILLE (2007) Movie Reaction: • RATATOUILLE (2007) MOVIE REACTION – WE DID... Gift Someone (Or Yourself) An RR Tee! https://shorturl.at/hekk2 The Jo(h)n Squad is back to give their CARS Reaction, Recap, Commentary, Analysis, Breakdown, & Spoiler Review! John Humphrey and Jon Maturan rev up their reaction and review of Pixar's 2006 animated classic Cars, a high-energy, heartwarming story about speed, humility, and finding purpose off the beaten path. The film follows hotshot rookie race car Lightning McQueen (voiced by Owen Wilson, Wedding Crashers, Midnight in Paris), whose obsession with winning lands him stranded in the forgotten desert town of Radiator Springs on the way to the Piston Cup Championship. What begins as an inconvenience slowly becomes a life-changing detour as Lightning learns the value of friendship, community, and slowing down to appreciate the journey. Along the way, Lightning forms an unlikely bond with wise tow truck Mater (Larry the Cable Guy, Larry the Cable Guy: Health Inspector, Cars 2), sparks a romance with determined attorney Sally Carrera (Bonnie Hunt, Jumanji, Toy Story 4), and gains mentorship from legendary racer Doc Hudson (Paul Newman, Cool Hand Luke, The Hustler). Packed with iconic moments like Lightning's crash on Route 66, Mater's hilarious tractor-tipping escapades, Doc's reveal as the Hudson Hornet, and the emotional Piston Cup finale, Cars blends laugh-out-loud humor with Pixar's signature emotional storytelling. We break down the film's themes of legacy, ego, and redemption, why Radiator Springs remains one of Pixar's most memorable worlds, and how Cars became a beloved franchise for generations of fans. Follow Jon Maturan: https://www.instagram.com/jonmaturan/?hl=en Intense Suspense by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/ Follow Us On Socials: Instagram: https://www.instagram.com/reelrejects/ Tik-Tok: https://www.tiktok.com/@reelrejects?lang=en Twitter: https://x.com/reelrejects Facebook: https://www.facebook.com/TheReelRejects/ Music Used In Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Happy Alley by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!! Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba Learn more about your ad choices. Visit megaphone.fm/adchoices
Toxins, chemicals, environmental exposure... How much is too much, how much should we worry, who should be concerned? The goal isn't to be afraid, but to understand how this fits into IBS management - listen to this episode of The Gut Show to learn more about TILT theory without going down a fear-based rabbit hole. Mentioned in this episode: MASTER Method Membership FREE IBS Warrior Summit Take the quiz: What's your poop personality? MCAS episode Thank you to our partners: mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
It's not about what you're doing in the decade you're in, it's about what those habits turn into in the next one. In this conversation we get into why we're leaning harder into relative strength and athletic movement (high-volume calisthenics, hangs, pull-ups, push-up variations, sandbags, sled work, reaction drills, rope flow, tennis balls, and even bouldering) without abandoning the basics of strength training. We talk about how coordination is trainable, why expanding your “movement vocabulary” can carry over into sport and everyday life, and how staying capable as you age comes down to building skills that keep you confident, reactive, and durable. Plus: insane feats of strength and speed making the rounds online, strongman grip madness, and why the goal isn't just being strong once, but being strong, mobile, and useful for decades.Special perks for our listeners below!