What if I told you that eating certain foods could help your body burn fat? On today's show, you're going to learn the exact science behind this process, including the specific ways that certain foods can aid in fat loss, and what to eat if your goal is to lose body fat. On this episode of The Model Health Show, you're going to hear from world-renowned physician and scientist, Dr. William Li, on how to eat to optimize your metabolism. Dr. Li is an expert on the intersection of nutrition and human health. Today he's sharing the exact science behind how certain foods affect your metabolism and ability to lose body fat. I'm also sharing five science-backed foods that can help you lose more body fat, and the best ways to incorporate these fat-burning foods into your existing routine. This episode is full of powerful science and simple, accessible ways to improve your health from the inside out. So click play and enjoy! In this episode you'll discover: What body fat actually is, and its purpose. (2:42) How overeating affects your fat cells. (8:42) The process behind how fat can become inflammatory. (10:23) Which specific foods have anti-angiogenesis properties. (12:54) The truth about eating soy. (14:13) How curcumin helps the body burn fat. (22:24) Why whole foods are often like a Swiss army knife. (26:18) The science of stem cells inside your body fat. (27:40) How the compound hydroxytyrosol can reprogram stem cells. (32:46) The connection between the immune system and the metabolism. (43:03) How eating avocados can help you lose more belly fat. (52:57) The metabolic benefits of eating lean fish. (57:39) How to add more sweet potatoes into your diet. (1:02:42) Items mentioned in this episode include: Boncharge.com/model - Use my code MODEL for 15% off blue light blocking glasses! Piquelife.com/model - Get exclusive savings on bundles & subscriptions! How to Use Food to Burn Fat - Watch the entire interview with Dr. William Li! Eat to Beat Disease by Dr. William Li - Learn how to use food Eat to Beat Your Diet by Dr. William Li - Learn how to heal your metabolism! Eat Smarter Family Cookbook - Transform the health, fitness, and connection of your entire family with the Eat Smarter Family Cookbook! Be sure you are subscribed to this podcast to automatically receive your episodes: Apple Podcasts Spotify Soundcloud Pandora YouTube This episode of The Model Health Show is brought to you by Bon Charge and Pique. Bon Charge carries the world's largest selection of science-backed blue light blocking technology to enhance your sleep and overall wellness. Use my code MODEL at boncharge.com/model for 15% off blue light blocking glasses and more. Go to Piquelife.com/model for exclusive savings on bundles & subscriptions on cutting-edge solutions for your head-to-toe health and beauty transformation.
Is social media addictive by design or just irresistible entertainment? The panel tackles the lawsuit that's dragging tech giants onto the witness stand and how surveillance tech is quietly expanding while lawmakers and users scramble to catch up.
Jury told that Meta, Google 'engineered addiction' at landmark US trial
Instagram Chief Says Social Media Is Not 'Clinically Addictive' in Landmark Trial
Section 230 turns 30 as it faces its biggest tests yet
Meta apparently thinks we're too distracted to care about facial recognition and Ray-Bans
Amazon Ring's Super Bowl ad sparks backlash amid fears of mass surveillance
Ring cancels its partnership with Flock Safety after surveillance backlash
TikTok is tracking you, even if you don't use the app.
Discord backtracks on controversial age verification rollout...kind of
Discord/Twitch/Snapchat age verification bypass
The DJI Romo robovac had security so poor that this man remotely accessed thousands of them
HP's laptop subscriptions are a great deal — for HP
FTC Ratchets Up Microsoft Probe, Queries Rivals on Cloud, AI
T-Mobile announces its network is now full of AI by rolling out real-time translation
Apple's latest attempt to launch the new Siri runs into snags
SpaceX Prioritizes Lunar 'Self-Growing City' Over Mars Project, Musk Says
Elon Musk declares victory with Medicaid data release
Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars
Backblaze Drive Stats for 2025
$1.8 million MST3K Kickstarter brings in (almost) everyone from the old show
OpenAI Is Nuking Its 4o Model. China's ChatGPT Fans Aren't OK
Hideki Sato, designer of all Sega's consoles, has died
Byte magazine artist Robert Tinney, who illustrated the birth of PCs, dies at 78
Launching The Rural Guaranteed Minimum Income Initiative
Host: Leo Laporte
Guests: Wesley Faulkner, Stacey Higginbotham, and Thomas Germain
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors: zscaler.com/security monarch.com with code TWIT ZipRecruiter.com/twit helixsleep.com/twit cachefly.com/twit
Matt sits down with Lucky Burton from Lucky's Hot Rod Shop & Speed Equipment. Lucky talks about his early days at Old Crow Speed Shop, Bonneville racing, and how he became the Model T hot rod guru.
Check out our website!! - www.irontrapgarage.com
Don't forget to listen to our weekly podcast!! - https://open.spotify.com/show/09WnyHe97uUrMkeXF6dQIL?si=dObfWrBKTyqP42qwrO5vjw
Get 10% Off Your Eastwood Order With The Coupon Code ITG10 At Checkout * Some Products Excluded - https://glnk.io/73rnx/irontrap
Wanna send us something?
Iron Trap Garage
PO Box 6
New Berlinville, PA 19545
Matt's Instagram - @irontrap - https://www.instagram.com/irontrap/
Mike's Instagram - @mhammsteak - https://www.instagram.com/mhammsteak/
Iron Trap Parts Instagram - @irontrapfinds - https://www.instagram.com/irontrapfinds/
Iron Trap eBay - https://www.ebay.com/usr/irontrapgarage/
I. Introduction Welcome to the Victory Church podcast and Sunday worship gathering. Victory's mission: reaching the lost, restoring the broken, reviving believers. Joy and gratitude for being in God's house where worship, prayer, the Word, and fellowship occur. Emphasis that God's grace enabled people to be present, overcoming hindrances. II. The Nature and Purpose of Prayer Prayer and the Word as central priorities at Victory Church. Biblical commands to pray: “men ought always to pray,” “pray without ceasing,” “watch and pray,” “continue earnestly in prayer.” Clarification: prayer is not a religious ritual but a relational conversation with a loving Father. Prayer as sharing cares, dreams, concerns with God; Scripture as God sharing His thoughts and heart with us. III. Reactive vs. Proactive Prayer A. Reactive Prayer Definition: responding to events, crises, and immediate needs after they happen. Typical reactive requests: jobs, finances, housing, healing, family and school pressures. Affirmation: these needs matter to God; believers should cast all cares on Him. Problem: if this is the only kind of praying, discipleship and prayer life are out of alignment with God's best. B. Proactive Prayer Definition: creating or shaping situations by praying God's will in advance, not only reacting. Example from the Lord's Prayer: “Thy kingdom come, Thy will be done on earth as it is in heaven” as a proactive request. Goal: move believers beyond crisis-only praying into kingdom-focused, forward-looking prayer. IV. Acts 4 as a Model of Prayer A. Context of Acts 4 Acts as early church history, showing the Spirit-empowered beginnings of the church. Peter and John preaching, healing a crippled man, and provoking opposition from religious leaders. Authorities command them not to speak or teach in the name of Jesus. Connection to today: pressure in culture to silence biblical truth and the name of Jesus. B. The Disciples' Response They return “to their own” (the church, fellow believers) when threatened. Principle: where you turn in crisis reveals much about your heart. They share the report as a prayer request and turn immediately to corporate prayer. They pray in alignment with Scripture (Psalm 2) and God's will, not just emotions. C. Content of Their Prayer (Acts 4:24–31) Acknowledge God as Creator and Sovereign Lord over heaven and earth. Rehearse Scripture about nations raging and rulers opposing the Lord and His Christ. Interpret persecution as part of God's sovereign purpose in Christ's suffering. Reactive element: “Lord, look on their threats.” Proactive element: ask for boldness to speak the Word, and for God's hand to heal with signs and wonders in Jesus' name. Result: the place is shaken, all are filled with the Holy Spirit, and they speak God's Word with boldness. V. Praying with the Word and God's Will Call to pray not only from need or emotion but aligned with Scripture. Examples of praying Scripture over needs (provision, healing, emotional and spiritual needs, relationships). Recognition that God's will includes timing; believers must be sensitive and obedient. Emphasis: there is power when prayer and the Word are joined. VI. From Problem to Launching Pad Observation: in Acts 4, the crisis launches the church into deeper proactive prayer, not retreat. Instead of praying primarily for safety and comfort, they pray for greater boldness and impact. Application: believers today should ask God to use trials to produce testimony, messages, and greater influence for His glory. VII. 
Call to a Proactive Kingdom Focus A. For Truth and Witness in a Confused Culture Culture tolerates generic “god talk” but reacts strongly to the exclusive claims of Jesus. Expect opposition when living and speaking biblical truth, without being obnoxious or hypocritical. The church must stand firm on Scripture, not be shaped by social media or worldly opinions. B. For Local and Global Mission Victory Church's call: reach Providence and the nations through evangelism and missions. Example: missions trips (Kenya, Sierra Leone, Liberia) and conferences to strengthen pastors and churches. Appeal for proactive prayer for missions: bold preaching, anointing, signs and wonders, and lasting fruit. C. For Revival and Awakening Distinction: revival for the church (bringing believers back to life), awakening for the lost. Invitation to pray for souls, discipleship, anointing, revival in churches, and awakening in the nation. Desire to create cultures of discipleship, evangelism, missions, and deep engagement with Scripture. VIII. Illustrations of Proactive Prayer in History and Life Personal testimony: long season in temporary housing, choosing contentment and kingdom focus while trusting God's timing. Application of Matthew 6:33: prioritizing God's kingdom and righteousness, trusting Him to add needed things. Biblical example: Job praying for his friends and receiving double restoration. Historical examples: John Knox's burden “give me Scotland or I die” and its influence. David Brainerd's fervent prayer for Native Americans and resulting impact. William Tyndale's martyrdom for translating Scripture and the later spread of English Bibles. The Moravians' 100-year prayer meeting and remarkable missionary sending. IX. Practical Application and Invitation Challenge: move beyond “needs-only” praying to kingdom-centered, proactive prayer. Specific areas to pray proactively: personal walk, church, ministries, missions, national awakening, and social issues. Encouragement to stay for times of corporate prayer, lifting up pastors, leaders, and global work. Final appeal: cultivate a passion that cries, “Lord, give us souls, give us revival, use my life and this church for Your glory.”
We meet a Ghanaian woman who is challenging stereotypes of beauty and disability by modelling with her prosthetic leg wrapped in colourful kente fabric. Abena Christine Jon'el had her leg amputated when she was just two years old because of an aggressive form of cancer. She says she's fought through so much to survive that she's determined to fight for anyone who's ever felt defeated by life. Also: A mobile gaming app that's helping teenagers in Brazil learn how to support their friends with mental health issues. A scheme teaching gardening skills to prisoners in the UK to help cut the numbers who reoffend after their release. The Washington museum curator who's adopted Gen Z slang to get younger people interested in its works of art. Alison Luchs has attracted over nine million views with two social media posts, and is challenging others to submit similar videos about other exhibits. Plus big baby elephant news, some unusual guard animals, and how one new family helped bring an entire community together, just by showing they cared. Our weekly collection of inspiring, uplifting and happy news from around the world. (Photo: Abena Christine Jon'el on the catwalk in Ghana. Credit: Vino Studio / Nineteen57 Events)
Today, we examine a year that looked chaotic but felt familiar to trend followers. Gold surged, equities rotated globally, and non-US markets quietly gained momentum. Yet beneath strong returns lies a deeper debate about model design, short versus long horizon signals, and whether innovation in liquid alternatives serves investors or sales desks. From AI disruption to product engineering and allocator behavior, this episode explores where systematic strategies truly create value and where structural incentives distort outcomes. In a changing macro regime, clarity of purpose may matter more than ever.
-----
50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE
-----
Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.
IT's TRUE – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.
And you can get a free copy of my latest book "Ten Reasons to Add Trend Following to Your Portfolio" here.
Learn more about the Trend Barometer here.
Send your questions to info@toptradersunplugged.com
And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.
Follow Andrew on Twitter.
Episode TimeStamps:
00:00 - Introducing the Systematic Investor series
01:17 - AI disruption and systematic investing
06:52 - The shifting perception of US risk
09:21 - Winter Olympics and market metaphors
12:59 - Hedge funds versus private equity
19:23 - 2025 review and the importance of risk allocation
22:17 - Why CTAs did not fully capture gold
29:48 - Short term versus long term trend models
35:27 - Letting risk run versus investor palatability
44:43 - The problem with liquid alternatives
51:21 - Product design versus fiduciary duty
59:06 - QIS, innovation and investor sophistication
01:04:41 - Model risk and the myth of risk premia
01:10:51 - Hopes for a trending year ahead
Copyright © 2025 – CMC
On this episode the boys talk to Steve Herod, the developer of a system that takes all the headaches out of model show management. No more pens and paper or having to write up judging results. This system lets you get on with the job of running a model show without the hassle of writing up online entries, judging tabulations and award results. Nathan Bradford, a judge from ScaleACT 2025, also joins the conversation to explain how easy the hobbyshow program is to use from a punter's point of view. If you would like to know more go to www.hobbyshow.org Mailbag, Wheel of Fortune, all that and more! Don't forget to support the sponsor of our show Scott from the Scale Modellers Supply https://www.scalemodeller.com.au/ Leave us a message, comment or even ask a question, we would love to hear from you! Write to Onthebench64@gmail.com. If you would like to support our show please go to www.patreon.com/onthebench
Seedance 2.0 is the best AI video model we've ever seen. Bytedance's new AI tool generates 15 second clips of basically anything. But what happens next? And where do we go from here? Gavin got early access before it got locked down. We tested it with original animation, fake sitcoms, anime, and a McDonald's ad. The results are genuinely shocking - multi-shot editing, cinematic camera work, and real celebrity voices coming straight out of the model. People are making fake Seinfeld episodes, Avengers deleted scenes, and Rocky working at a fast food restaurant with Optimus Prime. Plus two new Chinese LLMs beating American models on benchmarks, Google Deep Think scores 84% on ARC-AGI, OpenAI's Codex Spark model, that viral AI post your mom sent you, and Kevin loses his mind building an Open Claw agent named Mr. Tibs who now requests server upgrades at 3am. HOLLYWOOD LAWYERS ARE GOING TO HAVE A VERY INTERESTING YEAR. ITS FINE. Come to our Discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Seedance 2.0 - Bytedance's New AI Video Best Since Sora 2 https://seed.bytedance.com/en/seedance2_0 https://www.reuters.com/business/media-telecom/bytedances-new-ai-video-model-goes-viral-china-looks-second-deepseek-moment-2026-02-12/ The Seinfeld Test: https://x.com/apples_jimmy/status/2021351821718225330?s=20 Even better The Seinfeld Fight: https://x.com/itspoidaman/status/2021409465355075655?s=20 Wolverine vs Thanos: https://x.com/AndrewCurran_/status/2021979655130296487?s=20 Avengers Endgame: https://x.com/cfryant/status/2021398605278376201?s=20 Tom Cruise vs Brad Pitt: https://x.com/RuairiRobinson/status/2021394940757209134?s=20 Ethan Hunt vs John Wick: https://x.com/chatcutapp/status/2021902856367092108?s=20 Gavin's Game of Friends Sitcom Test: https://x.com/gavinpurcell/status/2021418263432032635?s=20 Gavin's Rocky Balboa & Optimus Prime Test: https://x.com/gavinpurcell/status/2021429329012650045?s=20 Original Animation style: https://x.com/gavinpurcell/status/2021396803787383254?s=20 Anime test: https://x.com/gavinpurcell/status/2021732810554507352?s=20 Comparing AI McDonald's Ads: https://x.com/AIForHumansShow/status/1802715910488400047?s=20 GLM-5 from Z.AI https://x.com/Zai_org/status/2021638634739527773 https://z.ai/blog/glm-5 GLM-5 Gameboy Emulation https://x.com/Zai_org/status/2021754659590033565?s=20 New Minimax 2.5 Model https://x.com/MiniMax_AI/status/2021980761210134808?s=20 CODEX SPARK - Faster Codex https://openai.com/index/introducing-gpt-5-3-codex-spark/ New Google Deep Think Model CRUSHES Benchmarks https://x.com/GoogleDeepMind/status/2021981510400709092?s=20 The 'Something Big Is Happening' Post Heard Round The World https://x.com/mattshumer_/status/2021256989876109403?s=20 Gavin's Simultaneous Take (in Monday's newsletter too) https://x.com/gavinpurcell/status/2021292314291999182 AI 2027 https://ai-2027.com/ One-Shot Otter Anime From Ethan Mollick: https://x.com/emollick/status/2021412306291392535 The Best Prompt Ever Seedance https://x.com/Gossip_Goblin/status/2021468902220497061?s=20 MattVidPro's Shrek & Donkey Crash a Honda Accord https://x.com/MattVidPro/status/2021739211674566808?s=20
Money comes into your business, and deciding where it goes after that is one of the most underrated skill sets in business. The top-earning owners know how to tweak their financial model based on what the business needs.
—-------------------------------------------------------------------------------------------------------------
I solve problems in your business and make you more money. Guaranteed. For over a decade I've been working with gym owners (via one-on-one consulting) to help create tailored solutions to solve their business problems, engineer the game plan and empower them to execute the strategy. Stop wishing your business problems are going to magically go away. Invest in your business and let me solve your problems and optimize your business fast and efficiently. We'll work together daily/weekly, with a monthly call until the problem is solved and then I want you to fire me. Because this is YOUR business, I'm just here to solve a specific problem and then get out of your way. Learn more about what it's like for us to work together.
—-------------------------------------------------------------------------------------------------------------
Want to increase your business IQ by 100x for only $50? Get enrolled in Microgym University - the only online business school that teaches you the best practices and business frameworks from some of the most successful brands in our industry and then lets you decide which ones to install in your business. New courses are added every month. www.microgymuniversity.com
—-------------------------------------------------------------------------------------------------------------
Need help leasing or buying a building? I created the Gym Real Estate Company so that gym owners had someone who could go beyond the duties of a typical real estate broker and actually advise them on business aspects as they relate to site selection, market location fit, operational capacity, facility layout, pre-sell marketing, and more. If you're looking for help with your next lease or if you want us to help you along the journey of buying a building - head over to www.gymrealestate.co and book a Discovery Call.
—--------------------------------------------------------------------------------------------------------------
What does it take to reimagine aging services in a complex health and social care system? In this episode, Marta Corvêlo, President & Chief Executive Officer at Somerville-Cambridge Elder Services, talks about her journey into health and human services and her approach to transforming community-based aging support. She shares how her upbringing in Portugal and early work with refugee families shaped her commitment to social impact and equity. Marta explains the mission of SCES and how its home-based, wraparound services help older adults, people of all abilities, and caregivers age with independence and dignity. She also discusses operating at the intersection of health care, behavioral health, and social supports to better navigate the systems serving aging populations. Tune in to hear how values-driven leadership and community-based innovation are reshaping the future of aging services! Resources: Connect with and follow Marta Corvêlo on LinkedIn. Follow Somerville-Cambridge Elder Services on LinkedIn, Instagram, and Facebook, and explore their website! Learn more about your ad choices. Visit megaphone.fm/adchoices
Back9 Golf is rewriting the rules of franchising — and in just three years, they've scaled to 443 locations across the U.S. and abroad. In this episode, Dustin Hansen, CEO of Back9, breaks down the franchise model, the culture, and the strategy behind one of the fastest‑growing concepts in America. Inside this conversation: • How Back9 evaluates franchisees and maintains culture at scale • Why indoor golf is exploding across all demographics • The economics, build‑out model, and membership strategy fueling rapid growth A must‑listen for founders, operators, and anyone curious about how a franchise becomes a movement.
What if I told you that a $3–4M practice, 3 days a week, working 45 weeks a year, is not only doable—but might be easier than the "lean and mean" model everyone's pushing? There's a growing chorus in orthodontics preaching that the only way to be happy is to stay small, keep overhead low, and see fewer patients. But I'm here to tell you—that's just one lens, and frankly, it might be blinding you to what's truly possible.
Quotes
"I became an orthodontist, not an ortho assistant. And I delegate most of what I don't [enjoy], because it doesn't bring me happiness to clean a chair." — Dr. Glenn Krieger
"You can run a $3 to $4 million practice on three days a week—easily—with 40–45 weeks a year. And you can do it at 50% overhead or lower." — Dr. Glenn Krieger
Key Takeaways
Intro & Practice Philosophy Reality Check (00:00)
Why judging high-volume practices is short-sighted (00:02)
You can build a $3–4M practice at 50% overhead—here's how (00:43)
Why 55–60 patients/day might be the sweet spot (02:40)
Delegation and avoiding burnout (03:45)
A breakdown of true overhead realities in most markets (05:00)
Introducing the "Make More Money" meeting (08:30)
Additional Resources
Are you ready to break free from limiting beliefs about what your practice has to look like?
LEGO travels through time this week — heading to the future with the Maersk Container Ship, rolling back to the early days of motoring with the Ford Model T, and stepping into classic art with a Monet masterpiece. We also celebrate the arrival of spring with some fun new duck-themed builds. All that and more on this week's episode of Bricking LEGO News!
FOLLOW my YouTube channel: Back 2 Brick
Set Review: 76324 Spider-man vs. Oscorp
Rebrickable Review: Monastery of Spinjitzu moc update by alexwashere13
Maersk Dual Fuel container vessel
Duck family GWP
Bricklink Designer Program Series 10
Claude Monet Water Lilies
Sea otter time
Ford Model T
Spider-man rumor
Be the next LEGO designer
Zelda rumors
Brickjournal creator
Pokemon designer meet-and-greet
Remote control Tumbler
TED & LEGO
Chinese Celebration
Dragonball Z!
Thank you, Patrons! - Bellefonte Bricks Studio, Jimmy Tucker, David, Paul Snellen, Lee Jackson, Pop's Block Shop, Steve Miles, David
Support the show
See some of the designs I've built - REBRICKABLE.COM
Head over to Back2brick.com for links to the latest LEGO set discounts!
Support the podcast through our affiliate links AND join the Back 2 Brick Patreon!
Have a question? Want to be a guest? Send me a message! backtobrick@gmail.com
Back 2 Brick Podcast is not an affiliate nor endorsed by the LEGO Group.
LEGO, the LEGO logo, the Minifigure, and the Brick and Knob configurations are trademarks of the LEGO Group of Companies. ©2025 The LEGO Group.
Seth Price returns from Puerto Rico to discuss how technology is slashing costs in pre-litigation. The hosts react to news of 700 staffers being let go at Baker McKenzie due to AI efficiencies, sparking a debate on the future of paralegal billing. Jay reveals his potential newest internal hire—a dedicated Firm Technologist—while the hosts weigh in on the controversial industry debate: should you promote your best paralegal to the C-suite? Is your firm's profit model at risk? Listen now to learn how to restructure your team for the age of automation.
#LawFirmBlueprint #LegalAI #LawFirmManagement #LegalTech #LawFirmGrowth #CFO #HiringFromWithin
Most apartment communities focus on amenities. Ovation focused on experience. And it changed everything. Shellie Greer, Director of Ovation Lifestyle, shares how her team turned traditional multifamily properties into true lifestyle brands that residents feel connected to, talk about, and choose to stay in. Instead of generic events or one-off perks, Ovation built a system for community through wellness programming, branded experiences, influencer content, and consistent resident touch points that make every property feel like home. We break down how events double as marketing, how content fuels leasing, why community takes time to build, and what most operators miss when trying to improve retention. If you care about resident loyalty, brand differentiation, or standing out in a crowded market, this is a playbook worth studying. Tap in to learn how Ovation is redefining multifamily living and why lifestyle may be the strongest competitive advantage in real estate today.
Learn More About Digible: https://digible.com/?utm_source=dd_podcast&utm_medium=episode_description
(00:00) Preview
(00:45) Shellie's Role at Ovation
(01:27) From UFC Branding to Real Estate
(04:33) Designing Immersive Event Experiences
(10:20) Ovation's Las Vegas Legacy and Model
(14:25) Building the Ovation Lifestyle Program
(20:29) Resident App, Feedback, and Personalization
(25:24) Measuring Lifestyle ROI and Retention
(29:35) Turning Events Into Marketing Content
(34:01) Programming for Luxury vs 55+ Residents
(43:37) Why Ovation Stays Focused on Vegas
(47:21) Influencers and Organic Resident Content
(53:03) The Long Game of Community Building
What is ours to do during the putrefactory collapse of Western civilisation? Let's find out! At the very least, it is choosing the right story to live in so that we can nudge the trajectory of this ghastly moment more and more toward the good. Here are some off-the-cuff thoughts on how we might do that.
Host Dennis Scully and BOH executive editor Fred Nicolaus discuss the biggest news in the design world, including the results of the Food52 bankruptcy auction, why designers are embracing electric kitchens and how cold weather might heat up the real estate market. Later, designer Bella Mancini joins the show to talk about elevating a partner at her firm. This episode is sponsored by Loloi and Morris & Co.
LINKS
Mancini Burns Design
Business of Home
What if the real secret to building leaders, not followers, was simply honesty paired with care? Nikki sits down with Steve Sorenson, Senior Director Learning and Culture at Johnsonville, to explore a leadership model rooted in humble candor, empowerment, and intentional coaching. Known for replacing "managers" with "coaches" and fostering autonomy through frameworks like the ABC Decision Method and the Simon Principle, Steve walks the talk when it comes to building trust, elevating teams, and modeling people-first leadership. This conversation is packed with tangible tools, soul-shifting moments, and the kind of clarity that challenges old norms and makes culture come alive. Whether you're trying to turn the ship or sharpen your edge, this one will shift how you lead. Additional Resources: Connect with Steve on LinkedIn Learn more about Johnsonville Watch Gut + Science (and more) on YouTube! Connect with Nikki on LinkedIn Follow PeopleForward Network on LinkedIn Learn more about PeopleForward Network Nikki's Key Takeaways: Empowerment grows when autonomy and clarity align. Humble candor blends honesty with deep care. Coaches build leaders, not followers. Shared language strengthens trust and culture. Model the behavior you want repeated.
In the final hour, Leila Rahimi and Marshall Harris were joined by Portage, Ind. mayor Austin Bonta to discuss his city's pitch to lure the Bears to build a new stadium there. After that, Rahimi and Harris explained why the Chiefs' move from Missouri to Kansas can't be a model for the Bears to follow as they explore where to build a new stadium.
Water Damage! Pat's basement is flooded, Seb can't figure out why he's cold... Things are happening in this week's episode of Man of the Hour. Tune in for yet another clue on how to win a Nintendo Switch 2 Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
#776 What does it take to go from cutting shrubs with your dad to designing luxury outdoor spaces for celebrities and high-net-worth clients? In this episode, Brien Gearin sits down with Steve Griggs, founder of Steve Griggs Design, who shares how he built a premium landscaping business in New York over the past 40 years. Steve opens up about his humble beginnings, the pivotal moment that forced him to change his business model, and how he grew from running crews himself to becoming a high-end outdoor design authority. He breaks down the subcontractor model that helped him scale profitably, why clear communication and doing what you say matters more than flashy marketing, and how to build lasting client relationships that drive repeat business. Whether you're new to the green industry or aiming to elevate your home services brand, this episode delivers real talk, hard-earned wisdom, and plenty of laughs along the way! (Original Air Date - 6/11/25) What we discuss with Steve: + Starting out cutting shrubs + Lessons from the 2008 crash + Scaling with subcontractors + Importance of client relationships + Spotting red flag clients + Pricing for profit and margin + Communicating clearly with customers + Working with high-profile clientele + Avoiding burnout and staying relevant + Knowing your numbers and costs Thank you, Steve! Check out Steve Griggs Design at SteveGriggsDesign.com. Follow Steve on Instagram. Watch the video podcast of this episode! To get access to our FREE Business Training course go to MillionaireUniversity.com/training. To get exclusive offers mentioned in this episode and to support the show, visit millionaireuniversity.com/sponsors. Learn more about your ad choices. Visit megaphone.fm/adchoices
The CEO of Radisson Mining discusses the company's recent drilling activities at the O'Brien mine, highlighting the success of their new drilling strategy and the geological implications of their findings. He emphasizes the high success rate of their drilling program and the continuity of mineralization, while also addressing the upcoming resource estimation and market expectations.
In this episode of our Cross-Border Catch-Up podcast series, Diana Nehro (shareholder, New York/Boston), who is the chair of the Cross-Border Practice Group, and Maya Barba (associate, San Francisco) unpack Denmark's parental leave model and discuss what employers should know for leave management and workforce planning in Denmark. The speakers explain the phased structure that gives each parent their leave entitlement, including non-transferable weeks that encourage uptake, plus flexible portions that can be postponed.
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together
Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)
—
Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean
Google
* https://google.com
* https://deepmind.google
Full Video Episode
Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 —
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. 
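A concrete way to read the "owning the Pareto frontier" framing used in this conversation is as a simple dominance check over model configurations. Here is a minimal sketch in Python of filtering to the non-dominated points on a cost/quality plane; the class, field names, and numbers are illustrative assumptions, not Gemini figures.

```python
from dataclasses import dataclass

@dataclass
class ModelPoint:
    name: str
    cost_per_mtok: float   # hypothetical serving cost (USD per million tokens)
    quality: float         # hypothetical benchmark score, higher is better

def pareto_frontier(points):
    """Keep only points not dominated by another point that is both cheaper-or-equal and better-or-equal."""
    frontier = []
    for p in points:
        dominated = any(
            q.cost_per_mtok <= p.cost_per_mtok and q.quality >= p.quality and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier, key=lambda p: p.cost_per_mtok)

# Illustrative, made-up numbers only.
candidates = [
    ModelPoint("flash-lite", 0.10, 62.0),
    ModelPoint("flash", 0.30, 71.0),
    ModelPoint("pro", 2.50, 82.0),
    ModelPoint("older-pro", 2.50, 70.0),  # dominated by "pro": same cost, lower quality
]
for p in pareto_frontier(candidates):
    print(f"{p.name}: ${p.cost_per_mtok}/Mtok, score {p.quality}")
```

The point of the exercise is simply that a lab wants an entry at every useful cost/latency level that no competing configuration strictly beats.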
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. 
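A minimal sketch may help make the "logits as soft supervision" idea concrete. The following is a generic knowledge-distillation loss in PyTorch, in the spirit of the Hinton, Vinyals, and Dean paper referenced above; it is not Google's training code, and the temperature, weighting, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label KL term (teacher logits) with ordinary cross-entropy on hard labels."""
    # Soft targets: the teacher's full output distribution, softened by temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 so its gradients keep a comparable magnitude.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Hard-label term: standard cross-entropy on the ground-truth classes/tokens.
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random tensors (batch of 4, vocabulary of 10).
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```

The teacher's full distribution carries much more signal per example than a single hard label, which is why many passes over the same data remain useful for a small student model.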
You can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow. Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode? Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model. Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day. Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up. Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything. Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews. Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that. Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale. Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability as, in certain tasks, like the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. 
Shawn Wang [00:16:26]: But by the way, I did some math, and if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into proteins and whatever else is extremely information dense. Yeah.
Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of tempts the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example is that vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality?
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do. We want them to be able to interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things.
Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And so you now get an 18-row table of that information extracted from the video, which is not something most people think of as a capability: turning a video into a SQL-like table.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, whereas for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe a much broader search in span versus the more human one?
Jeff Dean [00:20:47]: I mean, I think even in pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index, many of which are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods, and you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify the 30,000-ish documents, maybe 30 million interesting tokens, and then figure out how you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked. And you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a slightly more sophisticated model or set of models. And then maybe the final model, the thing that looks at those 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion, you really are searching the internet, but you're finding a very small subset of things that are relevant.
Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was basically put inside Google search almost immediately, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.
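The staged-ranking idea Jeff describes maps naturally onto a cascade: a very cheap scorer prunes a huge candidate set, a better scorer re-ranks the survivors, and only the final handful reaches the most capable model. Here is a minimal sketch; the scoring functions are toy stand-ins, not anything Google actually uses.

```python
# Two-stage retrieval cascade: cheap filter -> better re-ranker -> top docs.
from typing import Callable, List

def cascade(docs: List[str],
            cheap_score: Callable[[str], float],
            better_score: Callable[[str], float],
            first_cut: int = 30_000,
            final_cut: int = 117) -> List[str]:
    stage1 = sorted(docs, key=cheap_score, reverse=True)[:first_cut]
    stage2 = sorted(stage1, key=better_score, reverse=True)[:final_cut]
    return stage2            # these few documents go to the most capable model

# Toy usage: keyword overlap as the "cheap" scorer, length-normalized overlap
# as the "better" one (illustrative stand-ins for lightweight vs heavier models).
query = {"solar", "deployment"}
corpus = [f"doc {i} about solar panel deployment" if i % 7 == 0
          else f"doc {i} about something else" for i in range(100_000)]
cheap = lambda d: len(query & set(d.split()))
better = lambda d: len(query & set(d.split())) / (1 + len(d.split()))

top = cascade(corpus, cheap, better)
print(len(top), "documents reach the final model, e.g.:", top[0])
```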
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high-traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video, and it predicts the video using a codebook, which is absurd to me at YouTube's size.
Jeff Dean [00:23:50]: And then most recently Grok as well, for xAI. I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have a history of, like, what was the progression? Oh yeah.
Jeff Dean [00:24:09]: I actually gave a talk at a web search and data mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was that we were working to scale the system in multiple dimensions. One was that we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows: you have like 30 shards, and if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly really start getting at the meaning of the words as opposed to the exact form the user typed in. That was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
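The arithmetic behind that 2001 switch is easy to redo on the back of an envelope. The sketch below uses assumed machine and index sizes purely for illustration; only the 60-shards-times-20-replicas structure comes from the conversation.

```python
# Back-of-the-envelope for "the replicated fleet already has enough RAM to
# hold one copy of the index in memory". Sizes are assumptions, not Google's.
shards = 60
replicas_per_shard = 20
machines = shards * replicas_per_shard            # 1,200 machines
ram_per_machine_gb = 2                            # assumed early-2000s machine
index_size_gb = 1_500                             # assumed total index size

total_ram_gb = machines * ram_per_machine_gb      # 2,400 GB across the fleet
print(f"{machines} machines, {total_ram_gb:,} GB total RAM vs "
      f"{index_size_gb:,} GB index -> fits in memory: {total_ram_gb >= index_size_gb}")
# Once the index is in RAM, per-term disk seeks disappear, so a 3-4 word query
# can cheaply be expanded to ~50 terms (restaurant, restaurants, cafe, bistro...).
```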
Alessio Fanelli [00:26:47]: What are the principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling or tripling in size every year? I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this?
Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand what design parameters are going to be most important. So: how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple, will that system work well? And I think a good design principle is that you want to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X and something suddenly becomes a hundred X, there's a very different point in the design space that would not make sense at X but all of a sudden makes total sense at a hundred X. So, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory, and that all of a sudden enables a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.
Shawn Wang [00:29:11]: News is a special beast. You could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main search to be served from an updated index.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages; you have to decide which pages should be updated and at what frequency.
Oh yeah.
Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah. Well, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, this has sort of eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands or something?
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do those thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in this particular kind of...
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.
Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, because depending on your precision, I think it's sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And moving data from SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be a thousand picojoules. So all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of the model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of that thing you moved many, many times. That's where the batch dimension comes in, because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.
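A quick sketch of that arithmetic: the move cost gets amortized over the batch, so energy per useful multiply falls rapidly as batch size grows. The two picojoule figures echo the rough numbers in the conversation (about 1 pJ per multiply, about 1,000 pJ to move the operand); they are order-of-magnitude placeholders, not datasheet values.

```python
# Energy per useful multiply once the weight movement is amortized over a batch.
MULTIPLY_PJ = 1.0          # one low-precision multiply (rough figure)
MOVE_WEIGHT_PJ = 1000.0    # moving that weight to the multiplier (rough figure)

def energy_per_useful_multiply(batch: int) -> float:
    return MULTIPLY_PJ + MOVE_WEIGHT_PJ / batch

for batch in (1, 8, 64, 256):
    print(f"batch {batch:4d}: {energy_per_useful_multiply(batch):8.1f} pJ per multiply")
# batch 1 pays ~1001 pJ per useful multiply, batch 256 pays ~4.9 pJ, which is
# why serving at batch size 1 has great latency but poor energy efficiency.
```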
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency you get is quite large. So, yeah.
Shawn Wang [00:34:04]: Is there a similar trick to what you did with putting everything in memory? Obviously Groq has made a lot of waves betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, but if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's the most extreme version. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion?
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center, and then it has to have a reasonable lifetime as a chip, which takes you out three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field.
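To put some rough numbers on the fits-in-SRAM serving trick mentioned a moment ago, here is a sizing sketch. The per-chip SRAM capacity and model size are assumptions chosen for illustration, not real TPU specifications.

```python
# Does a smallish model, sharded across N chips, fit entirely in on-chip SRAM?
params_billion = 8          # assumed small model
bytes_per_param = 1         # e.g. 8-bit weights
sram_per_chip_mb = 128      # assumed on-chip SRAM per accelerator

weights_mb = params_billion * 1e9 * bytes_per_param / 1e6     # 8,000 MB total
for chips in (16, 64, 128):
    per_chip = weights_mb / chips
    print(f"{chips:4d} chips: {per_chip:7.1f} MB of weights per chip "
          f"-> fits in SRAM: {per_chip <= sram_per_chip_mb}")
# With these assumed numbers you need on the order of 64 chips before the
# shards fit, which is the "stripe it over 16 or 64 chips" idea: no HBM reads
# on the critical path, so both latency and throughput improve.
```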
And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Because sometimes you can squeeze some changes into N plus one, but bigger changes are going to require that the chip be earlier in its design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they would make something ten times as fast; and if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.
Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have cases where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So I think it goes both ways. And sometimes you can take advantage of lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary.
Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low bit precision, but then having scaling factors that apply to a whole block of those weights.
Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled-up weights. Huh. Never considered that. Interesting.
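That "very low bit precision plus per-block scaling factors" idea is generic block-wise quantization. Here is a minimal sketch of the mechanics; it is not Gemini's or the TPU's actual scheme, just an illustration of why a shared scale per block of weights lets you keep the transferred bits low while preserving dynamic range.

```python
# Block-wise quantization: int4-style weights with one float scale per block.
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 32, bits: int = 4):
    qmax = 2 ** (bits - 1) - 1                     # 7 for 4-bit signed values
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                      # avoid divide-by-zero blocks
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, s)
print("mean abs reconstruction error:", float(np.abs(w - w_hat).mean()))
```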
Shawn Wang: While we're on this topic, the whole concept of precision is a bit weird when we're sampling. At the end of all this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way that you can get sort of an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor, where you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them from the lens of energy, real energy, not energy-based models, and also latency and throughput. If you look at things from that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models, more cheaply and with lower latency.
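The speculative-decoding arithmetic is worth making explicit. The sketch below reuses the rough move-versus-multiply energy figures from earlier; the draft length of eight and the five to six accepted tokens are the numbers Jeff mentions, and everything else is an order-of-magnitude illustration.

```python
# Speculative decoding: a cheap draft proposes k tokens, the big model verifies
# them in one pass (its weights are moved once for k positions), and on average
# only some of the drafted tokens are accepted.
MOVE_WEIGHT_PJ = 1000.0    # move one weight to the multiplier (rough figure)
MULTIPLY_PJ = 1.0          # one multiply (rough figure)

def energy_per_emitted_token(draft_len: int, accepted: float) -> float:
    """Weight-movement plus compute energy per token the user actually receives."""
    return (MOVE_WEIGHT_PJ + MULTIPLY_PJ * draft_len) / accepted

baseline = MOVE_WEIGHT_PJ + MULTIPLY_PJ           # plain one-token-at-a-time decoding
speculative = energy_per_emitted_token(draft_len=8, accepted=5.5)
print(f"baseline: {baseline:.0f} pJ/token, speculative: {speculative:.0f} pJ/token, "
      f"~{baseline / speculative:.1f}x better amortization")
```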
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.
Jeff Dean [00:41:23]: Yeah. I mean, there are also the more exotic things, like analog-based computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be very low power, but you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with deep research. You kind of have it with AI Mode, in a way that's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did, maybe even the retrieval. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system.
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff and the next part is super hard and nobody's figured it out. But it always feels like that every year. Exactly as with this RLVR thing, where everyone's talking about, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to some of the problems we all see. Everyone sees that the models are great at some things, and they fall down around the edges of those things and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? Like, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, where you're doing IMO and Erdos problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better.
Shawn Wang [00:46:13]: Yeah.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.
Shawn Wang [00:46:20]: That would be, as far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
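Jeff's "same model, prompted differently, acting as a critic" suggestion is easy to sketch: one prompt retrieves or generates candidates, a second prompt asks the model to grade them, and only the top-rated items move on. The `generate` function below is a placeholder for whatever LLM API you use; nothing here is a real Gemini or Google call, and the stub at the bottom exists only so the sketch runs on its own.

```python
# Rank retrieved candidates with a critic prompt, keeping the top `keep` items.
from typing import Callable, List, Tuple

def rank_with_critic(query: str,
                     candidates: List[str],
                     generate: Callable[[str], str],
                     keep: int = 50) -> List[Tuple[float, str]]:
    scored = []
    for doc in candidates:
        critic_prompt = (
            "Rate from 0 to 10 how relevant the document is to the query.\n"
            f"Query: {query}\nDocument: {doc}\nAnswer with a number only."
        )
        try:
            score = float(generate(critic_prompt).strip())
        except ValueError:
            score = 0.0                   # unparseable critique gets the lowest score
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:keep]

# Stubbed "model" so the example is self-contained:
fake_llm = lambda p: "8" if "solar" in p.split("Document:")[1] else "2"
print(rank_with_critic("solar deployment report",
                       ["doc about solar output", "doc about cats"],
                       generate=fake_llm, keep=1))
```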
Shawn Wang: Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought and roll them back: now that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways, neural-net-based models are emulating what we intuitively think is happening inside real brains. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them.
Shawn Wang [00:47:59]: Interesting. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago.
Jeff Dean [00:48:06]: I mean, I do think that progression, the IMO effort translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem: I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed someone who was on that team, and he was like, yeah, I don't know how the problems work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can tackle almost any task, which is the bitter lesson, I guess.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is maybe one to ten trillion parameters, we don't know. But the Gemma models, for example: a lot of people want the open-source local models, and those hold some knowledge that isn't necessary, right? They can't know everything. You have the luxury that the big model should be capable of everything, but when you're distilling down to the small models, you're actually memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact it has memorized. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are, right? It should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval, matters.
Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?
Jeff Dean [00:52:01]: We're not going to train Gemini on my email. We'd probably rather have a single model that we can then use, with the ability to retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, and then make use of that over multiple stages of interaction. That makes sense.
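That "multiple stages of retrieval with reasoning in between" loop can be sketched very simply: the model is given retrieval tools (email, photos, and so on), decides what to look up, reasons over the intermediate results, and may retrieve again before answering. The tool names, the `generate` call, and the scripted stub below are all placeholders, not a real personal-Gemini API.

```python
# Multi-stage retrieve-and-reason loop over user-permissioned tools.
from typing import Callable, Dict, List

def retrieve_and_reason(question: str,
                        tools: Dict[str, Callable[[str], List[str]]],
                        generate: Callable[[str], str],
                        max_steps: int = 3) -> str:
    context: List[str] = []
    for _ in range(max_steps):
        prompt = (f"Question: {question}\n"
                  f"Evidence so far: {context}\n"
                  f"Reply either 'SEARCH <tool> <query>' or 'ANSWER <text>'.")
        decision = generate(prompt)
        if decision.startswith("ANSWER"):
            return decision[len("ANSWER "):]
        _, tool, query = decision.split(" ", 2)
        context.extend(tools.get(tool, lambda q: [])(query))
    return generate(f"Question: {question}\nEvidence: {context}\nAnswer now.")

# Stubbed tools and a scripted "model" so the loop runs end to end:
tools = {"email": lambda q: ["flight AA123 departs 9:05am Tuesday"]}
script = iter(["SEARCH email flight time", "ANSWER Your flight leaves at 9:05am Tuesday."])
print(retrieve_and_reason("When does my flight leave?", tools, lambda p: next(script)))
```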
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?
Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all the possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. And we're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Like, if I have a health-related question, it should enable using this health module in conjunction with the main base model to be even better at those kinds of things.
Shawn Wang [00:54:36]: Installable knowledge.
Jeff Dean [00:54:37]: Right.
Shawn Wang [00:54:38]: Just download it as a package.
Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there. I think that's really the question.
Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we appropriately don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but might be better than a general model trained on, say, public data.
Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world, and there's no written text.
Shawn Wang [00:56:20]: So, yeah. So you can just do it that way, just put it in the context. You can put your whole data set in the context, right?
Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world in those languages. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of the models for those languages.
Shawn Wang [00:56:49]: Yeah.
On today's Texas-sized episode of Quick Charge, Tesla Cybertruck owners in the Lone Star state get V2G access, Tesla Model 3 and Y models get WeChat via OTA, and Toyota has an all-new electric Toyota Highlander with up to 320 miles of range! We're also taking a look at a Japanese joint venture involving oil refining giant Idemitsu Kosan and Sumitomo Metal Mining that's helping Toyota scale up its solid-state battery production plus up to $5,000 off and 0% interest financing on new Toyota EVs. Source Links Tesla launches Cybertruck V2G program in Texas, earning money with your truck's battery pack Tesla partners with Tencent to bring WeChat inside over 1 million cars in China Toyota reveals the Highlander EV as first 3-row electric SUV with 320 miles range [Images] Big oil is betting big on Toyota to win the solid-state battery race Toyota is already offering a $5,000 discount and 0% financing on its new EVs find Toyota bZ deals near you (trusted affiliate) find Toyota C-HR deals near you (trusted affiliate) Prefer listening to your podcasts? Audio-only versions of Quick Charge are now available on Apple Podcasts, Spotify, TuneIn, and our RSS feed for Overcast and other podcast players. New episodes of Quick Charge are (allegedly) recorded several times per week, most weeks. We'll be posting bonus audio content from time to time as well, so be sure to follow and subscribe so you don't miss a minute of Electrek's high-voltage podcast series. Got news? Let us know!Drop us a line at tips@electrek.co. You can also rate us on Apple Podcasts and Spotify, or recommend us in Overcast to help more people discover the show If you're considering going solar, it's always a good idea to get quotes from a few installers. To make sure you find a trusted, reliable solar installer near you that offers competitive pricing, check out EnergySage, a free service that makes it easy for you to go solar. It has hundreds of pre-vetted solar installers competing for your business, ensuring you get high-quality solutions and save 20-30% compared to going it alone. Plus, it's free to use, and you won't get sales calls until you select an installer and share your phone number with them. Your personalized solar quotes are easy to compare online and you'll get access to unbiased Energy Advisors to help you every step of the way. Get started here.
Sunrise Life - beyond skin deep conversations with freelance nude models
Aim is a full-time freelance model from Australia! I had a fantastic time learning more about some of her intense methods: her tours can be up to FIVE months long, spending a week at a time in a country, state, or province. Aim is also very talented in other ways: she is quite versed in secluded-location remote shooting, which requires her to bring a camera, laptop, tripod, and internet hotspot so that photographers can sit comfortably in their homes while capturing epic artistic nudes in natural places around the world. Her process is so fascinating! Check out Aim here: http://www.instagram.com/aim.model and all her links are at http://www.linktr.ee/aim.model
In this episode of Franchise Unfiltered, Wes sits down with Kurt Landwehr, founder and CEO of OVO Salons, to break down a salon franchise model that doesn't fit neatly into the usual categories. While OVO may appear to be an emerging franchise, the business itself has been operating successfully for 18 years in Atlanta, one of the most competitive salon markets in the country. What's new isn't the model...it's the decision to franchise it. Kurt explains how OVO blends commission-based stylists and chair rental under one roof, creating a hybrid approach that benefits both stylists and franchise owners. This structure allows stylists to grow their careers without leaving the salon, while giving owners flexibility to adjust their revenue mix as the business matures. The conversation also explores the “massive middle” of the salon industry — a largely fragmented segment dominated by independent operators — and why OVO is intentionally positioning itself there rather than competing with discount chains or salon suites. Wes and Kurt discuss who this franchise model is designed for, why no hair-cutting experience is required, and how leadership, management, and people skills matter far more than technical ability. They also cover investment ranges, build-out costs, scalability, and why OVO is built for multi-unit ownership, not owner-operators. If you're evaluating franchise opportunities and want to understand how a proven business can still be early in its franchising journey, this episode offers a clear, grounded look at how OVO Salons is approaching growth. 7 Steps to Owning a Franchise: https://path2frdm-1.hubspotpagebuilder.com/path-to-freedom-about-franchising
In this episode of the Care Ministry Podcast, host Laura Howe is joined by Rebecca Bailey for a rich conversation on how spiritual formation functions as preventative care within church care ministries. Together, they explore why care and formation cannot be separated, how spiritual formation shows up across Hope Made Strong's five-part Model of Care (self, community, peer, pastoral, and professional), and why care is less about fixing problems and more about being present and becoming formed alongside one another. This episode invites leaders to move beyond reactive care toward cultivating cultures that sustain people before crisis hits. Quotes “If in our care ministries we're focused on the doing and the fixing, then we will miss the being and the becoming.” –Rebecca Bailey “Spiritual formation is happening on the inside and the outside—it's circular. What happens between us forms us.” –Rebecca Bailey “Care has historically been reactive in the church. Spiritual formation helps us think about care as preventative.” –Laura Howe “Spiritual formation isn't about adding more programs. It's about becoming more intentional with what's already happening.” –Rebecca Bailey “Good spiritual formation in care ministries keeps teams from absorbing what they were never meant to carry.” –Rebecca Bailey Resources Hope Made Strong Book Club HMS Amazon store
As Prime Minister Benjamin Netanyahu arrived in Washington for yet another meeting with President Donald Trump, the Israeli media were already spinning tales of tension and drama based on anonymous speculation. But instead of rumors, the hard facts tell a different story, one where Israel is described in the U.S. Department of Defense strategy itself as a “model ally,” reflecting a historic strategic shift unfolding in real time. Do not miss hearing these historic facts.
Join Our Whatsapp Channel: https://chat.whatsapp.com/GkavRznXy731nxxRyptCMv
Follow us on Twitter: https://x.com/AviAbelow
Join our Telegram Channel: https://t.me/aviabelowpulse
Follow us on Instagram: https://www.instagram.com/pulse_of_israel/?hl=en
Pulse of Israel on Facebook: https://www.facebook.com/IsraelVideoNetwork
Visit Our Website - https://pulseofisrael.com/
Donate to Pulse of Israel: https://pulseofisrael.com/boost-this-video/
Chaz has founded 3 companies. The first sold for over $40M. The second sold to GoPuff for even more. Now, he's on his third act with Model ML, having just raised $75M Series A
Charles Frohman discusses different models of health coverage that give individuals and providers more autonomy within the healthcare relationship. https://youtu.be/wkyvUtgz0x4
Join Rev. Josh Hall for Wednesday Night Bible Study. Tonight we continue our look at how we should go about making God focused decisions.. To learn more about First Baptist Church of Tallahassee, visit https://www.fbctlh.org.
Your self-image shapes your decisions, your perspective and your outcomes. And because your self-image is formed from a very young age, it can be difficult to change the habits, thoughts, and patterns that have been ingrained for decades. Today, you're going to learn a proven framework for improving your self-image and overcoming self-doubt. On this episode of The Model Health Show, our guest is behavioral researcher and educator, Dr. Shadé Zahrai. She's here to share science-backed strategies for shaping your self-image from her new book, Big Trust. You're going to discover how to rewire self-doubt, how to build confidence, and how transforming your self-image can help you reach your potential. Specifically, we're going to discuss the four inner attributes that could be holding you back, and how to sharpen them to improve your self-esteem, mindset, and overall personal growth. Dr. Shadé is an absolute expert in helping folks break free from the mindset blocks that are holding them back. If you're ready to take control of your inner world so you can improve your relationships, decision making, and more, this episode is for you. Enjoy! In this episode you'll discover: The connection between confidence and well-being. (3:29) Why you need to take action to develop confidence. (3:49) What self-image is. (8:21) The truth about changing your personality. (10:58) What the most common self-doubt is, and where it originates from. (13:10) Four main signs that you struggle with self-acceptance. (15:51) How having a creative hobby can help improve your self-esteem. (18:57) The power in simply delaying a decision. (29:51) What imposter syndrome is and how to reframe it. (37:26) How pursuing discomfort can activate BDNF. (43:18) Why comparison can be a sign that you struggle with agency. (47:31) What autonomy is and how it relates to self-doubt. (59:13) How microdosing hard things can get you out of your comfort zone. (1:05:05) Signs you might be struggling with adaptability. (1:15:37) Items mentioned in this episode include: Organifi.com/Model - Use the coupon code MODEL for 20% off + free shipping! DrinkLMNT.com/model - Get a FREE sample pack of electrolytes with any order! Big Trust by Dr. Shadé Zahrai - Order your copy of the book today! The Doubt Profile - Discover your doubt profile & what it's costing you! Connect with Dr. Shadé Zahrai Website / Instagram / Facebook / YouTube Be sure you are subscribed to this podcast to automatically receive your episodes: Apple Podcasts Spotify Soundcloud Pandora YouTube This episode of The Model Health Show is brought to you by Organifi and LMNT. Organifi makes nutrition easy and delicious for everyone. Take 20% off your order with the code MODEL at organifi.com/model. Head to DrinkLMNT.com/model to claim a FREE sample pack of electrolytes with any purchase.
What would you do if you saw a $500,000 house listed for $210,000?
One of my team members brought this to me last week. Brand-new construction neighborhood. Homes selling for $400K–$500K.
Then… a few were listed for literally half price.
So we dug in. And what we found is something every real estate investor needs to understand.
So this episode isn't just about whether you should “buy half a house.” It's about this bigger question: if institutions are using creative finance to move inventory and create cash flow… what should you be doing in your business right now?
That's exactly what we build inside 7 Figure Flipping. If you want to stop guessing and start operating like a real business owner with better funding strategies, better deal structures, and sharper market awareness, this is where that happens. APPLY HERE to join 7 Figure Flipping >>
Catch you later!
LINKS & RESOURCES
Let OmniDrip “work” your old ghosted leads for you on autopilot using proven copy-driven marketing auto-sequences. CLICK HERE: https://www.reiomnidrip.com/ref/7figureflipping/
7 Figure Flipping Underground
If you want to learn how to make money flipping and wholesaling houses without risking your life savings or "working weekends" forever... this book is for YOU. It'll take you from "complete beginner" to closing your first deal or even your next 10 deals without the bumps and bruises most people pick up along the way. If you've never flipped a house before, you'll find step-by-step instructions on everything you need to know to get started. If you're already flipping or wholesaling houses, you'll find fast-track secrets that will cut years off your learning curve and let you streamline your operations, maximize profit, do MORE deals, and work LESS. CLICK HERE: https://hubs.ly/Q01ggDSh0
7 Figure Runway
Follow a proven 5-step formula to create consistent monthly income flipping and wholesaling houses, then turn your active income into passive cash flow and create a life of freedom. 7 Figure Runway is an intensive, nothing-held-back mentoring group for real estate investors who want to build a "scalable" business and start "stacking" assets to build long-term wealth. Get off-market deal sourcing strategies that work, plus 100% purchase and renovation financing through our built-in funding partners, a community of active investors who will support and encourage you, weekly accountability sessions to keep you on track, 1-on-1 coaching, and more. CLICK HERE: https://hubs.ly/Q01ggDLL0
Connect with us on Facebook and Instagram: @7figureflipping
7 Figure Real Estate Ready Room
Use this proven blueprint to launch and grow your real estate investing business. Step-by-step video course takes you through everything you need to know… and we'll jump on WEEKLY workshops to break down each step with you LIVE! Think of it like getting a master's degree in tactical real estate investing for a fraction of the cost. CLICK HERE: https://7figureflipping.com/ready
Hosted on Acast. See acast.com/privacy for more information.
Minnesota's top officials are fanning the flames of unrest in their state. When are we going to call it like we see it? This is an insurrection. From Gov. Tim Walz and Attorney General Keith Ellison to Minneapolis Mayor Jacob Frey, state and city leaders have repeatedly excused, enabled, and emboldened disorder. They're only the […]
#10MinuteswithJesus ** Put yourself in the presence of God. Try talking to Him. ** 10 minutes are 10 minutes. Even if you can get distracted, reach the end. ** Be constant. The Holy Spirit acts "on low heat" and requires perseverance. 10-Minute audio to help you pray. Daily sparks to ignite prayer: a passage from the gospel, an idea, an anecdote and a priest who speaks with you and the Lord, inviting you to share your intimacy with God. Find your moment, consider you are in His presence and click play.
Empowering Women in Real Estate - The Podcast with Karen Cooper
Teams are hard. They're complex, emotionally heavy, and often far less profitable than people assume. And if you're leading a team right now that feels more draining than fulfilling … it's time to rethink how your team works, not whether you're cut out for leadership. In this episode, I'm sharing what I've learned after 11 years as a team leader and two years into national expansion with our team. I'm walking you through the moments when a team model stops working, and why that's not a failure. Sometimes it's because of success. Sometimes it's the market. Sometimes it's the season of life you're in. And sometimes it's because the "old way" just isn't sustainable anymore … either for you or for your team members. You'll also hear the specific shifts we made that changed everything … including the biggest mistake we made early in expansion, how we simplified service levels, reduced overhead, and created a structure that's more stable financially and lighter emotionally. If you're rethinking your team model (whether you want to expand nationally or simply run your local team differently), start here. Listen now to episode 392 of Empowering Women in Real Estate® - The Podcast on your favorite podcast app or by clicking the link below. Click subscribe to be notified every Wednesday when our latest episode is released, and be sure to check out our group on Facebook. https://www.facebook.com/groups/empoweringwomeninrealestate We are 40,000 members strong and we want you to join us! And if you want to follow me on Instagram, that's where I'm having the most fun right now. https://www.instagram.com/karen.w.cooper/
We welcomed Hakuro Matsuda as a guest and talked about OpenClaw, AI coding, iPhone Air, Heroku, and more. Show Notes What is the overseas voting system Anthropic Superbowl Ad Rebuild Supporter B52 Victory Museum OpenClaw — Personal AI Assistant OpenClaw Showed Me What the Future of Personal AI Assistants Looks Like RentAHuman moltbook Dear diary, today the user asked me if I'm alive Blueprints Xcode 26.3 unlocks the power of agentic coding Databricks CEO says SaaS isn't dead, but AI will soon make it irrelevant Genie 3 Google Pixel 10a FYI - Gemini AI Pro includes $10 monthly Google Cloud Credits Toyota develops its own "game engine" Fluorite: redefining the next-generation digital cockpit with Flutter and Dart Aston Martin & Apple CarPlay Ultra® An Update on Heroku TikTok seals deal for new US joint venture to avoid American ban Keychron Q16 HE 8K Magnetic Switch Keyboard Lofree Flow 2 NocFree &: Wireless Split Keyboard. Split to Reconnect by Solar Keyball39 conductor ver.0.1.1 – Plot. BILRESA remote control, white smart/dual button - IKEA Ikea's new Matter smart home devices are having connection problems Super Kaguya-hime! BUMP OF CHICKEN feat. HATSUNE MIKU "ray" K-Pop Demon Hunters Fallout Season 2 Mobile Suit Gundam: Hathaway's Flash The Witch Circe 420 (cannabis culture)
The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Shoot us a Text. Episode #1266: Ford posts its biggest earnings miss in years but bets big on a 2026 rebound. Robotaxis scale nationwide while public trust hangs in the balance. And NADA partners with Northwood to strengthen the next generation of dealership leadership. Ford just posted its biggest quarterly earnings miss in four years and its worst net loss since 2008. But beneath the headline loss, the company's trucks and commercial vehicles are still carrying the load—and 2026 is being framed as a rebound year. The Q4 adjusted EPS (Earnings Per Share) came in at 13 cents versus the expected 19 cents, the largest miss in four years. Revenue remained strong, with $45.9B in Q4 and a record $187.3B for the full year, but about $900M in unexpected tariff costs and aluminum supply disruptions pressured margins. The company reported an $11.1B net loss in Q4 and an $8.2B loss for the full year, largely driven by $15.5B in EV-related special charges and restructuring actions. Ford Pro and Ford Blue have projected 2026 pre-tax earnings of up to $7.5B and $4.5B respectively, while the Model e unit is expected to lose up to $4.5B. CFO Sherry House noted that the Novelis aluminum plant disruption is not expected to fully resolve until mid-2026, meaning the company will continue sourcing alternative supplies at a higher cost. Waymo, Tesla, Zoox and others are racing to scale robotaxis across the U.S., but recent crashes and investigations show that winning public trust may be harder than winning market share. A Waymo vehicle struck a child who ran into the street from behind a parked SUV in California, prompting a federal investigation. Zoox also reported a crash after a driver opened a door into its path. Both companies say their systems reacted appropriately. A majority of Americans say they're unlikely to try a self-driving taxi, though younger consumers are more open to the idea. “When something goes wrong, people don't experience it as a statistical issue — they experience it as a moral and emotional one,” said Professor William Riggs. Northwood University and NADA are teaming up to expand education access for franchised dealers, their employees and their families — with discounted tuition, scholarships and a clear focus on building the next generation of dealership leadership. NADA dealer members can enroll in Northwood's online undergraduate programs at $350 per credit hour, with the benefit extending to eligible spouses and dependents. Northwood's DeVos Graduate School is offering 20% MBA tuition scholarships, discounted master's programs and up to $15,000 toward a Doctor of Business Administration. Both organizations say the goal is strengthening the leadership pipeline in a people-driven, capital-intensive retail industry. Join Paul J Daly and Kyle Mountsier every morning for the Automotive State of the Union podcast as they connect the dots across car dealerships, retail trends, emerging tech like AI, and cultural shifts—bringing clarity, speed, and people-first insight to automotive leaders navigating a rapidly changing industry. Get the Daily Push Back email at https://www.asotu.com/ JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
In this episode of Case Studies, Casey sits down with Amy Antonelli, CEO of HumanitarianXP, former Apple executive, and nonprofit trailblazer. From Silicon Valley to rural India, Amy's unconventional path is shaped by faith, grit, and a calling to build something greater than herself. Today, her organization sends over 8,000 teens a year across the globe to serve, disconnect, and discover who they really are. Amy unpacks the miracle-filled story behind HXP's growth and its deeper mission: helping young people chase purpose instead of performance. Through hard labor, spiritual grounding, and intentional leadership, teens return from these trips transformed—and often redirected toward a more meaningful life. She and Casey explore what it takes to lead with both excellence and empathy, run a nonprofit like a business, and stay spiritually aligned while scaling global impact. Amy's journey is a powerful case study in living on mission and building organizations where faith fuels execution.
In this Episode
The unique “builder” model that transforms both youth and communities
Amy's leadership lessons from Steve Jobs and Silicon Valley
The miracle-filled story of faith, impact, and intentionality
How young leaders, digital detox, and raw service unlock personal purpose
Chapters
00:00 | Meet Amy Antonelli
01:33 | Faith, Purpose, and Customized Curriculums
05:13 | What is HXP? Mission, Model & Impact
08:42 | The Origin Story: One Teen, One Trip
12:00 | Exponential Growth and Divine Timing
15:36 | High-Standard Volunteers & Life-Changing Leadership
20:18 | Alumni Trip Leaders and Generational Impact
23:35 | Running a Nonprofit Like a Business
25:30 | Purpose-Driven Culture and Morning Rituals
27:59 | Excellence, Expectations, and the “Vibe Check”
31:08 | Iron Sharpens Iron: The Power of Peer Leadership
34:27 | Harvard, Faith, and the Power of Diversity
37:36 | Amy's Early Life and Family Influence
43:00 | Defining Moments: From Engagement to a New Path
48:05 | Silicon Valley, Apple, and Meeting Steve Jobs
55:09 | Leadership Mistakes and Growing in Love & Standards
59:59 | Walking Away from Tech to Find True Purpose
01:03:37 | India, Leprosy Colonies & Discovering Plan C
01:10:23 | Founding Rising Star Outreach
Hosted on Acast. See acast.com/privacy for more information.
We discuss UGA's declining ability to land five-star prospects, debate shifts in the broader recruiting landscape, and talk about whether top-end talent is more important than overall roster depth.
In this episode, Nicolai Tangen sits down with Mike Gitlin, President and CEO of Capital Group, one of the world's largest investment managers. They discuss Capital Group's distinctive investment model, its long-term philosophy, and the firm's unique ownership structure that has shaped its success over nearly a century. Mike also shares his views on today's market structure and what it means for long-term investors, as well as how AI is transforming the way Capital Group works. They explore culture, career longevity, incentives, and what it really takes to build a sustainable investment organization. In Good Company is hosted by Nicolai Tangen, CEO of Norges Bank Investment Management. New full episodes every Wednesday, and don't miss our Highlight episodes every Friday. The production team for this episode includes Isabelle Karlsson and PLAN-B's Niklas Figenschau Johansen, Sebastian Langvik-Hansen and Pål Huuse. Background research was conducted by Isabelle Karlsson. Watch the episode on YouTube: Norges Bank Investment Management - YouTube. Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no). Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn. Follow NBIM on LinkedIn: Norges Bank Investment Management: Administrator for bedriftsside | LinkedIn. Follow NBIM on Instagram: Explore Norges Bank Investment Management on Instagram. Hosted on Acast. See acast.com/privacy for more information.
If you're tired of training believers who still aren't making disciples, this episode is your breakthrough! Discover the four crucial elements missing from most church training—and how Jesus' apprenticeship-style approach in Luke 9–10 can transform passive learners into confident, competent disciple-makers. We'll show you how to move beyond information-heavy classes to hands-on skill development, real-life modeling, goal-setting, and powerful accountability rhythms. Learn the MAWL method (Model, Assist, Watch, Launch) and pick up practical tools that help your people actually do the work of evangelism and multiplication. If you want to ignite bold, active disciple-makers in your church or ministry, you won't want to miss this episode!
Minnesota's top officials are fanning the flames of unrest in their state. When are we going to call it like we see it? This is an insurrection. From Gov. Tim Walz and Attorney General Keith Ellison to Minneapolis Mayor Jacob Frey, state and city leaders have repeatedly excused, enabled, and emboldened disorder. They're only the latest links in a long Democrat chain of political indulgence toward radicalism, tracing back to the civil rights era. Victor Davis Hanson warns of the fractures this mindset brings on today's episode of “Victor Davis Hanson: In a Few Words.” “ What we're seeing is a complete failure of the blue state model. And the failure is ironic because it's neo-Confederate. Just like the old Confederacy and the Antebellum South, these blue states are obsessed with race. This is where DEI comes from. This is where, if you're one-sixteenth of this, or you have DNA of that, you identify, primarily, by your ethnic or racial background and not your common humanity or your common American citizenship. Very similar to the South. “This is something that's disturbing, that it's a trademark of over 150 years that the Democratic Party has, maybe it feels that it's more a people's party, but they feel they can defy federal law at their own volition.” 00:00 Introduction 00:10 Minnesota's Insurrectionary Rhetoric 00:29 Impact on ICE and Law Enforcement 03:43 Historical Context of Defiance 05:53 Blue State Model and Neo-Confederate Comparison 08:45 Conclusion: The Future of Blue State Defiance
In this episode, Stephan Livera interviews Jos Lazet from Blockrise, discussing the recent volatility in Bitcoin prices, the semi-custodial model of Blockrise, and the future of Bitcoin lending. They explore the implications of market movements, the importance of risk management in lending, and the evolving landscape of Bitcoin services. Jos shares insights on Blockrise's offerings, including asset management and lending, and emphasizes the need for user-friendly solutions in the Bitcoin space. Takeaways:
Ashlee is joined by Katie Liebold with Wildlife Partners, an investment and exotic wildlife farming and breeding group out of TX, to discuss taking a capitalistic approach to wildlife. Their efforts have contributed to the recovery of multiple endangered and threatened species, while affording their clients an alternative revenue source. Get to know the guest: https://wildlifepartners.com/ Do you have questions we can answer? Send them via DM on IG or through email at info@theoriginsfoundation.org Support our Conservation Club Members! Sun Africa: https://www.sun-africa.com/ Wildlife Center of MI: https://www.wildlifecentermi.org/ Project Grizzly Balance: https://theoriginsfoundation.org/conservation-projects/project-grizzly-balance/ See more from Blood Origins: https://bit.ly/BloodOrigins_Subscribe Music: Migration by Ian Post (Winter Solstice), licensed through artlist.io This podcast is brought to you by Bushnell, who believes in providing the highest quality, most reliable & affordable outdoor products on the market. Your performance is their passion. https://www.bushnell.com This podcast is also brought to you by Silencer Central, who believes in making buying a silencer simple and they handle the paperwork for you. Shop the largest silencer dealer in the world. Get started today! https://www.silencercentral.com This podcast is brought to you by Safari Specialty Importers. Why do serious hunters use Safari Specialty Importers? Because getting your trophies home to you is all they do. Find out more at: https://safarispecialtyimporters.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Breathing is a unique biological process because even though our breath is automatic, we also hold the capacity to control our breathing. When used correctly, your breath can be a powerful tool for self-regulation, pain relief, and so much more. Today, you're going to learn how to unlock the power of breath for better health. Our guest today is Jill Miller. She is an author, the cofounder of Tune Up Fitness Worldwide, and a pioneer in the health and fitness space. Jill is back on The Model Health Show to share how to utilize your breath as a resource for stress and pain relief, recovery, emotional health, and so much more. We're going to talk about the connection between breathing and stress, and how to use breathing exercises to increase resilience and accelerate healing. Jill Miller is an absolute expert in the realm of anatomy and movement. I hope you enjoy this interview with Jill Miller! In this episode you'll discover: The relationship between breathing and stress. (4:57) How your diaphragm works. (7:47) What the zones of respiration are. (10:29) The connection between breathing and pain. (16:41) What gut smashing is and its benefits. (23:07) How to use abdominal massage techniques. (28:50) Breathing practices for reducing anxiety. (35:48) What relaxation-induced anxiety is. (46:31) What interoception is. (51:54) How breathing can accelerate healing. (54:00) Items mentioned in this episode include: Peluva.com/model - Get 15% off barefoot shoes with my code MODEL! Paleovalley.com/model - Use code MODEL for 15% off! Body by Breath by Jill Miller - Get your copy of Jill's book! The Roll Model by Jill Miller - Read Jill's mobility and pain book! Coregeous Ball - Get your own therapy ball from Tune Up Fitness! Connect with Jill Miller Website / Instagram / YouTube Be sure you are subscribed to this podcast to automatically receive your episodes: Apple Podcasts Spotify Soundcloud Pandora YouTube This episode of The Model Health Show is brought to you by Peluva and Paleovalley. Peluva's barefoot minimalist shoes support postural alignment, proprioception, and overall functionality. Get 15% off your order by using code MODEL at peluva.com/model. Use my code MODEL at Paleovalley.com/model to save 15% sitewide on nutrient-dense snacks, superfood supplements, and more.
HEADLINE: Cold Dark Matter and WIMPs. GUEST: Govert Schilling. SUMMARY: Neutrinos are ruled out as cold dark matter candidates, and Schilling describes the WIMP search at CERN and how computer simulations validate the model.