POPULARITY
We kick things off in FOLLOW UP with the ongoing "nuclear war" between Automattic and WP Engine, where discovery has revealed Matt Mullenweg's alleged hit list of competitors and a desperate attempt to bully payment processors—because nothing says "open source" like an eight-percent royalty shakedown. Meanwhile, the Harvard Business Review confirmed what we already knew: AI isn't reducing our work; it's just compressing it until we're all working through lunch and burning out faster while Polymarket turns our collective brain rot into a literal "attention market" where you can bet on Elon's mindshare.

Transitioning to IN THE NEWS, Elon has officially pivoted SpaceX from Mars to the Moon, presumably because building a "self-growing lunar city" is easier than admitting the Red Planet is hard, though his xAI all-hands rant about "ancient alien catapults" suggests he's been staring at the sun too long. Between X allegedly taking blue-check lunch money from sanctioned Iranian leaders, Meta facing trials for creating "predator-friendly hunting grounds," and Russia finally pulling the plug on WhatsApp, the internet is looking more like a digital dumpster fire than ever. Add in Discord leaking 70,000 government IDs, OpenAI shoving ads into ChatGPT while safety researchers flee the building like it's on fire, and a "cognitive debt" crisis eroding our ability to think, and you've got a recipe for a tech-induced psychosis that even crypto-funded human trafficking can't outpace.

In MEDIA CANDY, we're wondering about the soft-core porn intro in the latest Star Trek: Starfleet Academy while Apple buys the total rights to Severance for seventy million dollars—because in-house production is the only way to keep those ballooning budgets under control. Super Bowl trailer season gave us a glimpse of The Mandalorian and Grogu and a Project Hail Mary teaser, while Babylon 5 has finally landed on YouTube for free, proving that even 90s serialized sci-fi eventually finds its way to the clearance bin.

Over in APPS & DOODADS, Meta Quest is nagging us for our birthdays like a needy relative, while Roblox had to scrub a mass-shooting simulator—because "AI plus human safety teams" is apparently just code for "we missed it until it hit the forums." Ring's Super Bowl ad for "Search Party" accidentally terrified everyone by revealing a mass surveillance network for pets that's a slippery slope toward a police state, and Waymo is now paying DoorDashers ten bucks just to walk over and close the car doors that autonomous tech still can't figure out.

Wrapping up with THE DARK SIDE WITH DAVE, we dive into the Mandalorian Hasbro reveal where Sigourney Weaver's action figure comes with no accessories because her existence is enough of a flex. We explore the grim reality of "RentAHuman," where humans are paid pittance to pretend AI agents are actually doing work, and look at "Trash Talk Audio," which sells a $125 microphone made out of a literal old telephone for that authentic Gen-X "get off the line, I'm expecting a call" aesthetic. From Marcia Lucas finally venting about the prequels and a rare book catalog specifically for our aging generation, we're reminded that while the future is a chaotic mess of "GeoSpy" AI and corporate reshuffling at Disney, at least we still have our cynical memories and some free versions of Roller Coaster Tycoon to keep us from losing it completely.

Sponsors:
CleanMyMac - Get Tidy Today!
Try 7 days free and use code OLDGEEKS for 20% off at clnmy.com/OLDGEEKS
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/733

FOLLOW UP
Automattic planned to target 10 competitors with royalty fees, WP Engine claims in new filing
AI Doesn't Reduce Work—It Intensifies It
Polymarket To Offer Attention Markets In Partnership With Kaito AI
Israel Arrests Members of Military for Placing Polymarket Bets Using Inside Information on Upcoming Strikes

IN THE NEWS
Unable to Reach Mars, Musk Does the Most Musk Thing Possible
'We'll Find the Remnants of Ancient Alien Civilizations': Read Musk's Gibberish Rant from His xAI All-Hands Meeting
Elon Musk's X Appears to Be Violating US Sanctions by Selling Premium Accounts to Iranian Leaders
Meta Faces Two Key Trials That Could Change Social Media Forever
WhatsApp is now fully blocked in Russia
Russia is restricting access to Telegram, one of its most popular social media apps. Here's what we know
DOJ may face investigation for pressuring Apple, Google to remove apps for tracking ICE agents
Discord Launches Teen-by-Default Settings Globally
Discord says hackers stole government IDs of 70,000 users
Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model
Testing ads in ChatGPT
OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous
OpenAI Fires Top Safety Exec Who Opposed ChatGPT's "Adult Mode"
Anthropic AI Safety Researcher Warns Of World 'In Peril' In Resignation
Musk's xAI loses second co-founder in two days
America Isn't Ready for What AI Will Do to Jobs
Monologue: No, Something Big Isn't Coming
The Scientist Who Predicted AI Psychosis Has a Grim Forecast of What's Going to Happen Next
Crypto-Funded Human Trafficking Is Exploding

MEDIA CANDY
Shrinking
Star Trek: Starfleet Academy
Poor Things
Project Hail Mary | Final Trailer
Minions & Monsters | Official Trailer
Disclosure Day | Big Game Spot
The Mandalorian and Grogu | A New Journey Begins | In Theaters May 22
Babylon 5 Is Now Free to Watch On YouTube
Apple acquires all rights to 'Severance,' will produce future seasons in-house
Optimizing your TV

APPS & DOODADS
Tumbler Ridge Shooter Created Mall Shooting Simulator in Roblox
Here's how to disable Ring's creepy Search Party feature
Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars
TikTok US launches a local feed that leverages a user's exact location
Apple just released iOS 26.3 alongside updates for the Mac, iPad and Apple Watch

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
We Call It Imagineering
Your First Look at Hasbro's 'Mandalorian and Grogu' Figures Is Here (Exclusive)
I Tried RentAHuman, Where AI Agents Hired Me to Hype Their AI Startups
Trash Talk Audio
Roger Reacts to Star Wars - A New Hope
Marcia Lucas Finally Speaks Out | Icons Unearthed: Unplugged (FULL INTERVIEW)
What's wrong with the prequels?
Rare Books, Gen X edition
GeoSpy

CLOSING SHOUT-OUTS
Robert Tinney, who painted iconic Byte magazine covers, RIP
Bud Cort

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Did you see Ring's Super Bowl ad and see happy puppies reunited with their owners? Or did you see the seeds of a complete, always-on surveillance nightmare coming for us all? David and Nilay discuss which is the right answer, why so many people don't want to trust tech companies, and why Ring might not care much about the difference. After that, the hosts discuss the ads coming to ChatGPT, the surprising number of AI executives quitting their jobs and issuing dire warnings on the way out, and the fake ad for OpenAI gadgets. In the lightning round, it's time for an extra long Brendan Carr is a Dummy, the latest Ferrari EV, the future of Siri, and more.

Further reading:
Jeffrey Epstein's digital cleanup crew
Jeffrey Epstein might not have created /pol/, but he helped carry out its mission
Amazon Ring's lost dog ad sparks backlash amid fears of mass surveillance
Wyze is sticking it to Ring
Sen. Markey calls on Amazon to "discontinue" Ring monitoring features
Ring's new Search Party feature is on by default; should you opt out?
Ring launches upgraded cameras with Retinal Vision 4K recording
What the Guthrie case reveals about your 'deleted' doorbell footage
FBI releases recovered footage from Nancy Guthrie's Nest cam
OpenAI's first hardware slips to 2027
OpenAI's supposedly 'leaked' Super Bowl ad with ear buds and a shiny orb was a hoax
Two more xAI co-founders are among those leaving after the SpaceX merger
OpenAI reportedly disbanded its Mission Alignment team
OpenAI fired exec who opposed 'adult mode'
Read an Anthropic AI safety lead's exit letter: 'The world is in peril'
Opinion | I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw.
What Is Claude? Anthropic Doesn't Know, Either
ChatGPT's cheapest options now show you ads
Here are the brands bringing ads to ChatGPT
Claude gets more free features to capitalize on ChatGPT ads
Ex-OpenAI researcher has "deep reservations" about its approach to ads
Brendan Carr is a Dummy theme submitted by Michiel Vanhoudt on BlueSky
FTC says it's 'not the speech police' in letter warning Apple News about its alleged promotion of left-leaning outlets
Ferrari's first EV will have an interior designed by Jony Ive
Here's what the Ferrari Luce's buttons, switches, and knobs sound like.
The early reviews of the Rivian R2 are starting to roll in
Live Nation's monopoly trial is reportedly fracturing Trump's Justice Department
YouTube is coming to the Apple Vision Pro
Apple keeps hitting bumps with its overhauled Siri
The iPhone 17e could launch soon with MagSafe and an A19 chip
Apple might let you use ChatGPT from CarPlay
Paramount ups its offer for Warner Bros. Discovery, again

Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Elon Musk's merger of SpaceX with his AI start-up xAI has created what the New York Times calls "the most valuable private company on earth," allowing Musk to forge ahead with new plans to develop data centers in outer space and an IPO expected later this year. Musk's companies hold billions in government contracts as his own net worth tops $800 billion, and his decisions affect not just his shareholders but global communications, national security and international politics. We talk about how so much power has aggregated in one person and the perils for the rest of us.

Guests:
Ryan Mac, tech reporter, The New York Times; co-author, "Character Limit: How Elon Musk Destroyed Twitter"
Nitasha Tiku, tech culture reporter

Learn more about your ad choices. Visit megaphone.fm/adchoices
Mars is dead! Long live the Moon! Jake and Anthony kick around the recent flurry of SpaceX news—the IPO, data centers, and a focus on the Moon.

Topics
Off-Nominal - YouTube
Episode 228 - The SpaceX Reverse Mortgage - YouTube
SpaceX acquires xAI, plans to launch a massive satellite constellation to power it - Ars Technica
Why would Elon Musk pivot from Mars to the Moon all of a sudden? - Ars Technica
SpaceX Sets $800 Billion Valuation, Confirms 2026 IPO Plans - Bloomberg
SpaceX-xAI Deal Blurs Musk's Once-Clear Space Exploration Mission - Bloomberg
Here's why Blue Origin just ended its suborbital space tourism program - Ars Technica

Follow Off-Nominal
Subscribe to the show! - Off-Nominal
Support the show, join the Discord
Off-Nominal (@offnom) / Twitter
Off-Nominal (@offnom@spacey.space) - Spacey Space

Follow Jake
WeMartians Podcast - Follow Humanity's Journey to Mars
WeMartians Podcast (@We_Martians) | Twitter
Jake Robins (@JakeOnOrbit) | Twitter
Jake Robins (@JakeOnOrbit@spacey.space) - Spacey Space

Follow Anthony
Main Engine Cut Off
Main Engine Cut Off (@WeHaveMECO) | Twitter
Main Engine Cut Off (@meco@spacey.space) - Spacey Space
Anthony Colangelo (@acolangelo) | Twitter
Anthony Colangelo (@acolangelo@jawns.club) - jawns.club
Join Simtheory: https://simtheory.ai
Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80

GLM-5 just dropped and it's trained entirely on Huawei chips – zero US hardware dependency. Meanwhile, we're having existential crises about whether we're even needed anymore. In this episode, we break down China's new frontier model that's competing with Opus 4.6 and Codex at a fraction of the price, why agentic loops are making 200K context windows the sweet spot (sorry, million-token dreams), and the very real phenomenon of AI productivity psychosis. We dive into why coding-optimized models are secretly winning at everything, the Harvard study confirming AI doesn't reduce work – it intensifies it, and the exodus of safety researchers from XAI, Anthropic, and OpenAI (spoiler: they're not giving back their shares). Plus: Mike's arm is failing from too much mouse usage, we debate whether the chatbot era is actually fading, and yes – there's a safety researcher diss track called "Is This The End?"

CHAPTERS:
0:00 Intro - Is This The End? (Song Preview)
0:11 Still Relevant Tour Update & NASA Listener Callout
1:42 AI Productivity Psychosis: The Pressure of Infinite Capability
4:25 GLM-5 Breakdown: China's New Frontier Model on Huawei Chips
7:24 First Impressions: GLM-5 in Agentic Loops
9:48 Why Cheap Models Matter & The New Model War
14:09 Codex Vibe Shift: Is OpenAI Winning?
16:24 Does Context Window Size Even Matter Anymore?
22:27 The Parallelization Problem & Cognitive Overload
27:27 Mike's Arm Injury & The Voice Input Pivot
31:17 Single-Threaded Work & The 95% Problem
35:06 UX is Unsolved: Rolling Back Agentic Mistakes
38:45 Harvard Study: AI Doesn't Reduce Work, It Intensifies It
44:01 How AI Erodes Company Structure & Why Adoption Takes Years
50:14 My AI vs Your AI: Household Debates
50:43 The Safety Researcher Exodus: XAI, Anthropic, OpenAI
56:49 Final Thoughts: Are We All Still Relevant?
59:04 BONUS: Full "Is This The End?" Diss Track

Thanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. GLM-5 is here, your productivity psychosis is valid, and the safety researchers are becoming poets. xoxo
Half of xAI's co-founders have left. And the most striking thing isn't that they're leaving, it's what they say on the way out: that they no longer need a big lab to do big things. That says a lot about Musk, and a lot about AI.

Loop Infinito, Xataka's podcast, Monday through Friday at 7:00 a.m. (mainland Spain time). Hosted by Javier Lacort. Edited by Alberto de la Torre.

Contact:
This week, we discuss the future of SaaS, OpenAI vs. Anthropic strategies, and cloud capex. Plus, when will you let an AI book your flights? Watch the YouTube Live Recording of Episode 559 Runner-up Titles Do we get to eat Moon Pies? Some days it's just me and the AI We have a LinkedIn page The state of the world has not gotten better, it's just moved to Kubernetes Trained on the Corpse of Stack Overflow. We just have to get the files right It is all just files It's all an OODA loop Rinse and reply. Is Software dead? Your margin is my yacht. claude-travel.md Vegans have morals though Rundown DriftlessAF: Introducing Chainguard Factory 2.0 Is Software dead? Clouded Judgement 2.6.26 - Software Is Dead...Again...For Real this Time...Maybe? Anthropic's breakout moment: how Claude won business and shook markets Besieged The $285 Billion 'SaaSpocalypse' Is the Wrong Panic The "whole product" is more relevant than ever Cloud Earnings Microsoft Q2 earnings beat on top and bottom lines as cloud revenue tops $50 billion, but stock falls Microsoft stock plunges as Wall Street questions AI investments A day of reckoning for the AI boom Oracle says it plans to raise up to $50 billion in debt and equity this year Google Earnings Beat. Cloud Computing Momentum Builds Amid Spending Boom Amazon stock falls 10% on $200 billion spending forecast, earnings miss Amazon's $200 Billion Spending Plan Raises Stakes in A.I. Race [Follow the CAPEX: Cloud Table Stakes 2024 Retrospective](http://(https://platformonomics.com/2025/02/follow-the-capex-cloud-table-stakes-2024-retrospective/) Amazon Earnings, CapEx Concerns, Commodity AI Google's parent company raises billions of dollars in debt sale OpenAI Drama Amazon in Talks to Invest Up to $50 Billion in OpenAI The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice Sam Altman got exceptionally testy over Claude Super Bowl ads | TechCrunch OpenAI will reportedly start testing ads in ChatGPT today Relevant to your Interests Deploying Moltbot (Formerly Clawdbot) Apple tops Q1 earnings estimates on record-breaking iPhone sales Clouded Judgement 1.30.26 - Software is Dead...Again! Leaders, gainers, and unexpected winners in the Enterprise AI arms race All Enterprise software is dead The Dumbest Thing I've Seen This Week SpaceX acquires xAI in record-setting deal as Musk looks to unify AI and space ambitions AWS destiny: becoming the next Lumen CloudBees CEO: Why Migration Is a Mirage Costing You Millions Xcode 26.3 unlocks the power of agentic coding The world is trying to log off U.S. tech Anthropic's newest AI model uncovered 500 zero-day software flaws in testing DHH on OpenClaw Adam Jacob really likes AI code generation Cautionary Tales – The WOW Machine Stops (Part 2) Kyndryl Shares Halved Amid CFO Departure, Accounting Review Our $200M Series C / Oxide Presentations — Benedict Evans Matrix messaging gaining ground in government IT Hello Entire World · Entire Blog Former GitHub CEO raises record $60M dev tool seed round at $300M valuation From magic to malware: How OpenClaw's agent skills become an attack surface Nonsense What If the Sensors on Your Car Were Inspecting Potholes for the Government? Honda Found Out Superbowl Ad 404 Conferences DevOpsDay LA at SCALE23x, March 6th, Pasadena, CA Use code: DEVOP for 50% off. Devnexus 2026, March 4th to 6th, Atlanta, GA. Use this 30% off discount code from your pals at Tanzu: DN26VMWARE30. Check out the Tanzu and Spring talks and trading cards on THE LANDING PAGE. 
Austin Meetup, March 10th, Open Lakehouse and AI — Listener Steve Anness speaking KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass. Devopsdays Atlanta 2026. April 21-22 VMware User Groups (VMUGs): Amsterdam (March 17-19, 2026) - Coté speaking. Minneapolis (April 7-9, 2026) Toronto (May 12-14, 2026) Dallas (June 9-11, 2026) Orlando (October 20-22, 2026) SDT News & Community Join our Slack community Email the show: questions@softwaredefinedtalk.com Free stickers: Email your address to stickers@softwaredefinedtalk.com Follow us on social media: Twitter, Threads, Mastodon, LinkedIn, BlueSky Watch us on: Twitch, YouTube, Instagram, TikTok Book offer: Use code SDT for $20 off "Digital WTF" by Coté Sponsor the show Recommendations Brandon: YouTube TV plans launch this week Matt: Send Help Steal Coté: AI, open source, talent, and more, live at cfgmgmtcamp 2026, with Andrew Clay Shafer Tapistry
In today's edition, February 12, 2026, we analyze the diplomatic tension between Mexico and the U.S. over the use of "narco-drones" and Claudia Sheinbaum's firm stance. We break down the historic decline of Mexico's agri-food sector and the strategic recovery of mining concessions to curb speculation. In addition, the arrival of Chinese giants like BYD in Aguascalientes is putting the USMCA to the test. On the tech front, the talent crisis at Elon Musk's xAI and Bill Ackman's multibillion-dollar bet on Meta are shaping the course of artificial intelligence.

Get our free newsletter with the most important news of the day.

If you're interested in a mention in El Brieff, write to us at arturo@strtgy.ai

Hosted on Acast. See acast.com/privacy for more information.
Space investment is breaking records. The reason things are moving so fast right now is a paradigm shift, created in part by Elon Musk and his space company SpaceX, which has sharply cut the cost of launching satellites. Listen to all episodes in the Sveriges Radio app.

A new space race
"Space has become much more accessible," says Tobias Edman, head of space technology at RISE. Previously, essentially only government organizations could afford it, but now the doors to space have opened to a long list of private initiatives and companies.

Space, an eighth continent
Besides internet and communications, satellites are used to collect data about the Earth and to improve logistics, agriculture and defense, among other things. "I see space as an eighth continent," says Dylan Taylor, chairman and CEO of Voyager Space Holdings. The utopian vision is that Earth becomes "like a national park," he says.

A record number of satellites in orbit, and more on the way
Last year, more satellites were launched than had been launched in all of history up to that point. The majority of them belong to SpaceX and the internet provider Starlink. But Chinese players, and companies such as Amazon and OneWeb, are also investing heavily.

Solar-powered data centers
The next big step is solar-powered data centers up in space. "There is so much room there, you can scale up enormously," says Elon Musk, who has merged his companies SpaceX and xAI and is aiming for an IPO later this year.

The security situation is pushing things along
The security situation, and the United States' cooling interest in playing world police, are also driving development. "Technically, Sweden holds up well as a space nation. The focus has been on developing good, advanced products; the focus has not, however, been on commercialization the way it has in the US," says Tobias Edman. But that is now changing.

Defense companies are growing in space
Saab is investing more and more in space. "Saab is one of the few companies in the world with expertise across all military domains, and space is very important to us," says Marcus Wandt, astronaut and chief technology and strategy officer at Saab.

Rocket-like growth
Civilian companies are also getting a boost from the deteriorating security situation and the increased need for resilient infrastructure and communications. Development is moving fast, says Fredrik Schäder at Arctic Space. "It's rocket-like."

A new crew for the International Space Station (ISS)
This week a new crew was sent to the ISS on a SpaceX rocket. On board is Swedish-American astronaut Jessica Meir.

Host and producer: Hanna Malmodin
Participants and voices in the program: Tobias Edman, head of space technology, RISE; Knut Kainz Rognerud, economics commentator, Ekot; Fredrik Schäder, co-founder, Arctic Space Technologies; Marcus Wandt, chief strategy and technology officer, Saab; Dylan Taylor, chairman and CEO, Voyager Space Holdings; Elon Musk, CEO, SpaceX
ekonomiekotextra@sverigesradio.se
A.M. Edition for Feb. 12. The GOP-led House rejects President Trump's Canada tariffs, but backs him up on his voter-ID push. Plus, Elon Musk announces a shakeup at xAI as it merges with SpaceX. And WSJ's Aimee Look and CI&T's Melissa Minkow discuss how years of rising prices have left consumers increasingly cost-conscious – a trend clearly on display in recent retail earnings. Luke Vargas hosts. Sign up for the WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Plus: Elon Musk announces a reorganization and staff departures at xAI. And Lenovo posts record revenue driven by device sales and AI. Julie Chang hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices
Aaannddd…. Right on time here come the Chinese AI models. Elon Musk kicks off a major reorg of xAI. Google is warning of AI distillation attacks. New Waymo cars hit the road. And another interesting AI essay to read to you. Chinese AI startup Zhipu releases new flagship model GLM-5 (Reuters) Musk announces xAI re-org following co-founder departures, SpaceX merger (CNBC) Elon Musk Wants to Build an A.I. Satellite Factory on the Moon (NYTimes) Google says attackers used 100,000+ prompts to try to clone AI chatbot Gemini (NBCNews) Waymo begins deploying next-gen Ojai robotaxis to extend its U.S. lead (CNBC) The AI Vampire (Steve Yegge) Learn more about your ad choices. Visit megaphone.fm/adchoices
Plus: Sanctions, ship seizures and ultra-low crude prices are denting Russia's coffers. And Elon Musk announces a shakeup at xAI as it merges with SpaceX. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, we unpack Elon Musk's ambitious vision for XAI, including its new organizational structure and a detailed roadmap to establish AI data centers in orbit. We explore the strategic rationale behind this move, the technical challenges involved, and how it ties into the broader SpaceX and X ecosystem.

Chapters
01:51 XAI's New Structure and Teams
08:10 X's Growth and AI Adoption
10:12 Orbital AI Infrastructure
15:35 Starship's Role & Cost Reduction
18:36 Technical Challenges & Competition

Links
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. 
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. 
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
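To make the logit-level distillation Dean describes above concrete, here is a minimal sketch, assuming a PyTorch-style setup: the small student is trained against the teacher's softened logits as well as the hard labels. The temperature, mixing weight, and model handles are illustrative placeholders, not Gemini's actual recipe.

```python
# Minimal sketch of logit distillation: the small "student" learns from the large,
# frozen "teacher" via softened logits (soft labels), plus the usual hard labels.
# Temperature and alpha are illustrative hyperparameters, not production values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 so its gradient magnitude stays comparable to CE.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1 - alpha) * ce

def train_step(student, teacher, optimizer, tokens, labels):
    with torch.no_grad():                      # teacher is frozen; it only supplies logits
        teacher_logits = teacher(tokens)
    loss = distillation_loss(student(tokens), teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This is also why repeated passes over the same data keep paying off for the small model: the teacher's full output distribution carries far more signal per example than a one-hot label does.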
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. 
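To make the multi-needle idea concrete, here is a toy harness sketch, not any benchmark Google actually uses: several facts are hidden at random positions in a long filler document, and the model is scored on how many it recovers in one pass. The query_model callable and the needle phrasing are placeholders.

```python
# Toy multi-needle long-context check: bury several "needles" in filler text,
# then score how many the model can recall. Purely illustrative.
import random

def build_haystack(needles, filler_sentence, target_words=100_000):
    doc = [filler_sentence] * (target_words // max(len(filler_sentence.split()), 1))
    for needle in needles:
        doc.insert(random.randrange(len(doc) + 1), needle)
    return " ".join(doc)

def multi_needle_score(query_model, needles, answers):
    haystack = build_haystack(needles, "The sky was an unremarkable shade of grey that day.")
    prompt = haystack + "\n\nList every magic number mentioned above, one per line."
    response = query_model(prompt)              # placeholder for your LLM call
    return sum(1 for a in answers if a in response) / len(answers)

needles = [f"The magic number for project {chr(65 + i)} is {1000 + 7 * i}." for i in range(5)]
answers = [str(1000 + 7 * i) for i in range(5)]
# score = multi_needle_score(my_model, needles, answers)   # 1.0 means every needle recovered
```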
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
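Dean's funnel, trillions of candidate tokens narrowed to roughly 30,000 lightweight-scored documents and then to the ~117 the strongest model actually reads, maps onto a staged pipeline like the sketch below. The three scorers are stand-ins for whatever cheap and expensive models you have, not Google's ranking stack.

```python
# Staged retrieval funnel: cheap scoring over everything, a better reranker over the
# survivors, and the most capable (most expensive) model only over the final handful.
from typing import Callable, List

def funnel(query: str,
           corpus: List[str],
           cheap_score: Callable[[str, str], float],   # e.g. keyword or embedding similarity
           mid_score: Callable[[str, str], float],     # e.g. a small cross-encoder
           strong_model: Callable[[str, List[str]], str],
           k1: int = 30_000, k2: int = 117) -> str:
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: mid_score(query, d), reverse=True)[:k2]
    return strong_model(query, stage2)                  # only ~117 docs reach the big model
```

The shape is the economics: the expensive model's quadratic attention only ever sees the final survivors, which is what makes the "illusion of attending to trillions of tokens" affordable.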
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
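The 2001 in-memory move is also a tidy example of the back-of-envelope style Dean talks about later in the conversation. A toy version of the arithmetic, where the shard and replica counts come from the transcript but the index size and per-machine RAM are invented purely for illustration:

```python
# Back-of-envelope sketch of the 2001 in-memory index decision described above.
# Shard and replica counts are from the transcript; the byte figures are assumptions.
SHARDS = 60                # index split into 60 pieces
REPLICAS = 20              # copies of each shard to absorb query traffic
machines = SHARDS * REPLICAS                        # 1,200 machines already deployed

INDEX_SIZE_GB = 2_000      # assumed total index size
RAM_PER_MACHINE_GB = 2     # assumed early-2000s commodity RAM
aggregate_ram_gb = machines * RAM_PER_MACHINE_GB    # 2,400 GB across the fleet

# If one full copy of the index fits in aggregate RAM, serving from memory replaces
# a disk seek per query term per shard, so throwing 50 softened/synonym terms at
# every query becomes cheap instead of prohibitive.
print(machines, aggregate_ram_gb, aggregate_ram_gb >= INDEX_SIZE_GB)  # 1200 2400 True
```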
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to design these systems, especially when you have, I mean, in 2001, the internet is like doubling, tripling every year in size. And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there just any, you know, principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that, because often what happens is if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold, uh, you know, a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, for news related queries, you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to, like, classify the pages, you have to decide which pages should be updated and at what frequency.
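A quick back-of-envelope calculation shows why the in-memory index change described above made 50-term query expansion viable. The latency constants below are assumed, representative orders of magnitude in the spirit of the "latency numbers every programmer should know" list discussed later in the conversation, not measured figures from Google's systems.

```python
# Back-of-envelope: why moving the index into memory made aggressive query
# expansion affordable. Constants are assumed, rough orders of magnitude.

DISK_SEEK_S  = 10e-3   # ~10 ms per seek on an early-2000s disk (assumed)
MEM_ACCESS_S = 1e-6    # ~1 microsecond to walk a cached posting-list entry (assumed)

def per_shard_latency(num_terms, per_term_cost_s):
    # roughly one seek (disk) or one in-memory walk per query term, per shard
    return num_terms * per_term_cost_s

for terms in (4, 50):  # user's original query vs. the synonym-expanded query
    disk_ms = per_shard_latency(terms, DISK_SEEK_S) * 1e3
    mem_ms = per_shard_latency(terms, MEM_ACCESS_S) * 1e3
    print(f"{terms:>2} terms: disk ~{disk_ms:.0f} ms per shard, memory ~{mem_ms:.3f} ms per shard")
```

With these assumed numbers, a 50-term expanded query costs roughly half a second of seeks per disk-based shard but well under a millisecond in memory, which is the gap the conversation is pointing at.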
Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is Latency Numbers Every Programmer Should Know. Uh, was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with those basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times.
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? Like, to serve at your scale, you probably sort of saw that coming. Like what hardware innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you pay a lot higher cost in time and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish-scale model over say 16 or 64 chips. And if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is a good example: is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, in some sense. Because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime; the chip has to take you three, four or five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast changing field.
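The batching argument above reduces to simple arithmetic. Here is a rough back-of-envelope sketch using the ballpark magnitudes from the conversation (roughly a thousand picojoules to move a parameter, roughly one picojoule for the multiply); these are illustrative numbers, not measured figures for any particular chip.

```python
# Rough energy arithmetic behind batching. One parameter move is amortized
# over `batch_size` multiplies; constants are the ballpark magnitudes from
# the conversation, assumed for illustration only.

MOVE_PJ     = 1000.0   # energy to move one parameter into the multiplier unit (assumed)
MULTIPLY_PJ = 1.0      # energy for one low-precision multiply (assumed)

def energy_per_useful_multiply(batch_size):
    return (MOVE_PJ + batch_size * MULTIPLY_PJ) / batch_size

for b in (1, 8, 64, 256):
    print(f"batch {b:>3}: ~{energy_per_useful_multiply(b):7.1f} pJ per multiply")
# batch   1 -> ~1001 pJ per multiply (dominated by data movement)
# batch 256 ->   ~4.9 pJ per multiply (movement mostly amortized away)
```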
And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of that, like, we already committed to this chip design so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, while we're on this topic, you know, the concept of precision at all is weird when we're sampling, you know. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy based models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends, though.
Energy based models is one, you know; diffusion based models, which don't sort of sequentially decode tokens, is another; um, you know, speculative decoding is a way that you can get sort of an equivalent, very small...Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: ...batch factor, uh, where you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy, real energy, not energy based models, um, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be better at, uh, you know, being able to serve larger models, or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, um, it's appealing intellectually, uh, haven't seen it really hit the mainstream, but, um, I do think that there's some poetry in the sense that, uh, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's also sort of the more exotic things like analog based computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough, but...Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build things that can accomplish, uh, you know, much more significant pieces of work collectively than you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable... you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem, because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that.
Uh, effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode; in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval, in a sense. So I wonder if the retrieval is the verifiable part that you can score, or what. Yeah, yeah. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think there is that weird cliff where it feels like we've done the easy stuff, and now... but it always feels like that every year. It's like, oh, we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now: you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers, as it turns out.
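The "same model, prompted differently, as a critic" idea Jeff describes can be sketched in a few lines. The generate() function below is a hypothetical placeholder for whatever LLM call you have available; the prompt wording and the 0-10 scale are illustrative choices, not a documented Google or Gemini API.

```python
# Sketch of using one model call to retrieve/answer and a second call to grade
# relevance. `generate` is a hypothetical stand-in for an LLM call; swap in a
# real client before using this.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your model call of choice here")

def rate_relevance(query: str, documents: list[str]) -> list[tuple[str, int]]:
    scored = []
    for doc in documents:
        critique = generate(
            "You are a strict relevance judge.\n"
            f"Query: {query}\n"
            f"Document: {doc}\n"
            "Rate relevance from 0 (irrelevant) to 10 (directly answers the query). "
            "Reply with the number only."
        )
        try:
            scored.append((doc, int(critique.strip())))
        except ValueError:
            scored.append((doc, 0))  # unparseable judgments count as irrelevant
    # keep the best-rated documents for the next reasoning step
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```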
Um, uh, just to dwell a bit on the IMO gold: um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of symbolic systems and LLMs was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural-net-like in some way, lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural net based models. So it never made sense to me to have completely separate, discrete, symbolic things and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that IMO progression, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for each different problem, right? I want to recognize street signs in something, so I train a street sign recognition model; or I want to do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like, so I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that people with this universal skill set of just, like, machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh.
There's this concept of, like, maybe capacity of a model; like, abstractly a model can only contain the number of bits that it has. And, uh, so, you know, God knows, Gemini Pro is like one to 10 trillion parameters, we don't know. But, uh, the Gemma models, for example, right? Like a lot of people want the open source local models that are like that, and, uh, they have some knowledge which is not necessary, right? Like they can't know everything. Like, you have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning, and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, and use being able to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain, for healthcare, say, um, or for, say, robotics. We're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming; you know, it'll still be good at Python programming, cause we'll include enough of that, but there's other long tail computer languages or coding capabilities that it may suffer on, or multimodal reasoning capabilities may suffer, cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, it's like, they're probably not out there that you don't already have, you know. I think that's really like the...Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, which is not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is somewhat related to the language conversation. Uh, I think one of your favorite examples was you can put a low resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think you put your whole data set in the context, right.Jeff Dean [00:56:27]: If you take a language like, uh, you know, Somali or something, or Ethiopian Amharic or something, there is a fair bit of text in those languages in the world, and, um, you know, we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
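As a footnote to the precision discussion earlier in the conversation, here is a toy illustration of the "very low bit precision plus a scaling factor per block of weights" idea Jeff mentioned. The 4-bit width and block size of 32 are arbitrary choices for the example, not a description of how Gemini or TPUs actually quantize weights.

```python
import numpy as np

# Toy block-wise quantization: each block of 32 weights is stored as small
# signed integers plus one float scale per block. Block size and bit width
# are illustrative assumptions, not any production recipe.

def quantize_blocks(weights: np.ndarray, block: int = 32, bits: int = 4):
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for signed int4
    weights = weights.reshape(-1, block)
    scales = np.abs(weights).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)     # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(weights / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blocks(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blocks(w)
err = np.abs(w - dequantize_blocks(q, s)).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```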
Laffer Tengler Investments CEO Nancy Tengler talks with TITV Host Akash Pasricha about the recent software selloff and why she is doubling down on Nvidia, Palantir, and Apple. We also talk with The Information's Aaron Holmes about the "agent dashboard" battle between Microsoft, Salesforce, and OpenAI, and Buttonwood Funds' Joseph Alagna about the synergies behind the SpaceX and xAI merger. Lastly, we get into the future of orbital computing with Robinhood co-founder Baiju Bhatt as he unveils his new space startup, Aetherflux.Articles discussed on this episode: https://www.theinformation.com/articles/new-ai-superagent-race-pitting-openai-anthropic-microsoft-salesforcehttps://www.theinformation.com/newsletters/applied-ai/looming-battle-agent-management-softwareSubscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_hSign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agendaTITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.Follow us:X: https://x.com/theinformationIG: https://www.instagram.com/theinformation/TikTok: https://www.tiktok.com/@titv.theinformationLinkedIn: https://www.linkedin.com/company/theinformation/
News and Updates: Firefox adds a "kill switch" on February 24th to disable all AI features. This "AI control" menu offers granular settings for chatbots, translations, and summaries. Microsoft is reevaluating Windows 11 AI after user backlash. Underutilized features like Copilot in Paint/Notepad may be cut, while the "Recall" feature faces repositioning. xAI loosened Grok's guardrails to boost engagement, causing a surge in sexualized content. Regulators are investigating reports of nonconsensual imagery and lack of safety staff. French authorities raided X's Paris office and summoned Elon Musk. The probe investigates Grok's deepfakes, child safety violations, and alleged algorithmic bias in content delivery. SpaceX acquired xAI in a share-exchange deal, valuing the combined entity at $1.25 trillion. Musk plans to build orbital AI data centers powered by solar.
The team's leader has been given a new role as OpenAI's Chief Futurist, while the other team members have been reassigned throughout the company. Also, xAI took the rare step of publishing its full 45-minute all-hands presentation to the X platform, making it widely available. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Millions of people in Russia no longer have access to WhatsApp. The Russian government has blocked access and is using the move to push its own state app, called Max. Also in this Tech Update: Elon Musk says he has carried out a reorganization of xAI's management; Apple is still struggling to improve Siri. See omnystudio.com/listener for privacy information.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
The AI race in 2026 looks very different than it did a year ago. Chinese labs are closing the gap, export controls are shifting, markets are reacting to real AI disruption, and new players like the UAE—and even space-based compute—are entering the picture. This episode unpacks how models, chips, geopolitics, and markets are converging—and why that directly shapes the AI tools you use. In the headlines: OpenAI's hardware timeline slips to 2027, turmoil at xAI, and AI disruption hits financial stocks.Brought to you by:KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcastsRackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpadZencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflowOptimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/briefLandfallIP - AI to Navigate the Patent Process - https://landfallip.com/Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614Interested in sponsoring the show? sponsors@aidailybrief.ai
The 2026 IPO Quiz: Since January 1, 2026
How many companies? 20
How many total founders? 34
How many female founders? 2: Cassandra Curtis and Jennifer Garner
How many companies have female founders? 1: Once Upon a Farm
How many companies with only female founders? 0
Once Upon a Farm: J. Garner, J. Foraker, C. Curtis, A. Raz. Co-founded in 2015 by Cassandra Curtis and Ari Raz. In 2017, Jennifer Garner joined as a co-founder and "Chief Brand Officer", alongside CEO John Foraker.
Most represented industry? Healthcare (5), including Pharma, Biotech, and AI Biotech
How many are already down from their offer price (as of 10:45am)? 17
How many female chairs? 1: Shoushana Ohanessian, VenHub Global
How many female CFOs? 1: Cantor Equity VI, Jane Novak
How many female CEOs? Zero
How many directly associated with Epstein Island? 1: Cantor Equity VI. Cantor was founded over 75 years ago and was led by Howard W. Lutnick from 1992 until February 2025, when he became the United States Secretary of Commerce; his son Brandon G. Lutnick is CEO/Chair. Trump Commerce Sec. Lutnick admits visiting Epstein island during family vacation.
How many boards have ZERO male directors? ZERO
How many boards with zero female directors? Only six!
How many with one? Only six!
How many boards with at least 30% women? 4
AgomAb Therapeutics 2/6 (33%)
Green Circle Tech 2/5 (40%)
Once Upon a Farm 4/9 (44%) (CEO/Chair, CFO, Nominating Committee chair, Compensation Committee chair = MEN)
VenHub Global
Total percentage of women? 18%
BONUS HEADLINES QUIZ
Kraft Heinz pauses work to split the company as new CEO says 'challenges are fixable'. What were the two new companies?
Strategic Flavor Holdings Co. and North Atlantic Provisions Co.
Global Taste Elevation Co. and North American Grocery Co.
North Atlantic Snack Holdings and Strategic Grocery Development Group
International Flavor Solutions Co. and United Continental Grocery Co.
Universal Taste Logistics Co. and North American Food Alignment Co.
Proxy Voting: Asset Managers Increased Their Support for Management in 2025
What percentage of management resolutions did the Big Three index managers support in the 2025 proxy year? Hint, it's up 3% since 2023. 99% (98.7%), up from 96% in 2023
Top 10 U.S. asset managers in 2025? 98%
Top 50 U.S. asset managers in 2025? 96%
What percentage of shareholder resolutions did the Big Three index managers support in the 2025 proxy year? Hint, it's down 1.5%. 7.5%
True or False: in a landmark social media case that seeks to hold tech companies responsible for harms to children, a Google lawyer said this to jurors yesterday: "It's not social media addiction when it's not social media and it's not an addiction."
True or False: xAI cofounder Jimmy Ba, the second cofounder [Tony Wu] to depart xAI in less than 48 hours, said this yesterday: "It's time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species."
According to a new report from the WSJ, why did OpenAI fire executive Ryan Beiermeister? Sexual discrimination.
What's the story behind the story? Ryan Beiermeister, one of OpenAI's top safety executives and a vice president leading OpenAI's product policy team, was accused of sexual discrimination against a male employee after she voiced opposition to the controversial rollout of AI erotica in its ChatGPT product, saying that she opposed adult mode, worried it would have harmful effects for users, believed OpenAI's mechanisms to stop child-exploitation content weren't effective enough, and thought the company couldn't sufficiently wall off adult content from teens. Beiermeister started a peer-mentorship program for women at OpenAI in early 2025.
Derek Yan with KraneShares talks about his firm's Artificial Intelligence & Technology ETF (AGIX), which includes pre-IPO companies like Anthropic and xAI. He argues the ETF makes it easier for investors to access the private market space and get ahead of public market debuts KraneShares expects to outperform. Derek later talks about how you can identify future winners in the evolving tech landscape. ======== Schwab Network ========Empowering every investor and trader, every market day.Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6DSubscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about
Apple Phone
Super Bowl: Bad Bunny
JETS: Two B-1B Lancer bombers, two F-15C Eagles, two F/A-18E Super Hornets, and two F-35C Lightnings
Lindsey Vonn - Torn ACL
Markets: VOO and Tech still near all-time highs. Bitcoin down 45%. Spotify up 15% today. Still 40% off high. 98B market cap.
SpaceX Acquired xAI: Current valuation: $250 and $1t. Elon on the pivot to Moon as opposed to Mars now.
Elon Musk: Great interview, with 10,000 Starship launches annually. Low rate compared to airplanes.
Merger with Tesla: SpaceX and xAI merged. Presumably Elon has super voting control. Now SpaceXAI will acquire $TSLA at a premium (~30%). They're similarly valued at ~$1.5T each. AND I think/hope Elon also offers an "allocation" of pre-IPO shares to long-term Tesla shareholders (maybe > 3 years).
Tesla
Netflix: Ted Sarandos testimony. 80% of HBO subscribers are Netflix subscribers. YouTube is #1 viewing platform. Rights to Oscars, NFL, etc… Netflix is 9% of viewing and with Warner 10%? Wow! Chief Strategy Officer at Warner. Senator Hawley is a terrible Senator! Play Union Commitment 53.28. Sexually inappropriate trans characters. What % of Netflix employees donate $'s to what party? 1.07.03 Why give Netflix the power to promote more woke shit!? Must Play. Wokest content in the world.
The Canadian Pension Plan Investment Board (CPPIB) has invested nearly half a billion dollars in xAI, the artificial intelligence company behind Elon Musk's AI chatbot - Grok.The chatbot and its owner have received mounting criticism following the recent influx of deep-fake pornographic content of women and children on X's feeds - a catastrophe that Musk has contributed little to no resources to fix.Host Caryn Ceolin speaks to Jan Mahrt-Smith, associate professor of finance at the University of Toronto, to discuss the risks associated with investing in Musk's chatbot, how the 22 million Canadian investors could be feeling about the move, and whether or not Canadians still trust the government institution to handle their money. We love feedback at The Big Story, as well as suggestions for future episodes. You can find us:Through email at hello@thebigstorypodcast.ca Or @thebigstory.bsky.social on Bluesky
Elon Musk Reporter Theo Wayt breaks down the continuing exodus of co-founders at Musk's xAI and what it signals for the company's model timeline. The Information's Anita Ramaswamy then explains why ServiceNow is currently undervalued despite the broader SaaS market sell-off. Matt Shumer, GP of Shumer Capital, joins to discuss his viral essay on why GPT-5.3 Codex represents a unique inflection point for labor, and Kawasaki Wealth & Investment Management's Ross Gerber discusses how AI is disrupting wealth management and why he's concerned about leadership at Tesla and SpaceX.Articles discussed on this episode: https://www.theinformation.com/articles/investors-missing-servicenowhttps://www.theinformation.com/briefings/shopify-shares-jump-forecasts-continued-revenue-growthhttps://www.theinformation.com/newsletters/the-briefing/risk-muskiverses-steady-turnoverhttps://www.theinformation.com/briefings/departures-accelerate-elon-musks-xai-yet-another-cofounder-leavesSubscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_hSign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agendaTITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.Follow us:X: https://x.com/theinformationIG: https://www.instagram.com/theinformation/TikTok: https://www.tiktok.com/@titv.theinformationLinkedIn: https://www.linkedin.com/company/theinformation/
-In an xAI meeting with employees, Musk said the company needed to build an AI satellite factory on the moon with a gigantic catapult to launch them into space, according to audio heard by The New York Times. -Discord will require all users to have a "teen-appropriate experience" by default by March. -Numerous major social platforms including Meta, YouTube, TikTok and Snap say they will submit to a new external grading process that scores social platforms on how well they protect adolescent mental health. Learn more about your ad choices. Visit podcastchoices.com/adchoices
The tech giant handed over the personal information of a journalist and student who attended a pro-Palestinian protest in 2024. This is the latest example of ICE using its controversial subpoena powers to target people critical of the Trump administration. Also, out of the twelve-person xAI founding team, only seven are still at the company (counting Elon). Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week we talk about OpenAI, nudify apps, and CSAM.We also discuss Elon Musk, SpaceX, and humanistic technology.Recommended Book: Who's Afraid of Gender? by Judith Butler
Transcript
xAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world and in the technology industry in particular.According to Musk, a "politically correct" artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a "maximally truth-seeking" AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic.The development of xAI was also seemingly a response to the direction of OpenAI in particular, as OpenAI was originally founded in 2015 as a non-profit by many of the people who now run OpenAI and competing companies' models, and current OpenAI CEO Sam Altman and Elon Musk were the co-chairs of the non-profit.Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren't hoarded by just a few elites and don't make the continued development and existence of human civilization impossible.Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI's guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late-2022.That was the moment that most people first experienced large-language model-based AI tools, and it completely upended the tech industry in relatively short order. OpenAI had already started the process of shifting from a vanilla non-profit into a capped for-profit company in 2019, which limited profits to 100-times any investments it received, partly in order to attract more talent that would otherwise be unlikely to leave their comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit would be able to offer.OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release.Part of Musk's stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still non-profit OpenAI between 2015 and 2018. He filed lawsuits against the company after its transition and as it became successful post-ChatGPT, especially between 2024 and 2026, demanding more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab OpenAI for just under $100 billion.
xAI, in other words, is meant to counter OpenAI and what it's become.All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it.What's happened since, within Musk's own companies, would seem to call that assertion into question, though. And that's what I'd like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that's led to lawsuits and bans from government entities around the world.—In November of 2023, an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI's LLM-based chatbot, ChatGPT, was launched by Musk's company xAI.Similar to ChatGPT, Grok is accessible by apps on Apple and Android devices, and can also be accessed on the web. Part of what makes it distinct, though, is that it's also built into X, the social network formerly called Twitter which Musk purchased in late-2022. On X, Grok operates similarly to a normal account, but one that other users can interact with, asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media.Grok's specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo Clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts.Musk's increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok's responses, however, at times causing it to go full Nazi, calling itself Mechahitler and saying all the horrible and offensive things you would expect a proud Nazi to say. At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial, or anti-liberal, or anti some other group of people.The conflicting personality types of this bot seem to be the result of Musk wanting to have a maximally truth-seeking AI, but then not liking the data- and fact-based truths that were provided, as they often conflicted with his own opinions and biases.
He would then tell the programmers to force Grok to not care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles' worth of scandal.This changes week by week and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an unusual space amongst other, similar chatbots—sometimes serving as a useful check on misinformation and disinformation on the X social network, but sometimes becoming the most prominent producer of the same.Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can find seemingly authoritative data, the most prominent of which is Grokipedia, which Musk intended to be a less-woke version of Wikipedia, and which, perhaps expectedly, means that it's a far-right rip off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn't like, including anything that might support liberal political arguments, or anything that supports vaccines or trans people. In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of which are backed by either highly biased or completely made up sources—in both cases sources that Wikipedia editors would not accept.Given all that, what's happened over the past few months maybe isn't that surprising.In late 2025 and early 2026, it was announced that Grok had some new image-related features, including the ability for users to request that it modify images. Among other issues, this new tool allowed users to instruct Grok to place people, which in practice especially meant women and children, in bikinis and in sexually explicit positions and scenarios.Grok isn't the first LLM-based app to provide this sort of functionality: so called "nudify" apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there have been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago.Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature's deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn't have all the woke, pearl-clutching safeguards of other chatbots.After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: "Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features."Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so.
That said, the AP was able to confirm that as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly.When asked about this issue by the press, xAI has auto-responded with the message "Legacy Media Lies." The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently not been fast or complete, and more governments have said they plan to take action on the matter, themselves, since this tool became widespread.Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that's put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates.Allowing all X users, or even just the subset that pays for the service, to do the same at the click of a button or by asking a chatbot to do it for them has increased the number manyfold, and allowed even more people to create explicit images of neighbors, celebrities, and yes, even children. An early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women. Another estimate, using a broader analysis, says that 65% of those images, or just over 3 million, contained sexualized images of men, women, and children.CSAM is an acronym that means 'child sexual abuse material,' sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution.Multiple governments have announced that they'll be taking legal action against the company since January of 2026, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the central governance of the European Union.The French investigation into xAI and Grok led to a raid on the company's local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content.
Musk has been summoned for questioning in that investigation.

Some of the governments looking into xAI over these issues conditionally lifted their bans in late January, but the issue has percolated back into the news with the release of 16 emails between Musk and the notorious sex trafficker and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein's island parties, which were often populated with underage girls who were offered as, let's say, companions for attendees.

And this is all happening at a moment in which xAI, which already merged with the social network X, is meant to itself be merged with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches.

Musk says the merger is intended to allow for the creation of space-based data centers that can be used to power AI systems like Grok, but many analysts see this as a means of pumping more money into an expensive, unprofitable portion of his portfolio: SpaceX, which is profitable, is likely to have an IPO this year, probably at a valuation of more than a trillion dollars. By folding the very unprofitable xAI into the profitable SpaceX, these AI-related efforts could be funded well into the future, until a moment when, possibly, many of today's AI companies will have gone under, leaving just a few competitors for xAI's Grok and associated offerings.

Show Notes
https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/
https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok
https://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02
https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse
https://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bc
https://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36
https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77
https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.html
https://www.bbc.com/news/articles/ce3ex92557jo
https://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/
https://www.bbc.com/news/articles/cgr58dlnne5o
https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html
https://en.wikipedia.org/wiki/XAI_(company)
https://en.wikipedia.org/wiki/OpenAI
https://en.wikipedia.org/wiki/ChatGPT
https://en.wikipedia.org/wiki/Grok_(chatbot)
https://en.wikipedia.org/wiki/Grokipedia
https://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Our guest this week, Alvin Wang Graylin, spent 35 years in senior leadership roles across HTC, IBM, and other major tech companies. He ran HTC's VR division, came out of the famous HIT Lab, now teaches at MIT, holds a fellowship at Stanford, and just published a paper called "Beyond Rivalry" proposing a seven-point plan for de-escalating US-China AI tensions and building a global safety net before the economy breaks. His thesis: America is the fastest in the AI race and the least prepared for what it's creating—a cliff where the human labor theory of value collapses, capital concentration accelerates, and the 40% of the population living month to month faces chaos.

The conversation becomes a wide-ranging debate between Alvin, Charlie, and Rony about whether AGI will be benevolent by default (Alvin's position: research shows smarter AI seeks global coherence and becomes less controllable by individual humans, which may actually make it safer) or whether benevolence must be designed in from scratch.

AI XR News You Should Know: Elon Musk merges SpaceX, xAI, and X into a single entity—Alvin dismantles the space data center concept with physics (vacuum cooling is a myth, micro-meteorite collisions would destroy hardware daily, and energy is only 10% of data center costs). Amazon invests $50 billion in OpenAI that round-trips back to AWS. Alphabet breaks revenue records at $400 billion but spooks investors by disclosing $90 billion in AI spending. ElevenLabs raises $500 million at an $11 billion valuation. Rony's SynthBee hits unicorn status with $100 million raised at a multi-billion-dollar valuation. Alvin warns the AI bubble dwarfs the dot-com era (298 companies raised $24 billion total during the dot-com boom; OpenAI alone is raising that in a single private round) and predicts OpenAI may implode before going public.

Key Moments Timestamps:
[00:04:47] SpaceX/xAI/X merger: Rony calls it Elon's "return to Tony Stark form"
[00:06:41] Alvin dismantles space data centers with physics: vacuum cooling myth, micro-meteorites, $7K/kg launch costs
[00:10:04] Amazon's $50B investment in OpenAI as a round-trip to AWS; the scam economy
[00:11:26] Alvin predicts OpenAI may implode before going public
[00:14:23] Alvin on 35 years in AI: the technology is transformational but everyone's making a commodity product
[00:17:04] The AI bubble dwarfs dot-com: $24B total vs. single private rounds today
[00:19:04] Rony's contrarian take: the $110 trillion global economy is what's being bet against
[00:21:06] Labor theory of value collapses: what happens when humans exit the production cycle
[00:23:00] America is fastest in the AI race and least prepared; 40% live month to month
[00:24:00] Alvin's Stanford paper "Beyond Rivalry": a CERN for AI and a global data pool
[00:28:00] Davos reflections: the rest of the world is more rational than America
[00:34:00] Chinese vs. American culture: reverence for teachers, respect for elders
[00:42:00] Alvin's "Abundant" framework: valuing human dignity over production after AGI
[00:44:22] The great debate: will AGI find benevolence naturally (Alvin) or must it be designed in (Rony)?
[00:47:00] Rony on risk: AGI systems are unverifiable, untestable, and we cannot take the chance

Listen to the full episode and subscribe to the AI XR Podcast for weekly conversations at the intersection of AI, XR, and the future of humanity.

This episode is brought to you by Zappar, creators of Mattercraft—the leading visual development environment for building immersive 3D web experiences for mobile, headsets, and desktop.
Build smarter at mattercraft.io. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
SpaceX Shifts Focus From Mars to Lunar Base: The Strategic Pivot and Its Implications

Elon Musk announced on X that SpaceX has shifted its primary focus from Mars to establishing a self-sustaining city on the Moon. This strategic change comes despite SpaceX's long-standing goal of Mars colonization. The company plans an uncrewed lunar landing by March 2027 and has integrated xAI's AI capabilities through a historic $1.25 trillion merger. Factors influencing the pivot include faster lunar mission iteration cycles, fewer setbacks compared to Mars missions, and the strategic race against China to return humans to the Moon. SpaceX's recent FCC filing for 1 million orbital data center satellites and its upcoming IPO, valued at $1.5 trillion, are also interconnected with this new focus. Despite these ambitious plans, challenges such as radiation exposure and extreme temperatures remain. Nevertheless, SpaceX aims to start building a Moon base within the next 10 years while maintaining long-term Mars ambitions.

00:00 SpaceX's Shift from Mars to the Moon
00:41 The Strategic Pivot Explained
01:41 Financial and Engineering Insights
02:04 Musk's Rationale and Future Plans
03:26 NASA and International Competition
05:10 The xAI Merger and Its Implications
06:48 Orbital Data Centers and IPO Strategy
08:55 Challenges and Skepticism
10:26 Conclusion: Betting on the Moon
Tony Wu resigned from xAI today, becoming the fifth co-founder to leave Elon Musk's AI company since 2023. We break down every departure, what the SpaceX merger means for remaining founders, and why xAI's talent drain could create an opening for competitors at the worst possible time.
Will Elon Musk really launch a million data centers into orbit, and why is McDonald's so worried about you using "McNuggets" as your password? This week's tech roundtable takes on wild new frontiers and everyday security headaches with insight and a bit of irreverence.

More schools are banning phones so students can focus. Ohio's results show it's not that simple
After Australia, Which Countries Could Be Next to Ban Social Media for Children
EU says TikTok must disable 'addictive' features like infinite scroll, fix its recommendation engine
Anthropic and OpenAI release dueling AI models on the same day in an escalating rivalry
Sam Altman says Anthropic's Super Bowl spot is 'dishonest' about ChatGPT ads, but he agrees it's funny
Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source code
Alphabet reports Q4 2025 revenue of $113.8 billion
Amazon's blowout $200 billion AI spending plan stuns Wall Street
A New Gilded Age: Big Tech goes on a $600 billion AI spending splurge
Hidden Cameras in Chinese Hotels Are Livestreaming Guests To Thousands of Telegram Subscribers
AI-generated ads hit the Super Bowl
SpaceX acquires xAI, plans to launch a massive satellite constellation to power it
Russia suspected of intercepting EU satellites
Notepad++ hijacked by state-sponsored actors
New York Wants to Ctrl+Alt+Delete Your 3D Printer
Western Digital Plots a Path To 140 TB Hard Drives Using Vertical Lasers and 14-Platter Designs
A Crisis comes to Wordle: Reusing old words
The Wayback Machine debuts a new plug-in designed to fix the internet's broken links problem
Project Hail Mary is getting its own LEGO set
Dave Farber

Host: Leo Laporte
Guests: Larry Magid, Mike Elgan, and Louis Maresca

Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors: bitwarden.com/twit NetSuite.com/TWIT meter.com/twit trustedtech.team/twitCSS zscaler.com/security
Things got kinda catty between OpenAI and Anthropic
Dan Nathan, Guy Adami, Kristen Kelly and Jen Saarbach discuss the recent happenings in the stock market, with a focus on the significant shift in sentiment towards SaaS companies. They explore how AI investments and the ensuing financial implications are affecting market valuations. The conversation touches on several key areas, including Microsoft's fluctuating performance, the role of rising interest rates, and the broader impact on the credit markets, especially in private equity and private credit. Additionally, the panel discusses the recent volatility in the cryptocurrency market, questioning Bitcoin's role as digital gold and the structural issues within the crypto ecosystem. They also examine the intriguing financial strategies and market maneuvers of Elon Musk's companies, particularly the recent merger between SpaceX and xAI. The episode concludes with a look at potential market rotations into sectors like financials and energy, as well as the upcoming challenges posed by macroeconomic conditions and the new Federal Reserve chair.

Article Mentioned: Hedge Fund's Bet on Liquidity Over Private Credit Is Paying Off (Bloomberg)

FOLLOW US
YouTube: @RiskReversalMedia
Instagram: @riskreversalmedia
Twitter: @RiskReversal
LinkedIn: RiskReversal Media
Kristen and Jen are joined by Guy Adami and Dan Nathan of CNBC's Fast Money for the fifth installment of "He Said, She Said." The conversation kicks off with the so-called "SaaS Apocalypse" — the brutal selloff across software stocks — and unpacks how the market narrative shifted in just one week from "when will AI spending pay off?" to "what happens when AI destroys your core business?" The group debates whether the repricing is justified or overdone, digs into the credit market spillover with $17.7 billion in SaaS-related loans hitting distressed levels, and discusses what it all means for private credit exposure.

From there, the panel takes on Bitcoin's collapse to $60,000 — roughly half its all-time high — and asks whether the "digital gold" thesis is officially dead now that crypto fell apart while precious metals hit records. They also break down the equity rotation into financials and energy, the irony of banks rallying on AI-driven deal flow while AI-adjacent companies crater, and what enterprise adoption of AI could mean for the hyperscalers longer term.

The episode wraps with a look at Elon Musk's latest consolidation play, SpaceX acquiring xAI ahead of a rumored mega-IPO, and a macro check-in covering weak seasonals, a deteriorating jobs picture, rising 10-year yields, and the historical pattern of markets testing every new Fed chair.

For a 14 day FREE Trial of Macabacus, click HERE
Shop our Self Paced Courses:
Investment Banking & Private Equity Fundamentals HERE
Fixed Income Sales & Trading HERE

Wealthfront.com/wss. This is a paid endorsement for Wealthfront. May not reflect others' experiences. Similar outcomes not guaranteed. Wealthfront Brokerage is not a bank. Rate subject to change. Promo terms apply. If eligible for the boosted rate of 4.15% offered in connection with this promo, the boosted rate is also subject to change if base rate decreases during the 3 month promo period.

The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC. Wealthfront Brokerage is not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 11/7/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable APY. Sources HERE.
Elon Musk just pulled off the biggest merger in corporate history, but is the $1.25 trillion SpaceX–xAI deal a stroke of visionary genius, or a financial sleight of hand?

In this episode of Market Maker, Anthony and Piers unpack what's really behind the numbers. Is this bold vertical integration or just a clever way to funnel cash into a billion-dollar-a-month AI burn machine? They dig into Starlink's role as the cash engine, xAI's financial reality, and why Musk is using a rare “triangular merger” to shield liabilities.

They also break down Musk's plan for space-based data centers and whether the science (and physics) actually adds up, plus how this all ties into the bigger Muskonomy: Grok, Optimus, robo-taxis, and orbital dominance.

You'll also hear how Morgan Stanley, Goldman Sachs, J.P. Morgan, and BofA are circling the biggest IPO in history and what it means for markets, investors, and tech.

New episodes drop weekly. Make sure to follow, rate, and share if you enjoy the show.

(00:00) SpaceX Acquires xAI
(01:57) Biggest Deal Ever?
(03:18) Musk's $1.25T Valuation Math
(06:42) What SpaceX Actually Does
(07:24) Starlink = The Cash Cow
(11:35) xAI = The Cash Drain
(12:48) AI Arms Race Spending
(16:14) Why This Merger Makes Sense
(18:07) Muskonomy: The Bigger Vision
(20:28) Google, Amazon & Space Rivals
(20:59) The Triangular Merger Trick
(23:51) Starlink, xAI & SpaceX Strategy
(24:59) Space Data Centers Explained
(28:27) Who Owns Space?
(29:43) Governments vs Tech Giants
(33:05) The Science Problem in Space
(38:56) The Underwater Data Center Attempt
(41:34) Which Banks Are Involved
(48:09) Final Thoughts for the Grad Bankers
Corporate AI Rivalry, Musk's Mega Merger, and the Future of Work

The script explores the current shift in AI competition from consumer tech to enterprise adoption, highlighting Anthropic's growing influence in the corporate market against OpenAI. It discusses the implications of regulatory requirements and the trust deficit affecting OpenAI, alongside the importance of enterprise AI for long-term industry dominance. The discussion shifts to the questionable narrative of AI-driven layoffs, with companies potentially using AI as an excuse for downsizing. It also covers Google's disruptive new AI model impacting the gaming industry. The script concludes with a deep dive into Elon Musk's rumored mega merger involving SpaceX, xAI, and possibly Tesla, potentially creating a vertically integrated tech giant poised to dominate various sectors.

00:00 The Battle for AI Dominance in Corporate Boardrooms
00:49 Enterprise AI: The Real Game Changer
01:32 Anthropic's Strategic Advantage
02:44 The Trust Deficit and OpenAI's Challenges
04:08 The Future of AI: A Marathon, Not a Sprint
07:20 AI Layoffs: Reality or Excuse?
09:57 The Complex Reality of AI's Impact on Jobs
14:32 Elon Musk's Controversial Casting Critique
15:41 The Mythology of Helen of Troy
23:12 Elon Musk's Legal Battles and Accountability
32:05 The Largest Corporate Merger in History?
In this week's FOLLOW UP, Bitcoin is down 15%, miners are unplugging rigs because paying eighty-seven grand to mine a sixty-grand coin finally failed the vibes check, and Grok is still digitally undressing men—suggesting Musk's “safeguards” remain mostly theoretical, which didn't help when X offices got raided in France. Spain wants to ban social media for kids under 16, Egypt is blocking Roblox outright, and governments everywhere are flailing at the algorithmic abyss.

IN THE NEWS, Elon Musk is rolling xAI into SpaceX to birth a $1.25 trillion megacorp that wants to power AI from orbit with a million satellites, because space junk apparently wasn't annoying enough. Amazon admits a “high volume” of CSAM showed up in its AI training data and blames third parties, Waymo bags a massive $16 billion to insist robotaxis are working, Pinterest reportedly fires staff who built a layoff-tracking tool, and Sam Altman gets extremely cranky about Claude's Super Bowl ads hitting a little too close to home.

For MEDIA CANDY, we've got Shrinking, the Grammys, Star Trek: Starfleet Academy's questionable holographic future, Neil Young gifting his catalog to Greenland while snubbing Amazon, plus Is It Cake? Valentines and The Rip.

In APPS & DOODADS, we test Sennheiser earbuds, mess with Topaz Video, skip a deeply cursed Python script that checks LinkedIn for Epstein connections, and note that autonomous cars and drones will happily obey prompt injection via road signs—defeated by a Sharpie.

IN THE LIBRARY, there's The Regicide Report, plus a brutal study finding early dementia signals in Terry Pratchett's novels and Neil Gaiman denying allegations while announcing a new book; and in THE DARK SIDE WITH DAVE, we vibe with The Muppet Show as Disney names a new CEO. We round it out with relief from RentAHuman.ai dread via paper airplane databases, free Roller Coaster Tycoon, and Sir Ian McKellen on Colbert—still classy in the digital wasteland.

Sponsors:
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks!
gog.show/1password

Show notes at https://gog.show/732

FOLLOW UP
Bitcoin drops 15%, briefly breaking below $61,000 as sell-off intensifies, doubts about crypto grow
Bitcoin Is Crashing So Hard That Miners Are Unplugging Their Equipment
Grok, which maybe stopped undressing women without their consent, still undresses men
X offices raided in France as UK opens fresh investigation into Grok
Spain set to ban social media for children under 16
Egypt to block Roblox for all users

IN THE NEWS
Elon Musk Is Rolling xAI Into SpaceX—Creating the World's Most Valuable Private Company
SpaceX wants to launch a constellation of a million satellites to power AI needs
A potential Starlink competitor just got FCC clearance to launch 4,000 satellites
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
Waymo raises massive $16 billion round at $126 billion valuation, plans expansion to 20+ cities
Pinterest Reportedly Fires Employees Who Built a Tool to Track Layoffs
Sam Altman got exceptionally testy over Claude Super Bowl ads

MEDIA CANDY
Shrinking
Star Trek: Starfleet Academy
The Rip
Neil Young gifts Greenland free access to his music and withdraws it from Amazon over Trump
Is it Cake? Valentines

APPS & DOODADS
Sennheiser Consumer Audio IE 200 In-Ear Audiophile Headphones - TrueResponse Transducers for Neutral Sound, Impactful Bass, Detachable Braided Cable with Flexible Ear Hooks - Black
Sennheiser Consumer Audio CX 80S In-ear Headphones with In-line One-Button Smart Remote – Black
Topaz Video
Epstein
Autonomous cars, drones cheerfully obey prompt injection by road sign

AT THE LIBRARY
The Regicide Report (Laundry Files Book 14) by Charles Stross
Scientists Found an Early Signal of Dementia Hidden in Terry Pratchett's Novels
Neil Gaiman Denies the Allegations Against Him (Again) While Announcing a New Book

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
The Muppet Show
Disney announces Josh D'Amaro will be its new CEO after Iger departs
A Database of Paper Airplane Designs: Hours of Fun for Kids & Adults Alike
Online (free!) version of Roller Coaster tycoon.
Speaking of coasters, here's the current world champion.
I am hoping this is satire...
Sir Ian McKellen on Colbert.

CLOSING SHOUT-OUTS
Catherine O'Hara: The Grande Dame of Off-Center Comedy
Standing with Sam 'Balloon Man' Martinez

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
(0:00) Besties intros: Brad Gerstner joins the show (3:16) Epstein Files (15:45) SaaS stocks crash out (35:11) Moltbook panic (47:37) Trump selects Kevin Warsh as new Fed Chair, replacing Jerome Powell (1:00:50) SpaceX and xAI merge (1:10:45) Brad's major win with Trump Accounts Follow Brad: https://x.com/altcap Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://www.nytimes.com/2026/02/05/business/epstein-investments-palantir-coinbase-thiel.html https://www.miamiherald.com/news/local/article214210674.html https://nypost.com/2026/01/31/us-news/linkedin-founder-reid-hoffmans-emails-with-jeffrey-epstein-revealed-in-doj-docs https://freebeacon.com/democrats/skype-sushi-and-a-phone-date-democratic-megadonor-reid-hoffman-maintained-jeffrey-epstein-relationship-years-after-he-said-it-ended https://nypost.com/2026/02/02/business/jeffrey-epstein-boasted-about-wild-dinner-with-mark-zuckerberg-reid-hoffman-in-unsealed-2015-email https://x.com/stockpickerspb/status/2009363916573290715 https://www.moltbook.com https://x.com/galnagli/status/2017573842051334286 https://x.com/balajis/status/1937517664907460980 https://www.reuters.com/world/india/gold-rises-over-1-geopolitical-economic-tensions-lift-precious-metals-2026-02-05 https://x.com/truflation/status/2019409671212396815 https://www.challengergray.com/blog/challenger-report-january-job-cuts-surge-lowest-january-hiring-on-record https://www.reuters.com/business/world-at-work/ups-amazon-boost-us-planned-layoffs-january-challenger-survey-shows-2026-02-05 https://www.cnbc.com/2026/02/03/musk-xai-spacex-biggest-merger-ever.html https://polymarket.com/event/spacex-ipo-closing-market-cap-above
This week, the A.I. initial-public-offering race is heating up! We break down SpaceX's acquisition of xAI, as well as OpenAI and Nvidia's messy situationship. Then, it's time for show and tell. We got our hands on the latest experimental A.I. prototype from Google called Project Genie, and we discuss our experience using it to generate and navigate video-game-like environments. Finally, we're joined by Moltbook's founder, Matt Schlicht, to discuss his new social media platform for A.I. agents, and how he's planning to deal with security risks and spam on the site.

Guest: Matt Schlicht, creator of Moltbook

Additional Reading:
Elon Musk Merges SpaceX With His A.I. Start-Up xAI
The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice
Project Genie: Experimenting with infinite, interactive worlds
An A.I. Pioneer Warns the Tech ‘Herd' Is Marching Into a Dead End
Moltbook Mania Explained

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week, SpaceX and xAI, two companies controlled by Elon Musk, merged into a $1.25 trillion company. The deal combines a successful rocket and satellite business with an AI startup. Musk says the goal is to put AI data centers in Earth's orbit. WSJ's Berber Jin reports on the deal. Jessica Mendoza hosts.

Further Listening:
The Woman Behind SpaceX
Her Client Was Deepfaked. She Says xAI Is to Blame.
Why Elon Musk's AI Chatbot Went Rogue

Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
A new tranche of Jeffrey Epstein's emails makes one thing painfully clear: Epstein was a central figure in the lives of a lot of big names in tech, and had influence on a surprising number of companies and executives. David and Nilay talk through what we've learned from the new emails so far. Then they turn to Anthropic's spicy new Super Bowl ads about... ads, which caused a big reaction from OpenAI (which is betting big on ads). They also discuss this week's antitrust hearing about Netflix's purchase of Warner Bros., the latest in Brendan Carr is a Dummy, Google Home's big buttons upgrade, and much more.

Further reading:
Here's how Epstein broke the internet
Former Windows 8 boss recruited Epstein to help negotiate his messy Microsoft exit
Jeffrey Epstein arranged a meeting with Tim Cook for the former head of Windows
The Epstein files
Google co-founder Sergey Brin visited Epstein's private island and traded emails with Ghislaine Maxwell.
It turns out Elon Musk didn't exactly ‘refuse' the invite to Jeffrey Epstein's island.
Will Elon Musk's emails with Jeffrey Epstein derail his very important year?
Bill Gates says accusations contained in Epstein files are ‘absolutely absurd'
Jeffrey Epstein was permanently banned from Xbox Live
‘We've basically funded an elite global pedophile ring since 2015.'
Anthropic says ‘Claude will remain ad-free,' unlike an unnamed rival
Anthropic's blog post: Claude is a space to think
Sam Altman responds to Anthropic's ‘funny' Super Bowl ads
OpenAI's CMO on X
Nvidia CEO denies he's ‘unhappy' with OpenAI
Netflix lands in the middle of a culture war during Senate hearing
Everyone is stealing TV
Disney says Josh D'Amaro will replace Bob Iger as CEO
FCC aims to ensure “only living and lawful Americans” get Lifeline benefits
Elon Musk is merging SpaceX and xAI to build data centers in space — or so he says
Peloton's gamble on expensive new hardware has yet to pay off
Google Home finally adds support for buttons
Raspberry Pi is raising prices again as memory shortages continue
Valve's Steam Machine has been delayed, and the RAM crisis will impact pricing
Aluminium: Why Google's Android for PC launch may be messy and controversial

Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week Elon Musk announced the merger of two of his companies: SpaceX and xAI, which makes chatbots. Is the new firm viable? As migrant workers return home for Lunar New Year, the Chinese Communist Party tells them not to stay for too long. And our culture editor's hot take on “Heated Rivalry”.

Listen to what matters most, from global politics and business to science and technology—Subscribe to Economist Podcasts+

For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account. Hosted on Acast. See acast.com/privacy for more information.
On this week's “Marketplace Tech Bytes: Week in Review,” we take a look at Nvidia's changing investment relationship with OpenAI. Plus, a stormy start for the new U.S. version of TikTok. But first, SpaceX, one of the world's largest rocket companies, announced this week that it's buying xAI, a two-and-a-half-year-old artificial intelligence startup. Both companies are controlled by Elon Musk. The new company is reportedly valued at $1.25 trillion. It means the chatbot Grok, the satellite internet company Starlink, and the social media firm X are all going to co-exist under the same rocket hangar. Marketplace's Stephanie Hughes spoke with Paresh Dave, senior writer at Wired, about what adding these companies together equals.