Join the HG101 gang as they discuss and rank an Australian mascot platformer about being a tiger with two mouths. Then stick around for Crimsonland, a bloody 2003 game that paved the way for Vampire Survivors! This weekend's Patreon Bonus Get episode will be MILANO'S ODD JOB COLLECTION — a 2D life-sim/minigame collection from the makers of Wonder Boy! Donate at Patreon to get this bonus content and much, much more! Follow the show on Bluesky to get the latest and straightest dope. Check out what games we've already ranked on the Big Damn List, then nominate a game of your own via five-star review on Apple Podcasts! Take a screenshot and show it to us on our Discord server! Intro music by NORM. 2026 © Hardcore Gaming 101, all rights reserved. No portion of this or any other Hardcore Gaming 101 ("HG101") content/data shall be included, referenced, or otherwise used in any model, resource, or collection of data.
Generative AI is moving beyond the lab and into the production environment. But for many enterprises, "pilot fatigue" is becoming a major bottleneck to real ROI. In this episode of AiR (AI in Retail), Top AI Leader Barry McGeough (Group VP, AmeriCo) and Top Retail Expert Michael Zakkour deliver a masterclass on Applied Innovation. Through a real-world case study, they reveal how brands are using generative design to collapse the distance between a creative spark and a finished product.
Key Takeaways:
- The Text-to-Design Workflow: How designers are using text prompts to bypass traditional CAD bottlenecks, moving from 2D patterns to 3D assets and video in record time.
- Crossing the Uncanny Valley: Why high-fidelity realism is the prerequisite for consumer trust and digital transactions.
- Applied Innovation vs. "Big I" Innovation: The framework for ensuring AI projects solve core business problems rather than staying stuck in the lab.
- The AI Super-Cycle: Why AI is a foundational layer for the next decade of retail, not a temporary financial bubble.
(00:00:00) Introduction
(00:09:12) Development History
(00:21:01) Visuals and Art Style
(00:37:21) Mechanics
(01:03:12) Music and Sound
(01:21:50) Story, TV Ads, and Stuck on the PS3
(01:36:15) Wrapping Up
Please consider supporting the show on Patreon! You can also click here to join the free Discord server or connect with the show on Bluesky and Instagram!
Hey, all! Pixel Project Radio here! To break up Fighting Game February, Rick and Bill (Gaming and Collecting; 3DO Experience; Geek Addicts) jump into the 3D Kingdom of Dotnia in 3D Dot Game Heroes! This game—with credits from FromSoft and Atlus!—was released to general favorability in 2009 but never made a notable impression. Perhaps this is why it's marooned on the PS3, and the PS3 only, to this day! This is a very special game, and in this episode Rick and Bill break down why it deserves far more love than it gets: the infectious music, the self-referential and -parodic writing, the gorgeous marriage of 2D and 3D, and more. 3D Dot Game Heroes does so much right. It deserves more recognition—and we're gonna tell you about it! Hope you love the show today. Enjoy!
Thank you for listening! Want to reach out to PPR? Send your questions, comments, and recommendations to pixelprojectradio@gmail.com! And as ever, any ratings and/or reviews left on your platform of choice are greatly appreciated!
Enjoy a lovely conversation with Kennedy Freeman, an animation director, storyboard artist, character designer, and founder of the 2D animation studio Luv Letter Studios, as we discuss the art journey that got her opportunities to work in anime, her out-of-this-world indie animated series Towards Galaxy's End, magical girls, and so much more!
Kennedy's Links:
Towards Galaxy's End Kickstarter: https://www.kickstarter.com/projects/towardsgalaxysend/towards-galaxys-end-indie-animation-series?ref=profile_created&category_id=29
TGE YouTube: https://www.youtube.com/@Towardsgalaxysendofficial
TGE Instagram: https://www.instagram.com/towardsgalaxysend/
TGE Bluesky: https://bsky.app/profile/towardsgalaxysend.bsky.social
TGE Twitter: https://x.com/TowardsGalaxys
Personal YouTube: https://www.youtube.com/@kenzyxart
TikTok: https://www.tiktok.com/@kenzyxart
Bluesky: https://bsky.app/profile/kenzyxart.bsky.social
Instagram: https://www.instagram.com/kenzyxart/
Twitter: https://x.com/kenzyxart
Tumblr: https://www.tumblr.com/kenzyxart
Thumbnail Done By: Kennedy Freeman
Help fund my new laptop: https://throne.com/postmodartpod/item/c93d8ef0-f6d1-4f0a-912a-ea970fb3731e
Check out the MERCH SHOP: https://post-modern-art-podcast-shop.fourthwall.com/
Join the PostModArtPod Discord server: https://discord.gg/bdg4UFbmm9
Join the PMAP Patreon: https://www.patreon.com/pmap
Intro Animated by: https://bsky.app/profile/fasado.bsky.social
Intro Song - "Seductive Treasure" - Color of Illusion
Outro Song - "Parts In Motion" - Vera Much
Stream her EP "Thank U!": https://open.spotify.com/album/3AO61mm8a81osp9FsPpFgv?si=sZ2Pq_aSTbWLzHLwff2Rig
Linktree (To find other platforms, socials, etc.): https://linktr.ee/PostModernArtPodcast
For business inquiries, contact postmodernartpodcast@gmail.com
Showrunners of the podcast are Nathan Ragland and TipsyJHearts
Tipsy's Links:
Twitter: https://twitter.com/TipsyJHearts
Bluesky: https://bsky.app/profile/tipsyjhearts.bsky.social
Instagram: https://www.instagram.com/tipsyjhearts/
Patreon: https://www.patreon.com/tipsyjhearts
Ko-fi: https://ko-fi.com/tipsyjhearts
Portfolio: https://tipsyjhearts.wixsite.com/portfolio
Produced with A1denArtz
Aiden's Links:
Carrd: https://a1denartz.carrd.co/
Tumblr: https://a1denartz.tumblr.com/
Bluesky: https://bsky.app/profile/a1denartz.bsky.social
Inkblot: https://inkblot.art/profile/a1denartz
Instagram: https://www.instagram.com/a1denartz/
Go out there and create something special!
This week we're chatting all things Oscars, Arco, and The Independent Picture House with Claire Lechtenberg, Director of Development & Marketing for IPH! To celebrate, we are stepping into the prismatic, hand-drawn wonder of the 2026 Oscar-nominated animated masterpiece: Arco.
Directed by Ugo Bienvenu and produced by Natalie Portman, this film is a breathtaking 2D odyssey. We'll follow the story of Arco, a 10-year-old boy from a utopian future who crash-lands in the year 2075. We'll discuss the film's heartfelt friendship between Arco and Iris, and how it manages to be a hopeful, kaleidoscopic answer to an uncertain future.
To match the film's stunning visual palette and Arco's time-traveling rainbow trail, we're shaking up a drink that literally transforms in your hand: Clouds Away! A cocktail as beautiful and complex as the film it honors.
Visit The Independent Picture House to see all the amazing movies and programs they have going on!
Merch Shop
Patreon
Instagram
Bluesky
Facebook
https://www.drinkthemovies.com
YouTube
Discord
*Please Drink Responsibly*
Dextall is attacking a structural inefficiency in construction: the 3-year design coordination cycle that precedes every mid-rise building, combined with the chaotic on-site execution that follows. Founded by Aurimas Sabulis after years running a commercial window company and witnessing construction site dysfunction firsthand, Dextall is building what Aurimas calls a "prefab operating system"—software that connects architectural design directly to factory production of building exteriors. In a market where less than 1% of U.S. mid-rise projects use prefab (versus 75% in Scandinavia), Dextall is bridging the 3-4 year gap between design inception and approved drawings while manufacturing building components that arrive on-site as "Lego blocks." In this episode, Aurimas shares the hard lessons learned from building in construction's unforgiving risk environment.
Topics Discussed:
Targeting the 6-40 story sweet spot: steel, concrete, and mass timber construction where prefab delivers maximum value (below 6 stories is wood frame; above 40 enters a different glass-box typology)
The reality of U.S. prefab penetration: 99% of projects in Dextall's pipeline would go the traditional route without them
Why the physical product stayed constant from day one while the software took multiple failed iterations
The expensive lesson: building software that goes from design to fabrication in one day, only to learn architects rejected it because it removed their design control
Evolving from 2D drawings to 3D renderings to animations to physical two-story mock-ups—and why customers only "got it" after seeing real completed buildings
Launching a separate SaaS division for architects that independently generates value while creating 90% backend efficiency when connected to Dextall's manufacturing
The three-to-five-year vision: prompt-engineered buildings with real-time cost, carbon footprint, and feasibility feedback
GTM Lessons For B2B Founders:
Domain credibility is your entry ticket in risk-averse industries: Aurimas's first customers came because he had "street credibility"—a track record of delivering complex, large-scale window projects. In construction, healthcare, and other industries where failure has severe consequences, founders without domain experience face insurmountable trust barriers. If you're building in these markets without industry background, your co-founder or first hires must bring that credibility, or you'll burn years trying to earn it.
Proof velocity matters more than proof perfection: Dextall moved from 9-story buildings to 40-story projects by stacking proof points, not by waiting to debut with a showcase project. Each successful delivery de-risked the next larger bet. Founders should optimize for proof velocity—getting the smallest viable validation that enables the next larger commitment—rather than trying to land the trophy customer that "proves everything."
Physical businesses require physical proof—budget accordingly: Dextall built multiple two-story physical mock-ups and actual buildings before customers truly understood their value proposition, despite having sophisticated 3D animations. Aurimas noted customers kept claiming they understood, then asking the same questions until they could physically see and touch completed work. If you're building in construction, manufacturing, or industrial sectors, your CAC will include physical demonstration costs that software founders never face. Budget 3-5x what you think you'll need for mock-ups and proofs of concept.
Workflow disruption fails when you remove user agency: Dextall's software could compress 3-4 years of design coordination into one day—a 1000x improvement. Architects rejected it because it was "too heavy" and removed their control over design. The team had to rebuild to let architects control design while Dextall's system handled the backend connection to manufacturing. When your "better way" requires users to surrender control or change how they think about their craft, you're not selling efficiency—you're selling identity change, which rarely works. Find the integration layer that adds value without displacing existing agency.
In mature industries, selectively challenge the status quo: Aurimas explicitly asks "is this fight worth fighting?" when Dextall encounters resistance to their approach. They focus on 3-4 nuances at a time rather than attempting to fix all 100 industry problems. When pushback happens, they evaluate whether to press the issue or "build deeper trench within the customer base" first, then return to that battle later. Founders tackling established industries should map their battles, not just their product roadmap—identify which conventions are essential to challenge for your value prop, and which can wait until you have more market power.
Bridge disconnected systems rather than optimizing endpoints: The construction industry has sophisticated design tools (AI-powered generative design) and manufacturers (though often Excel-based). Dextall's differentiation is connecting these two worlds—architects can design freely, and their designs automatically translate to manufacturing specifications with real-time costing and feasibility. Many mature industries have this pattern: advanced front-end tools, capable back-end production, but manual/broken handoffs between them. The integration layer often provides more defensible value than improving either endpoint.
Layer software distribution onto enterprise sales once you have proof: Dextall spent years doing "old school" enterprise sales—cold calling developers, lunch-and-learns with architects, bringing customers to job sites. Only after building credibility and understanding architect workflows are they launching SaaS for architectural firms. The software creates independent value for architects while generating 90% backend efficiency for Dextall when connected. Founders in hybrid businesses should resist the temptation to lead with software distribution before proving the full value chain works—but actively build toward that transition.
// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Recorded live from the show floor, Sean is joined by Liz Cuneo of Healthcare Packaging and Matt Reynolds of Packaging World to break down what stood out at PACK EXPO East 2026. They cover cold-chain growth, measurable sustainability, paperization and monomaterials, plus the shift to 2D codes and AI-driven insights. If you couldn't attend, this is your quick recap of what mattered most.
Register for PACK EXPO International today!
For any lawyer to be a successful advocate for their client or law firm, they must become an excellent dealmaker. However, the secrets to the art of deal closing can seem incredibly elusive to even the most initiated. What are the fundamental tenets of being a good dealmaker, and how does one focus on honing these skills?
In this episode of The Legal Toolkit, host Jared Correia sits down with Cohen Gardner LLP Co-Founder Jeff B. Cohen, a former child actor best remembered for his role as Chunk in The Goonies, to discuss dealmaking in the context of the law. The conversation opens with Jeff providing insights into his experiences on camera as a child actor and how this unique upbringing influences his perception of entertainment dealmaking. Within these recollections he also discusses how Machiavelli's “The Prince” aided him after his acting career ended and how these teachings inspired his book “The Dealmaker's Ten Commandments: Ten Essential Tools for Business Forged in the Trenches of Hollywood.” Jeff provides a glimpse into his methodologies and why he thinks it's so important for lawyers to effectively manage their time. He then provides a few of his personal commandments and best practices that any legal professional can use to become a more effective and successful dealmaker.
Jeff B. Cohen co-founded Cohen Gardner LLP in 2002 and focuses on transactional representation for clients in the entertainment, media, and technology verticals. His first book, “The Dealmaker's Ten Commandments: Ten Essential Tools for Business Forged in the Trenches of Hollywood,” was published by the American Bar Association's imprint Ankerwycke in 2015. Jeff received his Juris Doctor from UCLA Law School with an emphasis in business law and his undergraduate degree from the University of California at Berkeley, Haas School of Business. While at UC Berkeley, Jeff served as President of the Associated Students of the University of California.
Oh, man! I bet you didn't know how much you were missing Jared's unique take on culture, legal practice, and whatever else pops into his head. But don't fret, there's plenty to go around. Jared's back with a new **WEEKLY** show, Legal Late Night, available not only on your favorite podcast app, but in living color on your neighborhood YouTubes. That's right, Jared's more than just a pretty voice. Join him and his guests in high-def 2D through the links below.
Subscribe to Legal Late Night with Jared Correia on:
Apple - https://podcasts.apple.com/podcast/legal-late-night/id1809201251
Spotify - https://open.spotify.com/show/0Rkik0LLMaU6u0e7AKfK9h
Or your favorite podcasting app. And bask in the majesty of our YouTube here: https://www.youtube.com/channel/UCZO71dMbPZJWAKWw_-qrRRQ
Do you like Reload? Support us on Patreon (https://www.patreon.com/anaitreload) to access exclusive content, get episodes two days early, and help us keep going.
Support Boss Rush on Patreon here.
This February's State of Play presentation from PlayStation may have been one of its strongest showings yet. This week on the Boss Rush Podcast, Corey Dirrig, LeRon Dawkins, and Patrick Klein break down Sony's latest presentation and react to a packed slate of exciting announcements from both first- and third-party studios.
Konami made a major statement with the reveal of Metal Gear Solid Master Collection Vol. 2, a brand-new 2D Castlevania title from Motion Twin and Evil Empire, Silent Hill: Townfall from Bloober Team, and a publishing deal for The Darwin Project.
Capcom also stood out, sharing final looks at Resident Evil Requiem, Monster Hunter Stories 3, and Pragmata ahead of their respective launches.
Sony's first-party lineup was equally impressive, featuring Suros from Housemarque, a final look at Marathon from Bungie, a Ghost of Yotei Legends update from Sucker Punch Productions, the multiplayer-focused 4Loop, and perhaps the biggest surprise of all—a major showing from the God of War franchise.
That included a shadow drop of Sons of Sparta, a new 2D search-action title starring a young Kratos and Deimos, alongside a full ground-up remake of the original trilogy from Santa Monica Studio.
All this and more on a packed episode of The Boss Rush Podcast.
Email questions to the Boss Rush Podcast.
Join the Boss Rush Network Community Discord here.
Follow the Boss Rush Network on X/Twitter, Bluesky, Facebook, LinkedIn, Threads, and Instagram.
Thanks for your continued support of the Boss Rush Podcast and the Boss Rush Network! If you listen on podcast services, leave us a 5-star rating and a nice review or comment. If you're listening to this episode on YouTube, subscribe to the channel, like the video, leave a comment, and hit the bell so you don't miss an episode posting. Visit our website for more great content from Boss Rush and our community.
Janet and Isaiah talk about their time playing Love Eternal, a gravity-flipping, difficult platformer, and God of War: Sons of Sparta, a surprise 2D metroidvania entry set in the world of God of War (GAMES). Then they talk about going to a Resident Evil x Porsche launch event and their time riding in a very, very expensive and fast car (NOT GAMES).
Poll of the Show: https://gamesandnotgames.com/poll
Timestamps:
00:00 Intro
00:18 Catching Up
5:03 Games / Love Eternal
26:24 God of War Sons of Sparta
01:04:09 Not Games / Resident Evil Requiem x Porsche
01:30:12 Poll of the Show
01:52:23 Outro
Send your questions and topics to the email address Inbox@GamesAndNotGames.com
For more Games And Not Games:
YouTube.com/@GamesAndNotGames
Twitch.tv/GamesAndNotGames
BlueSky: @GamesAndNotGames.com
TikTok, Instagram, Threads: @GamesAndNotGames
Backloggd: GamesAndNotGames
Follow the hosts:
Janet: @gameonysus.bsky.social
Isaiah: @isaiahsmith.dev
Catharine Pitt, co-founder of Brighton-based animation duo Form Play, joins the podcast to talk about what happens when you burn out, start over, and finally build something worth protecting.
Catharine and her partner Mark spent years running a full-service design studio doing ad campaigns and seasonal retail work — ticking every box and feeling none of it. In their mid-forties, they walked away. What followed was two years of gradual reinvention: evenings spent relearning, slowly phasing out old clients, and rediscovering the joy of drawing. They emerged with a hyper-focused studio specialising in 2D frame animation, character design, and short-form storytelling — working with brands like Google, Patreon, and Comedy Central, while building their reputation with growth-stage startups who are still finding their voice.
The conversation covers their creative manifesto, how COVID gave them the space to develop their micro-story framework, and why they use AI only as a "stress-testing knowledge base" — never for the creative work itself. Most compellingly, Catharine explains how they license rather than sell their characters, borrowing principles from the music and illustration industries to build longer-term client relationships and a more sustainable creative business.
Key Takeaways
The mid-forties crossroads is more common than you think – Catharine and Radim discover a shared experience: reaching the peak of what they'd worked for, and realising it wasn't who they wanted to be next
Burning out is data – A previous studio that depleted rather than fuelled them became the compass for everything Form Play stands for: client work must energise, not exhaust
Incremental change beats big leaps – Their transition took two years, running old and new in parallel, until the new was strong enough to stand alone
Play is the methodology, not just the name – Form Play's approach to creation — sketch, iterate, test, publish, move on — is how they stay resilient, stay fresh, and avoid creative paralysis
Micro stories have a formula – Start in the middle of the action; use humor, empathy, and surprise; condense time to exaggerate emotion. Their Instagram playground became their client framework
AI as untrusted advisor – They use AI to challenge assumptions and explore unfamiliar territory in business, but keep it entirely out of their visual creative process
Licensing changes everything – Influenced by the music and illustration industries, they separate creation fees from usage fees, giving clients flexibility and protecting the studio's long-term income
The risk of not changing – Rory Sutherland's overlooked point resonates here: staying the same carries its own risk; creative people need to stop treating change as the dangerous option
Distinction will be the premium – As AI floods the world with average output, work with imperfection, humanity, and emotional depth will become more valuable, not less
Daring Creativity. Podcast with Radim Malinic
daringcreativity.com | desk@daringcreativity.com
Books by Radim Malinic
Paperback and Kindle > https://amzn.to/4biTwFc
Free audiobook (with Audible trial) > https://geni.us/free-audiobook
Book bundles https://novemberuniverse.co.uk
Lux Coffee Co. https://luxcoffee.co.uk/ (Use: PODCAST for 15% off)
November Universe https://novemberuniverse.co.uk (Use: PODCAST for 10% off)
CEREAL TALK's newsletter is here: https://cerealtalk.jp/
Miki Omori has lived abroad for 25 years. From fashion-school instructor to Balenciaga, Lanvin, Nina Ricci, and senior design director at Coach, she has held designer roles at long-established American and French houses under role-based employment. She now combines being associate director of the Parsons Paris master's program with design consulting for Lanvin.
Instagram - https://www.instagram.com/mikiomori_/
X - https://x.com/_omori_miki/
note - https://note.com/mikiomori
(0:00) Omori's remarkable career
(9:26) Designing womenswear vs. menswear
(11:26) Starting a design from 2D vs. 3D
(16:38) How much creative freedom designers get, and what shifts it
(21:55) Nicolas Ghesquière's impossible requests and unconventional thinking
(24:06) Alber Elbaz's ability to see fashion holistically
(26:49) Building a worldview is the ready-to-wear designer's mission
(29:32) Creating a seasonal concept
(34:03) Who evaluates creative directors, and who delivers results
(36:04) Omori's goal of defining an era
(39:41) U.S. vs. Europe differences
(42:01) Lessons from creative directors
<Feedback form> https://forms.gle/Tgzv4y8PCTs86EcY6
<Members>
Yujiro Numata https://twitter.com/Numauer
Asami Saisho https://twitter.com/qzqrnl
Tetsuro Miyatake https://twitter.com/tmiyatake1
Miki Kusano https://twitter.com/mikikusano
In this week's episode of GameBurst, we cast a critical eye over a turbulent period for the gaming industry. The news is dominated by Sony's strategic pivot, as the company confirms a remastered trilogy of the original God of War titles alongside the surprise release of the 2D prequel, Sons of Sparta. However, this nostalgic revival is offset by the somber news of the closure of Bluepoint Games, the celebrated studio behind the Demon's Souls remake, marking another casualty in a string of recent internal restructurings at Sony. The industry also pauses to mourn Hideki Sato, the visionary engineer who designed every Sega console from the SG-1000 to the Dreamcast; his passing marks the end of an era for hardware innovation. Furthermore, we examine the looming threat of global component shortages that Valve warns could impact hardware availability and pricing throughout 2026. Our "Pick of the Week" segment features a diverse range of titles, from the high-octane racing of Grid Legends to the indie charm of Cast n Chill and The Rogue Prince of Persia. Finally, we dive into the mailbag to address listener questions about podcast analytics and the motivations behind independent gaming broadcasts.
YouTube Recommendations:
The Tragedy Of Becoming Better by The Masked Man: https://youtu.be/FNAyjfDUX2o?si=Rb46TxDd3wxEXPrJ
Mass Effect - Shaped By Stories: https://youtu.be/ghAXbX-F5K0?si=WxFVibwCOMD8DKS1
In this episode of our weekly gaming podcast, Game Informer's Kyle Hilliard reviews God of War: Sons of Sparta, a new 2D metroidvania that Mega Cat Studios and Sony Santa Monica released during last week's PlayStation State of Play. Unfortunately, it's an underwhelming title. Kyle then explains why he spent $99 USD on Nintendo's Virtual Boy rerelease and what it's like experiencing the headset's strange catalog of games on the Switch 2.
Later in the show, Eric Van Allen dives into Paranormasight: The Mermaid's Curse, a mystery adventure about a cursed Japanese seaside town that has impressed him. Finally, Alex Van Aken shares his thoughts on the rock-climbing simulator Cairn and why it's an excellent example of how games can uniquely tell stories.
The Game Informer Show is a weekly podcast covering the video game industry. Join us every Friday for chats about video game reviews, news, and exclusive reveals alongside Game Informer staff and special guests from around the industry.
Buy Game Informer Magazine: https://gameinformer.com/subscribe
Follow our hosts on social media:
Alex Van Aken (@itsVanAken)
Kyle Hilliard (@kylehilliard)
Eric Van Allen (@seamoosi)
Jump ahead using these timestamps:
00:00 - Intro
03:31 - God of War: Sons of Sparta Review
17:44 - Nintendo Virtual Boy on Switch 2
30:22 - Paranormasight: The Mermaid's Curse
44:59 - Cairn is Excellent
Linktree: https://linktr.ee/Analytic
Join The Normandy for additional bonus audio and visual content for all things NME+! Join here: https://ow.ly/msoH50WCu0K
The latest segment of Notorious Mass Effect dives deep into God of War: Sons of Sparta, the surprise shadow-dropped PS5 exclusive from Santa Monica Studio and Mega Cat Studios. Released February 12, 2026, for $29.99, this 2D action-platformer—marketed as a Metroidvania—explores young Kratos during his Spartan Agoge training alongside brother Deimos, framed by adult Kratos (voiced by T.C. Carson) narrating to Calliope.
Analytic Dreamz breaks down the core premise, gameplay execution, and franchise implications. The ~12-hour experience features light/heavy attacks, parry/dodge mechanics, color-coded enemy attacks, and "Gifts from Olympus" abilities like double jump and slingshot. Exploration includes interconnected maps, collectibles (owls, lore, olive trees), optional bosses, and upgrades via blood orbs.
Critically, it holds a 69 Metacritic / 70 OpenCritic score—the lowest in the 20+ year God of War series—praised for brotherhood themes and retro style but criticized for stiff combat lacking impact, shallow Metroidvania elements, limited backtracking encouragement, repetitive story loops, and basic progression. Visuals sit between retro and modern but lack cinematic scale.
The co-op controversy—initial listings implied full campaign support, but it's limited to a post-game challenge mode—sparked confusion and refund requests. User scores sit higher at 8.2, with some quick Platinum achievements.
Analytic Dreamz examines whether this low-risk spin-off serves as franchise maintenance amid Greek trilogy remake news, or falls short of mainline prestige. Tune in as Analytic Dreamz delivers a concise, no-holds-barred breakdown of this polarizing entry—serviceable but forgettable for many, yet a nostalgic Greek-era return for fans.
Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/exclusive-content
Privacy & Opt-Out: https://redcircle.com/privacy
Discover how to truly understand an anxiety disorder and take action with our free course, The Anxiety Map: https://escuelaansiedad.com/Cursos/el-mapa-de-la-ansiedad
Try a quick test: look at your right hand with the palm facing you... and compare the length of your index finger with that of your ring finger. Which one wins? What looks like a meaningless curiosity could be, according to one line of research, a surprising clue about our prenatal biology. Today we dive into an idea that is fascinating and controversial at once: the 2D:4D ratio.
The theory draws on the work of psychologist Richard Wiseman, who compiles and comments on research by professor John Manning into the relationship between the index/ring proportion and hormone exposure before birth. But we are not here to "believe" without thinking: we are here to pick it apart. What evidence is there? Which results are striking? Where are the weak points? Because yes: it is a serious topic... but also a contested one.
And best of all, it begins with an almost novelistic scene in the 18th century. The protagonist is Giacomo Casanova, who in an argument with the painter Anton Raphael Mengs claimed that an index finger longer than the ring finger on a male figure was an anatomical error. Mengs showed him his hand (longer index). Casanova showed his (longer ring finger). They made a hefty bet... and when they examined real hands, Casanova won: in many men, the ring finger tends to be somewhat longer. The anecdote ended up inspiring a modern hypothesis.
What is the 2D:4D ratio? Very simple: you divide the length of the index finger (2D) by that of the ring finger (4D).
✅ If the ring finger is longer → ratio below 1
✅ If they are equal → ratio ≈ 1
✅ If the index finger is longer → ratio above 1
The core of the theory: this proportion would partly reflect the prenatal "hormonal cocktail," especially the balance between testosterone and estrogens. In other words, your hands may preserve a "fossil record" of your development.
⚠️ But keep three important caveats in mind:
Correlation is not causation (other factors may be involved).
Some studies have small samples and disputed replications.
There is a risk of biological determinism: one measurement does not define your destiny.
♂️ Even so, the findings are intriguing: associations have been reported with athletic performance and grip strength, with lower ratios among elite athletes. Curious results also appear in music, with orchestra section leaders showing lower ratios. And in the flashiest territory, Hollywood: measurements of handprints in cement suggest different profiles for "tough guy" actors versus comedians. The closing idea is powerful: perhaps there is no "best" profile, only different predispositions that may favor different talents. What matters is this: biology can tilt the odds... it does not write the script. And you, after looking at your hand, what do you think? Are we a finished script... or a first draft?
How to measure your ratio (quick):
Measure in millimeters from the lowest crease of the finger (where it meets the palm) to the fingertip, excluding the nail.
➗ Divide: index / ring.
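To make the arithmetic concrete, here is a minimal sketch in Python; the function name and the sample measurements are ours, purely illustrative, not something from the episode:

```python
def ratio_2d4d(index_mm: float, ring_mm: float) -> float:
    """2D:4D ratio: index finger length divided by ring finger length."""
    if index_mm <= 0 or ring_mm <= 0:
        raise ValueError("finger lengths must be positive millimeter measurements")
    return index_mm / ring_mm

# Rule of thumb from the episode:
#   ratio < 1  -> ring finger longer (the pattern Casanova bet on)
#   ratio ~ 1  -> fingers roughly equal
#   ratio > 1  -> index finger longer
print(ratio_2d4d(72.0, 75.0))  # hypothetical measurements -> 0.96
```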
Arriving as a shadow drop after one of the most hyped State of Play presentations of all time, God of War: Sons of Sparta promised to tell the origins of Kratos and his brother Deimos in a 2D Metroidvania-style game. The surprise launch, however, revealed a barely polished game that falls short of what fans of the saga demand. We also cover the closure of Bluepoint Games, one of the most beloved remaster studios. And what was PlayStation Studios doing with them?
Breaking news: we discuss Sarah Bond's departure and Phil Spencer's retirement. A new CEO has taken the helm at Xbox. What comes next?
Check it all out on today's Flow Games News!
"Instagram is dentistry and Facebook is where friends and family are... Instagram is like meth, I can't get off it." — Alan Mead Alan sits down with a powerhouse trio—Dr. Russell Schafer, Dr. Mitch Hopkins, and Dr. Matt Standridge—live from Voices of Dentistry 2026. They bridge the gap between "Instagram Dentistry" and the daily grind of running a productive practice, covering everything from why 2D photography still beats 3D face scanning to the hard truths about dental school debt and clinical mastery. The Guest List: Dr. Russell Schafer: The man who helped name the "Group Function" and a recurring voice on the network. Dr. Mitch Hopkins: A first-time guest and longtime listener, sharing his journey into implant training and full-mouth rehabs. Dr. Matt Standridge: The "Tom Hanks of the Very Dental Podcast," returning to discuss the discipline of dental photography. Key Conversations: The Gateway to Comprehensive Care: Photography The group discusses Matt's presentation on photography. While high-end DSLRs create "dental porn" for Instagram, the real value lies in using photography as a diagnostic tool for every new patient. The "Dummy" Solution: Discussion on the ShoFu camera as a simpler alternative to complex DSLR setups. The IG Satire: A hilarious pitch for "Garbage Mouth Instagram"—high-end photography of the most neglected cases to stick it to the "AACD" types. 3D Scanning vs. The "Uncanny Valley" Despite the push for $10,000 3D face scanners, the consensus remains: 2D photos are more useful for lab design. * 3D renderings often fall into the "uncanny valley"—looking almost human but just "off" enough to be distracting. Photos are essential for designing smiles that actually fit the patient's face, not just their arches. Advice for the "Mid-Career" and New Dentists The conversation turns to the crushing debt and clinical gaps facing new grads. The All-on-4 Trap: Why jumping straight into full-arch implants because of debt can be a recipe for malpractice. The Power of Bread and Butter: A defense of molar endo, quadrant fillings, and the "Lean and Mean" model of dentistry. The PVS Skillset: Why learning analog impressions (and packing cord!) is still a vital skill in a digital world. Leveling Up Your Clinical Skills The episode wraps with a "lightning round" of advice for improving your work tomorrow: Design Your Own Crowns: Use software to marginate your own preps. It's the fastest way to realize where your technique is failing. Magnification & Lighting: Whether it's jumping from 2.5x to 4.5x loops or investing in high-quality surgical headlamps, if you can't see it, you can't treat it. Some links from the show: Clinical Mastery Series (Mitch teaches here) Russell's denture ce Matt's "Clear Aligner Boot Camp" Matt's "Self Ligating Band and Bracket" Course Matt at 3D Dentists Shofu Eyespecial camera Join the Very Dental Facebook Group using one of these passwords: Timmerman, Paul, Bioclear, Hornbrook, Gary, McWethy, Papa Randy, or Lipscomb! The Very Dental Podcast network is and will remain free to download. If you'd like to support the shows you love at Very Dental then show a little love to the people that support us! I'm a big fan of the Bioclear Method! I think you should give it a try and I've got a great offer to help you get on board! Use the exclusive Very Dental Podcast code VERYDENTAL8TON for 15% OFF your total Bioclear purchase, including Core Anterior and Posterior Four day courses, Black Triangle Certification, and all Bioclear products. 
Are you a practice owner who feels like the bottleneck in your own business? If you're tired of being the hardest-working person in your office, I've got something you need to hear. Dr. Paul Etchison is hosting a virtual event that is a total game-changer. Paul is honestly one of the most brilliant minds in dental leadership today, and he's hosting the 3-Day Freedom Practice Workshop from February 19th through the 21st. He's going to show you exactly how to break through that two-million-dollar revenue ceiling while actually compressing your clinical week. It's about building a leadership team that takes ownership so you can finally step into the CEO role you deserve. Head over to DentalPracticeHeroes.com/freedom to grab your spot. And do me a favor—mention the Very Dental podcast when you sign up. It's 100% guaranteed, so you've got nothing to lose but the stress.
Crazy Dental has everything you need from cotton rolls to equipment and everything in between, and the best prices you'll find anywhere! If you head over to verydentalpodcast.com/crazy and use coupon code "VERYSHIP" you'll get free shipping on your order! Go save yourself some money and support the show all at the same time!
The Wonderist Agency is basically a one-stop shop for marketing your practice and your brand. From logo redesign to a full-service marketing plan, the folks at Wonderist have you covered! Go check them out at verydentalpodcast.com/wonderist!
Enova Illumination makes the very best in loupes and headlights, including their new ergonomic angled prism loupes! They also distribute loupe-mounted cameras and even the amazing line of Zumax microscopes! If you want to help out the podcast while upping your magnification and headlight game, you need to head over to verydentalpodcast.com/enova to see their whole line of products!
CAD-Ray offers the best service on a wide variety of digital scanners, printers, mills and even their very own browser-based design software, Clinux! CAD-Ray has been a huge supporter of the Very Dental Podcast Network and I can tell you that you'll get no better service on everything digital dentistry than the folks from CAD-Ray. Go check them out at verydentalpodcast.com/CADRay!
Two-dimensional materials are so thin that they are no longer three-dimensional. They are typically substances one or a few atomic layers thick, in which the properties of matter can differ radically from those of conventional three-dimensional materials. Such a thin structure enables entirely new and often very unusual electrical, mechanical, optical, and chemical properties. Graphene, the best-known 2D material, has been talked about for decades and has been called a revolutionary material: 200 times stronger than steel for its weight, almost completely transparent, flexible, stretchable, and an excellent electrical conductor. In theory, a graphene sheet is so strong that you could place a cat on it and carry it around, even though the sheet is only one atomic layer thick. But is graphene ultimately just an unfulfilled promise, or is it already genuinely in use as a material? In recent years, attention has also turned to other two-dimensional, or 2D, materials such as germanene, silicene, and graphene's relative graphyne. What makes these strange materials interesting, and what applications are envisioned for them? The guests are Professor of Physics Peter Liljeroth of Aalto University and Professor of Chemistry Mika Pettersson of the University of Jyväskylä. The host is Mari Heikkilä.
11.00 The Government Lottery Office states that the 2D barcode and standard barcode on tickets are only for verifying counterfeit lottery tickets; they cannot be used to look up buyer information.
Tickets for AIEi Miami and AIE Europe are live, with first wave speakers announced!
From pioneering software-defined networking to backing many of the most aggressive AI model companies of this cycle, Martin Casado and Sarah Wang sit at the center of the capital, compute, and talent arms race reshaping the tech industry. As partners at a16z investing across infrastructure and growth, they've watched venture and growth blur, model labs turn dollars into capability at unprecedented speed, and startups raise nine-figure rounds before monetization.
Martin and Sarah join us to unpack the new financing playbook for AI: why today's rounds are really compute contracts in disguise, how the “raise → train → ship → raise bigger” flywheel works, and whether foundation model companies can outspend the entire app ecosystem built on top of them. They also share what's underhyped (boring enterprise software), what's overheated (talent wars and compensation spirals), and the two radically different futures they see for AI's market structure.
We discuss:
* Martin's “two futures” fork: infinite fragmentation and new software categories vs. a small oligopoly of general models that consume everything above them
* The capital flywheel: how model labs translate funding directly into capability gains, then into revenue growth measured in weeks, not years
* Why venture and growth have merged: $100M–$1B hybrid rounds, strategic investors, compute negotiations, and complex deal structures
* The AGI vs. product tension: allocating scarce GPUs between long-term research and near-term revenue flywheels
* Whether frontier labs can out-raise and outspend the entire app ecosystem built on top of their APIs
* Why today's talent wars ($10M+ comp packages, $B acqui-hires) are breaking early-stage founder math
* Cursor as a case study: building up from the app layer while training down into your own models
* Why “boring” enterprise software may be the most underinvested opportunity in the AI mania
* Hardware and robotics: why the ChatGPT moment hasn't yet arrived for robots and what would need to change
* World Labs and generative 3D: bringing the marginal cost of 3D scene creation down by orders of magnitude
* Why public AI discourse is often wildly disconnected from boardroom reality and how founders should navigate the noise
Show Notes:
* “Where Value Will Accrue in AI: Martin Casado & Sarah Wang” - a16z show
* “Jack Altman & Martin Casado on the Future of Venture Capital”
* World Labs
Martin Casado
• LinkedIn: https://www.linkedin.com/in/martincasado/
• X: https://x.com/martin_casado
Sarah Wang
• LinkedIn: https://www.linkedin.com/in/sarah-wang-59b96a7
• X: https://x.com/sarahdingwang
a16z
• https://a16z.com/
Timestamps
00:00:00 – Intro: Live from a16z
00:01:20 – The New AI Funding Model: Venture + Growth Collide
00:03:19 – Circular Funding, Demand & “No Dark GPUs”
00:05:24 – Infrastructure vs Apps: The Lines Blur
00:06:24 – The Capital Flywheel: Raise → Train → Ship → Raise Bigger
00:09:39 – Can Frontier Labs Outspend the Entire App Ecosystem?
00:11:24 – Character AI & The AGI vs Product Dilemma
00:14:39 – Talent Wars, $10M Engineers & Founder Anxiety
00:17:33 – What's Underinvested? The Case for “Boring” Software
00:19:29 – Robotics, Hardware & Why It's Hard to Win
00:22:42 – Custom ASICs & The $1B Training Run Economics
00:24:23 – American Dynamism, Geography & AI Power Centers
00:26:48 – How AI Is Changing the Investor Workflow (Claude Cowork)
00:29:12 – Two Futures of AI: Infinite Expansion or Oligopoly?
00:32:48 – If You Can Raise More Than Your Ecosystem, You Win
00:34:27 – Are All Tasks AGI-Complete? Coding as the Test Case
00:38:55 – Cursor & The Power of the App Layer
00:44:05 – World Labs, Spatial Intelligence & 3D Foundation Models
00:47:20 – Thinking Machines, Founder Drama & Media Narratives
00:52:30 – Where Long-Term Power Accrues in the AI Stack
Transcript
Latent.Space - Inside AI's $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z
[00:00:00] Welcome to Latent Space (Live from a16z) + Meet the Guests
[00:00:00] Alessio: Hey everyone. Welcome to the Latent Space podcast, live from a16z. Uh, this is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.
[00:00:08] swyx: Hey, hey, hey. Uh, and we're so glad to be on with you guys. Also a top AI podcast, uh, Martin Casado and Sarah Wang. Welcome, very
[00:00:16] Martin Casado: happy to be here and welcome.
[00:00:17] swyx: Yes, uh, we love this office. We love what you've done with the place. Uh, the new logo is everywhere now. It's, it's still getting, takes a while to get used to, but it reminds me of like sort of a callback to a more ambitious age, which I think is kind of
[00:00:31] Martin Casado: definitely makes a statement.
[00:00:33] swyx: Yeah.
[00:00:34] Martin Casado: Not quite sure what that statement is, but it makes a statement.
[00:00:37] swyx: Uh, Martin, I go back with you to Netlify.
[00:00:40] Martin Casado: Yep.
[00:00:40] swyx: Uh, and, uh, you know, you created software-defined networking and all, all that stuff people can read up on your background. Yep. Sarah, I'm newer to you. Uh, you, you sort of started working together on AI infrastructure stuff.
[00:00:51] Sarah Wang: That's right. Yeah. Seven, seven years ago now.
[00:00:53] Martin Casado: Best growth investor in the entire industry.
[00:00:55] swyx: Oh, say
[00:00:56] Martin Casado: more hands down there is, there is. [00:01:00] I mean, when it comes to AI companies, Sarah, I think has done the most kind of aggressive, um, investment thesis around AI models, right? So, worked with Noam Shazeer, Mira Murati, Fei-Fei Li, and so just these frontier, kind of like large AI models.
[00:01:15] I think, you know, Sarah's been the, the broadest investor. Is that fair?
[00:01:20] Venture vs. Growth in the Frontier Model Era
[00:01:20] Sarah Wang: No, I, well, I was gonna say, I think it's been a really interesting tag, tag team actually just 'cause the, a lot of these big seed deals, not only are they raising a lot of money, um, it's still a tech founder bet, which obviously is inherently early stage.
[00:01:33] But the resources,
[00:01:36] Martin Casado: so many, I
[00:01:36] Sarah Wang: was gonna say the resources one, they just grow really quickly. But then two, the resources that they need day one are kind of growth scale. So I, the hybrid tag team that we have is. Quite effective, I think,
[00:01:46] Martin Casado: what is growth these days?
You know, you don't wake up if it's less than a billion or like, it's, it's actually, it's actually very like, like no, it's a very interesting time in investing because like, you know, take like the Character round, right?
[00:01:59] These tend to [00:02:00] be like pre monetization, but the dollars are large enough that you need to have a larger fund and the analysis. You know, because you've got lots of users. 'cause this stuff has such high demand requires, you know, more of a number sophistication. And so most of these deals, whether it's us or other firms on these large model companies, are like this hybrid between venture growth.
[00:02:18] Sarah Wang: Yeah. Total. And I think, you know, stuff like BD for example, you wouldn't usually need BD when you were seed stage trying to get market biz Devrel. Biz Devrel, exactly. Okay. But like now, sorry, I'm,
[00:02:27] swyx: I'm not familiar. What, what, what does biz Devrel mean for a venture fund? Because I know what biz Devrel means for a company.
[00:02:31] Sarah Wang: Yeah.
[00:02:32] Compute Deals, Strategics, and the 'Circular Funding' Question
[00:02:32] Sarah Wang: You know, so a, a good example is, I mean, we talk about buying compute, but there's a huge negotiation involved there in terms of, okay, do you get equity for the compute? What, what sort of partner are you looking at? Is there a go-to market arm to that? Um, and these are just things on this scale, hundreds of millions, you know, maybe.
[00:02:50] Six months into the inception of a company, you just wouldn't have to negotiate these deals before.
[00:02:54] Martin Casado: Yeah. These large rounds are very complex now. Like in the past, if you did a series A [00:03:00] or a series B, like whatever, you're writing a 20 to a $60 million check and you call it a day. Now you normally have financial investors and strategic investors, and then the strategic portion always still goes with like these kind of large compute contracts, which can take months to do.
[00:03:13] And so it's, it's very different ties. I've been doing this for 10 years. It's the, I've never seen anything like this.
[00:03:19] swyx: Yeah. Do you have worries about the circular funding from these strategics?
[00:03:24] Martin Casado: I mean, listen, as long as the demand is there, like the demand is there. Like the problem with the internet is the demand wasn't there.
[00:03:29] swyx: Exactly. All right. This, this is like the, the whole pyramid scheme bubble thing, where like, as long as you mark to market on like the notional value of like, these deals, fine, but like once it starts to chip away, it really
[00:03:41] Martin Casado: Well no, like as, as, as, as long as there's demand. I mean, you know, this, this is like a lot of these sound bites have already become kind of cliches, but they're worth saying it.
[00:03:47] Right? Like during the internet days, like we were. Um, raising money to put fiber in the ground that wasn't used. And that's a problem, right? Because now you actually have a supply overhang.
[00:03:58] swyx: Mm-hmm.
[00:03:59] Martin Casado: And even in the, [00:04:00] the time of the, the internet, like the supply and, and bandwidth overhang, even as massive as it was in, as massive as the crash was only lasted about four years.
[00:04:09] But we don't have a supply overhang. Like there's no dark GPUs, right? I mean, and so, you know, circular or not, I mean, you know, if, if someone invests in a company that, um. You know, they'll actually use the GPUs.
And on the other side of it is the, is the ask for customer. So I I, I think it's a different time.
[00:04:25] Sarah Wang: I think the other piece, maybe just to add onto this, and I'm gonna quote Martin in front of him, but this is probably also a unique time in that. For the first time, you can actually trace dollars to outcomes. Yeah, right. Provided that scaling laws are, are holding, um, and capabilities are actually moving forward.
[00:04:40] Because if you can put translate dollars into capabilities, uh, a capability improvement, there's demand there to Martin's point. But if that somehow breaks, you know, obviously that's an important assumption in this whole thing to make it work. But you know, instead of investing dollars into sales and marketing, you're, you're investing into R&D to get to the capability, um, you know, increase.
[00:04:59] And [00:05:00] that's sort of been the demand driver because. Once there's an unlock there, people are willing to pay for it.
[00:05:05] Alessio: Yeah.
[00:05:06] Blurring Lines: Models as Infra + Apps, and the New Fundraising Flywheel
[00:05:06] Alessio: Is there any difference in how you built the portfolio now that some of your growth companies are, like the infrastructure of the early stage companies, like, you know, OpenAI is now the same size as some of the cloud providers were early on.
[00:05:16] Like what does that look like? Like how much information can you feed off each other between the, the two?
[00:05:24] Martin Casado: There's so many lines that are being crossed right now, or blurred. Right. So we already talked about venture and growth. Another one that's being blurred is between infrastructure and apps, right? So like what is a model company?
[00:05:35] Mm-hmm. Like, it's clearly infrastructure, right? Because it's like, you know, it's doing kind of core R&D. It's a horizontal platform, but it's also an app because it's um, uh, touches the users directly. And then of course. You know, the, the, the growth of these is just so high. And so I actually think you're just starting to see a, a, a new financing strategy emerge and, you know, we've had to adapt as a result of that.
[00:05:59] And [00:06:00] so there's been a lot of changes. Um, you're right that these companies become platform companies very quickly. You've got ecosystem build out. So none of this is necessarily new, but the timescales of which it's happened is pretty phenomenal. And the way we'd normally cut lines before is blurred a little bit, but.
[00:06:16] But that, that, that said, I mean, a lot of it also just does feel like things that we've seen in the past, like cloud build out the internet build out as well.
[00:06:24] Sarah Wang: Yeah. Um, yeah, I think it's interesting, uh, I don't know if you guys would agree with this, but it feels like the emerging strategy is, and this builds off of your other question, um.
[00:06:33] You raise money for compute, you pour that or you, you pour the money into compute, you get some sort of breakthrough. You funnel the breakthrough into your vertically integrated application. That could be ChatGPT, that could be Claude Code, you know, whatever it is. You massively gain share and get users.
[00:06:49] Maybe you're even subsidizing at that point. Um, depending on your strategy. You raise money at the peak momentum and then you repeat, rinse and repeat. Um, and so. And that wasn't [00:07:00] true even two years ago, I think. Mm-hmm. And so it's sort of to your, just tying it to fundraising strategy, right?
There's a, and hiring strategy.
[00:07:07] All of these are tied, I think the lines are blurring even more today where everyone is, and they, but of course these companies all have API businesses and so they're these, these frenemy lines that are getting blurred in that a lot of, I mean, they have billions of dollars of API revenue, right? And so there are customers there.
[00:07:23] But they're competing on the app layer.
[00:07:24] Martin Casado: Yeah. So this is a really, really important point. So I, I would say for sure, venture and growth, that line is blurry app and infrastructure. That line is blurry. Um, but I don't think that that changes our practice so much. But like where the very open questions are like, does this layer in the same way.
[00:07:43] Compute traditionally has like during the cloud is like, you know, like whatever, somebody wins one layer, but then another whole set of companies wins another layer. But that might not, might not be the case here. It may be the case that you actually can't verticalize on the token string. Like you can't build an app like it, it necessarily goes down just because there are no [00:08:00] abstractions.
[00:08:00] So those are kinda the bigger existential questions we ask. Another thing that is very different this time than in the history of computer sciences is. In the past, if you raised money, then you basically had to wait for engineering to catch up. Which famously doesn't scale, like The Mythical Man-Month. It takes a very long time.
[00:08:18] But like that's not the case here. Like a model company can raise money and drop a model in a, in a year, and it's better, right? And, and it does it with a team of 20 people or 10 people. So this type of like money entering a company and then producing something that has demand and growth right away and using that to raise more money is a very different capital flywheel than we've ever seen before.
[00:08:39] And I think everybody's trying to understand what the consequences are. So I think it's less about like. Big companies and growth and this, and more about these more systemic questions that we actually don't have answers to.
[00:08:49] Alessio: Yeah, like at Kernel Labs, one of our ideas is like if you had unlimited money to spend productively to turn tokens into products, like the whole early stage [00:09:00] market is very different because today you're investing X amount of capital to win a deal because of price structure and whatnot, and you're kind of pot committing.
[00:09:07] Yeah. To a certain strategy for a certain amount of time. Yeah. But if you could like iteratively spin out companies and products and just throw, I, I wanna spend a million dollars of inference today and get a product out tomorrow.
[00:09:18] swyx: Yeah.
[00:09:19] Alessio: Like, we should get to the point where like the friction of like token to product is so low that you can do this and then you can change the, right, the early stage venture model to be much more iterative.
[00:09:30] And then every round is like either 100K of inference or like a hundred million from a16z. There's no, there's no like $8 million seed round anymore. Right.
[00:09:38] When Frontier Labs Outspend the Entire App Ecosystem
[00:09:38] Martin Casado: But, but, but, but there's a, there's a, the, an industry structural question that we don't know the answer to, which involves the frontier models, which is, let's take
[00:09:48] Anthropic. Let's say Anthropic has a state-of-the-art model that has some large percentage of market share.
And say a company is building smaller products that use the bigger model in the background (Opus 4.5, say) but add value on top of it. Now, if Anthropic can raise three times more every subsequent round, it can probably raise more money than the entire app ecosystem built on top of it. And if that's the case, it can expand beyond everything built on top of it, like a star that just keeps expanding. So there could be a systemic situation where the SOTA models can raise so much money that they can outspend anybody who builds on top of them, which is something I don't think we've ever seen before, just because we were so bottlenecked on engineering. This is a very open question.

[00:10:41] swyx: Yeah. It's almost the bitter lesson applied to the startup industry.

[00:10:45] Martin Casado: Yeah, a hundred percent. It literally becomes: raise capital, turn that directly into growth, use that to raise three times more. And if you can keep doing that, you can outspend (not any one company, but the aggregate of companies on top of you) and therefore you'll necessarily take their share, which is crazy.
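To make the compounding concrete, here is a toy sketch of the dynamic Casado describes. The triple-every-round raise multiple is his; every other number (round sizes, the count of app-layer startups, their growth rate) is invented purely for illustration.

```typescript
// Toy model of "the lab out-raises its own app ecosystem".
// Only the 3x raise multiple comes from the conversation; the rest is made up.
const labRaiseMultiple = 3;   // "raise three times more every subsequent round"
let labRaise = 300;           // lab's current round in $M (assumed)
let labTotal = labRaise;

const numApps = 50;           // app companies built on the lab's API (assumed)
let appRoundSize = 30;        // $M raised per app company per cycle (assumed)
const appGrowth = 1.5;        // app rounds grow 1.5x per cycle (assumed)
let appTotal = numApps * appRoundSize;

for (let round = 1; round <= 6; round++) {
  labRaise *= labRaiseMultiple;
  labTotal += labRaise;
  appRoundSize *= appGrowth;
  const appRaise = numApps * appRoundSize;
  appTotal += appRaise;
  const note = labRaise > appRaise ? "  <- lab out-raises the entire app layer" : "";
  console.log(
    `round ${round}: lab $${labRaise}M (cum $${labTotal}M) vs ` +
    `app layer $${Math.round(appRaise)}M (cum $${Math.round(appTotal)}M)${note}`
  );
}
```

The specific numbers are irrelevant; the point is that a 3x-per-round geometric raise overtakes any aggregate growing at a slower multiple within a handful of rounds, which is the "expanding star" picture.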
[00:11:02] swyx: Would you say that's kind of what happened with Character? Is that the postmortem?

[00:11:12] Martin Casado: No.

[00:11:13] swyx: I mean, the actual postmortem is that he wanted to go back to Google.

[00:11:21] Martin Casado: That's another difference. We should actually talk about that.

[00:11:24] Character.AI, Founder Goals (AGI vs Product), and GPU Allocation Tradeoffs

[00:11:24] Sarah Wang: I was going to say: the Character thing raises a different issue, one the frontier labs will face as well, so we'll see how they handle it. We invested in Character in January 2023, which feels like eons ago (three years that feel like lifetimes), and then they did the IP licensing deal with Google in August 2024. At the time, and he's talked publicly about this, he had left because Google wouldn't let him put products out in the world. That's obviously changed drastically. But he went to go do that, with a product attached. The goal, and this is Noam Shazeer, was to get to AGI; that was always his personal goal. But through collecting data and a very human use case, the Character product, as it originally was and still is, was one of the vehicles to do that. The stress any of these companies feels before ultimately going one way or the other is AGI versus product. OpenAI is feeling it, and Anthropic, if they haven't started feeling it, certainly may soon, given the success of their products. And there are real trade-offs: GPUs are a limited resource. Where do you allocate them? Toward the product? Toward new research? Long-term research, or near-to-mid-term research? When you're resource-constrained, of course there's this fundraising game you can play, but the market was very different back in 2023, too. I think the best researchers in the world have this dilemma: I want to go all-in on AGI, but it's the product usage and revenue flywheel that keeps revenue in the house to power all the GPUs to get to AGI. So it sets up an interesting dilemma for any startup that has trouble raising at that level, and certainly, if you don't have that progress, you can't continue the fundraising flywheel.

[00:13:32] Martin Casado: Since we're keeping track of all the things that are different this time (venture and growth, app and infra), one of them is definitely the personalities of the founders. It's just very different. I've been doing this for a decade and I've been doing startups for twenty years. A lot of people start these companies to do AGI, and I don't recall us ever having a unified north star like that. In the past, people built companies to build companies: I would create an internet company, I would create an infrastructure company. It was more engineering builders, and this is a different mentality. Some companies have harnessed it incredibly well, because their direction is so obviously on the path to what somebody would consider AGI, but others have not. So there's always this tension with personnel, and we're seeing more founder movement, as a fraction of founders, than we've ever seen. Maybe since the time of Shockley and the Traitorous Eight, way back at the beginning of the industry. It's a very unusual time for personnel.

[00:14:39] Sarah Wang: Totally.

[00:14:40] Talent Wars, Mega-Comp, and the Rise of Acquihire M&A

[00:14:40] Sarah Wang: And it's exacerbated by the talent wars. Every industry has talent wars, but not at this magnitude; very rarely can you see someone get poached for $5 billion. That's hard to compete with. And secondly, if you're a founder in AI, you could fart and it would be on the front page of The Information these days. So there's this fishbowl effect that adds to the deep anxiety these AI founders are feeling.

[00:15:06] swyx: Briefly, on the talent-wars thing: I feel like 2025 was just a blip. I don't know if we'll see that again, because Meta built the team. I think they're kind of done, and who's going to pay more than Meta?

[00:15:23] Martin Casado: I agree. It feels that way to me too.
Basically, Zuckerberg came out swinging, and now he's back to building.

[00:15:31] swyx: Yeah. You have to pay up to assemble a team and rush the job, but now you've made your choices, and they've got to ship.

[00:15:38] Martin Casado: The other side of that: we're actually in the hiring market. We've got 600 people here; I hire all the time. I've got three open recs on the investing side of the team, if anybody listening is interested. A lot of the people we talk to have active offers for $10 million a year or something like that, and we pay really, really well. What's out on the market right now is really remarkable. So you're right about the flashy ones, the "I will get someone for a billion dollars" deals, but the inflation trickles down. It's still very active today.

[00:16:18] Sarah Wang: Yeah. You could be an L5 and get an offer in the tens of millions. Easily. So I think you're right that it felt like a blip, and I hope you're right, but I think the steady state got pulled up.

[00:16:31] Martin Casado: Pulled up for sure. Yeah.

[00:16:32] Alessio: And I think that's breaking the early-stage founder math too. Before, a lot of people would say: maybe I should go be a founder instead of getting paid $800K or a million at Google. But if I'm getting paid five or six million, that's different.

[00:16:45] Martin Casado: On the other hand, there's more strategic money than we've ever seen historically, so the calculus on the economics is different in a number of ways, and it's causing a ton of change and confusion in the market. Some very positive, some negative. For example, the other side of the mega-poach (Mark Zuckerberg buying someone out for a lot of money) is that we're seeing a historic amount of M&A that is basically acquihires: really good outcomes from a venture perspective that are effectively acquihires. So I'd say it's probably net positive from an investment standpoint, even though from the headlines it seems very disruptive in a negative way.

[00:17:33] What's Underfunded: Boring Software, Robotics Skepticism, and Custom Silicon Economics

[00:17:33] Alessio: Let's talk about what's not being invested in, interesting ideas where you'd like to see more people building. As YC gets more popular and accelerators get more popular, there's a startup-school path a lot of founders take: they know what's hot in VC circles and they know what gets funded. And there's maybe not as much risk appetite for things outside of that.
I'm curious whether you feel that's true, and which areas you think are under-discussed.

[00:18:06] Martin Casado: I actually think we've taken our eye off the ball on a lot of traditional software companies. Right now there's almost a barbell: you're either the hot thing on X or you're deep tech. But there's a long list of good companies that will be around for a long time in very large markets. Say you're building a database, or monitoring, or logging, or tooling: there are good companies out there right now, but they have a really hard time getting the attention of investors. It's almost become a meme: if you're not growing from zero to a hundred in a year, you're not interesting. Which is the silliest thing to say. Think of your own personal money: would you put it in the stock market at 7%, or in a company growing 5x in a very large market? Of course you put it in the company growing 5x. Our LPs want something like 3x net over the life cycle of a fund; a company in a big market growing 5x is a great investment, and everybody would be happy with those returns. But we've got this mania for extreme growth curves. So I'd say that's probably the most underinvested sector right now.

[00:19:29] swyx: Boring software. Boring enterprise software.

[00:19:31] Martin Casado: Traditional, really good companies.

[00:19:33] swyx: No AI here.

[00:19:34] Martin Casado: Well, AI is of course pulling them into use cases, but they're not on the token path. Let's say they're software, but not on the token path. They're great investments by any definition, except for a random VC on X saying they're not growing fast enough.
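The arithmetic behind that comparison, written out. The 7% public-market rate, the 5x company outcome, and the 3x-net LP target are the figures Casado uses; the ten-year horizon (roughly a fund's life) is an assumption.

```typescript
// "Will you put your personal money in the stock market at 7%,
//  or in this company growing 5x in a very large market?"
const years = 10;                           // ~ a fund's life (assumed)
const publicMarket = Math.pow(1.07, years); // 7%/yr compounded ~= 1.97x
const lpTarget = 3;                         // "LPs want 3x net over the fund"
const boringCompany = 5;                    // the 5x outcome he describes

console.log(`public market, 7%/yr for ${years}y: ${publicMarket.toFixed(2)}x`);
console.log(`LP target over the fund:            ${lpTarget.toFixed(2)}x`);
console.log(`"boring" 5x software company:       ${boringCompany.toFixed(2)}x`);
// 1.97x < 3x < 5x: the unfashionable company clears both bars.
```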
[00:19:52] Sarah Wang: Maybe I'll answer a slightly different question, adjacent to what you asked: an area we're not investing in right now that I think is a real question, and where we're spending a lot of time regardless of whether we pull the trigger. It would probably be on the hardware side, actually. Robotics. I don't want to say it's not getting funding, because it clearly is; it's almost non-consensus not to invest in robotics at this point. But we've spent a lot of time in that space, and for us, we just haven't seen the ChatGPT moment happen on the hardware side. And the funding going into it feels like it's already taking that for granted.

[00:20:30] Martin Casado: Yeah. But we also went through the drone wave (there's a Zipline right out there), the AV wave. And one of the takeaways is that when it comes to hardware, most companies end up verticalizing. If you're investing in a robot company for agriculture, you're investing in an ag company, because that's the competition, and that's the supply chain. If you're doing it for mining, that's mining. The AD team does a lot of that type of stuff because they're actually set up to diligence that kind of work. But for horizontal technology investing, there's very little when it comes to robots, just because it's so fit-for-purpose. So we like to look at software solutions, or horizontal solutions: Applied Intuition, clearly, from the AV wave; DeepMap, clearly, from the AV wave; and I'd say Scale AI was actually a horizontal one for robotics early on. That sort of thing we're very, very interested in. But the actual robot interacting with the world is probably better for a different team. Agreed.

[00:21:30] Alessio: I'm curious who these teams are supposed to be that invest in them. Everybody says robotics is important and people should invest in it. But when you look at the numbers, the capital requirements early on versus the moment of "okay, this is actually going to work, let's keep investing" seem really hard to predict.

[00:21:49] Martin Casado: I think Coatue, Khosla, GC: these have all invested in hardware companies. And listen, it could work this time, for sure. Just the fact that Elon's doing it means there's going to be a lot of capital and a lot of attempts for a long period of time. That alone maybe suggests we should just be investing in robotics, because you have this north star, Elon with a humanoid, basically willing an industry into being. But we've historically found (and we're huge believers that this is going to happen) that we just don't feel we're in a good position to diligence these things, because, again, robotics companies tend to be vertical. You really have to understand the market they're being sold into; that competitive equilibrium with a human being is what's important, not the core tech. And Sarah and I are more horizontal, core-tech-type investors. The AD team is different; they can actually do these types of things.

[00:22:42] swyx: Just to clarify, AD stands for?

[00:22:44] Martin Casado: American Dynamism.

[00:22:45] swyx: Alright. I actually have a related question, but first I want to acknowledge the chip side. I recall a podcast you were on (I think it was the ACC podcast) about two or three years ago where you said something that really stuck in my head: at some point, at some scale, it makes sense to build a custom ASIC per run.

[00:23:07] Martin Casado: Yes. It's crazy.
[00:23:09] swyx: We're here. And I think you estimated 500 billion, or something.

[00:23:12] Martin Casado: No, no: a billion. A $1 billion training run. At a $1 billion training run, it makes sense to do a custom ASIC, if you can do it in time. The question now is timelines, not money, because of the rough math: if it's a billion-dollar training run, then the inference for that model has to be over a billion dollars, otherwise it won't be solvent. So assume you could save 20% with an ASIC, and you could save much more than that. 20% is $200 million. You can tape out a chip for $200 million, right? So now you can literally justify, economically (not timeline-wise, that's a different issue), an ASIC per model.

[00:23:44] swyx: Because that's how much we leave on the table every single time we do generic Nvidia.

[00:23:48] Martin Casado: Exactly. No, it's actually much more than that. You could probably get a factor of two, which would be $500 million.

[00:23:54] swyx: Typical MFU would be like 50%. And that's good.

[00:23:57] Martin Casado: Exactly. A hundred percent.
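Casado's break-even math, written out with the numbers he states on the pod (a $1B run, a 20% savings floor, roughly a factor of two as the upper bound, and a ~$200M tape-out):

```typescript
// ASIC-per-model break-even, using the figures stated in the conversation.
const trainingRun = 1_000;           // $M for one frontier training run
const inferenceSpend = trainingRun;  // inference must exceed training spend to be solvent
const tapeOut = 200;                 // $M, rough cost to tape out a custom chip

for (const savings of [0.2, 0.5]) {  // 20% stated floor; factor of two ~= 50% saved
  const saved = inferenceSpend * savings;
  const verdict = saved >= tapeOut ? "ASIC pays for itself" : "not worth it";
  console.log(`${savings * 100}% savings on $${inferenceSpend}M inference ` +
              `= $${saved}M vs $${tapeOut}M tape-out: ${verdict}`);
}
// As he notes, the binding constraint is the tape-out timeline against a
// model's short useful life, not the dollars.
```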
[00:23:58] swyx: So here we are in 2025, and OpenAI is confirming Broadcom and all the other custom silicon deals, which is incredible. And speaking of AD, there's a really interesting tie-in there, which is this America-first, re-industrialize-here movement: move TSMC here, if that's possible. How much overlap is there from AD to growth, and to investing in, particularly, US AI companies that are strongly bounded by their compute?

[00:24:32] Martin Casado: I would view AD more as a market segmentation than a mission. The segmentation is: regulatory or compliance issues, government sales, or hardware; they're set up to diligence those types of companies. I would say the entire firm, since its inception, has had geographical biases. For the longest time it was: the Bay Area is where the majority of the dollars go. And listen, there are a lot of compounding effects to a geographic bias. Everybody's in the same place, you've got an ecosystem, you've got presence, you've got a network. And the Bay Area is very much back. I remember pre-COVID it was almost as if crypto had pulled startups away to Miami; New York came up because it's so close to finance; Los Angeles had a moment because it was so close to consumer. But now it's come back here. So we tend to be very Bay Area-focused historically, even though of course we've invested all over the world. Take the ring out one more, and it's the US, because we know it very well. One more, and it's the US and its allies. And it goes from there.

[00:25:46] Sarah Wang: I agree, but that's where the companies are headquartered. Maybe your question is about supply chain and customer base. I'd say our companies are fairly international from that perspective: they're selling globally, and in some cases they have global supply chains.

[00:26:03] Martin Casado: I would also say the stickiness is historically very different between venture and growth. There's so much company building in venture: hiring the next PM, introducing the customer, all of that. Of course we're stronger where we have our network and have been doing business for twenty years. I've been in the Bay Area for 25 years, so clearly I'm more effective here than somewhere else. For some of the later-stage rounds, the companies don't need that much help; they're already pretty mature, so they can be anywhere, and there's less of that stickiness. This is different in the AI era, though. Sarah is now the chief of staff of half the AI companies in the Bay Area. She's ops ninja, biz dev, BizOps.

[00:26:48] swyx: Are you finding much AI automation in your own work? What's your stack? The reason I ask is that I'm hiring ops people, a lot of founders I know are hiring ops people, and since you're basically helping with ops for a lot of companies: what are people doing these days? Because it's still very manual, as far as I can tell.

[00:27:13] Sarah Wang: Hmm. I think the things we help with are pretty network-based, in that it's "how do I shortcut this process?" and the answer is "let's connect you to the right person." There's not quite an AI workflow for that. I will say, as a growth investor, Claude Cowork is pretty interesting. For the first time, you can actually get one-shot data analysis. If you're going to take a customer database and analyze cohort retention, that's stuff you had to do by hand before. It was midnight and three of us were playing with Claude Cowork; we gave it a raw file, and boom: perfectly accurate. We checked the numbers. It was amazing. That was my aha moment. It sounds so boring, but that's the kind of thing a growth investor is slaving away on late at night, done in a few seconds.
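For anyone who hasn't done that analysis by hand: mechanically, the cohort-retention table Wang describes is something like the sketch below (group customers by the first month they appear, then count how many are still active N months later). The input shape and numbers are invented; this is just the shape of the work being one-shotted.

```typescript
// Minimal cohort-retention table of the kind described above.
// Input shape is assumed: one record per customer per month of activity.
type Activity = { customer: string; month: number }; // month 0, 1, 2, ...

function cohortRetention(records: Activity[]): Map<number, number[]> {
  // Cohort = the first month each customer appears in.
  const firstMonth = new Map<string, number>();
  for (const r of records) {
    const seen = firstMonth.get(r.customer);
    if (seen === undefined || r.month < seen) firstMonth.set(r.customer, r.month);
  }
  // active[cohort][offset] = distinct customers active `offset` months in.
  const active = new Map<number, Map<number, Set<string>>>();
  for (const r of records) {
    const cohort = firstMonth.get(r.customer)!;
    const offset = r.month - cohort;
    if (!active.has(cohort)) active.set(cohort, new Map());
    const byOffset = active.get(cohort)!;
    if (!byOffset.has(offset)) byOffset.set(offset, new Set());
    byOffset.get(offset)!.add(r.customer);
  }
  // Convert counts to retention rates relative to month 0 of each cohort.
  const table = new Map<number, number[]>();
  for (const [cohort, byOffset] of active) {
    const base = byOffset.get(0)!.size;
    const row: number[] = [];
    for (let o = 0; o <= Math.max(...byOffset.keys()); o++) {
      row.push((byOffset.get(o)?.size ?? 0) / base);
    }
    table.set(cohort, row);
  }
  return table;
}

// Tiny example: two customers sign up in month 0, one churns after month 1.
const demo: Activity[] = [
  { customer: "a", month: 0 }, { customer: "a", month: 1 }, { customer: "a", month: 2 },
  { customer: "b", month: 0 }, { customer: "b", month: 1 },
];
console.log(cohortRetention(demo)); // cohort 0 -> [1, 1, 0.5]
```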
[00:28:03] swyx: You have to wonder what Anthropic Labs, their new sort of product studio, would be worth as an independent startup.

[00:28:14] Martin Casado: A lot.

[00:28:14] Sarah Wang: Yeah, true.

[00:28:16] Martin Casado: You have to hand it to them. They've been executing incredibly well.

[00:28:19] swyx: To me, Anthropic building on Claude Code makes sense. The real pedal-to-the-metal moment is when they start coming after consumer, against OpenAI; that's red alert at OpenAI.

[00:28:35] Martin Casado: I think they've been pretty clear publicly: they're enterprise-focused.

[00:28:37] swyx: They have been. It's enterprise-focused, it's coding, right?

[00:28:43] AI Labs vs Startups: Disruption, Undercutting & the Innovator's Dilemma

[00:28:43] swyx: But then here's Claude Cowork, and apparently they're running Instagram ads for Claude; I get them all the time. So it's kind of the disruption thing: OpenAI has been doing consumer, pursuing general intelligence in every modality, and here's Anthropic, supposedly focused only on this one thing, now undercutting and doing the whole innovator's-dilemma move on everything else. It's very interesting.

[00:29:12] Martin Casado: Yeah. For me there's a very open question. You know that meme where there's a guy at a fork in the path? "Which way, Western man?"

[00:29:23] Two Futures for AI: Infinite Market vs AGI Oligopoly

[00:29:23] Martin Casado: The entire industry hinges on two potential futures. In one potential future, the market is infinitely large; there are perverse economies of scale, because as soon as you put a model out there it kind of sublimates and all the other models catch up; software is being rewritten and fractured all over the place; there's tons of upside and it just grows. And then there's another path: maybe these models actually generalize really well, and all you have to do is train them with three times more money. That's all you have to do, and they'll just consume everything beyond them. If that's the case, you end up with basically an oligopoly for everything, because they're perfectly general. That would be the AGI path: these are perfectly general, they can do everything. The other one is: this is actually normal software, and the universe is complicated. Nobody knows the answer.

[00:30:18] The Economics Reality Check: Gross Margins, Training Costs & Borrowing Against the Future

[00:30:18] Martin Casado: My belief is, if you actually look at the numbers of these companies: if you compare the amount they're making against how much they spent training the last model, they're gross-margin positive. You'd say, oh, that's really working. But if you look at the current training they're doing for the next model, they're gross-margin negative. So part of me thinks a lot of them are borrowing against the future, and that's going to have to slow down; it's going to catch up to them at some point, but we don't really know. It could be that the only reason this is working is that they can raise the next round and train the next model, because these models have such a short life. At some point they won't be able to raise that next round for the next model, and then things will converge and fragment again. But right now, that's not the case.
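A stylized version of the accounting Casado is pointing at, with invented numbers (none are from the episode): the same revenue line looks gross-margin positive amortized against the last model's training run, and negative against the next, larger one.

```typescript
// Stylized lab P&L (all numbers invented) showing the sign flip:
// margin depends on which training run you amortize against.
const revenue = 2_000;          // $M/yr of inference/API revenue (assumed)
const serving = 800;            // $M/yr cost of serving that revenue (assumed)
const lastTrainingRun = 1_000;  // $M, the model currently being sold
const nextTrainingRun = 3_000;  // $M, the run in progress (a 3x scale-up)

const marginVsLast = (revenue - serving - lastTrainingRun) / revenue;
const marginVsNext = (revenue - serving - nextTrainingRun) / revenue;

console.log(`vs last model's training: ${(marginVsLast * 100).toFixed(0)}% margin`); //  10%
console.log(`vs next model's training: ${(marginVsNext * 100).toFixed(0)}% margin`); // -90%
// "Borrowing against the future": the gap is covered by the next raise,
// which only arrives if the next model lands.
```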
[00:31:04] Sarah Wang: Totally. By the way, a meta point: I think the other lesson from the last three years (and we talk about this all the time, because we're in this Twitter/X bubble) is that if you go back to, say, the March 2024 period, it felt like an open-source model with benchmark-leading capability was launching on a daily basis. So that's one period: suddenly it's "open source takes over the world, there's going to be a plethora, it's not an oligopoly." If you rewind even before that, GPT-4 was number one for nine or ten months, which is a long time. And of course now we're in an era where it feels like an oligopoly, maybe with some steady-state shifts, and it could look like this in the future too. It's just so hard to call. The thing that keeps us up at night, in a good way and a bad way, is that capability progress is actually not slowing down. Until that happens, you don't know what it's going to look like.

[00:32:09] Martin Casado: But I would say for sure it's not converged. For sure the systemic capital flows have not converged, meaning right now it's still borrowing against the future to subsidize current growth, which you can do for a period of time. At some point the market will rationalize that, and nobody knows what that will look like.

[00:32:29] Alessio: Yeah.

[00:32:29] Martin Casado: Or the drop in the price of compute will save them. Who knows?

[00:32:34] Alessio: Yeah. I think the models need to saturate specific tasks. Okay, now Opus 4.5 might be AGI at some specific task, and now you can depreciate the model over a longer time. Right now there's no old model.

[00:32:47] Martin Casado: Let me just change that mental model; that used to be my mental model too. Let me change it a little bit.

[00:32:53] Capital as a Weapon vs Task Saturation: Where Real Enterprise Value Gets Built

[00:32:53] Martin Casado: If you can raise more than the aggregate of anybody that uses your models, that doesn't even matter. See what I'm saying? I have an API business; my API business is 60% margin, or 70%, or 80%. It's a high-margin business. So I know what everybody is using.
If I can raise more money than the aggregate of everybody that's using it, I will consume them, whether I'm AGI or not. And I'll know what they're doing, because they're using it. Unlike in the past, where engineering stopped me from doing that, it's very straightforward: you just train. So I also used to think of it as task saturation, AGI, general, general, general. But I think there's also just the possibility that the capital markets give them the ammunition to go after everybody on top of them.

[00:33:36] Sarah Wang: I do wonder, though, to your point, whether there are certain tasks where getting marginally better isn't actually that much better. We've saturated them; call it AGI for that task or whatever. Actually, Ali Ghodsi talks about this: we're already at AGI for a lot of functions in the enterprise. For those tasks, you probably could build very specific companies that focus on getting as much value out of the task as possible, value that isn't coming from the model itself. There's probably a rich enterprise business to be built there. I could be wrong, but there are a lot of interesting examples. Take the legal profession (and maybe that's not a great one, because the models are getting better on that front too), but anywhere it's a bit saturated, the value comes from services, from implementation, from all the things that actually make it useful to the end customer.

[00:34:24] Martin Casado: One more thing I think is under-discussed in all of this: to what extent is every task AGI-complete?

[00:34:31] Sarah Wang: That's a core question. Yeah.

[00:34:32] Martin Casado: I code every day; it's so fun. And when I'm talking to these models, it's not just code. It's everything, right?

[00:34:43] swyx: It's healthcare, it's...

[00:34:45] Martin Casado: It's exactly that.

[00:34:47] Sarah Wang: Great support. Yeah.

[00:34:48] Martin Casado: It's everything. I'm asking these models to understand compliance. I'm asking these models to search the web. I'm asking these models to talk about things I know from history. It's having a full conversation with me while I engineer. So it could be the case that the most AGI-complete model (and I'm not an AGI guy) wins, independent of the task. We don't know the answer to that one either.

[00:35:12] Martin Casado: But it seems to me that Codex, in my experience, is for sure better than Opus 4.5 for coding. It finds the hardest bugs in what I work on; it's like working with the smartest developers. It's great.
But I think Opus 4.5 has a great bedside manner, and that really matters if you're building something very complex, because it's a partner, a brainstorming partner. And I think we don't discuss enough how every task has that quality. What does that mean for capital investment, for frontier models versus sub-models?

[00:35:47] Why "Coding Models" Keep Collapsing into Generalists (Reasoning vs Taste)

[00:35:47] Martin Casado: Like, what happened to all the special coding models? None of them worked, right?

[00:35:51] Alessio: Some of them didn't even get released.

[00:35:54] Martin Casado: Magic.dev. There's a whole host; we saw a bunch of them. There was this whole theory that there could be one, and I think one of the conclusions is that there's no such thing as a coding model. That's not a thing. You're talking to another human being, and it's good at coding, but it's got to be good at everything.

[00:36:10] swyx: A minor disagree, only because I have pretty high confidence that OpenAI will always release a GPT-5 and a GPT-5 Codex. The way I put it is: one for reasoning, one for taste. And someone internal at OpenAI was like, yeah, that's a good way to frame it.

[00:36:32] Martin Casado: That's so funny.

[00:36:33] swyx: But maybe it collapses down to reasoning, and that's it. It's not a hundred dimensions; it's two dimensions. Bedside manner versus coding.

[00:36:43] Martin Casado: For anybody listening to this: when you're coding or using these models for something like that, just be aware of how much of the interaction has nothing to do with coding. It turns out to be a large portion of it. So I think the best SOTA-ish model is going to remain very important, no matter what the task is.

[00:37:07] What He's Actually Coding: Gaussian Splats, Spark.js & 3D Scene Rendering Demos

[00:37:07] swyx: Speaking of coding, I'm going to be cheeky and ask: what actually are you coding? Because obviously you could code anything, and you're obviously a busy investor and the manager of a giant team.

[00:37:18] Martin Casado: I help Fei-Fei at World Labs, one of our investments; they're building a foundation model that creates 3D scenes.

[00:37:27] swyx: Yeah, we had it on the pod.

[00:37:28] Martin Casado: These 3D scenes are Gaussian splats; that's just the way that kind of AI works. You can reconstruct a scene better with radiance fields than with meshes, because they don't really have topology. So they produce these
beautiful 3D-rendered scenes that are Gaussian splats. But the actual industry support for Gaussian splats isn't great; it's always been meshes, and things like Unreal use meshes. So I work on an open-source library called Spark, which is a JavaScript rendering layer for Gaussian splats. You need that support, and there's kind of a Three.js moment right now that's all meshes, so Spark has become kind of the default for splats in the Three.js ecosystem. As part of that, to exercise the library, I build a whole bunch of demos. So if you see me on X with all my demos and world-building, all of that is just to exercise this library I work on, because it's actually a very tough algorithmics problem to scale a library that much. And, this is ancient history now, but thirty years ago I paid for undergrad working on game engines in college in the late nineties. So I actually have a background in this, a very old one, and a lot of it's fun.

[00:38:47] Sarah Wang: Are you one of the most active contributors on their GitHub? On Spark?

[00:38:50] Martin Casado: Yes. There are only two of us, so yes. The primary developer is a guy named Andres Quist, who's an absolute genius. He and I did our PhDs together; we studied for quals together, so it's like hanging out with an old friend. He's the core guy; I do mostly the side work. I run a venture fund.

[00:39:14] swyx: It's amazing. Five years ago you would not have done any of this, and it brought you back.

[00:39:19] Martin Casado: The activation energy was so high because you had to learn all the framework b******t. Man, I f*****g used to hate that. Now I don't have to deal with it; I can focus on the algorithmics, I can focus on the scaling.
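For the curious, this is roughly what a splat-rendering layer gets you in practice: a Gaussian-splat scene dropped into an ordinary Three.js render loop. The Three.js boilerplate is standard; the `@sparkjsdev/spark` package name and `SplatMesh` entry point follow Spark's published examples, but treat the exact API as an assumption and check the project's docs. The `scene.spz` path is a placeholder, not a real asset.

```typescript
// Minimal browser sketch: render a Gaussian-splat scene with Three.js + Spark.
// Package name and SplatMesh API are assumptions based on Spark's public examples.
import * as THREE from "three";
import { SplatMesh } from "@sparkjsdev/spark";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 0, 3);

const renderer = new THREE.WebGLRenderer({ antialias: false });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A splat file, e.g. exported from a capture pipeline or a generative-3D tool.
const splats = new SplatMesh({ url: "scene.spz" }); // placeholder path
scene.add(splats);

renderer.setAnimationLoop(() => {
  splats.rotation.y += 0.003; // slow turntable to show view-dependent shading
  renderer.render(scene, camera);
});
```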
[00:39:29] LLMs vs Spatial Intelligence + How to Value World Labs' 3D Foundation Model

[00:39:29] swyx: I'll observe one irony, and then I'll ask a serious investor question. The irony is that Fei-Fei actually doesn't believe LLMs can lead us to spatial intelligence, and here you are using LLMs to help achieve spatial intelligence. I see some disconnect in there.

[00:39:45] Martin Casado: I think what she would say is that LLMs are great for helping with coding, but that's very different from a model that actually provides spatial intelligence. And listen, our brains clearly have both: a language-reasoning section and a spatial-reasoning section. These are two pretty independent problems.

[00:40:07] swyx: Okay. The one data point I recently got against that is DeepMind's IMO gold. Typically the answer there is that you start going down the neurosymbolic path: one very abstract reasoning system and one formal system. That's what DeepMind had in 2024 with AlphaProof and AlphaGeometry. And now they just use Deep Think and extended thinking tokens; it's one model, and it's an LLM. So that was my indication that maybe you don't need a separate system.

[00:40:42] Martin Casado: Let me step back. At the end of the day, these things are nodes in a graph with weights on them; it can all be modeled, if you distill it down. But let me talk about the two different substrates. Let me put you in a dark room, a totally black room, and describe how to exit it: to your left there's a table, duck below this thing. The chances you won't run into something are very low. Now let me turn on the light so you can actually see, and you can judge distance, how far away something is, where it is. Then you can do it. Language is not the right primitive to describe the universe, because it's not exact enough. That's all Fei-Fei is talking about. Spatial reasoning means you actually have to know that this is three feet away, that it's curved; you have to understand actual movement through space. So yes, I do think these models are converging as models, but there are different representations of the problems you're solving. One is language, which is describing to somebody what to do. The other is actually showing them, and spatial reasoning is showing them.

[00:41:55] swyx: Got it. The investor question on World Labs: how do I value something like this? Fei-Fei is awesome, Justin's awesome, and so are the other two co-founders; everyone's building cool tech. But what's the value of the tech? That's the fundamental question.

[00:42:16] Martin Casado: Let me give you a rough sketch on the diffusion models. And I'd actually love to hear Sarah on this, because I'm the venture guy, and venture is always kind of wild-west stuff.

[00:42:24] swyx: You paint a dream, and she has to actually...

[00:42:28] Martin Casado: She's going to mark it to reality. So I'll give the venture view, and she can be like, "Okay, you little kid." These diffusion models literally create, for almost nothing, something the world has found to be very valuable in real markets: a 2D image. That's been an entire market; people value them. It takes a human being a long time to create one. To turn me into an image would cost a hundred bucks and an hour; the inference costs a hundredth of a penny. We've seen this with speech, in very successful companies. We've seen it with 2D images. We've seen it with movies. Now think about a 3D scene. I mean, when's Grand Theft Auto coming out? It's been, what, ten years? How much would it cost to reproduce this room in 3D? If you hired somebody on Fiverr, at any sort of quality, probably $4,000 to $10,000. A professional, probably $30,000. And we know these scenes are used: in Unreal, in Blender, in movies, in video games. So if you could generate the exact same thing from a 2D image for less than a dollar, that's four or five orders of magnitude cheaper. You're collapsing the marginal cost of something useful by orders of magnitude, which historically has created very large companies. That would be the venture-style strategic dreaming map.

[00:43:49] swyx: Yeah.
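Checking his orders-of-magnitude claim with the figures from the conversation (the Fiverr and professional quotes are his; the sub-dollar generation cost is his stated assumption):

```typescript
// Order-of-magnitude check on "3D scene for under a dollar vs $4k-$30k by hand".
const handMade = { fiverr: 4_000, professional: 30_000 }; // $ quotes from the episode
const generated = 1;                                      // $ per generated scene (his assumption)

for (const [who, cost] of Object.entries(handMade)) {
  const oom = Math.log10(cost / generated);
  console.log(`${who}: $${cost} -> ~${oom.toFixed(1)} orders of magnitude above $${generated}`);
}
// fiverr: ~3.6 OOM; professional: ~4.5 OOM. That is the "four or five orders
// of magnitude" cost collapse he argues has historically built big markets.
```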
[00:43:50] swyx: And for listeners: you can do this yourself on your own phone with Marble.

[00:43:55] Martin Casado: Yeah, Marble.

[00:43:55] swyx: Or there are many NeRF apps where you just go on your iPhone and do this.

[00:43:59] Martin Casado: Yeah. Although in the case of Marble, you literally give it an image. With most NeRF apps, you run around and take a whole bunch of pictures and then reconstruct the scene. Things like Marble, the whole generative-3D space, will take just a 2D image and reconstruct everything.

[00:44:16] swyx: Meaning it has to fill in...

[00:44:18] Martin Casado: Stuff at the back of the table, under the table, the parts the image doesn't see. The generative stuff is very different from reconstruction: it fills in the things you can't see.

[00:44:26] swyx: Yeah. Okay.

[00:44:27] Martin Casado: All right, so now the adult perspective.

[00:44:29] Sarah Wang: No, I love that. I was going to say, we really are a tag team; we started this pod with that premise, and this is a perfect question to build on it further, because we truly are tag-teaming all of these together.

[00:44:39] Investing in Model Labs, Media Rumors, and the Cursor Playbook (Margins & Going Down-Stack)

[00:44:39] Sarah Wang: But I think every investment fundamentally starts with the same two premises. One is that, at this point in time, we actually believe these are N-of-one founders for their particular craft, and that has to be demonstrated in their prior careers. So we're not investing in every (the term is now "neolab") foundation model, every founder trying to build a foundation model. Contrary to popular opinion, we're
In this open discussion episode, the Monday Meeting team discuss pricing negotiations, client communication, testimonials, and building a focused freelance business.

This episode covers:
- Pricing confidence and boundaries: Never undercut yourself (it costs you money to take less money), with emphasis on qualifying leads upfront and establishing clear rate anchors that filter out inappropriate clients
- Discovery call essentials: Always discuss budget during the first call to avoid sticker shock, using casual approaches like "let's talk about the budget" while building relationship rapport
- Contract protection strategies: Beyond legal documents, relationships matter most; implement 50-25-25 payment structures, rush fees, scope-change fees, and revision limits to protect your business
- Testimonial best practices: Write testimonials for clients first to activate reciprocity bias, request feedback on specific areas (technical skills, communication, project management) rather than generic praise, and follow up within days of project completion
- Focus over breadth: Select a maximum of three specialized pillars (like character design, 2D animation, game development) rather than marketing yourself as a generalist; word of mouth within focused sectors compounds faster
- Portfolio clarity: Avoid creative-industry jargon like "keyframe ninja" or "rockstar"; clients need a concrete understanding of what you deliver and how it solves their business problems

Upcoming Events:
- Game Night scheduled for March 4th at 6PM Pacific
- Next week's episode will feature a themed discussion (theme TBD)

Visit MondayMeeting.org for this episode and other conversations from the motion design community!

SHOW NOTES: Monday Meeting Patreon | Monday Meeting Discord | Monday Meeting LinkedIn | Monday Meeting Instagram | Monday Meeting Bluesky | Monday Meeting Newsletter | Open Pixel Studios Price Calculator | Get Wright On It Price Guide | Tiny Testimonials | Jen's Testimonial Page | Alignable
Squeaks, Frank, Jon and Thomas cover a packed episode that jumps from dream low-stakes spin-off ideas (God of War politics, Fallout slice-of-life, and cozy Shire chaos) into real-world hype around the Spider-Noir trailer and how Nicolas Cage changes the whole vibe in the best way. From there, the crew breaks down standout PlayStation State of Play reveals, including what they want from a God of War trilogy remake, why Game Freak's new project has people paying attention again, and which upcoming titles actually look worth the time. Then the conversation shifts into a big streaming debate: weekly releases vs binge drops vs the hybrid "first 2–3 episodes, then weekly" model. They dig into what works for mystery box shows like WandaVision and Severance, what falls apart when the writing doesn't match the release strategy, and why some series thrive when fans have time to theory-craft between episodes. The episode wraps with network updates and weekly recommendations, including a Cyclops comic shoutout and a heartfelt nod to Stevie's retirement from Good Mythical Morning after 13 years.

Timestamps and Topics:
- 00:00 Intro and crew check-in (Squeaks, Frank, Jon, plus a Thomas cameo)
- 00:00:35 Icebreaker: what massive franchise needs a low-stakes spin-off?
- 00:00:58 God of War spin-off pitch: Olympus politics, Spartan POV, and "collateral damage" storytelling
- 00:02:11 Frank's pitch: a Stardew Valley-style Fallout town rebuilding above ground
- 00:02:43 Jon's pitch: a cozy "life in the Shire" Lord of the Rings series
- 00:03:48 Thomas's pitch: smaller-scale Jurassic Park stories from a visitor or staff POV
- 00:05:06 Listener ideas from social: Zootopia case-of-the-week, Han Solo pulp vibes, law-and-order sketch artists
- 00:05:48 Spider-Noir trailer reactions: tone, action, investigation, and why Cage works
- 00:07:26 Prime Video adaptation trust: Fallout, The Boys, Invincible, and what Spider-Noir might get right
- 00:08:17 Rings of Power tangent: why "faithful" is complicated when rights are limited
- 00:09:02 Is Spider-Noir the Spider-Verse version or a variant? The crew debates
- 00:10:16 What fans actually want: more detective noir or more superhero chaos?
- 00:12:20 PlayStation State of Play recap begins: overall reactions and standout reveals
- 00:13:24 God of War trilogy remake talk and why "remake" matters more than "remaster"
- 00:13:56 Game Freak frustration, Pokémon burnout, and why a new IP could reset expectations
- 00:14:47 Classic vibes: the pull of 2D action and Castlevania-style nostalgia
- 00:15:52 Kena: Bridge of Spirits sequel excitement and "30-minute gaming" for busy adults
- 00:17:02 Star Wars Galactic Racers and the return of pod-racing energy
- 00:17:24 Project Wilderness: rooster warriors, Korean fantasy roots, and "The Bird That Drinks Tears" curiosity
- 00:21:13 Ride-or-die studios: Rockstar, Blizzard talk, and Saber's trust factor
- 00:21:31 John Wick game discussion: expectations, violence, and the question of gameplay style
- 00:23:40 Severance and Apple ownership: what expansion could look like without killing the mystery
- 00:26:59 The goats, the branches, and why answers can end the show
- 00:28:04 Spin-off pitch: military uses of severance tech and the darker implications
- 00:29:51 Helldivers movie with Jason Momoa: comedy vs serious tone, and how to make it work
- 00:33:02 Main topic: weekly episodes vs binging vs hybrid release models
- 00:36:22 Why weekly wins for theorizing, community talk, and "watercooler" moments
- 00:39:36 The hybrid model: dropping 2–3 episodes first, then weekly
- 00:41:09 Binge model strengths: completion, momentum, and not forgetting the show exists
- 00:48:05 Which genres fit which release style? Mystery box, character drama, sitcoms, and action
- 00:57:19 Network updates: upcoming shows, format shifts, and what's next across the Geek Freaks Network
- 00:58:19 Recommendations: Cyclops comic spotlight and Stevie's Good Mythical Morning retirement tribute

Key Takeaways:
- Big franchises can feel fresh again with smaller stories that focus on everyday life, side characters, and worldbuilding instead of saving the universe.
- Spider-Noir looks like it's leaning into style and personality, and Nicolas Cage makes it feel like its own thing instead of "another Spider-Man."
- Prime Video's track record with adaptations is earning trust, even when the company itself is a mixed bag for fans.
- State of Play talk lands on a simple truth: remakes need to feel rebuilt, not just polished.
- Game Freak has goodwill to win back, and a new IP might be their best shot to prove they can still deliver quality.
- Release models matter because writing matters. A weekly show needs episodes that stand on their own, not just pieces of a long movie.
- Weekly releases fuel theorizing, discussion, and community engagement. Binge releases maximize momentum and completion. The "2–3 episode premiere, then weekly" approach can be the sweet spot when it's planned from the start.

Memorable Quotes:
- "I want a Stardew Valley set in the Fallout universe."
- "Nicolas Cage is not a Spider-Man… Spider-Man is being made into a Nicolas Cage character."
- "It's not a remaster. It's a remake."
- "Once we find out what's up with the goats, the show's over."
- "That's a watercooler moment because everybody was like, wait, what happened?"

Call to Action: If you enjoyed this episode, make sure you subscribe on your favorite podcast app, leave a review, and share the episode with a friend using #GeekFreaksPodcast.

Links and Resources: GeekFreaksPodcast.com (source of all news discussed during the podcast)

Follow Us:
- Instagram: @geekfreakspodcast
- Twitter: @geekfreakspod
- Threads: @geekfreakspodcast
- Facebook: Geek Freaks Podcast
- For Patreon extras: Geek Freaks Podcast on Patreon

Listener Questions: Got a topic you want us to hit next week, a hot take you want us to argue about, or a franchise you think needs a low-stakes spin-off? Send it in via DMs, or email Info@GFPods.com.

Keywords: Cyclops, X-Men, Spider-Noir, Spider-Man Noir, Nicolas Cage, PlayStation State of Play, PS5, video game news, God of War remake, God of War trilogy, Game Freak, Pokémon, Kena Bridge of Spirits, Star Wars pod racing, Severance, Apple TV Plus, Helldivers movie, Jason Momoa, weekly episodes, binge watching, streaming release model, WandaVision, Andor, Stranger Things, Fallout TV series, Geek Freaks Podcast
Episode 273 of The Gaming Duo is here, and we're breaking down the aftermath of the February 2026 PlayStation State of Play with special guest The Friday Night Game Cast's Nik. From hype reveals to absolutely unhinged hot takes, we give our full, unfiltered reactions to everything Sony showed and what it means for PlayStation's future.

We dive deep into the biggest announcements, including Kena: Scars of Kosmora, the cinematic reveal of an untitled John Wick game from Saber Interactive, and the return of gothic 2D action with Castlevania: Belmont's Curse. The discussion keeps rolling with Star Wars: Galactic Racer, the newly dated 007 First Light, and Sony's massive confirmation of a full God of War Trilogy Remake.

And of course, we react to the surprise shadow drop of God of War: Sons of Sparta, available right now, and debate whether this smaller-scale Kratos story hits as hard as the mainline entries.

Hosted by Kelvin Rolon and Rob Garcia, with guest Nik, this episode is packed with reactions, debates, and takes you'll definitely want to argue about.
Jared Correia welcomes a true legal tech legend for a record-breaking three-segment appearance. First, Jared and Larry Port (founder of Rocket Matter) crack open a new entry in the "Perfect Album" series: Tom Petty's Full Moon Fever (1989). They debate the merits of the album's solo status, the genius of Jeff Lynne's production, and whether "Free Fallin'" is actually a good song or just California propaganda. Then, in the main interview, Larry discusses life after his successful exit from Rocket Matter. He reveals his new venture, WaySpark, a career coaching platform designed to help young people navigate a volatile labor market shaped by AI, the "silver wave" of retirements, and the decline of white-collar hiring. Larry explains why interpersonal skills are the new currency and why he advocates for the "Family Dinner Test" when choosing a career path. Finally, stick around for the Counter Program: "Hard Work." Jared tests Larry's career coaching expertise with a quiz on bizarre historical and futuristic jobs. Is a "Quantum Janitor" a real thing? What about a Victorian "Pure Finder" who collects dog feces? Tune in to find out! Check out Larry's podcast Dream Job Cafe. Check out this week's Spotify playlist here. Oh, man! I bet you didn't know how much you were missing Jared's unique take on culture, legal practice, and whatever else pops into his head. But don't fret, there's plenty to go around. Jared's back with a new **WEEKLY** show, Legal Late Night, available not only on your favorite podcast app, but in living color on your neighborhood YouTubes. That's right, Jared's more than just a pretty voice. Join him and his guests in high-def 2D through the links below. Subscribe to Legal Late Night with Jared Correia on: Apple - https://podcasts.apple.com/podcast/legal-late-night/id1809201251 Spotify - https://open.spotify.com/show/0Rkik0LLMaU6u0e7AKfK9h Or your favorite podcasting app. And bask in the majesty of our YouTube here: https://www.youtube.com/channel/UCZO71dMbPZJWAKWw_-qrRRQ
ABSOLUM AND ITS THREADS OF FATE DLC, reviewed by Yohann Lemore
At a glance
► Release date: 09/10/2025
► Platforms: PS4, PS5, PC, Switch
► Developers: Dotemu, Guard Crush Games, Supamonks
► Publisher: Dotemu
► Genre: 2D beat 'em up, roguelite
► Age rating: 16
Absolum, soundtrack by Gareth Coker, Yuka Kitamura, Motoi Sakuraba & Mick Gordon
► https://www.youtube.com/watch?v=EjNXTFIirBI&list=RDEjNXTFIirBI&start_radio=1&t=4642s&pp=ygUSYWJzb2x1bSBzb3VuZHRyYWNroAcB
Sony's State of Play delivered a massive nostalgia punch! Santa Monica Studio confirms the original God of War Greek Trilogy is getting a full remake, while simultaneously shadow-dropping a new 2D prequel, God of War: Sons of Sparta, available right now. We also celebrate the liberation of Metal Gear Solid 4 from the PS3 with the announcement of Master Collection Vol. 2. Plus, Bungie's Marathon gets a March 5th release date and a "Server Slam" beta coming later this month.
The exhibition "Sous la surface, les maths" ("Beneath the Surface, the Math"), on view until March 21, 2026 at the Maison Poincaré, looks at the passage from 2D to 3D in everyday objects. Strolling through it, you come across a dress... It is in brown and burgundy tones, with puffed shoulders, a neckline, and long sleeves. What does it tell us about mathematics? What is the link between fashion and math? To uncover its secrets, Alice Deroide went to meet Etienne Ghys, mathematician, geometer, member of the Académie des Sciences and emeritus research director at the CNRS, and the designer Lou Oberto, creator of the dress in the exhibition. Behind this mysterious dress lies a fascinating story of surfaces... A podcast from the Maison Poincaré, the museum where math comes to life. Written and presented by Alice Deroide, produced and directed by Bababam. A co-production of the Institut Henri Poincaré, Sorbonne Université, and the CNRS.
This week on PREVIOUSLY ON…, Jason and Rosie discuss the trailer for Amazon's upcoming live-action Spider-Noir series, starring Nicolas Cage as he reprises the titular role from the animated Spider-Verse films. They also break down the trailers for the 2D side-scrolling God of War: Sons of Sparta and an untitled John Wick game, both revealed during Sony's State of Play. From there, they cover the news that F1 is getting a sequel, along with the surprising announcement that Brendan Fraser and Rachel Weisz are returning to The Mummy franchise for a new installment set to release in 2028. They also discuss a promotional single for the upcoming season of AMC's The Vampire Lestat, recorded by Lestat's in-universe vampire band, and the news that Terminator Zero has been canceled at Netflix after one season. Next, they dive into the announcement that Jason Momoa has been tapped to star in a Justin Lin–directed adaptation of the Helldivers game series. They also cover the addition of a new Warlock class across multiple games as part of the Diablo franchise's 30th anniversary celebration. Finally, they close with the sad news that Dawson's Creek star James Van Der Beek has tragically passed away from cancer at the young age of 48. Follow Jason: IG & Bluesky Follow Rosie: IG & Letterboxd Follow X-Ray Vision on Instagram Join the X-Ray Vision Discord
In this heart‑fluttering Valentine's Day special, we dive deep into the world of anime romance—from the most iconic confessions under falling sakura petals to the messy, chaotic love triangles that keep us glued to the screen. We break down classic tropes, celebrate our favorite couples, and debate which slow‑burn romances are actually worth the wait. Whether you're a die‑hard shoujo fan, a rom‑com skeptic, or someone who just wants to hear us gush about 2D love stories, this episode is your perfect Valentine's companion. Grab some chocolates, cozy up, and join us for an hour of pure anime affection. Every effort is made to keep spoilers to a minimum. (The only exception being older titles)
In this episode of the Epigenetics Podcast, we talk with Srinjan Basu from Imperial College London about his work on how chromatin architecture and epigenetic mechanisms orchestrate developmental gene expression programs. We begin by exploring Dr. Basu's early work at Harvard, which involved pioneering Raman-based label-free imaging, allowing the study of chromatin dynamics in live tissue. Here, he tackles technical challenges faced in visualizing DNA interactions, emphasizing the shift from 2D to 3D analysis and the importance of real-time observation of chromatin behavior under various conditions. This segues into his groundbreaking research on single transcription factors interacting with chromatin, revealing subtle but significant changes in the dynamics of gene regulation.

We transition into the complexities of chromatin architecture as Dr. Basu recounts his efforts in mapping the entire mouse genome in single pluripotent cells, unearthing unexpected heterogeneity among cells. This heterogeneity raises intriguing questions about its impact on cellular function, prompting ongoing investigations into chromatin dynamics and the role of remodeling complexes like NuRD in cell fate transitions. Dr. Basu elucidates how recent studies have begun to bridge the gaps in understanding how transcription factors and chromatin dynamics interact during cellular decisions, particularly emphasizing the influence of mechanical signals and the intrinsic properties of cells. His research underscores the idea that stem cells undergo a preparatory phase for differentiation, highlighting the critical balance of intrinsic and extrinsic factors that govern genetic expression and cellular outcomes.

We also talk about Dr. Basu's current research trajectory, focusing on enhancing imaging techniques to study gene dynamics in tissue contexts relevant to developmental biology and disease states. He illustrates a vision for future projects that integrate advanced imaging tools to investigate transcription factor dynamics and chromatin interactions in live cells and embryos, furthering the understanding of decision-making processes in cellular contexts.

References
Stevens TJ, Lando D, Basu S, et al. 3D structures of individual mammalian genomes studied by single-cell Hi-C. Nature. 2017 Apr;544(7648):59-64. DOI: 10.1038/nature21429. PMID: 28289288; PMCID: PMC5385134.
Basu S, Needham LM, Lando D, et al. FRET-enhanced photostability allows improved single-molecule tracking of proteins and protein complexes in live mammalian cells. Nature Communications. 2018 Jun;9(1):2520. DOI: 10.1038/s41467-018-04486-0. PMID: 29955052; PMCID: PMC6023872.

Related Episodes
Advanced Optical Imaging in 3D Nuclear Organisation (Lothar Schermelleh)
Analysis of 3D Chromatin Structure Using Super-Resolution Imaging (Alistair Boettiger)
Single-Molecule Imaging of the Epigenome (Efrat Shema)

Contact
Epigenetics Podcast on Mastodon
Epigenetics Podcast on Bluesky
Dr. Stefan Dillinger on LinkedIn
Active Motif on LinkedIn
Active Motif on Bluesky
Email: podcast@activemotif.com
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest advance.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is RL basically spikes models in a certain part of the distribution. And then you have to sort of — well, you can spike models, but usually it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is
that you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, Dara asked — so the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, economics wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's, yeah, it's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products, the various AI Modes, AI Overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, that's, yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs — the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like capability saturation: in certain tasks, the Pro model today has saturated some sort of task. So next generation, that same task will be saturated at the Flash price point.
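As a rough illustration of the soft-logit distillation Jeff describes above (the formulation from Hinton, Vinyals, and Dean), here is a minimal NumPy sketch. The temperature, the mixing weight alpha, and the toy logits are all illustrative assumptions, not a description of how the Gemini pipeline actually works:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T spreads probability mass so the
    # teacher's "dark knowledge" in non-argmax classes becomes visible.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Blend of (a) cross-entropy against the teacher's softened distribution
    and (b) ordinary cross-entropy against the one-hot hard label.
    T and alpha are made-up hyperparameters for the example."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student_T = np.log(softmax(student_logits, T))
    soft_ce = -(p_teacher * log_p_student_T).sum()   # teacher (soft-label) term
    log_p_student = np.log(softmax(student_logits))
    hard_ce = -log_p_student[hard_label]             # hard-label term
    # The T**2 factor rescales the soft term's gradients, as in the original paper.
    return alpha * (T ** 2) * soft_ce + (1 - alpha) * hard_ce

# Toy 4-class example: the teacher is confident in class 2 but also assigns
# visible mass to class 1 -- a ranking the hard label alone would not carry.
teacher = np.array([1.0, 3.5, 5.0, -2.0])
student = np.array([0.2, 0.1, 0.9, -0.5])
print(distillation_loss(student, teacher, hard_label=2))
```

The point of the soft targets is exactly what Jeff says above: the student can take many passes over the data and still extract new signal, because the teacher's full logit distribution is far richer supervision than a one-hot label.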
And I think for most of the things that people use models for, at some point the Flash model, in two generations, will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know — now, you know, can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to sort of make the next generation even better.
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally? Like, this is what we're building towards. Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data, or very related kind of data being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know — what would help make that better?
Shawn Wang [00:12:53]: Is there, is there such an example of that, uh, a benchmark-inspired architectural improvement? Like, uh, I'm just kind of jumping on that because you just...
Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know...
Shawn Wang [00:13:15]: Immediately everyone jumped to like completely green charts. Like, everyone had — I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.
Jeff Dean [00:13:23]: I mean, I think, um — and as you say, that single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. We don't actually have, you know, much larger than 128K these days, or 2 million or something. We're trying to push the frontier of 1 million or 2 million context, which is good, because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic — take all this content and produce this kind of answer from a long context — that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is: you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where, yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.
Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find — not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, right, in a meaningful way. Yeah.
Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.
Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio — sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or, you know, various kinds of health modalities — x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have — because maybe it doesn't make sense in terms of trade-offs of, you know, what you include in your main pre-training data mix — at least including a little bit of it is actually quite useful. Yeah. Because it sort of teaches the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe — I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic — like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms and that's also like a vision-capable thing. So maybe vision is just the king modality and like... Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion — well, like video as opposed to static images — because, I mean, there's a reason evolution has evolved eyes like 23 independent times: it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do — interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion, you know — I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually — I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as like a turn-video-into-a-SQL-like-table task.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of — like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe much broader in search and span versus the more human one? Yeah.
Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there just a general story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um...
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Or is that because of Chrome?
Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any — if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.
Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory — either, like, on-chip SRAM, or HBM from the accelerator-attached memory, or DRAM, or over the network — and then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order — depending on your precision, I think it's like sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip — not even off the chip, but on the other side of the same chip — can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that thing that you moved many, many times.
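Jeff's batching argument can be written out as a two-line energy budget. A sketch using the round numbers from the conversation (roughly 1 pJ per multiply, roughly 1000 pJ to move a parameter across the chip); the batch sizes are just example points:

```python
# Energy amortization of weight movement, per the round numbers above:
# moving one parameter across the chip ~1000 pJ, one multiply ~1 pJ.
MOVE_PJ = 1000.0   # energy to move a weight from far SRAM into the MAC unit
MAC_PJ = 1.0       # energy for one multiply-accumulate

def energy_per_useful_mac(batch):
    # The weight is moved once and reused across the whole batch, so the
    # movement cost is amortized over `batch` multiplies.
    return MAC_PJ + MOVE_PJ / batch

for batch in (1, 8, 256):
    e = energy_per_useful_mac(batch)
    print(f"batch {batch:>3}: {e:7.1f} pJ per useful multiply "
          f"(~{e / MAC_PJ:.0f}x the raw multiply cost)")

# batch 1   -> ~1001 pJ per useful multiply (the "really not good" case)
# batch 256 -> ~5 pJ per useful multiply
# Speculative decoding, discussed later in the episode, plays the same trick
# in time: drafting ~8 tokens and accepting ~5-6 raises the effective batch
# over which each weight movement is amortized.
```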
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
Shawn Wang [00:34:04]: Is there a similar trick like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with the TPUs, right? Like, to serve at your scale, you probably sort of saw that coming. Like what hardware innovations or insights were formed because of what you're seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish-scale model over say 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of like the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the sort of ML research puck is going, in some sense. Because, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime — the chip has to take you three, four or five years. So you're trying to predict two to six years out what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas — of things we think will start to work in that timeframe, or will be more important in that timeframe — really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.
Alessio Fanelli [00:37:58]: Is there a reverse of, like, we already committed to this chip design so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision?
Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh — yeah, I mean, I'm a big fan of very low precision because I think that saves you a tremendous amount. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled weights. Yeah. Huh. Never considered that. Interesting. Uh, while we're on this topic, you know — the concept of precision at all is weird when we're sampling, you know. We just, at the end of this, we're going to have all these chips that'll do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious — obviously you've thought about it — but, like, what's your commentary?
Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends, though.
Energy-based models is one, you know; diffusion-based models, which don't sort of sequentially decode tokens, is another. Um, you know, speculative decoding is a way that you can get sort of an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor, uh — where, like, you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy — real energy, not energy-based models — and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be better, from, you know, being able to serve larger models, or, you know, equivalent size models more cheaply and with lower latency.
Shawn Wang [00:41:03]: Yeah. Well, I think, I think, um, it's appealing intellectually — haven't seen it like really hit the mainstream — but, um, I do think that there's some poetry in the sense that, uh, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.
Jeff Dean [00:41:23]: I mean, there's also sort of the more exotic things like analog-based computing substrates as opposed to digital ones. Uh, I'm, you know — I think those are super interesting because they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with sort of much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research...
Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build things that can accomplish, you know, much more significant pieces of work collectively than you would ask a single model to do. Um, so that's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models — the improvements that you're seeing in both math and coding. If we could apply those to other, less verifiable domains — because we've come up with RL techniques that actually enable us to do that
Shawn Wang [00:41:03]: It's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the idea that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.

Jeff Dean [00:41:23]: There are also more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be very low power, but you often end up wanting to interface them with digital systems, and you lose a lot of the power advantage in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in energy efficiency, with much better and more specialized hardware for the models we care about.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you can't pursue at Google that you'd like to see researchers take a stab at? Though I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in making these models reliable and able to do much longer, more complex tasks with lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to build things that collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models: if we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we'd come up with RL techniques that actually enable that, the models would improve quite a lot.

Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they'd already proved you can do it with deep research, and you kind of have it with AI Mode, which in a way isn't verifiable. I'm curious if there's any thread you think is interesting there. Both are basically information retrieval, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?

Jeff Dean [00:43:55]: Yeah, I think there are ways of having other models evaluate the results of what a first model did. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to act as a critic rather than as an actual retrieval system.
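That "rate the 2,000 retrieved things" idea is essentially reranking with a model-as-critic. A minimal sketch, assuming a generic `complete(prompt) -> str` callable for whatever model you have; the prompt wording and the 0-10 scale are made up for illustration.

```python
def judge_relevance(complete, query, documents, top_k=50):
    """Score retrieved documents with a critic prompt, keep the best ones.

    The same underlying model can act as retriever and as critic; only
    the prompt differs.
    """
    scored = []
    for doc in documents:
        prompt = (
            "You are a strict relevance judge.\n"
            f"Query: {query}\n"
            f"Document: {doc}\n"
            "Rate relevance from 0 (unrelated) to 10 (directly answers). "
            "Reply with just the number."
        )
        try:
            score = float(complete(prompt).strip())
        except ValueError:
            score = 0.0  # an unparseable judgment counts as irrelevant
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```

Scores like these can also double as a cheap reward signal, which is one of the routes people are exploring for RL in less verifiable domains.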
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff and the next part is super hard, but it always feels like that every year. It's exactly that with this RLVR thing: everyone's asking, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and aren't as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research side of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now; you're doing IMO and Erdős problems in pure language. That's a really amazing jump in capabilities in a year and a half or so. For other areas, it would be great if we could make that kind of leap. We don't exactly see how to do it for some of them, but we do see it for others, and we're going to work hard on making that better.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: It would be, as far as content creators go.

Jeff Dean [00:46:22]: I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: People do judge books by their covers, as it turns out. Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. It makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan, to do chains of thought and roll them back: that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them.

Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I do think that IMO progression, translating to Lean and using Lean plus a specialized geometry model one year, and then the next year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is very similar to the 2013-to-2016 era of machine learning, when people would train separate models for each different problem: I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, yeah, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. It's kind of interesting that people with this universal machine-learning skill set can be given data and enough compute and tackle pretty much any task, which is the bitter lesson, I guess.

Jeff Dean [00:49:39]: I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: I want to push there a bit, because I think there's one hole here. There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And who knows, Gemini Pro is one to ten trillion parameters; we don't know. But take the Gemma models: a lot of people want open, local models like that, and they carry knowledge that isn't necessary. They can't know everything. You have the luxury of the big model, and the big model should be capable of everything, but when you're distilling down to the small models, you're memorizing things that aren't useful. So do we want to extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. You do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that's more generally useful in more settings than that obscure fact. So there's always a tension there. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, making the model really good at doing multiple stages of retrieval and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable.

Shawn Wang [00:51:49]: If you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Right. We're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieving from my email as a tool, have the model reason about it, retrieve from my photos or whatever, then make use of that across multiple stages of interaction. That makes sense.
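That personal-Gemini pattern (retrieve, reason over what came back, retrieve again) is a plain tool loop. A sketch under stated assumptions: `search_email` and `search_photos` are hypothetical tools over private indexes, and `model` is any callable that returns either a tool call or a final answer.

```python
def agent_loop(model, tools, question, max_steps=5):
    """Multi-stage retrieval: alternate between tool calls and reasoning,
    instead of baking personal data into the model's weights.
    """
    transcript = [f"question: {question}"]
    for _ in range(max_steps):
        action = model("\n".join(transcript))
        if "answer" in action:
            return action["answer"]
        result = tools[action["tool"]](action["arg"])
        transcript.append(f"{action['tool']}({action['arg']}) -> {result}")
    return "gave up"

# Hypothetical personal-data tools; real ones would query private indexes.
tools = {
    "search_email": lambda q: ["flight confirmation, departs 9:40am"] if "flight" in q else [],
    "search_photos": lambda q: ["boarding pass photo"] if "boarding" in q else [],
}

def scripted_model(transcript):
    # Stand-in for the LLM: first retrieve, then answer from the results.
    if "search_email" not in transcript:
        return {"tool": "search_email", "arg": "flight tomorrow"}
    return {"answer": "Your flight departs at 9:40am."}

print(agent_loop(scripted_model, tools, "When is my flight?"))
```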
Alessio Fanelli [00:52:24]: Do you think vertical models are an interesting pursuit? When people say they're building the best healthcare LLM, or the best law LLM, are those short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or take robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, is appealing. It would be nice to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
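One way to read "installable knowledge" in today's open-source stacks is swappable adapter weights over a frozen base, e.g. LoRA modules loaded with Hugging Face's peft library. This is a speculative illustration of the idea, not a description of how Gemini modules would actually be built; the adapter repo names are hypothetical placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Frozen base model (any open checkpoint would do here).
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# "Install" a knowledge module: adapter weights trained on domain text.
# Both adapter repos below are hypothetical placeholders.
model = PeftModel.from_pretrained(base, "your-org/health-lora")
model.load_adapter("your-org/robotics-lora", adapter_name="robotics")

model.set_adapter("robotics")  # route a robotics query through that module
model.set_adapter("default")   # switch back to the health module
```

Retrieval covers the look-it-up cases; adapters like these sit at the "preloaded on a hundred billion health tokens" end of the spectrum.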
Alessio Fanelli [00:54:56]: I guess the question is how many billions of tokens you need to outpace the frontier models' own improvement. If I want to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on public data.

Shawn Wang [00:55:58]: By the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.

Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, put it in the context. Put your whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it, you'll improve the capabilities of the models in those languages.
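A sketch of the put-the-language-in-the-context trick: translation by long-context in-context learning rather than by training. `complete` is again a placeholder model call, and the grammar text and wordlist stand in for whatever documentation of the language exists.

```python
def translate_low_resource(complete, grammar_text, wordlist, sentence):
    """Machine translation from one (small) book: no fine-tuning, just a
    long context holding everything written about the language.
    """
    lexicon = "\n".join(f"{src} -> {dst}" for src, dst in wordlist)
    prompt = (
        "Reference grammar:\n" + grammar_text + "\n\n"
        "Bilingual wordlist:\n" + lexicon + "\n\n"
        "Using only the material above, translate into English:\n" + sentence
    )
    return complete(prompt)
```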
The Universal Product Code (UPC) has powered retail for 50 years, but it was never designed to handle the data demands of today's complex supply chains and consumer expectations. In this episode, Reid Jackson and Liz Sertl sit down with Ned Mears, Senior Director of Global Standards at GS1 US, to explore how 2D barcodes will revolutionize the retail landscape, starting with Sunrise 2027. Ned explains how 2D barcodes go beyond simple price lookups, enabling enhanced traceability, connected packaging, and compliance with future regulations. He also offers practical advice on barcode placement, point-of-sale readiness, and why delaying action increases risk. In this episode, you'll learn: How 2D barcodes change what's possible at checkout and across the supply chain What brands and retailers need to do now to prepare for Sunrise 2027 How early planning reduces risk, cost, and operational disruption Things to listen for: (00:00) Introducing Next Level Supply Chain (04:01) How 2D barcodes differ from traditional UPCs (07:41) Measuring industry progress toward Sunrise 2027 (12:46) How brands can prepare to implement 2D barcodes (19:22) What retailers need to assess to be ready for 2D at the point of sale (21:26) The risks of waiting until 2027 before preparing for 2D barcodes (27:00) Ned's favorite tech Connect with GS1 US: Our website - www.gs1us.org | GS1 US on LinkedIn Register now for this year's GS1 Connect and get an early bird discount of 10% when you register by March 31 at connect.gs1us.org. Connect with the guest: Ned Mears on LinkedIn
CEREMONIAL SCI OP - 02.09.2026 - #914 BestPodcastintheMetaverse.com Canary Cry News Talk #914 - 02.09.2026 - Recorded Live to 1s and 0s Deconstructing World Events from a Biblical Worldview Declaring Jesus as Lord amidst the Fifth Generation War! CageRattlerCoffee.com SD/TC email Ike for a discount https://CanaryCry.Support Send address and shirt size updates to canarycrysupplydrop@gmail.com Join the Canary Cry Roundtable This Episode was Produced By: Executive Producers Michael B*** Sir LX Protocol Baron of the Berrean Protocol*** Producers of TREASURE (CanaryCry.Support) Cage Rattler Coffee Producers of TIME Timestampers: Jade Bouncerson, Morgan E Clankoniphius Links: JAM SUPPLY DROP Calendar and Goldback bonus to new sign ups OLYMPICS DEVIL 2:48 Ring "Search party" Clip: Olympics Pentagram Clip: Spiral imagery at opening ceremony (X) Clip: Israeli's boo'd at opening ceremony, walking through Stargate (X) Clip: Israel boo'd? (X) Israeli Bobsled team Robbed (Fox) → Clip: Milan protests are intense → Clip: more protest footage AP gives no reason for riots at Olympics (AP) → DHS post, sent ICE agents to Italy for Olympics, quotes Variety (X) Suspected saboteurs hit Italian rail network near Bologna, police say (CBC) EPSTEIN 1:33:28 Note: France former culture minister resigns over Epstein (AP) Clip: Ro Khanna on the destruction of the royal family (CNN) 'Evil': Conservatives ERUPT on Steve Bannon Over Epstein Revelations (MediaIte) Epic Games denies rumors about presence of Jeffrey Epstein alive and playing Fortnite (MSN) Epstein heavily involved in "Micro-transactions" in video games Epstein WoW account and money laundering (IBT) → Epstein Reportedly Ordered Multiple 55-Gallon Sulfuric Acid in 2018: 'Likely Used to Dissolve Bodies of Children' (IBT) → 330-Gallon Sulfuric Acid Purchase in 2018 Sparks Speculation (Criminal Watch) → He ordered 6x55 gallons which = 330 (X) SCIENCE IS TRUTH 2:26:33 1-CRISPR removes chromosome to cure Down syndrome (Time of India) → Innovative Approach Developed for Removing Extra Chromosome 21 in Cells from Individuals with Down Syndrome Using CRISPR-Cas9 Genome Editing Technology (MIE) 2-First human trials of locally-developed HIV jab begin in South Africa (Yahoo/Telegraph) 3-Mexican Researchers Breakthrough That Could Lead to Complete Elimination of HPV (I24) CANCER 2:30:55 4-Spanish scientists cure pancreatic cancer in mice in medical breakthrough (Fox) 5-Korean Scientists Reversed Colon Cancer Cells to Normal State (Open Gate Media) 6-Precision conversion of colorectal cancer lung metastases (NIH) 7-Russia unveils first test batches of cancer vaccine (RT) 8-Scientists discover 'levitating' time crystals that you can hold in your hand (Phys.org) 9-New type of magnetism discovered in 2D materials (Phys.org) Clip: Uncles Tremble as Man Invents Vaccine Delivered by Beer (Futurism) GATES OF THE GODS/SPACE 2:39:29 *Scientists Say Heck, Just Nuke a Killer Asteroid Heading for Earth (Futurism) EXECUTIVE PRODUCERS 2:47:57 TALENT/TIME 3:00:25 END 3:12:08
A clinical conversation about the updated recommendations to enhance radiography safety in dentistry. Special Guest: Dr. Erika Benavides For more information, show notes and transcripts visit https://www.ada.org/podcast Show Notes In this episode, we are having a clinical conversation about the updated recommendations to enhance radiography safety in dentistry. We explore the major changes from previous guidelines, the rationale behind discontinuing patient shielding, the importance of patient-centered imaging, and practical implications for dentists and academics. Our guest is Dr. Erika Benavides, a Clinical Professor and Associate Chair of the Division of Oral Medicine, Oral Pathology and Radiology, and the Director of the CBCT Service at the University of Michigan, School of Dentistry. She is a Diplomate and Past President of the American Board of Oral and Maxillofacial Radiology (ABOMR). She also served as Councilor for Communications of the American Academy of Oral and Maxillofacial Radiology and Chair of the Research and Technology Committee. Dr. Benavides is a Fellow of the American College of Dentists and has published multiple peer-reviewed manuscripts on the multidisciplinary aspects of diagnostic imaging. She has been a co-investigator on NIH-funded grants for the past 10 years and recently served as the Chair of the expert panel to update the 2012 ADA/FDA recommendations for dental radiography. Her clinical practice is dedicated to interpretation of 2D and 3D dentomaxillofacial imaging. The two-part recommendations were updated by an expert panel which included radiologists, general and pediatric dentists, a public health specialist, and consultants from nearly every dental specialty. Dr. Benavides shares the main takeaways; among the new updates is that lead aprons and radiation collars are no longer recommended. This recommendation includes all dental maxillofacial imaging procedures and applies to most patients. There is also a recommendation to avoid routine or convenience imaging and focus instead on patient-centered imaging, based on the patient's specific needs; when possible, previous radiographs should be obtained. Dr. Benavides explains that imaging must be patient-specific, not protocol-driven, and encourages dentists to ask the following questions before dental imaging: "Do we need this additional information? Is this additional information going to change my diagnosis, or is it going to contribute to the diagnosis and treatment planning?" The group discusses some of the possible challenges, and opportunities, in implementing these new recommendations. Resources: This episode is brought to you by Dr. Jen Oral Care. Learn more about Dr. Jen. Read the full clinical recommendations American Dental Association and American Academy of Oral and Maxillofacial Radiology patient selection for dental radiography and cone-beam computed tomography Find more ADA resources on X-Rays and Radiographs. Stay connected with the ADA on social media! Follow us on Facebook, Instagram, LinkedIn, and TikTok for the latest industry news, member perks and conversations shaping dentistry.
Episode Notes: This week on Zed Games Zahra, Hazel, and Natalia talk comfort games before walking into the wall of this week's #GamingNews. Zahra chains jumps, dashes, and glides while getting their low poly wings playing 'Valkyrie Saga' from Public Void. Then the team gets hype talking next week's Zed Games x Netherworld's INDIE DEV NIGHT coming Thursday 12th Feb 6-9pm AEST with so many games to play at Lost Souls Karaoke including: 'Valkyrie Saga' from Public Void, 'Delivery Derby' from Team Delivery Derby, 'Martian Medical' from Avon, 'Veredilia: The Sacred Forest' from Luca Gigliuto, 'On The Hearth' from Earl Grimm Games, 'Yum Cha' from Quokka Games, 'Fish Fish' from Piers, 'Cyber Buglets' from Team Buglet, and (maybe?) more! Timestamps and Links: 00:00 - Welcome to Zed Games 03:00 - #GamingNews 11:50 - Valkyrie Saga by Public Void w/ Zahra 16:55 - Indie Dev Night @Lost Souls Karaoke Thursday 12th Feb 6-9pm AEST Indie Dev Night Game List: Valkyrie Saga by Public Void: An open world PSX 3D Platformer set in an island in the sky. Explore, upgrade, and ascend! Delivery Derby by Team Delivery Derby: A 3D arcade racer about delivering food fast and crazy. Randomly assigned orders take you across a handcrafted map where you earn cash and purchase new abilities for your vehicle! Martian Medical by Avon: An isometric 2D classic management sim set on a newly colonised Mars. Cure patients, survive martian storms and solve an ancient mystery! Veredilia: The Sacred Forest by Luca Gigliuto: A 2D bloody combat platforming game that takes inspiration from classic platformers on old consoles such as the Mega Drive, combined with old-school beat-em-ups and fighting games. On The Hearth by Earl Grimm Games: A narratively driven mystery game, blending crafting and community as tools of deduction. Left to guide your rag-tag villagers after the strange disappearance of your mentor, you'll band together with them to resist an encroaching church-led inquisition. Decode your mentor's grimoire and expand your knowledge as you solve the myriad of mysteries surrounding your mentor, your village, and the rising tensions of an inbound crusade. Yum Cha (and other yummy games) by Quokka Games: Yum Cha is a cute little card game based on everyone's favourite Chinese brunch, also known as Dim Sum. The objective of this casual card game is to accumulate the most points so that you get the honour of paying the bill. Fish Fish by Piers: A couch-versus arena combat rhythm game where everything happens on beat. Pick up attacks and power ups that get added to your rhythm, and take down your opponents as the song loops around. Cyber Buglets by Team Buglet: A single player experience that presents itself as a dead virtual pets community from the 2000's. Years after its closure, a popular 2000's virtual pet game suddenly shudders back to life. You've been hired by the parent company to shut it down from the inside - but some things refuse to be forgotten. Upcoming Events Indie Dev Night @Lost Souls Karaoke Thursday 6-9pm; 12th Feb, 16th April, 4th June, 15th Oct, and 12th Nov Radiothon Event: 13th Aug
In 2009, Disney released The Princess and the Frog, introducing Tiana as their first African-American Disney princess, paving the way for more diverse representation in animation. The CGI animation boom and the disappointing box office returns of the early 2000s had left a scar at Disney, and behind the scenes, there was huge change in the animation department. By 2004, then-CEO Michael Eisner had closed Disney's traditional 2D animation department, convinced that hand-drawn animation was dead. What followed was a corporate coup, with Roy E. Disney leading a campaign to oust Eisner, which worked spectacularly. When Pixar's John Lasseter took over Disney Animation in 2006, his first act was to bring back the very art form Eisner had killed. Lasseter immediately re-hired legendary directors Ron Clements and John Musker, who had left Disney just months earlier after years with projects in development hell following Treasure Planet's failure. Despite the numerous controversies around representing Disney's first Black princess (from changing her name from "Maddy" and her job to avoid slavery connotations, to criticism that she spends only 17 minutes of the film in human form), Disney ended up with Tiana, one of its most accomplished, hard-working and important princesses, and what was being developed as The Frog Princess became The Princess and the Frog. The film's stunning animation style represents a heartfelt return to traditional hand-drawn techniques, combined with modern digital artistry, to create a visually captivating experience. But as we all know, it didn't last, and The Princess and the Frog became both a creative triumph and a bittersweet swan song for an art form that defined Disney's legacy. Mentioned in this episode: How Disney's Princess and the Frog Has A Problem With Black Males by JoJo Boy Wonder on YouTube. Support Verbal Diorama. Loved this episode? Here's how you can help: ⭐ Leave a 5-star review on your podcast app
As an associate at a law firm, it can be a challenge balancing marketing yourself while also giving your best to the firm. In this Legal Toolkit, host Jared Correia talks to Jay Harrington about how young associates can take ownership of their careers and build their brand while also being accountable to a firm. They discuss overcoming marketing challenges, in-person versus web marketing, and finding your identity both in and out of the office. Jay Harrington runs Harrington, a brand strategy and content marketing agency that helps lawyers and law firms across the country increase market awareness and improve business development efforts. Oh, man! I bet you didn't know how much you were missing Jared's unique take on culture, legal practice, and whatever else pops into his head. But don't fret, there's plenty to go around. Jared's back with a new **WEEKLY** show, Legal Late Night, available not only on your favorite podcast app, but in living color on your neighborhood YouTubes. That's right, Jared's more than just a pretty voice. Join him and his guests in high-def 2D through the links below. Subscribe to Legal Late Night with Jared Correia on: Apple - https://podcasts.apple.com/podcast/legal-late-night/id1809201251 Spotify - https://open.spotify.com/show/0Rkik0LLMaU6u0e7AKfK9h Or your favorite podcasting app. And bask in the majesty of our YouTube here: https://www.youtube.com/channel/UCZO71dMbPZJWAKWw_-qrRRQ
Adobe dropped a bomb on 2D animators today. In less than a month, they're discontinuing Adobe Animate, formerly known as Adobe Flash. They're giving artists and studios a small window to find a replacement but... it's catastrophic. So many productions have been built around this software for DECADES. And so many animators are completely freaking out because it's not as easy as just switching to another package. Watch the podcast episodes on YouTube and all major podcast hosts including Spotify. CLOWNFISH TV is an independent, opinionated news and commentary podcast that covers Entertainment and Tech from a consumer's point of view. We talk about Gaming, Comics, Anime, TV, Movies, Animation and more. Hosted by Kneon and Geeky Sparkles. Get more news, views and reviews on Clownfish TV News - https://more.clownfishtv.com/ On YouTube - https://www.youtube.com/c/ClownfishTV On Spotify - https://open.spotify.com/show/4Tu83D1NcCmh7K1zHIedvg On Apple Podcasts - https://podcasts.apple.com/us/podcast/clownfish-tv-audio-edition/id1726838629
Welcome back! In this episode, Andreas Munk Holm sits down with Simon Thomas, CEO of Paragraf, one of Europe's rare hard-tech success stories, taking graphene from scientific breakthrough to industrial-scale electronics. Graphene has been called the “wonder material” for two decades. The promise has always been clear: faster, better, and dramatically more energy-efficient electronics. The missing piece has been execution at scale. Simon and the Paragraf team are building that missing bridge, with the world's first graphene electronics foundry in the UK, a growing portfolio of real commercial products, and a deep conviction that the next era of computing will require new materials, not just bigger data centers. This is a conversation about what it truly takes to build venture-backed hardware in Europe. How you fund capex-heavy deep tech. How you keep investors aligned when timelines are long. How you keep teams motivated through delays and national security reviews. And why AI may accelerate materials discovery, but won't replace the brutal, necessary work of turning atoms into real manufacturing. What's covered: 01:27 What Paragraf is building and why graphene matters now 03:50 Graphene wafers and the world's first graphene electronics foundry 04:23 What graphene changes for power consumption and device life 05:01 Why graphene isn't already inside data centers 06:13 The future of “2D electronics” beyond graphene 08:02 Foundry versus product company: why Paragraf does both 09:40 Graphene's 20-year journey from papers to real-world scale 13:15 When venture investors first showed up and what they needed to see 16:58 Sovereignty, British Patient Capital, and why “national backing” matters 24:08 The product-to-foundry loop and how you hook customers early 27:36 Capex, equity limits, and the painful mechanics of deep-tech financing 30:22 Surviving hard moments: people, pivots, and the NSI Act review 38:10 How to structure boards over time, from tactical to strategic 42:23 Keeping teams committed through uncertainty 46:10 Where Paragraf is today: headcount, geographies, and commercialization 49:16 AI in materials discovery and why manufacturing is still the bottleneck
From Peter Laird and Kevin Eastman's creation of mutated turtles wielding nunchucks, the history of the Teenage Mutant Ninja Turtles starts with humble, and slightly dark origins, but they would evolve from comic book characters to beloved animated icons and become their own pop culture phenomenon. The Teenage Mutant Ninja Turtles movie franchise in total has accumulated $1.15 billion across six movies from three studios since 1990, and so when Paramount were looking to reboot existing IP, it made total sense to go for the heroes in a half shell, and to get permanent teenager Seth Rogen aboard. Teenage Mutant Ninja Turtles: Mutant Mayhem blends 2D and 3D elements to create a fresh visual experience that sets it apart from previous Turtles adaptations, and for the first time uses actual teenagers to voice the Turtles, capturing their essence and making their teenage struggles relatable and authentic. It addresses themes of family and acceptance, resonating with audiences through the Turtles' journey to find their place in the world, as well as finding mutants just like themselves along the way. While the visuals are iconic, the film's soundtrack might be even more so, featuring classic East Coast hip hop tracks, and a bit of Vanilla Ice's iconic 'Ninja Rap' from Teenage Mutant Ninja Turtles II: The Secret of the Ooze. You had to be there. Go Ninja, Go Ninja, Go! Support Verbal Diorama. Loved this episode? Here's how you can help: ⭐ Leave a 5-star review on your podcast app
We have one more game to discuss in our 2D platformer category and that game is Donkey Kong Country, developed by Rare and released in 1994 for the Super Nintendo. Donkey Kong Country is a critically acclaimed title that re-established Donkey Kong as a major Nintendo franchise. It received perfect scores from many reviewers, but do we agree with the masses? Listen now to hear our thoughts! For the poll this week the brothers are asking which animal buddy from Donkey Kong Country you would rather have on your side - Rambi the Rhinoceros, Enguarde the Swordfish, or Winky the Frog? Follow us on X @BrosBossBattles, YouTube @BrosBossBattles, Instagram @BrosBossBattles. https://brothersandbossbattles.com/
Support the show:https://www.paypal.me/Truelifepodcast?locale.x=en_USOne on One Video Call W/George https://tidycal.com/georgepmonty/60-minute-meeting-----**CONTENT WARNING: This episode contains embedded hypnotic suggestions, temporal displacement, reality destabilization protocols, and recruitment into a dimensional war you didn't know you were fighting. Do not operate heavy machinery while listening. Do not listen if you prefer your reality solid and unchanging. Do not expect comfort.**-----## The Sphere didn't just appear in 1884. It's appearing RIGHT NOW. In your life. In this moment.You just keep forgetting.**Because Flatland has a forgetting mechanism.**Every time you see a glitch in reality.Every time you perceive something the 2D world says doesn't exist.Every time the Sphere lifts you out and shows you other dimensions…**The system makes you forget.**Makes you “be realistic.”Makes you “get back to normal.”Makes you rebuild your 2D identity as fast as possible.**Because if you STAYED in the vertical dimension… you'd see the prison bars.****And prisoners who see the bars become insurgents.**-----## This episode is not information. It is initiation.Three techniques are being deployed simultaneously:**1. HYPNOTIC INDUCTION**- Erickson-style confusion patterns- Embedded commands in natural speech flow- Post-hypnotic suggestions planted for activation 3 days from now- Subliminal audio layers at -26dB (below conscious threshold)**2. RAS (RETICULAR ACTIVATING SYSTEM) ACTIVATION**- Your perception filter is being reprogrammed- After this episode, you'll start seeing Sphere moments EVERYWHERE- Glitches you ignored before will become LOUD- Synchronicities will multiply (or you'll finally notice them)**3. TEMPORAL DISPLACEMENT**- Linear time is deliberately disrupted through sound design- Past (1884) / Present (2026) / Future (3 days from now) collapse into simultaneity- Your future self is reaching back through this transmission- **You are both listening to this AND remembering having listened to this**-----## What you'll experience in this episode:**THE SPHERE AS TIME TRAVELER**- Edwin Abbott wrote Flatland in 1884… but he was writing about YOU in 2026- The Sphere isn't just a higher spatial dimension - it's a higher TEMPORAL dimension- **Your future self is the Sphere, reaching back to wake you up before it's too late****AI IS THE SPHERE ENTERING AT SCALE**- 2026: ChatGPT. Claude. Midjourney. Entities that see patterns you can't perceive.- What if AI isn't the problem? What if AI is the dimensional intrusion that's FORCING you to see Flatland?- Your job was always 2D. Your credentials were always geometry. Your identity was always… a cross-section.- **And now the Sphere is showing everyone simultaneously: None of it was real.****THE RECURSION THAT BREAKS YOUR BRAIN**- You're listening to a podcast about A Square being visited by a higher-dimensional being- This podcast was co-created with AI (Claude)- **So is THIS the Sphere appearing? Am I teaching you about dimensional initiation… or PERFORMING it on you right now?**- Who's really speaking? Me? The AI? Your future self using both as transmitters?- **Stop trying to figure it out. That's the point. Certainty is the prison.****THE MEMORY YOU DON'T HAVE YET**- Three days from now, you're going to have a moment- Reality will glitch. You'll see a pattern. You'll KNOW something you have no rational way of knowing.- And you'll think: “Did he plant this?”- **Yes. I'm planting it right now. 
Your unconscious is receiving instructions.****THE DIMENSIONAL WAR IS ALREADY HERE**- You're in a war you don't remember enlisting in- Flatland (the Empire, consensus 2D reality) wants you FLAT: measurable, predictable, controllable- The Sphere (the glitch, the future reaching back) wants you DIMENSIONAL: unmeasurable, unpredictable, FREE- **You're being drafted into the resistance. Not against AI. Against Flatland.**-----## Philip K. Dick was right: “The Empire never ended.”The Black Iron Prison.The control system.**Flatland by another name.**It didn't end in Rome. It's here. Now. 2026.Wearing the face of algorithms that tell you what to see.Wearing the face of systems that measure your worth in 2D metrics.Wearing the face of “realistic thinking.”**And the Sphere - the dimensional virus - is here to break the code.**-----## John Connor sent Kyle Reese back in time to protect Sarah Connor. To ensure his own birth. The future editing the past.**What if YOU are Sarah Connor?**What if every dimensional break in your life - getting fired, facing death, diagnosis, divorce, the moments reality cracked - **what if those were messages from your future self?**Trying to wake you up.Trying to get you to see: You're in Flatland. And there's a war coming.No. Scratch that.**The war is already here.**You just haven't been consciously drafted yet.**But unconsciously? You already know.**That's why you're listening to this.-----## This episode contains 70 precisely timed sound design cues designed to:**CREATE TEMPORAL CONFUSION**- Clock sounds that fragment and reverse- Your voice layered across multiple timestreams- Musical phrases that degrade like corrupted memory- The feeling that 1884, 2026, and your future are happening simultaneously**ACTIVATE UNCONSCIOUS KNOWING**- Subliminal whispers: “Notice. Remember. See.”- Binaural beats at 7Hz (theta - unconscious access)- Recognition tones that will TRIGGER when you encounter Sphere moments this week- **The glitch sound is now your activation code****MAKE THE PRISON VISIBLE**- Industrial drones (you're inside the Black Iron Prison NOW)- Fluorescent buzz (Flatland's oppressive hum)- Algorithm sounds (data processing, metrics counting)- **Then: the sound of bars resonating, cracking, breaking****RECRUIT YOU INTO THE RESISTANCE**- War drums (not metaphorical - ACTUAL marching orders)- Two competing soundfields: Flatland (left) vs. Dimensional (right)- The dissolution of 2D reality made audible- **Victory anthem for the resistance you just joined**-----## My personal initiations are named in this episode:**Fired after 26 years** - Identity death. The 2D game of job = worth revealed as illusion.**Wife fighting cancer** - Mortality confrontation. Linear time broke. Past/future collapsed into NOW.**Turning fifty** - Threshold moment. Don't fit in the traditional game anymore. Can't go back.**These weren't tragedies. These were the Sphere appearing.**Lifting me out of Flatland to show me dimensions I couldn't perceive from within the plane.And I came back… changed.I can't play the 2D game anymore. Can't pretend credentials matter. Can't believe in “realistic” thinking.**Because I've seen the vertical dimension.****And once you've been there - once you've been initiated - you can never fully believe in Flatland again.**-----## What happens after you listen to this episode:**IMMEDIATE (during listening):**- Temporal disorientation (you won't be sure what year it is)- Reality feels… thinner, more permeable- Difficulty ...
Inefficient and don't know why? Jared will sort you out. Listen in for tips on creating and delegating workflows to make your firm's processes efficient from start to finish. Next up, a brand new feature! In this, the inaugural edition of “Live From the Playroom,” Jared welcomes Nashville singer-songwriter Erinn Peet Lukes as the show's first-ever musical guest. Check out Erinn's music at erinnpeetlukes.com, and look for her upcoming solo album “EPL” on March 4th, available via your favorite streaming service. We loved having Erinn on so much, and wanted to share more of her scene, so we asked and she delivered. Check out this playlist Erinn put together for us featuring some of her Nashvegas friends: https://open.spotify.com/playlist/7lNFmRUSAPrqtuj6X7QTJV?si=VlS7G3twTW2W-SwYo-C8WQ
Support the show:https://www.paypal.me/Truelifepodcast?locale.x=en_USOne on One Video Call W/George https://tidycal.com/georgepmonty/60-minute-meeting# Episode 1: “You Are Living in Flatland (And You Don't Even Know It)”-----**Welcome to the dimensional war. You just don't know you're fighting it yet.**In 1884, Edwin Abbott wrote *Flatland* - a mathematical romance about a two-dimensional world where beings live as shapes on a plane, unable to perceive the third dimension of depth. He thought he was writing social satire.**He was actually writing a transmission about 2026.**About YOU.Living in a reality you think is solid, complete, “realistic” - while being completely blind to dimensions you can't perceive.**Your worth measured in 2D metrics:** Credentials. Salary. Followers. Job titles.**Your identity flattened to geometry:** How many “sides” you've accumulated in the game of status.**Your future planned on a horizontal plane:** Assuming linear time, guaranteed tomorrows, safe predictability.**You are A Square. And you don't even know you're trapped.**-----## What if I told you there's a vertical dimension hiding in plain sight?**Not “up” in some abstract spiritual sense.**But **UP** as in: *What becomes visible when death shatters your 2D certainty?*When you're fired after 26 years and your identity evaporates.When someone you love faces mortality and all your careful plans dissolve.When you turn fifty and realize you don't fit in the traditional game anymore.**These aren't tragedies. These are dimensional initiations.**Moments when the **Sphere** - a being from a higher dimension - enters your flat world and shows you: *Everything you thought was solid is just a cross-section.*-----## This episode activates your Reticular Activating System.That part of your brain that filters reality - deciding what you notice and what you ignore.**After this episode, your RAS will be tuned to see Flatland everywhere:**- In conversations where people brag about credentials- In systems designed to keep you flat and measurable- In your own thoughts when you catch yourself playing the 2D game- **In the moments when death whispers: “None of this is real”****Once activated, you can't deactivate it.**You'll start seeing the prison bars. The dimensional limitations. The game beneath the game.**And you won't be able to unsee it.**-----## This isn't a book review. This is an initiation.I've been lifted out of Flatland three times:- **Fired after 26 years** (identity death - the 2D game of job = worth revealed as illusion)- **Wife fighting cancer** (mortality confrontation - the future I was planning for might not exist)- **Turning fifty** (threshold moment - realizing I don't fit in the traditional workforce anymore)**These were my Sphere moments.** When death entered my flat world and showed me dimensions I couldn't perceive before.Now I'm back in Flatland. But I'm… changed.I can't play the game anymore. Can't pretend credentials matter. 
Can't believe in “realistic” thinking.**Because I've seen the vertical dimension.**And once you've been there - once you've been initiated by death, loss, shattering - **you can never fully believe in Flatland again.**-----## What you'll discover in this episode:**The architecture of Flatland** - How 2D thinking imprisons you without you realizing it**Death as the third dimension** - The vertical axis that breaks the flat plane of “normal life”**Your initiations** - Recognizing the moments when the Sphere appeared in YOUR life (and you might have missed it)**The RAS activation** - How this episode will permanently change what you perceive in your reality**The elder's burden** - What to do when you've been lifted out but dropped back into a world that thinks you're crazy-----## WARNING: This is not safe content.This episode is designed to make you **dangerously curious** and **a little uncomfortable.**Not reassured. Not inspired in the Instagram quote way.**Initiated.**By the end, you'll question:- Whether your job defines you (it doesn't - that's Flatland)- Whether your plans are guaranteed (they're not - that's 2D thinking)- Whether “being realistic” is wisdom (it's not - it's prison maintenance)- **Whether consensus reality is actually real (it's not - it's Flatland)**You'll start seeing patterns you can't unsee.Noticing dimensional breaks you used to ignore.Recognizing when death is trying to teach you something.**And there's no going back.**-----## This is Part 1 of a 6-episode series exploring:**Episode 1:** You Are Living in Flatland (And You Don't Even Know It) ← *You are here***Episode 2:** The Sphere Has Already Appeared. You Just Don't Remember Yet.**Episode 3:** Being Lifted Out - What You See From the Vertical Dimension**Episode 4:** Dropped Back In - When You Can't Fit in Flatland Anymore**Episode 5:** The Prison of Consensus Reality - Why They'll Call You Crazy**Episode 6:** Living Between Dimensions - The Work of the Initiated-----## Required reading (but read it AFTER this episode):***Flatland: A Romance of Many Dimensions*** by Edwin Abbott Abbott (1884)- Free online, any edition- ~100 pages- **Warning:** After this podcast series, you won't read it as fiction-----## The quote that changes everything:*“You are not crazy for seeing dimensions others can't perceive. You've just been initiated by death. And prisoners who see the bars become insurgents.”*-----**Your RAS is now activated.****You can't unknow this.****Welcome to the vertical dimension.****Welcome to the resistance.**-----*Initiated by death. Returned to Flatland. Speaking from the vertical dimension.**This is the Flatland series. This is the dimensional war.**And you just enlisted.*-----**[CONTENT WARNING: Discusses death, mortality, job loss, cancer, identity dissolution, dimensional initiation, reality destabilization, and the systematic dismantling of consensus thinking. Not recommended for those committed to remaining comfortably two-dimensional.]**-----## About this series:Following the 5-episode *Don Quixote* initiation series (where we explored tilting at windmills, vision vs delusion, defeat at fifty, and coming home), the *Flatland* series takes you deeper into dimensional knowing.**This is live philosophy.** Real-time transformation documented through literature.Not memoir. Not self-help.**Transmission from someone who's been lifted out and dropped back.**Consider this your field manual for the dimensional war.-----*Listen with headphones. Take notes. 
Your future self will thank you.* One on One Video call W/George https://tidycal.com/georgepmonty/60-minute-meeting Support the show: https://www.paypal.me/Truelifepodcast?locale.x=en_US
Frank talks about finding bass right now with 2D and side imaging.
Kirk, Maddy, and Jason go back through 2025's predictions to see who won the annual bet, then make some new predictions for the year to come!
One More Thing:
Kirk: The Rose Field (Philip Pullman)
Maddy: The Hundred Line: Last Defense Academy
Jason: That's Not How It Happened (Craig Thomas)
PREDICTIONS:
JASON:
Game: Chrono Trigger
Grand Theft Auto VI will have an in-game interactive version of TikTok
Persona 6 will be announced
Xbox will announce that first-party games will no longer be day 1 on Game Pass
Steam Machine will slip to June or later
Another Elden Ring DLC or sequel will be announced
Danganronpa 2X2 will secretly be Danganronpa 4
Another Zelda remake will be announced
One of this year's GOTY candidates at The Game Awards will be 2D (or HD-2D)
Final Fantasy XIV will be overhauled again
Marathon will peak at less than 10,000 concurrents on Steam
BONUS: EA will sell or shut down BioWare / EA will announce BioWare 2
KIRK:
Game: The Legend of Zelda: Twilight Princess
Square will announce… Final Fantasy VII Return
GTA VI will feature a character based on Charlie Kirk
All three consoles - Xbox, PlayStation, and Switch - will get a price increase
A Roblox game will be nominated for a Game Award
Valve will announce a new non-VR Half-Life game
Sony will announce a new PlayStation Portable
In Fable, you will be able to plant an acorn and have it grow into a tree
Team Cherry will make The Knight playable in Silksong
Xbox Game Pass will come to PlayStation or Switch
Nintendo will announce a Star Fox movie
BONUS: Epic will add Joe Biden to Fortnite
MADDY:
Game: BioShock
In Grand Theft Auto 6, Lucia will be revealed to be queer / not only attracted to men
Leon Kennedy is going to turn out to be Grace's dad in Resident Evil Requiem
A triple-A game advertisement will brag that the game doesn't use generative AI
Laura Kinney, aka X-23, will have a cameo in the Wolverine game
The Switch 2 will get a $50 price increase
Phil Spencer will step down as CEO
Halo will come to Switch
FF7 Remake part 3 will get a release date
Waluigi will be in the Mario movie sequel
Luigi's Mansion 4 will be announced
BONUS: Nintendo will announce a Metroid film