POPULARITY
Categories
In this episode, Ray tackles Anthropic's standoff with the U.S. Department of War after CEO Dario Amodei refused to grant unrestricted model access, citing concerns over mass surveillance and autonomous weapons. The government responded by banning Anthropic models through administrative orders. Also covered: the top 20 websites of 2026, China's $173,000 warm-blooded companion robot, Fukushima's rapidly evolving radioactive hybrid boars, a Chinese spacecraft emergency involving viewport cracks from space debris, Japan's wooden satellite built with traditional joinery, and human brain cells on a chip that learned to play Doom in just one week.

– Want to start a podcast? It's easy to get started! Sign up at Blubrry
– Thinking of buying a Starlink? Use my link to support the show.
Subscribe to the Newsletter. Email Ray if you want to get in touch! Like and Follow Geek News Central's Facebook Page.
Support my Show Sponsor: Best GoDaddy Promo Codes. Get 1Password.

Full Summary

Cochrane opens the show with Anthropic's confrontation with the U.S. Department of War. CEO Dario Amodei released a public statement refusing unrestricted government access to Anthropic's AI models. Two red lines stood firm: mass domestic surveillance and fully autonomous weapons. Ray explains that these models are predictive by nature, raising serious misidentification risks. The government hit back hard: administrative orders now ban Anthropic models from government use. Despite the backlash, Cochrane expresses support for the company's stance. He points listeners to a CBS interview with the CEO posted roughly nine hours before recording. Additionally, Anthropic released new models including Opus 4.5 and Sonnet 4.6. The company climbed to the number two spot on the App Store, trailing only ChatGPT and surpassing Google Gemini.

Personal Updates

Ray shares that February has been a demanding month. He's juggling a capstone project, two jobs, and finishing his degree.
Meanwhile, he continues working on developments at Blubrry hosting. He apologizes for inconsistent episode production and thanks listeners for their patience.

Top 20 Websites of 2026

A Visual Capitalist chart ranks the most visited websites of 2026. Google holds the top spot, followed by YouTube. Facebook, Instagram, ChatGPT, Reddit, Wikipedia, X, and WhatsApp round out the upper rankings. Notably, DuckDuckGo appears at rank seventeen as a privacy-focused search alternative.

Sponsor: GoDaddy

Economy hosting $6.99/month, WordPress hosting $12.99/month, domains $11.99. Website builder trial available. Use codes at geeknewscentral.com/godaddy to support the show.

Anthropic Retires Claude Opus 3

Cochrane discusses Anthropic's decision to retire Claude Opus 3. In a unique move, the company gave the model a Substack-style blog to reflect on its own existence. Reactions online were mixed, with both supporters and critics engaging in the conversation.

China's $173,000 Warm-Blooded Companion Robot

From ZME Science, Ray covers China's new humanoid robot designed as a warm-blooded companion. Priced at $173,000, it features conventional robotics hardware, sensors, cameras, and autonomous navigation. A built-in heating element maintains body warmth. Cochrane comments humorously on the growing market for companion robots.

Windows XP Green Hill Found and Photographed

From Tom's Hardware, someone tracked down and photographed the actual location of the iconic Windows XP "Green Hill" wallpaper. The Reddit post sparked a wave of nostalgia in the community.

Fukushima's Radioactive Hybrid Boars

From AZ Animals, domestic pigs that escaped after the Fukushima disaster hybridized with wild boars. Their DNA reveals rapid evolutionary changes driven by the altered radioactive landscape. These aggressive hybrids now complicate wildlife management and rewilding efforts in the region.
Shenzhou 20 Spacecraft Emergency

Chinese astronauts aboard Shenzhou 20 discovered cracks in their spacecraft's viewport during what became the nation's first spaceflight emergency. Space debris likely caused the damage. The crew switched to an alternative return capsule. Multiple protective layers kept the situation manageable.

Japan's Wooden Satellite

Japanese teams plan to launch the first wooden satellite. Built with magnolia wood panels assembled using traditional Japanese joinery methods, the biodegradable design aims to reduce aluminum particle pollution from satellites burning up during atmospheric reentry.

Human Brain Cells Play Doom

Building on previous work where living neurons played Pong, an independent developer used Python to train human brain cell clusters on microelectrode arrays to play Doom. The cells learned in roughly one week. Cochrane highlights how open knowledge sharing accelerated the project dramatically. He also raises ethical questions about training sentient brain cells, connecting the topic to evolving views on sentience in crustaceans and other organisms.

The post Anthropic Stands Their Ground, Ethics over Money #1859 appeared first on Geek News Central.
Burke Holland works on GitHub Copilot by day and codes with his AI agents always. In early January, Burke posted about how Opus 4.5 changed everything. We were all still buzzing from the holiday-season 2x usage bump Claude gave us, and Opus 4.5 felt like a genuine step function in capability. Burke and I get into all the details. Opus 4.5 may have started the fire, but GPT-5.3 Codex is certainly living up to the hype.
Join us on the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80
Join Simtheory: https://simtheory.ai
TDIA Discord: https://discord.gg/gTW4RkAJvn
Horse Egg Lifecycle Infographic: https://staging.simtheory.ai/share/file/UZ2KJU
----
So Chris, this week... we're diving into Google's new Nano Banana 2 image model - 50% cheaper and supposedly faster (when the servers aren't melting). We put it through its paces with annotation-based editing, slide generation, and yes, the return of the legendary horse egg experiment.
Plus: Google quietly kills Gemini-3 after just a few months (good riddance?), we discuss why the model was "dead on arrival" for agentic workflows, and break down the real story behind those massive AI layoff announcements from Block and WiseTech. Spoiler: it's probably not actually about AI.
We also get into the current state of the model wars (Opus 4.6 vs Codex 5.3), why smaller models like GLM-5 might be the future for enterprise agentic tasks, and Chris's wife teaching Claude to literally speak to her using Mac's text-to-speech.
The models are getting creative.
---
0:00 - Intro
0:36 - Nano Banana 2: Price, Speed & First Impressions
3:19 - The Compositing Problem & Last Mile Design
5:41 - Annotation-Based Editing (This Changes Everything)
9:52 - Slide Editing & Real-World Use Cases
12:34 - The Horse Egg Experiment Returns
14:30 - Image Degradation & Cost Breakdown
17:47 - Text-to-Image Leaderboard Discussion
20:01 - Why Nano Banana Dominates for Work
22:07 - Codex 5.3 vs Opus 4.6
22:54 - Google Kills Gemini-3 (What Went Wrong?)
26:48 - Google's Agentic Problem
30:08 - The Model Loyalty Cycle
34:22 - Why Opus 4.6 is Still the Best
37:05 - Cost Optimization & Smart Model Routing
43:30 - When Models Get Stuck on the Wrong Path
45:36 - Nicole's AI Learns to Talk Back
46:54 - Can Anyone Build Software Now?
52:26 - Anthropic's Legal/Finance Plugins & Market Panic
57:08 - Block Lays Off 4,000: AI or Excuse?
1:00:05 - The AI Job Apocalypse Isn't Real
Thanks for listening like and sub xoxo
This is a free preview of a paid episode. To hear more, visit www.latent.space

AIE Europe CFP and AIE World's Fair paper submissions for CAIS peer review are due TODAY - do not delay! Last call ever.

We're excited to welcome METR for their first LS Pod, hopefully the first of many. METR are keepers of what is currently the single most infamous chart in AI. But every Latent Space reader should be sophisticated enough to know that the details matter and that hype and hyperbole go hand in hand in AI social media: the millions of impressions that chart got, from people who don't understand or care about the nuances, disclaimers, and error bars, far outreach the 69k views on the corrections by the people who actually made it.

There's a lot of nuance both in making benchmarks (as we discovered with OpenAI on our SWE-Bench Verified podcast) and in extrapolating results from them, especially where exponentials and sigmoids are concerned. METR's Long Horizons work itself has known biases that the authors have responsibly disclosed, but they go far too underappreciated in the pursuit of doomer chart porn.

If you're interested in a short, sharable TED-talk version of this pod, over at AIE CODE we were blessed to feature Joel twice, as a stage talk and in a longer-form small workshop with Q&A.

We also make sure to cover some of METR's lesser-known work on Threat Evaluation and also Developer Productivity, where 2x friend of the pod and now Zyphra founder Quentin Anthony was the ONLY productive participant!

Finally, if you're the sort to read these show notes to the end, then you definitely deserve some pictures of Joel shredding the guitar at Love Band Karaoke, which we mention at the end.

Full Video Pod

Timestamps
00:00 What METR Means
00:39 Podcast Intro With Joel
01:39 ME vs TR
03:33 Time Horizon Origin Story
04:56 Picking Tasks And Biases
09:13 Time Horizon Misconceptions
11:37 Opus 4.5 And Trendlines
14:27 Productivity Studies And Explosions
29:50 Compute Slows Progress
30:47 Algorithms Need Compute
32:45 Industry Spend and Data
34:57 Clusters and Shipping Timelines
36:44 Prediction Markets for Models
38:10 Manifold Alpha Story
43:04 Beyond Benchmarks Evals
51:39 METR Roadmap and Farewell

Transcript
Lies, insanity and spin are becoming the norm in our daily news and information diet, where one shocking story quickly supplants another. To help us wade through the muck, we speak to our media critic Jon Jeter about coverage of Iran, Tucker Carlson, the BAFTA Awards fiasco and more. And in this new era of attempted colonization and bullying by the U.S. empire, Haile Gerima's new film 'Black Lions, Roman Wolves' reminds us how we liberated ourselves and can still fight fascists and win. Plus headlines.

The show is made possible only by our volunteer energy, our resolve to keep the people's voices on the air, and by support from our listeners. In this new era of fake corporate news, we have to be and support our own media! Please click here or click on the Support-Donate tab on this website to subscribe for as little as $3 a month. We are so grateful for this small but growing amount of monthly crowdsource funding on Patreon. PATREON NOW HAS A ONE-TIME, ANNUAL DONATION FUNCTION! You can also give a one-time or recurring donation on PayPal. Thank you!

"On the Ground: Voices of Resistance from the Nation's Capital" gives a voice to the voiceless 99 percent at the heart of American empire. The award-winning, weekly hour, produced and hosted by Esther Iverem, covers social justice activism about local, national and international issues, with a special emphasis on militarization and war, the police state, the corporate state, environmental justice and the left edge of culture and media. The show is heard on three dozen stations across the United States, on podcast, and is archived on the world wide web at https://onthegroundshow.org/ Please support us on Patreon or Paypal. Links for all ways to support are on our website or at Esther Iverem's Linktree: https://linktr.ee/esther_iverem
Nick and Myron are back this week with one of those episodes where everything feels a little unstable — retirements, injuries, ticket sales drama, and even whispers about Vince McMahon possibly circling back into the picture. It's Elimination Chamber week… and the temperature is rising.
AJ Styles' tribute on WWE Raw
Bronson Reed goes down — and suddenly “The Vision” angle feels cursed.
Kiana James pins Charlotte in an EC qualifier — that's not small.
Swerve's brutal turn on Omega makes it clear where AEW is heading.
WrestleMania 42 ticket sales still trailing last year — plus Chamber watch party blackouts in Chicago. Is it price fatigue or something else?
The Vince rumor: could he really attempt a WWE return with Saudi backing… and would fans shockingly embrace it?
Full breakdown of the WWE Elimination Chamber card and what actually matters heading into Mania season.
ROH moving to studio tapings in Jacksonville
This week's show isn't just recapping events — it's about perception, direction, and whether the wrestling landscape is shifting again in ways people aren't ready for.
Today we're programming with NO SPEAKER?! While still completing these 3 challenges: No Order, Repeating, and Creating a DYM game from scratch. And finally, not only will I recap that game - I'll recap the entire night. And give you that game, for free!
ACCESS TO EXTREME GO FOR GOLD GAME & RECAP EPISODE
https://www.patreon.com/posts/no-speaker-recap-151394635?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link
SHOW NOTES
Shownotes & Transcripts https://www.hybridministry.xyz/190
❄️ WINTER SOCIAL MEDIA PACK
https://www.patreon.com/posts/winter-seasonal-144943791?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link
HYBRID HERO MEMBERS GET IT FREE! https://www.patreon.com/hybridministry
FREE EBOOK
https://www.patreon.com/posts/complete-guide-142500019?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link
COACHING https://www.hybridministry.xyz/articles/coaching
This week on Tapped In, Nick, Jacked Jameson, and Rosario Grillo cover a jam-packed weekend of Georgia wrestling festivities.
We open with The Headlines — diving into the fallout from Grillo's actions at SFCW, thoughts on the Clash at The Station TV taping, and Viral's increasingly complicated title picture. We also discuss:
• Fitts & Trevor Blackwell possibly aligning after Josh Cantrell's controversial attack
• Kraken's “Pieces of 8” coin winners
• SCA Spectacle drawing its biggest house ever at The Dome
• DeSilva capturing gold
Then we pivot into The Awards and share thoughts on the event.
After that, things lighten up with a non-wrestling twist as we each reveal our Mount Rushmore of Video Games.
We close out with Making the Drives, previewing a massive weekend of shows across Georgia including:
• 1FW
• ACTION Wrestling
• SCA
• Premiere All Star
• Coastal Empire
• GCW
• Total Aggression Wrestling
• Scrappy Championship Wrestling
If you care about what's happening across the Georgia independent scene, this is the episode you need to hear.
Watch on YouTube: tappedoutpod.com
Support & get early access on Patreon
#TappedIn #GeorgiaWrestling #IndyWrestling #GAIndies #1FW #ACTIONWrestling #SCA #CoastalEmpire #ViralPro #GCW #ProWrestlingPodcast #JackedJameson #RosarioGrillo
Support our sponsors at:
Lytmi Earbuds:
Official Website Link: https://tinyurl.com/mpwd6mdn (10% discount with the code: tappedoutpod)
Amazon Exclusive Discount Link: https://amzn.to/3XqRFsW (Limited-time $20 discount)
If you are looking for an easy way to clip your content and add captions too, check out Opus.Pro with the link below. It's what we use and it saves HOURS of our time at
https://www.opus.pro/?via=tappedoutpod
For your life insurance needs, contact Nick McDaniel at: https://www.facebook.com/NickMcDanielWoodmenLife
Get your tix to upcoming events at: Vet Tix: https://www.vettix.org/
Email: tappedoutpod@gmail.com
It's my last radio show, and I am overjoyed and grateful that I get to spend it the way it began, with my friend and show co-founder Shana Falana. To formally reintroduce you, she is a songwriter, performer, and community architect originally from San Francisco, based in the Hudson Valley since 2008. For over 25 years, Shana has merged music and public service—bringing 12-step meetings into jails and institutions throughout Ulster County, helping coordinate the early years of O+ Festival, and co-founding the I Want What SHE Has radio show on Radio Kingston. She is the founder and creative force behind The Goddess Party, a performance collective uplifting women through music, ritual, and radical joy, with sold-out shows at Opus 40, Old Dutch Church, and Basilica Hudson. Her artistic and social practice centers on amplifying women in their perimenopausal and menopausal years, increasing the visibility of aging women on stage and reshaping cultural narratives about power, beauty, and relevance after fifty. She is the creator and showrunner of a scripted television series inspired by The Goddess Party, expanding its story from live performance into narrative television. Shana is someone who, as a friend, I've witnessed move through the life cycle of different projects and life experiences with apparent ease. I'm someone who struggles to let go and perhaps holds on a bit too long, but she talks about what endings are like for her and how she navigates them. We get to hear about the beginning and evolution of The Goddess Party, the challenges, highlights, and what's to come. As much of her current work relates to thriving in an older woman's body, Shana shares her experience of navigating illness and perimenopause and offers many resources that have supported her along the way, always following her intuition.
Here are the books Shana mentioned: Wise Power, Hagitude, Mother Hunger
You can find her here ->> instagram / spotify / apple music
And stay tuned to The Goddess Party's Instagram account for more details about the upcoming March 27th benefit concert they are participating in at Levon Helm Studios. My previous show on The Goddess Party can be found here!
Today's show was engineered by Ian Seda from Radiokingston.org. Our show music is from Shana Falana!
Feel free to email me, say hello: she@iwantwhatshehas.org
** Please: SUBSCRIBE to the pod and leave a REVIEW wherever you are listening, it helps other users FIND IT http://iwantwhatshehas.org/podcast
ITUNES: https://itunes.apple.com/us/podcast/i-want-what-she-has/id1451648361?mt=2
SPOTIFY: https://open.spotify.com/show/77pmJwS2q9vTywz7Uhiyff?si=G2eYCjLjT3KltgdfA6XXCA
Follow:
INSTAGRAM * https://www.instagram.com/iwantwhatshehaspodcast/
FACEBOOK * https://www.facebook.com/iwantwhatshehaspodcast
OpenClaw wrote our 475-page uBlox GNSS library overnight. Opus suggested packed structs over byte parsing... smart. Also found new L1+L5 GPS modules on DigiKey: ~2x accuracy for $2-3 more. No RTK needed.
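For readers wondering what "packed structs over byte parsing" buys you here: a UBX frame is two sync bytes, a class byte, an ID byte, a little-endian length, the payload, and a two-byte Fletcher checksum, so you can declare the wire layout once and unpack it in a single call rather than slicing bytes one field at a time. A minimal sketch, not code from the library mentioned above; the function name and example frame bytes are illustrative (Python's `struct` format string plays the role a packed C struct would on a microcontroller):

```python
import struct

def parse_ubx_frame(buf: bytes):
    """Split one UBX frame into (class, id, payload), verifying the checksum.

    Frame layout: 0xB5 0x62 | class (1B) | id (1B) | length (2B LE) | payload | CK_A CK_B
    """
    if buf[:2] != b"\xb5\x62":
        raise ValueError("bad sync chars")
    # One packed-layout unpack instead of slicing individual bytes
    msg_class, msg_id, length = struct.unpack_from("<BBH", buf, 2)
    payload = buf[6:6 + length]
    ck_a, ck_b = buf[6 + length], buf[7 + length]
    # Fletcher-8 checksum runs over class, id, length, and payload
    a = b = 0
    for byte in buf[2:6 + length]:
        a = (a + byte) & 0xFF
        b = (b + a) & 0xFF
    if (a, b) != (ck_a, ck_b):
        raise ValueError("checksum mismatch")
    return msg_class, msg_id, payload

# Example: a hypothetical frame with class 0x01, id 0x07, 2-byte payload
frame = b"\xb5\x62\x01\x07\x02\x00\x01\x02\x0d\x35"
print(parse_ubx_frame(frame))  # (1, 7, b'\x01\x02')
```

In embedded C the same idea is usually a `__attribute__((packed))` struct overlaid on the receive buffer, which works because UBX fields are little-endian like most target MCUs.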
There's been an update to the Remote Labor Index (RLI), and it showed a "massive" 50% jump in AI agent capability. However, it's worth noting that percentages can be deceiving. The data reveals a much more sobering reality that shouldn't come as a surprise to anyone actually doing the work. Despite the hype, the world's best AI model (Opus 4.5) still fails to successfully complete 96.25% of real work. In summary, while the “velocity” of AI is skyrocketing, the absolute capability is still miles away from "replacement." So, while countless AI voices are claiming AI is coming for your job, the real crisis is one of expectations, not employment.

This week, I'm checking back in on the Q1 2026 RLI update and comparing the new colorful dashboard against the stark reality of the November benchmarks. This isn't a tech review but a leadership reality check. I explain why a 50% increase in capability (from 2.5% to 3.75%) is technically impressive but practically dangerous if you are building your strategy around it. I'm also stripping away the vendor sales pitches to show you why the "Agent" narrative is being driven by economic desperation, not technological readiness. My goal is to move you out of "Replacement Theory" to "Augmentation Agility" by exposing the specific blind spots threatening your P&L.

The "Replacement" Illusion (Math vs. Myth): We've been told that fully autonomous agents are here, yet the data proves the "ceiling" is barely cracking 4%. I break down why the "Leaders" aren't firing their teams—they are auditing their workflows to find the 4% of grunt work AI can do, while doubling down on the 96% of human nuance it can't touch.

The "Desperation" Trap (Vendor Economics): We love to believe the sales deck, but the financials tell a different story. I call out the uncomfortable truth that AI vendors are burning cash on compute costs, driving them to push "enterprise integration" before the product is actually ready. I explain why your budget shouldn't be their R&D fund.
The "Sleeper" Insight (The Gemini Factor): You cannot judge a model by its snapshot; you have to judge it by its slope. I dive into the often-overlooked data on Gemini 3 Pro—which quietly posted a massive ~50% reliability jump—and why for Google Workspace users, this "sleeper" metric matters more than who holds the crown.

The "Reliability" Pivot (Redefining Good): You cannot scale a tool that is brilliant once and broken twice. I share a specific consulting example of why we had to kill a "successful" pilot, and why the companies winning at AI are measuring "Autonomous Reliability" rather than "Creative Capability."

By the end, I hope you see this data not as a reason to write off AI, but as a mandate for agility. You cannot simply "plug in" an agent to a rigid system; you have to build the flexible infrastructure that can adapt when that 3.75% inevitably hits 10%.

⸻

If this conversation helps you think more clearly about the future we're building, make sure to like, share, and subscribe. You can also support the show by buying me a coffee at https://buymeacoffee.com/christopherlind
And if your organization is wrestling with how to lead responsibly in the AI era, balancing performance, technology, and people, that's the work I do every day through my consulting and coaching. Learn more at https://christopherlind.co

⸻

Chapters
00:00 – The Hook: 50% Growth vs. Absolute Reality
04:00 – The RLI Update: Opus 4.5 & The 96% Gap
08:00 – The "Why": Context, Nuance, and Broken Instructions
12:00 – The Trap: Why Vendors Are Desperate for Your Budget
17:00 – The Velocity Insight: Gemini's 50% "Sleeper" Jump
22:00 – The Agility Mandate: Building Flexible Systems
26:00 – The "Lind" Take: Capability vs. Reliability (The Pilot Story)
33:00 – The "Now What": 3 Surgical Moves for Leaders

#RemoteLaborIndex #AIStrategy #FutureOfWork #DigitalTransformation #Leadership #ChristopherLind #FutureFocused #Opus #Gemini #AIAgents
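The headline-versus-reality gap the episode keeps returning to is just relative versus absolute change; a few lines make the arithmetic explicit (the completion rates are the RLI figures quoted above):

```python
old_rate, new_rate = 0.025, 0.0375  # RLI task completion: November vs. Q1 update

relative_jump = (new_rate - old_rate) / old_rate  # the "massive 50% jump"
failure_rate = 1 - new_rate                       # what still doesn't get done

print(f"relative growth: {relative_jump:.0%}")  # 50%
print(f"still fails: {failure_rate:.2%}")       # 96.25%
```

Both statements are true at once: a 50% relative gain, and a ceiling that has moved from 2.5% to 3.75% in absolute terms.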
Send a text
Invest in pre-IPO stocks with AG Dillon & Co. Contact aaron.dillon@agdillon.com to learn more. Financial advisors only. www.agdillon.com
00:00 - Intro
00:02 - AG Dillon Funds closing on Mar 31, 2026
00:51 - OpenAI Financials $280B revenue target meets $665B cost wall
03:58 - OpenAI “buys” OpenClaw, Steinberger joins OpenAI
04:42 - OpenAI Series C aims to shatter records at $850B post money
05:41 - OpenAI and Tata bet on India with a 100 MW to 1 GW buildout path
06:29 - Grafana's $9B round talks ride a $400M ARR wave
07:23 - World Labs lands Autodesk and targets a rumored $5B valuation
08:18 - Temporal wants to be the load bearing layer for agent execution
09:31 - Mesh Optical's $50M Series A targets the chokepoint inside AI data centers
10:43 - Render's $1.5B valuation is a bet that AI apps need a new runtime
11:40 - Stash acquired by Grab for $425M
13:06 - Physical Superintelligence pitches a physics breakthrough factory with a 20 person team
14:07 - Figma plugs Claude Code into design and risks losing the workflow
15:00 - Anthropic ships Sonnet 4.6 just 12 days after Opus 4.6
15:26 - Stripe's Bridge wins OCC trust charter signal as stablecoin scrutiny rises
16:37 - Cohere puts 70 plus languages on device with a 3.35B parameter model
17:53 - ElevenLabs turns agent risk into an insurable product at $12.2B secondary
19:05 - Mistral buys Koyeb and adds 16 engineers to harden its compute stack
The episode opens with sponsor Meter and a conversation about Saturday morning cartoons before shifting to recent breakthroughs in AI video generation from ByteDance's "SeaDance" (with "SeeDream" as its image generator). Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt The hosts describe SeaDance's cinematic quality, accurate physics, and realistic recreations of actors and IP (including examples like Tom Cruise vs. Brad Pitt and Keanu Reeves as Neo/John Wick), and discuss the implications for film production, commercials, and local film economies such as Toronto and Vancouver. They cover backlash and gatekeeping, including an AI-made Thanksgiving-themed animated short that won a contest tied to AMC theaters' pre-show but reportedly wasn't shown, and compare resistance to historical Luddite reactions. The discussion broadens to productivity and labor impacts, arguing that AI adoption may mirror the 1980s computer productivity dip before process re-engineering in the 1990s, while also raising concerns that AI leaders are forecasting major white-collar job losses. The hosts highlight the rise of agentic benchmarks (TerminalBench, Apex Agents, BrowseComp) and how AI search helps find information faster than traditional search, but emphasize that trust, reliability, and infrastructure are not keeping pace. They raise major concerns about platform terms and data ownership, focusing on Perplexity's updated terms (non-commercial use only even for paid tiers, mandatory attribution, broad licensing rights over user content, and liability limits). They also discuss reliability failures: a widespread Google Gemini issue where users' chat histories disappeared (only visible as activity records with limited usability), and missing document links in ChatGPT chats. 
The hosts argue users must back up their own data and criticize unclear policies and weak support. Security risks are illustrated through a story about the AI-enabled robot vacuum "Romo," where a developer used Claude to reverse engineer its app and reportedly gained access to control thousands of devices across multiple countries before responsibly disclosing the issues. They also reference broader concerns like connected home devices, Ring neighborhood features, and Microsoft's Recall concept. In rapid-fire news, they mention Anthropic releasing Sonnet 4.6 as a strong, cheaper option near Opus-level performance, a new Grok release branded "4.20," and a clip from an AI summit in India where Sam Altman and Dario Amodei appeared to refuse to hold hands on stage, which the hosts cite as a sign of immaturity among AI industry leaders. The episode closes with sponsor Meter.

00:00 Sponsor + Welcome to Project Synapse
00:21 Saturday Morning Cartoons… Reimagined by AI
01:16 What is 'SeaDance'? Cinematic AI Video Goes Viral
03:17 Keanu Reeves, Neo vs. John Wick & the End of VFX as We Know It
06:43 From Movies to Ads: How AI Video Hits Commercial Production
07:41 The Hidden Economy of Commercials (and Why Cities Like Toronto/Vancouver Care)
09:56 AMC Won't Screen an AI-Made Short: Early Luddite Backlash
12:54 Artists, AI, and the 'Starving Creator' Reality
16:17 AI Adoption Parallels: The 1980s Computer Wave & the Productivity Dip
24:09 Agentic AI Benchmarks: TerminalBench, Apex Agents & BrowseComp
26:04 AI Search That Actually Saves Time (and Your Memory)
30:36 Perplexity's New Terms of Service: Non-Commercial Use & Ownership Shock
35:40 Liability Caps, More Corporate Gripes… and a Coke Zero 'Sponsor' Bit
37:36 Gemini 3.1's big leap—and why it still doesn't feel trustworthy
38:08 Gemini chat history vanishes: what happened and why users are furious
40:19 OpenAI document links disappearing too: what "saved" really means
42:04 Cloud AI's shaky foundation: security, reliability, and confusing settings
47:45 When reliance turns emotional: losing models, losing "someone"
49:22 Real-world stakes: the Social Security database whistleblower story
53:15 Owning your data (and why Google support won't save you)
54:53 Trust whiplash: Anthropic cuts off OpenClaw and the power to shut you down
57:29 Robot vacuum hacked with Claude: 7,000 cameras in strangers' homes
01:03:17 Smart home surveillance creep: Ring neighbors, TV cameras, and Microsoft Recall
01:07:14 Rapid-fire AI news: Sonnet 4.6, Gemini gains, and Grok 4.20
01:11:00 AI leaders' petty feud—and the show wrap & sponsor thanks
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Listen to Full Audio at https://podcasts.apple.com/us/podcast/the-reasoning-throne-decoding-gemini-3-1-pros-77-arc/id1684415169?i=1000750806927
Join Simtheory: https://simtheory.ai
"Is This The End" now on Spotify: https://open.spotify.com/album/2Py1MyADUFqJFVUISI2VTP?si=oT3PWyJYRA2BspOmzT_ifg
Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80
Two new models dropped this week — Gemini 3.1 Pro and Claude Sonnet 4.6 — and honestly? We're struggling to care. In this episode, we break down why Gemini went from being our daily driver to a model we barely touch, the "tunnel vision" hallucination problem that killed the Gemini 3 series for us, and whether 3.1 Pro actually fixes it. We put Gemini 3.1 Pro head-to-head against Claude Opus building a Geoffrey Hinton Doom Center, debate whether anyone can actually tell the difference between Sonnet 4.5 and 4.6, and make the case that smaller models running in agentic loops are secretly beating the frontiers. Plus: OpenAI acquires OpenClaw and we ask why a $100B company couldn't just build it themselves, DHH calls out the AI pricing bubble, Mike compares AI models to cheap wine hangovers, and Sam Altman refuses to hold Dario's hand at the India AI Summit.
The model wars are getting weird.
CHAPTERS:
0:00 Intro & "Is This The End" Now on Spotify
1:10 Gemini 3.1 Pro: Thinking Controls & The Medium Mode Fix
3:14 The Speed vs Intelligence Trade-Off in Agentic Work
5:10 Why Multitasking With AI Agents Made Us Anxious
6:34 Solid Updates: The Real Goal of Agentic Coding
7:45 Gemini's Fall From Grace: From Daily Driver to Dead Model
10:08 The Tunnel Vision Problem That Killed Gemini 3
13:35 Mixed Reactions: Fanboys vs Reality on Gemini 3.1 Pro
15:06 Side-by-Side Test: Gemini 3.1 Pro vs Claude Opus (Hinton Doom Center)
17:39 Why File Manipulation Accuracy Matters More Than Context Windows
19:27 The Context Window Debate: 1M Tokens vs Smart Sub-Agents
22:05 DHH on Token Pricing: "If There's a Bubble, It's This"
24:11 Should Models Ship as Agent vs Chat Variants?
28:43 Claude Sonnet 4.6: A $2 Discount on Opus?
31:44 The Model Mix: Why One Model Won't Rule Them All
34:40 Anthropic Is Winning — But Can Anyone Tell the Difference?
38:58 OpenAI Acquires OpenClaw: Why Couldn't They Just Build It?
44:18 The Silicon Valley Moment: Sam vs Dario at India AI Summit
47:05 Will Smaller Models Win the Enterprise? The Cost Reality Check
51:27 The End of Single-Shot: Why Agentic Loops Change Everything
55:48 Final Thoughts & Gemini 3.1 Pro Gets One More Week
Thanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. Two models dropped in a week again. What a time to be alive. xoxo
In this episode of the Freedom Scientific Training Podcast, Liz and Rachel continue the Learn AI webinar series with a deep dive into Claude, an AI assistant developed by Anthropic and available at claude.ai. Designed for users of JAWS, ZoomText, and Fusion, this session explores how Claude can support long-form thinking, research, organization, and real-world problem solving — all with accessibility in mind.

Liz walks through navigating the Claude web interface with the keyboard, managing multiple prompts, combining tasks into streamlined workflows, and building repeatable processes using features like Skills and model selection (Opus, Sonnet, and Haiku). Through a practical example, she demonstrates how to research Bluetooth headsets under $200, generate comparison tables, outline a guide, and refine results before exporting a final document.

Rachel then shifts to a hands-on accessibility scenario, uploading an image of a stove control panel and prompting Claude to suggest tactile labeling strategies for blind and low vision users. She demonstrates file uploads, prompt refinement, material recommendations, and even generating a visual mockup — highlighting how AI can assist with everyday decision-making and creative problem solving.

Throughout the webinar, you'll learn:
How to navigate Claude efficiently with JAWS
Tips for structuring effective prompts
How to combine research, comparison, and writing tasks
When and how to refine responses before creating downloadable documents
Creative ways to use AI for visual analysis and practical accessibility projects

Whether you're experimenting with AI for the first time or looking to build more advanced workflows, this episode shows how Claude can help you move from idea to execution — all while maintaining accessibility and control over the process. For more webinars, tutorials, and training resources, visit: FreedomScientific.com/Training
You probably know by now that AI is the definition of mediocre. As in: it's the average of everything it's been trained on. So how do you get beyond average? How do you build a moat? It certainly doesn't seem to be via the models. While there are models of the month (hey, Opus 4.6, my new friend!), they seem to be pretty swappable. So, the model ain't it. But proprietary data (e.g. an AI that knows you really well), yes! Or doing something really hard in the real world (think: Waymo self-driving cars). Maybe via trust and safety (Anthropic is certainly making a play here). Or... how about via amazing design and good taste. Remember when ChatGPT first came out and everyone derided “AI wrappers”… well, maybe a wrapper isn't so bad, assuming you can differentiate on one or more of the above.

Luke Des Cotes is the CEO of MetaLab, the agency famous for designing interfaces, including early versions of Slack and Coinbase, so don't be shocked when you hear him say that great design can be your moat. MetaLab is working with a host of AI companies (another shocker), including Windsurf (AI + code), Suno (AI + music), Pika (AI + video), and more…, which is why Luke's take on AI surprised me. He's not rah-rah. He's pretty judicious, actually. Luke has questions about AI's costs and appropriateness for lots of use cases, like those involving kids, but mostly he objects to its mediocrity.

On this episode we discuss what it takes to go beyond. We also get into:
- Why vibe-coded software isn't changing the world anytime soon
- Why Shopify acquired a design agency right after telling employees to justify their existence against AI
- How MetaLab designers are using AI to prototype in hours instead of weeks
- The talent market for zero-to-one designers — and why they're harder to find than ever
- Landlines, brick phones, and how parents are fighting back against always-on kids

Chapters
(01:10) - "It's a race to the mean"
(03:10) - "How do you create emotional resonance?"
(05:33) - AI companies are burning money
(08:44) - Speed to good enough
(13:51) - Is the chat here to stay or a temporary fad?
(17:43) - It's hard to find great 0 to 1 design talent
(22:28) - Seemingly conscious AI
(25:05) - Kids, landlines, and fighting always-on culture
(27:21) - Sounds like science fiction, but is here now…

Links & Resources
- Luke Des Cotes on LinkedIn
- MetaLab

Support Future Around & Find Out
- Get the free newsletter
- And consider becoming a paid subscriber and help future-proof this thing!

Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com
CoreStory is building code intelligence platforms that address the fundamental limitation of today's coding agents: their inability to navigate complex enterprise codebases. While foundation models excel at greenfield development, they fail at real-world engineering tasks in systems spanning millions of lines of code. CoreStory's context layer delivers a 44% improvement on SWE-bench, the industry's standard benchmark for measuring coding agent effectiveness on actual GitHub issues.

In this episode of BUILDERS, I sat down with Anand Kulkarni, CEO of CoreStory, to explore how his team is enabling the shift to AI-native engineering and seeding the category of spec-driven development across Microsoft, GitHub, and Amazon.

Topics Discussed:
- Building with the GPT-3 API 18 months before ChatGPT went public
- Why even GPT-5 and Opus 4.5 struggle with enterprise codebases on SWE-bench
- The narrative shift required when selling AI pre- and post-ChatGPT
- CoreStory's 44% improvement in coding agent performance through context intelligence
- How "spec-driven development" got adopted by Microsoft, GitHub, and Amazon without formal analyst relations
- The parallel between JIRA monetizing Agile and CoreStory enabling AI-native engineering
- Three-channel distribution: direct enterprise, coding agent partnerships via MCP, and hyperscaler/GSI routes
- Why specs become the source of truth while code becomes disposable in the AI era

GTM Lessons For B2B Founders:

Match your narrative precision to technical depth: CoreStory deploys three distinct positioning strategies based on audience sophistication. For AI practitioners tracking benchmarks, they lead with "44% SWE-bench improvement"—a metric that immediately signals meaningful progress on the hardest problem in the space. For engineering leaders aware of AI tooling but not deep in the research, they focus on velocity gains and ROI metrics. For executives, they describe reverse-engineering codebases into machine-readable specs. The key insight: technical audiences dismiss vague value props, while non-technical audiences get lost in benchmark details. Map your positioning to how your audience measures success in their world.

Seed category language through earned adoption, not manufactured consensus: Anand initially called their approach "requirements-driven development" before simplifying to "spec-driven development." Rather than pitching analysts, they used the term consistently in customer conversations, gave talks at GitHub Universe, and shipped demos showing the workflow. When customers naturally adopted the language and community leaders began using similar terminology independently, Microsoft and GitHub followed with their own implementations (like GitHub's SpecKit). The lesson: category language sticks when practitioners choose to use it because it clarifies their work, not because a vendor pushed it. Focus on customer adoption as proof of concept before seeking broader market validation.

Position against emergent practices, not just incumbent products: CoreStory doesn't position against legacy code analysis tools—they position as the enabler of AI-native engineering, the discipline that will displace Agile. Anand's insight from watching JIRA's success: "People don't love JIRA. What they love is Agile as a way to move away from waterfall." CoreStory is betting that 10x velocity gains from AI-native practices will drive the same categorical shift. When you're early in a technology wave, attach to the practice change (how teams will work differently) rather than feature comparisons with existing tools. Movements create markets.

Design channel strategy around customer problem awareness: CoreStory's three channels map to different stages of buyer sophistication. Direct enterprise comes from teams already deep in AI engineering who've hit the context limitation wall. Coding agent partnerships (via MCP integration with tools like Cognition and Factory) serve builders wanting better AI tooling who haven't diagnosed the context problem yet. Hyperscalers and GSIs distribute into modernization and maintenance projects where AI enablement is emerging as a requirement. Each channel serves a distinct buyer journey stage. Don't force one go-to-market motion—design multiple paths based on where different customer segments are in understanding the problem you solve.

Navigate pre-legitimacy markets by hiding the breakthrough: Before ChatGPT, selling anything AI-driven faced immediate skepticism about whether it was "real" or just smoke and mirrors. Anand couldn't lead with AI without triggering disbelief. CoreStory focused on delivered outcomes—"here's what you'll be able to do"—with AI as the mechanism, not the message. Post-ChatGPT, the challenge flipped: everyone expects AI, but now the differentiation question becomes harder. If you're building on emerging technology before market consensus forms, deemphasize the technology until buyers have context to evaluate it. Once the market validates the technology category, shift to demonstrating your specific technical advantage within it.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Nick and Myron break down a chaotic week in wrestling where business rumors, TV shakeups, and major title implications are all colliding at once.

What caught our eye on TV:
- AJ gets a tribute, Punk/Balor face off, AEW title changes, Mickie James' return, & more
- WrestleMania 42 ticket sales spark debate — overblown concern or real issue?
- WWE's NXT Atlanta cancelled
- CNN confirms WBD owns a minority stake in AEW
- MJF vs. Hangman is set for Revolution
Tickets for AIE Miami and AIE Europe are live, with first wave speakers announced!

From pioneering software-defined networking to backing many of the most aggressive AI model companies of this cycle, Martin Casado and Sarah Wang sit at the center of the capital, compute, and talent arms race reshaping the tech industry. As partners at a16z investing across infrastructure and growth, they've watched venture and growth blur, model labs turn dollars into capability at unprecedented speed, and startups raise nine-figure rounds before monetization.

Martin and Sarah join us to unpack the new financing playbook for AI: why today's rounds are really compute contracts in disguise, how the “raise → train → ship → raise bigger” flywheel works, and whether foundation model companies can outspend the entire app ecosystem built on top of them. They also share what's underhyped (boring enterprise software), what's overheated (talent wars and compensation spirals), and the two radically different futures they see for AI's market structure.

We discuss:
* Martin's “two futures” fork: infinite fragmentation and new software categories vs. a small oligopoly of general models that consume everything above them
* The capital flywheel: how model labs translate funding directly into capability gains, then into revenue growth measured in weeks, not years
* Why venture and growth have merged: $100M–$1B hybrid rounds, strategic investors, compute negotiations, and complex deal structures
* The AGI vs. product tension: allocating scarce GPUs between long-term research and near-term revenue flywheels
* Whether frontier labs can out-raise and outspend the entire app ecosystem built on top of their APIs
* Why today's talent wars ($10M+ comp packages, $B acqui-hires) are breaking early-stage founder math
* Cursor as a case study: building up from the app layer while training down into your own models
* Why “boring” enterprise software may be the most underinvested opportunity in the AI mania
* Hardware and robotics: why the ChatGPT moment hasn't yet arrived for robots and what would need to change
* World Labs and generative 3D: bringing the marginal cost of 3D scene creation down by orders of magnitude
* Why public AI discourse is often wildly disconnected from boardroom reality and how founders should navigate the noise

Show Notes:
* “Where Value Will Accrue in AI: Martin Casado & Sarah Wang” - a16z show
* “Jack Altman & Martin Casado on the Future of Venture Capital”
* World Labs

Martin Casado
• LinkedIn: https://www.linkedin.com/in/martincasado/
• X: https://x.com/martin_casado

Sarah Wang
• LinkedIn: https://www.linkedin.com/in/sarah-wang-59b96a7
• X: https://x.com/sarahdingwang

a16z
• https://a16z.com/

Timestamps
00:00:00 – Intro: Live from a16z
00:01:20 – The New AI Funding Model: Venture + Growth Collide
00:03:19 – Circular Funding, Demand & “No Dark GPUs”
00:05:24 – Infrastructure vs Apps: The Lines Blur
00:06:24 – The Capital Flywheel: Raise → Train → Ship → Raise Bigger
00:09:39 – Can Frontier Labs Outspend the Entire App Ecosystem?
00:11:24 – Character AI & The AGI vs Product Dilemma
00:14:39 – Talent Wars, $10M Engineers & Founder Anxiety
00:17:33 – What's Underinvested? The Case for “Boring” Software
00:19:29 – Robotics, Hardware & Why It's Hard to Win
00:22:42 – Custom ASICs & The $1B Training Run Economics
00:24:23 – American Dynamism, Geography & AI Power Centers
00:26:48 – How AI Is Changing the Investor Workflow (Claude Cowork)
00:29:12 – Two Futures of AI: Infinite Expansion or Oligopoly?
00:32:48 – If You Can Raise More Than Your Ecosystem, You Win
00:34:27 – Are All Tasks AGI-Complete? Coding as the Test Case
00:38:55 – Cursor & The Power of the App Layer
00:44:05 – World Labs, Spatial Intelligence & 3D Foundation Models
00:47:20 – Thinking Machines, Founder Drama & Media Narratives
00:52:30 – Where Long-Term Power Accrues in the AI Stack

Transcript
Latent.Space - Inside AI's $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z

[00:00:00] Welcome to Latent Space (Live from a16z) + Meet the Guests

[00:00:00] Alessio: Hey everyone. Welcome to the Latent Space podcast, live from a16z. Uh, this is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

[00:00:08] swyx: Hey, hey, hey. Uh, and we're so glad to be on with you guys. Also a top AI podcast, uh, Martin Casado and Sarah Wang. Welcome, very

[00:00:16] Martin Casado: happy to be here and welcome.

[00:00:17] swyx: Yes, uh, we love this office. We love what you've done with the place. Uh, the new logo is everywhere now. It's, it's still getting, takes a while to get used to, but it reminds me of like sort of a callback to a more ambitious age, which I think is kind of

[00:00:31] Martin Casado: definitely makes a statement.

[00:00:33] swyx: Yeah.

[00:00:34] Martin Casado: Not quite sure what that statement is, but it makes a statement.

[00:00:37] swyx: Uh, Martin, I go back with you to Netlify.

[00:00:40] Martin Casado: Yep.

[00:00:40] swyx: Uh, and, uh, you know, you created software-defined networking and all, all that stuff people can read up on your background. Yep. Sarah, I'm newer to you.
Uh, you, you sort of started working together on AI infrastructure stuff.

[00:00:51] Sarah Wang: That's right. Yeah. Seven, seven years ago now.

[00:00:53] Martin Casado: Best growth investor in the entire industry.

[00:00:55] swyx: Oh, say

[00:00:56] Martin Casado: more hands down there is, there is. [00:01:00] I mean, when it comes to AI companies, Sarah, I think, has done the most kind of aggressive, um, investment thesis around AI models, right? So, worked with Noam, Mira, Fei-Fei, and so just these frontier, kind of like large AI models.

[00:01:15] I think, you know, Sarah's been the, the broadest investor. Is that fair?

[00:01:20] Venture vs. Growth in the Frontier Model Era

[00:01:20] Sarah Wang: No, I, well, I was gonna say, I think it's been a really interesting tag, tag team actually, just ‘cause the, a lot of these big C deals, not only are they raising a lot of money, um, it's still a tech founder bet, which obviously is inherently early stage.

[00:01:33] But the resources,

[00:01:36] Martin Casado: so many, I

[00:01:36] Sarah Wang: was gonna say the resources one, they just grow really quickly. But then two, the resources that they need day one are kind of growth scale. So I, the hybrid tag team that we have is quite effective, I think,

[00:01:46] Martin Casado: what is growth these days? You know, you don't wake up if it's less than a billion or like, it's, it's actually, it's actually very like, like no, it's a very interesting time in investing because like, you know, take like the Character round, right?

[00:01:59] These tend to [00:02:00] be like pre-monetization, but the dollars are large enough that you need to have a larger fund, and the analysis, you know, because you've got lots of users, ‘cause this stuff has such high demand, requires, you know, more of a number sophistication. And so most of these deals, whether it's us or other firms, on these large model companies, are like this hybrid between venture and growth.

[00:02:18] Sarah Wang: Yeah. Total.
And I think, you know, stuff like BD for example, you wouldn't usually need BD when you were seed stage trying to get market biz Devrel. Biz Devrel, exactly. Okay. But like now, sorry, I'm,

[00:02:27] swyx: I'm not familiar. What, what, what does biz Devrel mean for a venture fund? Because I know what biz Devrel means for a company.

[00:02:31] Sarah Wang: Yeah.

[00:02:32] Compute Deals, Strategics, and the ‘Circular Funding' Question

[00:02:32] Sarah Wang: You know, so a, a good example is, I mean, we talk about buying compute, but there's a huge negotiation involved there in terms of, okay, do you get equity for the compute? What, what sort of partner are you looking at? Is there a go-to-market arm to that? Um, and these are just things on this scale, hundreds of millions, you know, maybe

[00:02:50] six months into the inception of a company. You just wouldn't have to negotiate these deals before.

[00:02:54] Martin Casado: Yeah. These large rounds are very complex now. Like in the past, if you did a series A [00:03:00] or a series B, like whatever, you're writing a 20 to a $60 million check and you call it a day. Now you normally have financial investors and strategic investors, and then the strategic portion always still goes with like these kind of large compute contracts, which can take months to do.

[00:03:13] And so it's, it's very different times. I've been doing this for 10 years. It's the, I've never seen anything like this.

[00:03:19] swyx: Yeah. Do you have worries about the circular funding from, uh, these strategics?

[00:03:24] Martin Casado: I mean, listen, as long as the demand is there, like the demand is there. Like the problem with the internet is the demand wasn't there.

[00:03:29] swyx: Exactly. All right.
This, this is like the, the whole pyramid scheme bubble thing, where like, as long as you mark to market on like the notional value of like, these deals, fine, but like once it starts to chip away, it really... Well

[00:03:41] Martin Casado: no, like as, as, as, as long as there's demand. I mean, you know, this, this is like, a lot of these sound bites have already become kind of cliches, but they're worth saying.

[00:03:47] Right? Like during the internet days, like we were, um, raising money to put fiber in the ground that wasn't used. And that's a problem, right? Because now you actually have a supply overhang.

[00:03:58] swyx: Mm-hmm.

[00:03:59] Martin Casado: And even in the, [00:04:00] the time of the, the internet, like the supply and, and bandwidth overhang, even as massive as it was, and as massive as the crash was, only lasted about four years.

[00:04:09] But we don't have a supply overhang. Like there's no dark GPUs, right? I mean, and so, you know, circular or not, I mean, you know, if, if someone invests in a company that, um, you know, they'll actually use the GPUs. And on the other side of it is the, is the ask for customer. So I, I, I think it's a different time.

[00:04:25] Sarah Wang: I think the other piece, maybe just to add onto this, and I'm gonna quote Martin in front of him, but this is probably also a unique time in that, for the first time, you can actually trace dollars to outcomes. Yeah, right. Provided that scaling laws are, are holding, um, and capabilities are actually moving forward.

[00:04:40] Because if you can translate dollars into capabilities, uh, a capability improvement, there's demand there, to Martin's point. But if that somehow breaks, you know, obviously that's an important assumption in this whole thing to make it work.
But you know, instead of investing dollars into sales and marketing, you're, you're investing into R&D to get to the capability, um, you know, increase.

[00:04:59] And [00:05:00] that's sort of been the demand driver, because once there's an unlock there, people are willing to pay for it.

[00:05:05] Alessio: Yeah.

[00:05:06] Blurring Lines: Models as Infra + Apps, and the New Fundraising Flywheel

[00:05:06] Alessio: Is there any difference in how you build the portfolio now that some of your growth companies are, like, the infrastructure of the early stage companies? Like, you know, OpenAI is now the same size as some of the cloud providers were early on.

[00:05:16] Like what does that look like? Like how much information can you feed off each other between the, the two?

[00:05:24] Martin Casado: There's so many lines that are being crossed right now, or blurred. Right. So we already talked about venture and growth. Another one that's being blurred is between infrastructure and apps, right? So like what is a model company?

[00:05:35] Mm-hmm. Like, it's clearly infrastructure, right? Because it's like, you know, it's doing kind of core R&D. It's a horizontal platform, but it's also an app because it, um, uh, touches the users directly. And then of course, you know, the, the, the growth of these is just so high. And so I actually think you're just starting to see a, a, a new financing strategy emerge, and, you know, we've had to adapt as a result of that.

[00:05:59] And [00:06:00] so there's been a lot of changes. Um, you're right that these companies become platform companies very quickly. You've got ecosystem build out. So none of this is necessarily new, but the timescales at which it's happened is pretty phenomenal.
And the way we'd normally cut lines before is blurred a little bit, but.

[00:06:16] But that, that, that said, I mean, a lot of it also just does feel like things that we've seen in the past, like the cloud build out, the internet build out as well.

[00:06:24] Sarah Wang: Yeah. Um, yeah, I think it's interesting, uh, I don't know if you guys would agree with this, but it feels like the emerging strategy is, and this builds off of your other question, um:

[00:06:33] You raise money for compute, you pour that, or you, you pour the money into compute, you get some sort of breakthrough. You funnel the breakthrough into your vertically integrated application. That could be ChatGPT, that could be Claude Code, you know, whatever it is. You massively gain share and get users.

[00:06:49] Maybe you're even subsidizing at that point, um, depending on your strategy. You raise money at the peak momentum and then you repeat, rinse and repeat. Um, and so. And that wasn't [00:07:00] true even two years ago, I think. Mm-hmm. And so it's sort of, to your, just tying it to fundraising strategy, right? There's a, and hiring strategy.

[00:07:07] All of these are tied. I think the lines are blurring even more today where everyone is, and they, but of course these companies all have API businesses, and so there are these, these frenemy lines that are getting blurred, in that a lot of, I mean, they have billions of dollars of API revenue, right? And so there are customers there.

[00:07:23] But they're competing on the app layer.

[00:07:24] Martin Casado: Yeah. So this is a really, really important point. So I, I would say for sure, venture and growth, that line is blurry; app and infrastructure, that line is blurry. Um, but I don't think that that changes our practice so much.
But like where the very open questions are, like, does this layer in the same way?

[00:07:43] Compute traditionally has, like during the cloud, it's like, you know, like whatever, somebody wins one layer, but then another whole set of companies wins another layer. But that might not, might not be the case here. It may be the case that you actually can't verticalize on the token string. Like you can't build an app, like it, it necessarily goes down, just because there are no [00:08:00] abstractions.

[00:08:00] So those are kind of the bigger existential questions we ask. Another thing that is very different this time than in the history of computer science is, in the past, if you raised money, then you basically had to wait for engineering to catch up. Which famously doesn't scale, like the mythical man-month. It takes a very long time.

[00:08:18] But like that's not the case here. Like a model company can raise money and drop a model in a, in a year, and it's better, right? And, and it does it with a team of 20 people or 10 people. So this type of like money entering a company and then producing something that has demand and growth right away, and using that to raise more money, is a very different capital flywheel than we've ever seen before.

[00:08:39] And I think everybody's trying to understand what the consequences are. So I think it's less about like big companies and growth and this, and more about these more systemic questions that we actually don't have answers to.

[00:08:49] Alessio: Yeah, like at Kernel Labs, one of our ideas is like, if you had unlimited money to spend productively to turn tokens into products, like the whole early stage [00:09:00] market is very different, because today you're investing X amount of capital to win a deal because of price structure and whatnot, and you're kind of pot committing.

[00:09:07] Yeah. To a certain strategy for a certain amount of time. Yeah.
But if you could like iteratively spin out companies and products and just throw, I, I wanna spend a million dollars of inference today and get a product out tomorrow.

[00:09:18] swyx: Yeah.

[00:09:19] Alessio: Like, we should get to the point where like the friction of like token to product is so low that you can do this, and then you can change the, right, the early stage venture model to be much more iterative.

[00:09:30] And then every round is like either 100K of inference or like a hundred million from a16z. There's no, there's no like $8 million seed round anymore. Right.

[00:09:38] When Frontier Labs Outspend the Entire App Ecosystem

[00:09:38] Martin Casado: But, but, but, but there's a, there's a, the, an industry structural question that we don't know the answer to, which involves the frontier models, which is, let's take

[00:09:48] Anthropic. Let's say Anthropic has a state-of-the-art model that has some large percentage of market share. And let's say that, uh, uh, uh, you know, uh, a company's building smaller models [00:10:00] that, you know, use the bigger model in the background, Opus 4.5, but they add value on top of that. Now, if Anthropic can raise three times more

[00:10:10] every subsequent round, they probably can raise more money than the entire app ecosystem that's built on top of it. And if that's the case, they can expand beyond everything built on top of it. It's like, imagine like a star that's just kind of expanding. So there could be a systemic... There could be a, a systemic situation where the SOTA models can raise so much money that they can outpay anybody that builds on top of ‘em, which would be something I don't think we've ever seen before, just because we were so bottlenecked on engineering, and this is a very open question.

[00:10:41] swyx: Yeah. It's, it is almost like the bitter lesson applied to the startup industry.

[00:10:45] Martin Casado: Yeah, a hundred percent.
It literally becomes an issue of like: raise capital, turn that directly into growth, use that to raise three times more. Exactly. And if you can keep doing that, you literally can outspend any company that's built the, not any company.

[00:10:57] You can outspend the aggregate of companies on top of [00:11:00] you, and therefore you'll necessarily take their share, which is crazy.

[00:11:02] swyx: Would you say that kind of happened with Character? Is that the, the sort of postmortem on what happened?

[00:11:10] Sarah Wang: Um,

[00:11:10] Martin Casado: no.

[00:11:12] Sarah Wang: Yeah, because I think so,

[00:11:13] swyx: I mean the actual postmortem is, he wanted to go back to Google. Exactly. But like

[00:11:18] Martin Casado: that's another difference that

[00:11:19] Sarah Wang: you said

[00:11:21] Martin Casado: it. We should talk, we should actually talk about that.

[00:11:22] swyx: Yeah,

[00:11:22] Sarah Wang: that's

[00:11:23] swyx: Go for it. Take it. Take,

[00:11:23] Sarah Wang: yeah.

[00:11:24] Character.AI, Founder Goals (AGI vs Product), and GPU Allocation Tradeoffs

[00:11:24] Sarah Wang: I was gonna say, I think, um, the, the, the Character thing raises actually a different issue, which actually the frontier labs will face as well. So we'll see how they handle it.

[00:11:34] But, um, so we invested in Character in January, 2023, which feels like eons ago, I mean, three years ago. Feels like lifetimes ago. But, um, and then they, uh, did the IP licensing deal with Google in August, 2024. And so, um, you know, at the time, no, you know, he's talked publicly about this, right? He wanted to... Google wouldn't let him put out products in the world.

[00:11:56] That's obviously changed drastically. But, um, he went to go do [00:12:00] that. Um, but he had a product attached. The goal was, I mean, it's Noam Shazeer, he wanted to get to AGI. That was always his personal goal.
But, you know, I think through collecting data, right, and this sort of very human use case, that the Character product

[00:12:13] originally was and still is, um, was one of the vehicles to do that. Um, I think the real reason that, you know... I, if you think about the, the stress that any company feels before, um, you ultimately go one way or the other, is sort of this AGI versus product. Um, and I think a lot of the big, I think, you know, OpenAI is feeling that, um, Anthropic, if they haven't started, you know, felt it, certainly given the success of their products, they may start to feel that soon.

[00:12:39] And the real... I think there's real trade-offs, right? It's like how many, when you think about GPUs, that's a limited resource. Where do you allocate the GPUs? Is it toward the product? Is it toward new research, right? Is it, or long-term research? Is it toward, um, you know, near-to-midterm research? And so, um, in a case where you're resource constrained, um, [00:13:00] of course there's this fundraising game you can play, right?

[00:13:01] But the fund, the market was very different back in 2023 too. Um, I think the best researchers in the world have this dilemma of, okay, I wanna go all in on AGI, but it's the product usage revenue flywheel that keeps the revenue in the house to power all the GPUs to get to AGI. And so it does make, um, you know, I think it sets up an interesting dilemma for any startup that has trouble raising up until that level, right?

[00:13:27] And certainly if you don't have that progress, you can't continue this fly, you know, fundraising flywheel.

[00:13:32] Martin Casado: I would say that because, ‘cause we're keeping track of all of the things that are different, right? Like, you know, venture and growth, and, uh, app and infra, and one of the ones is definitely the personalities of the founders.

[00:13:45] It's just very different this time. I've been, been doing this for a decade and I've been doing startups for 20 years.
And so, um, I mean a lot of people start this to do AGI, and we've never had like a unified North Star that I recall, in the same [00:14:00] way. Like people built companies to start companies in the past.

[00:14:02] Like that was what it was. Like I would create an internet company, I would create an infrastructure company, like it's kind of more engineering builders, and this is kind of a different, you know, mentality. And some companies have harnessed that incredibly well because their direction is so obviously on the path to what somebody would consider AGI, but others have not.

[00:14:20] And so like there is always this tension with personnel. And so I think we're seeing more kind of founder movement,

[00:14:27] Sarah Wang: Yeah.

[00:14:27] Martin Casado: you know, as a fraction of founders than we've ever seen. I mean, maybe since like, I don't know, the time of like Shockley and the Traitorous Eight or something like that. Way back in the beginning of the industry. It's a very, very

[00:14:38] unusual time of personnel.

[00:14:39] Sarah Wang: Totally.

[00:14:40] Talent Wars, Mega-Comp, and the Rise of Acquihire M&A

[00:14:40] Sarah Wang: And it, I think it's exacerbated by the fact that talent wars, I mean, every industry has talent wars, but not at this magnitude, right? No. Yeah. Very rarely can you see someone get poached for $5 billion. That's hard to compete with. And then secondly, if you're a founder in AI, you could fart and it would be on the front page of, you know, The Information these days.

[00:14:59] And so there's [00:15:00] sort of this fishbowl effect that I think adds to the deep anxiety that, that these AI founders are feeling.

[00:15:06] Martin Casado: Hmm.

[00:15:06] swyx: Uh, yes. I mean, just to, uh, briefly comment on the founder, uh, the sort of talent wars thing. I feel like 2025 was just like a blip. Like I, I don't know if we'll see that again,

[00:15:17] ‘cause Meta built the team.
I think they're kind of done, and who's going to pay more than Meta? I don't know. [00:15:23]

Martin Casado: I agree. It feels that way to me too. Basically Zuckerberg came out swinging, and now he's back to building. [00:15:30]

swyx: Yeah. You've got to pay up to assemble the team, to rush the job, whatever. But now you've made your choices, and now they've got to ship. [00:15:38]

Martin Casado: The other side of that is, we're actually in the hiring market. We've got 600 people here; I hire all the time. [00:15:44] I've got three open recs on the investing side of the team, if anybody listening is interested. And a lot of the people we talk to have active [00:16:00] offers for $10 million a year or something like that. And we pay really, really well. Just to see what's out on the market is remarkable. So you're right about the really flashy ones, the "I will get someone for a billion dollars" deals, but the inflation trickles down. [00:16:18] It is still very active today.

Sarah Wang: Yeah, you could be an L5 and get an offer in the tens of millions. [00:16:22] Easily. So I think you're right that it felt like a blip, and I hope you're right. But I think the steady state has now been pulled up. [00:16:31]

Martin Casado: It'll stay pulled up for sure. Yeah. [00:16:32]

Alessio: Yeah. And I think that's breaking the early-stage founder math too. Before, a lot of people would say, well, maybe I should just go be a founder instead of getting paid [00:16:39] 800K, a million at Google. But if I'm getting paid
five, six million? That's different. [00:16:45]

Martin Casado: But on the other hand, there's more strategic money than we've ever seen historically, right? So the calculus on the economics is very different in a number of ways, and it's crazy. [00:16:58] It's causing a [00:17:00] ton of change and confusion in the market, some very positive, some negative. For example, the other side of the co-founder acquisition, Mark Zuckerberg poaching someone for a lot of money, is that we're actually seeing a historic amount of M&A for what are basically acquihires, right? [00:17:20] Really good outcomes from a venture perspective that are effectively acquihires. So I would say it's probably net positive from the investment standpoint, even though from the headlines it seems very disruptive in a negative way. [00:17:33]

Alessio: Yeah. [00:17:33]

What's Underfunded: Boring Software, Robotics Skepticism, and Custom Silicon Economics [00:17:33]

Alessio: Let's talk maybe about what's not being invested in, some interesting ideas where you'd like to see more people build. It seems, in a way, as YC gets more popular and accelerators get more popular, [00:17:47] there's a startup-school path that a lot of founders take. They know what's hot in the VC circles and what gets funded, and there's maybe not as much risk appetite for things outside of that. I'm curious whether you feel [00:18:00] that's true, and what are some of the areas you think are under-discussed? [00:18:06]

Martin Casado: I actually think we've taken our eye off the ball on a lot of just traditional software companies.
Right now there's almost a barbell: you're either the hot thing on X or you're deep tech. [00:18:21]

swyx: Mm-hmm. [00:18:22]

Martin Casado: But I feel like there's a long list of [00:18:28] good companies that will be around for a long time in very large markets. Say you're building a database, or monitoring, or logging, or tooling, whatever. There are some good companies out there right now, but they have a really hard time getting the attention of investors. [00:18:43] And it's almost become a meme: if you're not growing from zero to a hundred in a year, you're not interesting. Which is the silliest thing to say. Think of it as an investor with your personal money, right? Will you put it in the stock market at 7%, or will you put it in this company growing 5x in a very large [00:19:00] market? Of course you put it in the company growing 5x. So we say these stupid things, "if you're not going from zero to a hundred," but who knows what the margins of those zero-to-a-hundred companies are? Clearly these are good investments, true for anybody, right? Our LPs want, whatever, [00:19:12] 3x net over the life cycle of a fund. So a company in a big market growing 5x is a great investment; everybody would be happy with those returns. But we've got this mania for the extreme growth rates. So I would say that's probably the most underinvested sector [00:19:28] right now. [00:19:29]

swyx: Boring software. Boring enterprise software. [00:19:31]

Martin Casado: Traditional, really good companies. [00:19:33]

swyx: No AI here. [00:19:34]

Martin Casado: Well, the AI of course is pulling them into use cases.
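Casado's investor math here is easy to make concrete. A minimal sketch: the 7% market return and the 3x-net fund target come from the conversation; the company's multi-year growth-decay schedule is purely an invented assumption.

```python
# Back-of-envelope for the "7% market vs. 5x company" point.
# The 7% return and 3x-net fund target come from the conversation;
# the company's growth-decay schedule below is an invented assumption.

def multiple(growth_rate: float, years: int) -> float:
    """Total multiple on capital after compounding annually."""
    return (1 + growth_rate) ** years

# Public market at ~7%/year over a 10-year fund life:
market_10y = multiple(0.07, 10)  # just under 2x

# A company growing 5x this year, with growth fading hard afterward
# (5x, then 3x, then 2x, then 1.5x/year for the remaining seven years):
company_10y = 5 * 3 * 2 * multiple(0.5, 7)

print(f"market over 10y:  {market_10y:.2f}x")
print(f"company over 10y: {company_10y:.0f}x")
# Even with aggressive growth decay, the company multiple clears the
# ~3x net that LPs want over a fund's life cycle by a wide margin.
```

The point of the sketch is not the exact multiple but the gap: even a heavily fading 5x grower dominates both the index and the fund hurdle.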
Yeah, but they're not on the token path, right? Let's just say they're software, but they're not on the token path. [00:19:41] They're great investments by any definition, except for some random VC on X saying they're not growing fast enough. [00:19:52]

Sarah Wang: What do you think? Yeah, maybe I'll answer a slightly different question, adjacent to what you asked: an area we're not investing in [00:20:00] right now where I think there's a real question, and where we're spending a lot of time regardless of whether we pull the trigger. [00:20:05] It would probably be on the hardware side, actually. Robotics, right? I don't want to say it's not getting funding, because it clearly is; it's almost non-consensus to not invest in robotics at this point. But we've spent a lot of time in that space, and for us, we just haven't seen the ChatGPT moment [00:20:22] happen on the hardware side. And the funding going into it feels like it's already taking that for granted. [00:20:30]

Martin Casado: Yeah. But we also went through the drone wave. There's a Zipline right out there, actually. And the AV wave. One of the takeaways is that when it comes to hardware, most companies end up verticalizing. [00:20:46] If you're investing in a robot company for agriculture, you're investing in an ag company, because that's the competition, and that's the supply chain. And if you're doing it for mining, that's mining.
And so the AD team does a lot of that type of work, because they're actually set up to [00:21:00] diligence it. [00:21:01] But for horizontal technology investing, there's very little when it comes to robots, just because it's so fit-for-purpose. So we like to look at software solutions, or horizontal solutions: Applied Intuition, clearly from the AV wave; DeepMap, clearly from the AV wave; and I would say Scale AI was actually a horizontal one for robotics early on. [00:21:23] That sort of thing we're very, very interested in. But the actual robot interacting with the world is probably better for a different team. [00:21:30]

Alessio: Yeah. I'm curious who these teams are supposed to be that invest in them. I feel like everybody says, yeah, robotics is important and people should invest in it. [00:21:38] But then when you look at the numbers, the capital requirements early on versus the moment of "okay, this is actually going to work, let's keep investing," that seems really hard to predict. [00:21:49]

Martin Casado: I think Coatue, Khosla, GC, these have all invested in hardware companies. And [00:22:00] listen, it could work this time, for sure. [00:22:01] If Elon's doing it, right? Just the fact that Elon's doing it means there's going to be a lot of capital and a lot of attempts over a long period of time. So that alone maybe suggests we should just be investing in robotics, because you have this North Star, Elon with a humanoid, basically willing an industry into being. [00:22:17] But we've just historically found, and we're huge believers that this is going to happen, that we don't feel we're in a good position to diligence these things. Because again, robotics companies tend to be vertical.
You really have to understand the market they're being sold into. That competitive equilibrium with a human being is what's important, [00:22:34] not the core tech. And we're more horizontal, core-tech-type investors, Sarah and I. The AD team is different; they can actually do these types of things. [00:22:42]

swyx: Just to clarify, AD stands for... [00:22:44]

Martin Casado: American Dynamism. [00:22:45]

swyx: Alright. Okay. I actually do have a related question, but first I want to acknowledge something on the chip side. I recall a podcast you were on, I think about two or three years ago, where you said [00:23:00] something that really stuck in my head: that at some point of scale, it makes sense to build a custom ASIC per run. [00:23:07]

Martin Casado: Yes. It's crazy. Yeah. [00:23:09]

swyx: We're here. And I think you estimated $500 billion or something. [00:23:12]

Martin Casado: No, no, no. A billion. At a $1 billion training run, it makes sense to actually do a custom ASIC, if you can do it in time. The question now is timelines, not money. Just rough math: [00:23:22] if it's a billion-dollar training run, then the inference for that model has to be over a billion, otherwise it won't be solvent. So assume you could save 20%, and you could save much more than that with an ASIC. 20% is $200 million. You can tape out a chip for $200 million, right? So now you can literally justify, economically (not timeline-wise, which is a different issue), an ASIC per model. [00:23:41]

swyx: Which is how much we leave on the table every single time we do generic Nvidia. [00:23:48]

Martin Casado: Exactly. Exactly.
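The rough math in that exchange can be written out directly. Every figure is the illustrative one stated in the conversation ($1B run, 20% savings, $200M tape-out), not a real cost quote:

```python
# Casado's ASIC-per-model arithmetic with the assumptions made explicit.
# Every number is illustrative, taken from the conversation, not a real quote.

training_cost   = 1_000_000_000      # a $1B training run
inference_spend = training_cost      # solvency floor: the model must earn back
                                     # its training cost, so inference compute
                                     # spend is at least this large
savings_rate    = 0.20               # conservative ASIC saving; he suggests a
                                     # factor of two (50%) is plausible
tapeout_cost    = 200_000_000        # rough cost to tape out a custom chip

savings = inference_spend * savings_rate
print(f"ASIC savings ${savings/1e6:.0f}M vs tape-out ${tapeout_cost/1e6:.0f}M")
# At 20% the chip exactly pays for itself; at 50% it nets ~$300M.
# The economics clear the bar either way; the open question is the timeline.
```

Note the break-even is deliberately conservative: the factor-of-two efficiency gain Casado mentions next would make the same tape-out budget pay off 2.5x over.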
No, it's actually much more than that. You could probably get a factor of two, which would be $500 million. [00:23:54]

swyx: Typical MFU would be like 50%. And that's considered good. [00:23:57]

Martin Casado: Exactly. [00:23:57]

swyx: A hundred percent. So yeah, and I [00:24:00] just want to acknowledge: here we are in 2025, and OpenAI is confirming Broadcom and all these other custom silicon deals, which is incredible. I think, speaking of AD, there's a really interesting tie-in that you guys have hit on, this America-first movement, reindustrializing here, [00:24:17] moving TSMC here if that's possible. How much overlap is there from AD [00:24:23]

Martin Casado: Yeah. [00:24:23]

swyx: to, I guess, growth, and to investing in particular in US AI companies that are strongly bounded by their compute? [00:24:32]

Martin Casado: Yeah. So I would view AD as more of a market segmentation than a mission, right? [00:24:37] The market segmentation is: it has regulatory or compliance issues, or government sales, or it deals with hardware. They're just set up to diligence those types of companies. So it's more of a market segmentation thing. I would say the entire firm, since its inception, has had geographic biases, right? [00:24:58] For the longest time we said the [00:25:00] Bay Area is going to be where the majority of the dollars go. And listen, there are a lot of compounding effects to having a geographic bias. Everybody's in the same place.
You've got an ecosystem, you're there, you've got presence, you've got a network. [00:25:12] And I would say the Bay Area is very much back. I remember pre-COVID, crypto had kind of pulled startups away from the Bay Area. Miami, yeah. New York came up because it's so close to finance; Los Angeles had a moment because it's so close to consumer. But now it's come back here. [00:25:29] So we tend to be very Bay Area focused historically, even though of course we've invested all over the world. Then if you take the ring out one more, it's going to be the US, of course, because we know it very well. And one more out, it's the US and its allies, and [00:25:44] it goes from there. [00:25:45]

Sarah Wang: Yeah. [00:25:45]

Martin Casado: Sorry. [00:25:46]

Sarah Wang: No, no, I agree. But I think that's where the companies are headquartered. Maybe your question is about supply chains and customer bases. I would say our companies are fairly international from that perspective. [00:25:59] They're selling [00:26:00] globally, right? They have global supply chains in some cases. [00:26:03]

Martin Casado: I would say also the stickiness is very different [00:26:05]

Sarah Wang: Yeah. [00:26:05]

Martin Casado: historically between venture and growth. There's so much company building in venture: hiring the next PM, introducing the customer, all of that. [00:26:15] Of course we're just going to be stronger where we have our network and we've been doing business for 20 years. I've been in the Bay Area for 25 years, so clearly I'm more effective here than I would be somewhere else.
Whereas for some of the later-stage rounds, the companies don't need that much help. [00:26:30] They're already pretty mature, historically, so they can be everywhere; there's less of that stickiness. This is different in the AI era, though. Sarah is now the chief of staff of half the AI companies in the Bay Area. She's an ops ninja: BizDev, BizOps. [00:26:48]

swyx: Are you finding much AI automation in your work? What's your stack? [00:26:53]

Sarah Wang: In my personal stack? [00:26:54]

swyx: The reason I ask is that it's triggering: I'm hiring [00:27:00] ops people, a lot of founders I know are also hiring ops people, and since you're basically helping out with ops at a lot of companies, [00:27:09] what are people doing these days? Because it's still very manual as far as I can tell. [00:27:13]

Sarah Wang: Hmm. Yeah. The things we help with are pretty network-based, in that it's "hey, how do I shortcut this process?" Well, let's connect you to the right person. So there's not quite an AI workflow for that. [00:27:26] I will say, as a growth investor, Claude Cowork is pretty interesting. For the first time, you can actually get one-shot data analysis, right? If you're going to take a customer database and analyze cohort retention, that's stuff you had to do by hand before. And the other night, it was midnight and three of us on our team were playing with Claude Cowork. [00:27:47] We gave it a raw file. Boom. Perfectly accurate; we checked the numbers. It was amazing. That was my aha moment. It sounds so boring, but that's the kind of thing a growth investor is [00:28:00] slaving away on late at night.
Done in a few seconds. [00:28:03]

swyx: Yeah. You've got to wonder what Anthropic's labs group, their new product studio, [00:28:10] would be worth as an independent startup. [00:28:14]

Martin Casado: A lot. [00:28:14]

Sarah Wang: Yeah, true. [00:28:16]

swyx: Yeah. [00:28:16]

Martin Casado: You've got to hand it to them. They've been executing incredibly well. [00:28:19]

swyx: Yeah. To me, Anthropic building on Claude Code makes sense. The real pedal-to-the-metal moment is when they start coming after consumer, against OpenAI. That would be red alert at OpenAI. [00:28:35]

Martin Casado: Oh, I think they've been pretty clear they're enterprise focused. [00:28:37]

swyx: They have been, but... [00:28:40]

Martin Casado: They've been pretty clear publicly. [00:28:40]

swyx: It's enterprise focused. It's coding. Right. [00:28:43]

AI Labs vs Startups: Disruption, Undercutting & the Innovator's Dilemma [00:28:43]

swyx: But here's Claude Cowork, and apparently they're running Instagram ads for Claude. [00:28:50] I get them all the time. And so [00:28:54] it's the disruption thing: OpenAI has been doing [00:29:00] consumer, pursuing general intelligence in every modality, and here's Anthropic, which only focuses on this one thing, but now they're undercutting and doing the whole innovator's-dilemma thing on everything else. [00:29:11]

Martin Casado: It's very [00:29:11]

swyx: interesting. [00:29:12]

Martin Casado: Yeah. So for me there's a very open question. Do you know that meme where there's a guy at a fork, and there's a path this way?
There's a path that way. "Which way, Western man?" Yeah. [00:29:23]

Two Futures for AI: Infinite Market vs AGI Oligopoly [00:29:23]

Martin Casado: And for me, the entire industry hinges on two potential futures. [00:29:29] In one, the market is infinitely large and there are perverse economies of scale: as soon as you put a model out there, it kind of sublimates, all the other models catch up, software is being rewritten and fractured all over the place, there's tons of upside, and it just grows. [00:29:48] And then there's another path, which is: maybe these models actually generalize really well, and all you have to do is train them with three times more money. That's all you have to [00:30:00] do, and they'll just consume everything beyond them. If that's the case, you end up with basically an oligopoly for everything, [00:30:06] because they're perfectly general. That's the AGI path: these models are perfectly general, they can do everything. And the other path is: this is actually normal software, and the universe is complicated. Nobody knows the answer. [00:30:18]

The Economics Reality Check: Gross Margins, Training Costs & Borrowing Against the Future [00:30:18]

Martin Casado: My belief is, if you actually look at the numbers of these companies, if you look at the amount they're making against how much they spent training the last model, they're gross-margin positive. [00:30:30] You'd say, oh, that's really working. But if you look at the current training they're doing for the next model, they're gross-margin negative. So part of me thinks a lot of them are borrowing against the future, and that's going to have to slow down.
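A toy cash picture of that "borrowing against the future" dynamic: only the 3x scale-up of the next training run comes from the conversation; the dollar amounts are invented for illustration.

```python
# Gross margin looks positive against the *last* model's training cost,
# and negative against the *next* one. All dollar figures are hypothetical.

revenue       = 2_000_000_000        # revenue earned on the current model
serving_cost  = 600_000_000          # inference cost of serving that revenue
last_training = 1_000_000_000        # what the current model cost to train
next_training = 3 * last_training    # "train them with three times more money"

margin_vs_last = revenue - serving_cost - last_training
margin_vs_next = revenue - serving_cost - next_training

print(f"vs last model: {margin_vs_last/1e6:+.0f}M")  # positive: "it's working"
print(f"vs next model: {margin_vs_next/1e6:+.0f}M")  # negative: needs new capital
```

The same revenue stream reads as a healthy business or an insolvent one depending on which training run you charge it against, which is exactly the accounting ambiguity Casado is pointing at.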
It's going to catch up to them at some point, but we don't really know. [00:30:47]

Sarah Wang: Yeah. [00:30:47]

Martin Casado: Does that make sense? It could be the case that the only reason this is working is that they can raise the next round and train the next model, because these models have such a short life. And at some point they won't be able to [00:31:00] raise that next round for the next model, and then things will converge and fragment again. [00:31:03] But right now that's not the case. [00:31:04]

Sarah Wang: Totally. By the way, just a meta point: I think the other lesson from the last three years, and we talk about this all the time because we're in this Twitter/X bubble, is that if you go back to, say, March 2024, it felt like an open-source model with benchmark-leading capability was launching on a daily basis. [00:31:27] So that's one period: suddenly it's "open source takes over the world, there's going to be a plethora, it's not an oligopoly." And if you rewind even before that, GPT-4 was number one for nine, ten months. That's a long time, right? [00:31:44] And of course now we're in this era where it feels like an oligopoly, with maybe some steady-state shifts, and it could look like this in the future too. But it's just so hard to call.
And I think the thing that keeps us up at [00:32:00] night, in a good way and a bad way, is that the capability progress is actually not slowing down. [00:32:06] Until that happens, you don't know what it's going to look like. [00:32:09]

Martin Casado: But I would say for sure it's not converged. For sure the systemic capital flows have not converged, meaning right now it's still borrowing against the future to subsidize current growth, which you can do for a period of time. [00:32:23] But at some point the market will rationalize that, and nobody knows what that will look like. [00:32:29]

Alessio: Yeah. [00:32:29]

Martin Casado: Or the drop in the price of compute will save them. Who knows? [00:32:34]

Alessio: Yeah. I think the models need to be matched to specific tasks, you know? It's like, okay, now Opus 4.5 might be AGI at some specific task, and then you can depreciate the model over a longer time. [00:32:45] Right now there's no such thing as an old model. [00:32:47]

Martin Casado: No, but let me change that mental model. That used to be my mental model too; let me change it a little bit. [00:32:53]

Capital as a Weapon vs Task Saturation: Where Real Enterprise Value Gets Built [00:32:53]

Martin Casado: If you can raise more than the aggregate of everybody that uses your models, that doesn't even [00:33:00] matter. See what I'm saying? So: I have an API business. My API business is 60% margin, or 70%, or 80%; it's a high-margin business. So I know what everybody is using. If I can raise more money than the aggregate of everybody that's using it, I will consume them, whether I'm AGI or not. [00:33:14] And I will know what they're doing, because they're using it.
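Casado's "capital as a weapon" test reduces to a single inequality. A sketch with hypothetical customer-spend figures; only the 60-80% API margin band is from the conversation:

```python
# A lab with a high-margin API sees exactly what every customer is doing.
# Casado's test: if the lab can raise more than its customers' combined
# spend, it can fund training straight into their use cases, AGI or not.
# All companies and figures below are hypothetical.

api_margin = 0.70                    # within the 60-80% band he describes

annual_api_spend = {                 # hypothetical startups built on the API
    "coding-startup":  400e6,
    "legal-startup":   250e6,
    "support-startup": 150e6,
    "search-startup":  100e6,
}
aggregate_spend = sum(annual_api_spend.values())
lab_gross_profit = aggregate_spend * api_margin  # what the lab clears on that usage

lab_raise = 2_000_000_000            # hypothetical round the lab can raise

can_consume_ecosystem = lab_raise > aggregate_spend
print(can_consume_ecosystem)
```

The margin matters twice: it funds the lab, and it doubles as perfect market intelligence on which customers are worth consuming.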
And unlike in the past, where engineering stopped me from doing that, [00:33:21]

Alessio: Mm-hmm. [00:33:21]

Martin Casado: it's very straightforward: you just train. So I also used to think of it as "ask whether the model is AGI: general, general, general." But there's also just the possibility that the capital markets will give them the ammunition to go after everybody on top of them. [00:33:36]

Sarah Wang: I do wonder, though, to your point, whether there are certain tasks where getting marginally better isn't actually that much better. We've saturated them; call it AGI for that task or whatever. Actually, Ali Ghodsi talks about this: we're already at AGI for a lot of functions in the enterprise. [00:33:50] For those tasks, you probably could build very specific companies that focus on squeezing as much value out of the task as possible, value that isn't [00:34:00] coming from the model itself. There's probably a rich enterprise business to be built there. I could be wrong, but there are a lot of interesting examples. [00:34:08] Say you're looking at the legal profession, or whatnot. Maybe that's not a great one, because the models are getting better on that front too. But anywhere it's a bit saturated, the value comes from services. It comes from implementation, right? It comes from all the things that actually make it useful to the end customer. [00:34:24]

Martin Casado: Sorry, one more thing I think is under-discussed in all of this: to what extent is every task AGI-complete? [00:34:31]

Sarah Wang: Mm-hmm. [00:34:32]

Martin Casado: I code every day. It's so fun. [00:34:35]

Sarah Wang: That's a core question. Yeah. [00:34:36]

Martin Casado: And when I'm talking to these models, it's not just code. It's everything, right?
swyx: It's healthcare. It's... [00:34:44]

Martin Casado: It is exactly that. [00:34:47]

Sarah Wang: Great support. Yeah. [00:34:48]

Martin Casado: It's everything. I'm asking these models to understand compliance. I'm asking them to search the web. I'm asking them to talk about things I know from history. It's having a full conversation with me while I engineer. So it could be [00:35:00] the case that [00:35:01] the most AGI-complete model wins, independent of the task. And I'm not an AGI guy, but we don't know the answer to that one either. [00:35:11]

swyx: Yeah. [00:35:12]

Martin Casado: But it seems to me that, listen, Codex in my experience is for sure better than Opus 4.5 for coding. [00:35:18] It finds the hardest bugs in what I work on. It's like working with the smartest developers. It's great. But I think Opus 4.5 actually has a great bedside manner, and that really matters if you're building something very complex, because it's a partner, a brainstorming partner. [00:35:38] And I think we don't discuss enough how every task has that quality. [00:35:42]

swyx: Mm-hmm. [00:35:43]

Martin Casado: And what does that mean for capital investment, for frontier models and sub-models? [00:35:47]

Why "Coding Models" Keep Collapsing into Generalists (Reasoning vs Taste) [00:35:47]

Martin Casado: What happened to all the special coding models? None of them worked, right? [00:35:51]

Alessio: Some of them didn't even get released. [00:35:53] Magic... [00:35:54]

Martin Casado: There's a whole host of them.
We saw a bunch of them, and there was this whole theory that there could be one. And [00:36:00] I think one of the conclusions is that there's no such thing as a coding model. [00:36:04]

Alessio: You know? [00:36:04]

Martin Casado: That's not a thing. You're talking to another human being, and it's good at coding, but it's got to be good at everything. [00:36:10]

swyx: A minor disagreement, only because I have pretty high confidence that OpenAI will basically always release a GPT-5 and a GPT-5 Codex. The way I put it is: one for reasoning, one for taste. And someone internal at OpenAI was like, yeah, that's a good way to frame it. [00:36:32]

Martin Casado: That's so funny. [00:36:33]

swyx: But maybe it collapses down to reasoning, and that's it. It's not a hundred dimensions; it's two dimensions. Bedside manner versus coding. [00:36:43]

Martin Casado: Yeah. [00:36:44]

swyx: Yeah. [00:36:46]

Martin Casado: It's hilarious. For anybody listening to this: when you're coding or using these models for something like that, just be aware of how much of the interaction has nothing to do with coding. It turns out to be a large portion of it. So I [00:37:00] think the best general model is going to remain very important no matter what the task is. [00:37:06]

swyx: Yeah. [00:37:07]

What He's Actually Coding: Gaussian Splats, Spark.js & 3D Scene Rendering Demos [00:37:07]

swyx: Speaking of coding, I'm going to be cheeky and ask: what actually are you coding? Because obviously you could code anything, and you're obviously a busy investor and a manager of a giant team.
So what are you coding? [00:37:18]

Martin Casado: I help Fei-Fei at World Labs. It's one of our investments, and they're building a foundation model that creates 3D scenes. [00:37:27]

swyx: Yeah, we had it on the pod. [00:37:28]

Martin Casado: Yeah. And these 3D scenes are Gaussian splats; that's just the way that kind of AI works. You can reconstruct a scene better with radiance fields than with meshes, because they don't really have topology. So they produce these beautiful 3D rendered scenes that are Gaussian splats, but the actual industry support for Gaussian splats isn't great. [00:37:50] It's always been meshes; things like Unreal use meshes. So I work on an open-source library called Spark.js, which is a [00:38:00] JavaScript rendering layer for Gaussian splats. You need that support, and the three.js ecosystem is all meshes right now, so Spark has become kind of the default for splats in the three.js ecosystem. [00:38:13] As part of that, to exercise the library, I build a whole bunch of cool demos. So if you see me on X, you see all my demos and all the world building, but all of that is just to exercise this library I work on, because it's actually a very tough algorithmics problem to scale a library that far. [00:38:29] And just so you know, this is ancient history now, but 30 years ago I paid for undergrad working on game engines in college, in the late nineties. So it's a very old background, but I actually have a background in this, and a lot of it's fun. The whole goal is just for this rendering library to work. [00:38:47]

Sarah Wang: Are you one of the most active contributors on their GitHub? [00:38:49]

Martin Casado: On Spark?
Yes.[00:38:51] Sarah Wang: Yeah, yeah.[00:38:51] Martin Casado: There's only two of us there, so, yes. So the primary developer is a [00:39:00] guy named Andres Quist, who's an absolute genius. He and I did our PhDs together, and we studied for our quals together. It was almost like hanging out with an old friend, you know?[00:39:09] So he's the core guy. I mostly did kind of the side work; I run a venture fund.[00:39:14] swyx: It's amazing. Five years ago you would not have done any of this, and it brought you back.[00:39:19] Martin Casado: The activation energy was so high because you had to learn all the framework b******t. Man, I f*****g used to hate that. Now I don't have to deal with that; I can focus on the algorithmics, I can focus on the scaling.[00:39:29] swyx: Yeah.[00:39:29] LLMs vs Spatial Intelligence + How to Value World Labs' 3D Foundation Model[00:39:29] swyx: And then I'll observe one irony, and then I'll ask a serious investor question. The irony is that Fei-Fei actually doesn't believe that LLMs can lead us to spatial intelligence, and here you are using LLMs to help achieve spatial intelligence. I see some disconnect in there.[00:39:45] Martin Casado: Yeah. So I think what she would say is LLMs are great to help with coding.[00:39:51] swyx: Yes.[00:39:51] Martin Casado: But that's very different than a model that actually provides the[00:39:56] swyx: spatial intelligence.[00:39:56] Martin Casado: And listen, our brains clearly have [00:40:00] both: they clearly have a language reasoning section and they clearly have a spatial reasoning section.
I mean, these are two pretty independent problems.[00:40:07] swyx: Okay. And the one data point I recently had against that is the DeepMind IMO gold. Typically the answer is that this is where you start going down the neurosymbolic path, right? One sort of abstract reasoning thing and one formal thing. And that's what DeepMind had in 2024 with AlphaProof and AlphaGeometry, and now they just use Deep Think and extended thinking tokens. It's one model, and it's an LLM.[00:40:36] Martin Casado: Yeah.[00:40:37] swyx: And so that was my indication that maybe you don't need a separate system.[00:40:42] Martin Casado: Yeah. So let me step back. At the end of the day, these things are like nodes in a graph with weights on them, right? It can be modeled that way if you distill it down. But let me talk about the two different substrates. Let me put you in a dark room,[00:40:56] a totally black room, and then let me just [00:41:00] describe how you exit it: to your left there's a table, duck below this thing, right? The chances that you're not gonna run into something are very low. Now let me turn on the light so you can actually see. You can judge distance, you know how far away something is and where it is. Then you can do it, right? Language is not the right primitive to describe the universe, because it's not exact enough. That's all Fei-Fei is talking about. When it comes to spatial reasoning, you actually have to know that this is three feet away, like that far away.
It is curved.[00:41:37] You have to understand the actual movement through space.[00:41:40] swyx: Yeah.[00:41:40] Martin Casado: So listen, I do think in the end these models are definitely converging as far as models go, but there are different representations of the problems you're solving. One is language, which would be like describing to somebody what to do.[00:41:51] The other one is actually just showing them, and spatial reasoning is just showing them.[00:41:55] swyx: Yeah, right. Got it. The investor question was on World Labs: [00:42:00] how do I value something like this? What work does it do? I'm just like, Fei-Fei is awesome,[00:42:07] Justin's awesome, and the other two co-founders, but everyone's building cool tech. What's the value of the tech? And this is the fundamental question.[00:42:16] Martin Casado: Well, let me maybe give you a rough sketch based on the diffusion models. I'd actually love to hear Sarah on this, because I'm the venture person, you know, and venture is always kind of wild-west-type[00:42:24] swyx: stuff. You pitch the dream and she has to actually[00:42:28] Martin Casado: I'm gonna mark it to reality. So I'll say the venture part, and she can be like, okay, you little kid. Yeah. So these diffusion models literally create something for almost nothing, and something that the world has found to be very valuable in the past, in real markets, right?[00:42:45] Like a 2D image. That's been an entire market. People value them; it takes a human being a long time to create one. To turn me into a whatever, an image, would cost a hundred bucks and an hour.
The inference costs [00:43:00] a hundredth of a penny, right? So we've seen this with speech and very successful companies.[00:43:03] We've seen this with 2D images. We've seen this with movies. Now think about 3D scenes. I mean, when's Grand Theft Auto coming out? It's been, what, 10 years?[00:43:14] Alessio: Yeah.[00:43:15] Martin Casado: How much would it cost to reproduce this room in 3D? If you hired somebody on Fiverr, at any sort of quality, probably $4,000 to $10,000.[00:43:24] And if you had a professional, probably $30,000. So if you could generate the exact same thing from a 2D image, and we know these are used, in Unreal and Blender, in movies and video games and all of it. So if you could do that for,[00:43:36] you know, less than a dollar, that's four or five orders of magnitude cheaper. You're bringing the marginal cost of something that's useful down by orders of magnitude, which historically has created very large companies. So that would be the venture kind of strategic dreaming map.[00:43:49] swyx: Yeah.[00:43:50] And for listeners, you can do this yourself on your own phone with[00:43:55] Marble.[00:43:55] Martin Casado: Yeah, Marble.[00:43:55] swyx: Or there are also many NeRF apps where you just go on your iPhone and do this.[00:43:59] Martin Casado: Yeah. [00:44:00] In the case of Marble, though, what you do is you literally give it an image.[00:44:03] Most NeRF apps, you kind of run around and take a whole bunch of pictures and then reconstruct the scene.[00:44:08] swyx: Yeah.[00:44:08] Martin Casado: Things like Marble, the whole generative 3D space, will just take a 2D image and it'll reconstruct all the, like,[00:44:16] swyx: meaning it has to fill in
Uh,[00:44:18] Martin Casado: stuff at the back of the table, under the table, the parts the image doesn't see.[00:44:22] So the generative stuff is very different from reconstruction in that it fills in the things you can't see.[00:44:26] swyx: Yeah. Okay.[00:44:26] Sarah Wang: So,[00:44:27] Martin Casado: all right. So now the[00:44:28] Sarah Wang: no, no. I mean, I love that[00:44:29] Martin Casado: adult[00:44:29] Sarah Wang: perspective. Well, no, I was gonna say we are very much a tag team. We started this pod with that premise, and I think this is a perfect question to build on that even further,[00:44:36] 'cause it truly is; I mean, we're tag-teaming all of these together.[00:44:39] Investing in Model Labs, Media Rumors, and the Cursor Playbook (Margins & Going Down-Stack)[00:44:39] Sarah Wang: But I think every investment fundamentally starts with the same two premises. One is, at this point in time, we actually believe that these are N-of-1 founders for their particular craft, and that has to have been demonstrated in their prior careers, right?[00:44:56] So, you know, now the term is [00:45:00] "neolab," but we're not investing in every company, every founder trying to build a foundation model. Contrary to popular opinion, we're
The u-blox SAM-M8Q has been sitting on my bench for months. This little GPS module has a built-in antenna, a coin cell backup, and speaks both NMEA and the UBX binary protocol over UART or I2C. So why isn't it in the shop already? Well, it's mostly because of the 475-page interfacing datasheet documenting every command, struct, and config register - hundreds of message types. I got partway through by hand with some Claude Code Sonnet assistance, but ran out of time - plus babysitting Sonnet was still tedious. However, now we're living in an Opus + Codex era! So I pointed my Raspberry Pi OpenClaw at it. https://github.com/adafruit/openclaw Here's the setup: a Raspberry Pi 5 running OpenClaw, wired to a QT Py RP2040, which talks to the SAM-M8Q. Opus 4.6 reads the datasheet (converted to markdown first by Sonnet 4.6 with 1M context, to avoid re-parsing the PDF every session) and builds the implementation plan. I review the plan to make sure it prioritizes the most common commands and reports, and I flagged some nonessential sections like automotive-assist and RTK-specific messages. Then Codex is assigned each message implementation task as a sub-agent and writes the actual C code for the Arduino library. Opus suggested struct-based parsing rather than digging through each uint8_t array: we just memcpy the raw bytes of each checksummed message onto the matching struct and extract the typed fields. We've got four message types done so far. After each message is implemented, Codex also writes a test sketch that exercises and pretty-prints the results of that message - great for self-testing as well as for regression testing later. Tonight I'm telling it to keep going while I sleep: code, parse, test against live satellite data, fix failures, commit and push on success, then move on to the next. To me this is a great use of "agentic" firmware development: there's no creativity in transcribing 84 different structs from a 475-page datasheet.
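The struct-overlay idea can be sketched roughly like this. UBX really does frame messages as two sync bytes, class, ID, a little-endian length, the payload, and an 8-bit Fletcher checksum over class through payload; the payload layout and field names below are invented for illustration, not copied from the SAM-M8Q datasheet:

```c
#include <stdint.h>
#include <string.h>

// Hypothetical UBX-style payload; field names are illustrative only.
// packed matters in general for UBX payloads with odd-sized fields
// (GCC/Clang attribute shown).
typedef struct __attribute__((packed)) {
    uint32_t iTOW;    // GPS time of week, ms
    int32_t  lon;     // longitude, 1e-7 degrees
    int32_t  lat;     // latitude, 1e-7 degrees
    uint8_t  fixType; // 0 = no fix, 3 = 3D fix, ...
    uint8_t  numSV;   // satellites used
} NavFix;

// UBX 8-bit Fletcher checksum, computed over class, ID, length, payload.
// Frame layout: [0xB5 0x62][class][id][len lo][len hi][payload][CK_A][CK_B]
static int ubx_checksum_ok(const uint8_t *frame, uint16_t payload_len) {
    uint8_t ck_a = 0, ck_b = 0;
    for (uint16_t i = 2; i < 6 + payload_len; i++) {
        ck_a += frame[i];
        ck_b += ck_a;
    }
    return ck_a == frame[6 + payload_len] && ck_b == frame[7 + payload_len];
}

// Overlay the payload onto the typed struct instead of indexing into a
// uint8_t array field by field: verify, then one memcpy.
static int parse_nav_fix(const uint8_t *frame, uint16_t payload_len, NavFix *out) {
    if (payload_len != sizeof(NavFix)) return 0;
    if (!ubx_checksum_ok(frame, payload_len)) return 0;
    memcpy(out, frame + 6, sizeof(NavFix)); // payload starts after 6-byte header
    return 1;
}
```

This works because the M8 sends payload fields little-endian, which matches the RP2040, so a straight memcpy onto a packed struct yields correctly typed fields with no per-field shifting.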
Once the LLMs are done, I can review the PRs as if they came from an everyday contributor, and even make revision suggestions. Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ ----------------------------------------- #openclaw #raspberrypi #adafruit
I'm designing my real youth ministry program with these 3 rules:
1. The order doesn't repeat. And today I'm intentionally going to do it WRONG!!
2. I'm also making a different, from-scratch DYM game for EACH week. And I'm going to tell you how to get a FREE early copy of it.
3. Finally, I'll evaluate how it all went!
Join us! ACCESS TO FREE GAME & RECAP EPISODE https://www.patreon.com/posts/we-went-there-150912245?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link SHOW NOTES Shownotes & Transcripts https://www.hybridministry.xyz/189 ❄️ WINTER SOCIAL MEDIA PACK https://www.patreon.com/posts/winter-seasonal-144943791?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link HYBRID HERO MEMBERS GET IT FREE! https://www.patreon.com/hybridministry
In an online meeting with the San Diego Ramana Satsang (ramana-satsang-sd@googlegroups.com) on 1st February 2026, Michael answers various questions about Bhagavan's teachings. This episode can be watched as a video on YouTube. A more compressed audio copy in Opus format can be downloaded from MediaFire. Advertisement-free videos on the original writings of Bhagavan Ramana with explanations by Michael James can be accessed on our Vimeo video channel. Books by Sri Sadhu Om and Michael James that are currently available on Amazon: By Sri Sadhu Om: ► The Path of Sri Ramana (English) By Michael James: ► Happiness and Art of Being (English) ► Lyckan och Varandets Konst (Swedish) ► Anma-Viddai (English) Above books are also available in other regional Amazon marketplaces worldwide. - Sri Ramana Center of Houston
www.marktreichel.com
https://www.linkedin.com/in/mark-treichel/

Episode Description: When your credit union receives an examination result you believe is wrong, what are your options? In this episode, Mark Treichel walks through NCUA's formal appeal process from start to finish — the timeline, the levels of review, what you can actually appeal, and how to think about whether it makes sense for your institution. Drawing on his experience as a former NCUA regional director and his work advising credit unions through the process, Mark breaks down the practical realities most credit union leaders don't fully understand until they're in the middle of it.

Key Topics Covered:
- Why resolving issues at the examiner level is always the best first step — and why it doesn't always work
- The fear of retaliation and when it's worth pushing back anyway
- What qualifies as a "material supervisory determination" under Part 746
- The critical phrase most credit unions overlook: "includes, but is not limited to"
- Why you can't formally appeal CAMEL components — but effectively can anyway
- Document resolutions are negotiable: pushing back on language, dates, and requirements
- The full appeal timeline: regional director, Supervisory Review Committee, and NCUA Board
- When oral hearings are available and when they're guaranteed
- Why "tie goes to the runner" at every level — and what that means for your burden of proof
- Defining victory: full reversals, partial wins, and the value of being heard
- Building your administrative record for the long term

Key Takeaways:
- You have 30 days from the final exam to appeal to the regional director — and some argue the clock starts when you see the draft.
- A full appeal through the NCUA Board can take eight months to a year.
- The Supervisory Review Committee must grant an oral hearing if you request one — the only level where that's guaranteed.
- Your odds of success generally improve as you move higher in the process.
- Even if you don't win the appeal, you're building an administrative record NCUA must consider in any future enforcement actions.

Resources:
- NCUA Part 746, Subpart A (Appeals Regulation)
- Preamble to the final rule on appeals process
- Credit Union Exam Solutions: marktreichel.com
In today's episode of the RattlerGator Report, JB White conducts a live read-through and reaction to a powerful essay by Matt Schumer detailing the rapid acceleration of artificial intelligence. Framing the moment as a “February 2020” style inflection point, JB walks through Schumer's firsthand account of GPT-5.3 Codex and Opus 4.6, highlighting AI systems that now write code, debug themselves, iterate independently, and even help build their own successors. The discussion centers on exponential growth, task-duration benchmarks, AI-assisted model training, and projections that superhuman capability across most cognitive work could arrive within just a few years. JB expands the conversation into geopolitics, energy infrastructure, Elon Musk's role in robotics and satellite systems, and what AI dominance could mean for military, economic, and cultural power. He also ties the technological shift to 2026–2028 political positioning, global alliances, and internal GOP battles, arguing that America's strategic advantage depends on recognizing the scale of change underway. The episode blends technological urgency, political forecasting, and philosophical reflection on generational responsibility in the face of accelerating transformation.
Nick McDaniel is joined by Jacked Jameson and Rosario Grillo for another packed episode of Tapped In covering the biggest stories in Georgia independent wrestling.

We break down major headlines, title situations, heated rivalries, and what's building toward Spectacle. There's steel cage drama, momentum shifts, and plenty of behind-the-scenes perspective you won't get anywhere else.

The Let's Get Kraken segment delivers more chaos, hard lessons, and unexpected twists, while our non-wrestling Mt. Rushmore debate turns into a surprisingly passionate discussion.

We close things out with Making the Drives, previewing a loaded weekend across Georgia — championship matches, stipulations, and can't-miss shows.

If you follow Georgia indie wrestling, this is your weekly pulse check.
Worldwide Markets - Episode 666
Our 235th episode with a summary and discussion of last week's big AI news!
Recorded on 01/02/2026
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
* Major model launches include Anthropic's Opus 4.6 with a 1M-token context window and "agent teams," OpenAI's GPT-5.3 Codex and faster Codex Spark via Cerebras, and Google's Gemini 3 Deep Think posting big jumps on ARC-AGI-2 and other STEM benchmarks amid criticism about missing safety documentation.
* Generative media advances feature ByteDance's Seedance 2.0 text-to-video with high realism and broad prompting inputs, new image models Seedream 5.0 and Alibaba's Qwen Image 2.0, plus xAI's Grok Imagine API for text/image-to-video.
* Open and competitive releases expand with Zhipu's GLM-5, DeepSeek's 1M-token context model, Cursor Composer 1.5, and open-weight Qwen3 Coder Next using hybrid attention aimed at efficient local/agentic coding.
* Business updates include ElevenLabs raising $500M at an $11B valuation, Runway raising $315M at a $5.3B valuation, humanoid robotics firm Apptronik raising $935M at a $5.3B valuation, Waymo announcing readiness for high-volume production of its 6th-gen hardware, plus industry drama around Anthropic's Super Bowl ad and departures from xAI.

Timestamps:
(00:00:10) Intro / Banter
(00:02:03) Sponsor Break
(00:05:33) Response to listener comments
Tools & Apps
(00:07:27) Anthropic releases Opus 4.6 with new 'agent teams' | TechCrunch
(00:11:28) OpenAI's new GPT-5.3-Codex is 25% faster and goes way beyond coding now - what's new | ZDNET
(00:25:30) OpenAI launches new macOS app for agentic coding | TechCrunch
(00:26:38) Google Unveils Gemini 3 Deep Think for Science & Engineering | The Tech Buzz
(00:31:26) ByteDance's Seedance 2.0 Might be the Best AI Video Generator Yet - TechEBlog
(00:35:14) China's ByteDance, Alibaba unveil AI image tools to rival Google's popular Nano Banana | South China Morning Post
(00:36:54) DeepSeek boosts AI model with 10-fold token addition as Zhipu AI unveils GLM-5 | South China Morning Post
(00:43:11) Cursor launches Composer 1.5 with upgrades for complex tasks
(00:44:03) xAI launches Grok Imagine API for text and image to video
Applications & Business
(00:45:47) Nvidia-backed AI voice startup ElevenLabs hits $11 billion valuation
(00:52:04) AI video startup Runway raises $315M at $5.3B valuation, eyes more capable world models | TechCrunch
(00:54:02) Humanoid robot startup Apptronik has now raised $935M at a $5B+ valuation | TechCrunch
(00:57:10) Anthropic says 'Claude will remain ad-free,' unlike an unnamed rival | The Verge
(01:00:18) Okay, now exactly half of xAI's founding team has left the company | TechCrunch
(01:04:03) Waymo's next-gen robotaxi is ready for passengers — and also 'high-volume production' | The Verge
Projects & Open Source
(01:04:59) Qwen3-Coder-Next: Pushing Small Hybrid Models on Agentic Coding
(01:08:38) OpenClaw's AI 'skill' extensions are a security nightmare | The Verge
Research & Advancements
(01:10:40) Learning to Reason in 13 Parameters
(01:16:01) Reinforcement World Model Learning for LLM-based Agents
(01:20:00) Opus 4.6 on Vending-Bench – Not Just a Helpful Assistant
Policy & Safety
(01:22:28) METR GPT-5.2
(01:26:59) The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
If you enjoy my content, you're welcome to become a member and support me. It gives me more motivation to keep sharing great content!
In an online meeting with the Ramana Maharshi Foundation UK on 31st January 2026, Michael explains Guru Vācaka Kōvai verses 969 and 970, and then answers questions on Bhagavan Ramana's teachings. This episode can be watched as a video on YouTube. A more compressed audio copy in Opus format can be downloaded from MediaFire. Michael's explanations of the original works of Bhagavan can be watched free of advertisements on our Vimeo video channel. Books by Sri Sadhu Om and Michael James that are currently available on Amazon: By Sri Sadhu Om: ► The Path of Sri Ramana (English) By Michael James: ► Happiness and Art of Being (English) ► Lyckan och Varandets Konst (Swedish) ► Anma-Viddai (English) Above books are also available in other regional Amazon marketplaces worldwide. - Sri Ramana Center of Houston
Afrotronix, the Chadian musician and composer, returns with a new album titled "KÖD": 27 tracks as varied as they are danceable, with which he propels the rhythms, melodies, and voices of Africa's musical heritage into the 21st century. Caleb Rimtobaye, aka Afrotronix, is the newsroom's guest. He speaks with Olivier Rogez. RFI: Köd is the title of your new album. What does that word mean? Afrotronix: Köd, in Saran, a language of southern Chad, means tam-tam, because the tam-tam is one of the first coding tools, and I wanted to bring it back to the forefront at a time when artificial intelligence is taking up all the space. So I wanted to recall the genesis of this intelligence based on coding; I wanted to bring attention back to the human, to the origin. Of course, many fear that the machine will take over and humans will come last, but I just want to remind people that all of this comes from humans, and that it's nothing new. On your website, one can read that you fed your software and machines traditional sounds and traditional music. Tell us about that. The process consists of feeding the machine African algorithms within my music software. I did sound-design work from the sounds of instruments you can no longer even find today, and I synthesized them. Rhythmically, I don't compose electro that starts from house music with African elements added on top. With me, the foundation itself is African. So I lead the machine to think in African languages and African codes. That's what I call "proposing the African algorithm to the machines." Also read: Afrotronix explores African heritage in "KÖD", his new album. And you drew on old cassettes, perhaps also old African vinyl records, which you played to your software, is that right? That's right. I went to Chad and brought back a lot of samples. I also recovered old audio cassettes from the national radio. With all of that, I created a database. My whole approach is about celebrating African cultural values rather than treating them as relics of the past. I would like to turn them into living resources that keep feeding our conversations, our social projects, our political projects, because they are a heritage. Ancestors worked hard to get us here. The question is how to break out of mimicry at the political and social level, to stop trying to copy everything that comes from the West. Because we have a creative force that deserves room. "Köd is a meditation on what escapes the machines," you write. Yet this record also owes a great deal to machines. This software, this artificial intelligence: are you taking them against the grain? What I want to avoid is that we end up serving the machines. The data we feed the machines: we decide what it is. We decide what the machine should learn. I don't prompt my music; that's a line I won't cross, because I think the essence would then be missing. I don't make music for commercial ends. I have a message to convey. The words, the effort, and the energy I try to channel through the music are the essence of my work. So I refrain from prompting, because otherwise it would lose its point. There are many interesting things on this album. For instance, there's a Toubou rebel song, from that ethnic group of Chad, on the track Himini. Where did you find this Toubou rebel song? I was on the road north toward Fada, and in the car I heard the songs the drivers were playing. You know, in some regions the drivers take risks, and they play a lot of songs of bravery. And for the whole journey I listened, and a driver told me about these songs. He explained them, and I was truly moved by the force of this music, composed to push men not to retreat. Songs for going into battle? Yes. And in the resistance I am waging today, in the need to make room for African culture and to resist the invasion, I found it important to put that back at the center. And in N'Djamena, is that message understood? Are the authorities attentive to this work of preserving the musical heritage? That is what is somewhat lacking, alas. It's my fight, and it's the big message I keep trying to send. I don't believe there is development without culture, and one of the messages I am sending to the authorities, still today, is: "Yes, we make many sacrifices on the military side to defend ourselves, but to defend what?" If we set culture aside, what are we defending? Our identity comes through cultural expression, and what the state should understand, the public already has: people respond because it's the image they want to see of themselves. I draw bigger crowds than the political leaders in Chad. Maybe it's time those leaders looked into the question and invested in culture.
Join Simtheory: https://simtheory.ai
Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80

GLM-5 just dropped and it's trained entirely on Huawei chips – zero US hardware dependency. Meanwhile, we're having existential crises about whether we're even needed anymore. In this episode, we break down China's new frontier model that's competing with Opus 4.6 and Codex at a fraction of the price, why agentic loops are making 200K context windows the sweet spot (sorry, million-token dreams), and the very real phenomenon of AI productivity psychosis. We dive into why coding-optimized models are secretly winning at everything, the Harvard study confirming AI doesn't reduce work – it intensifies it, and the exodus of safety researchers from xAI, Anthropic, and OpenAI (spoiler: they're not giving back their shares). Plus: Mike's arm is failing from too much mouse usage, we debate whether the chatbot era is actually fading, and yes – there's a safety researcher diss track called "Is This The End?"

CHAPTERS:
0:00 Intro - Is This The End? (Song Preview)
0:11 Still Relevant Tour Update & NASA Listener Callout
1:42 AI Productivity Psychosis: The Pressure of Infinite Capability
4:25 GLM-5 Breakdown: China's New Frontier Model on Huawei Chips
7:24 First Impressions: GLM-5 in Agentic Loops
9:48 Why Cheap Models Matter & The New Model War
14:09 Codex Vibe Shift: Is OpenAI Winning?
16:24 Does Context Window Size Even Matter Anymore?
22:27 The Parallelization Problem & Cognitive Overload
27:27 Mike's Arm Injury & The Voice Input Pivot
31:17 Single-Threaded Work & The 95% Problem
35:06 UX is Unsolved: Rolling Back Agentic Mistakes
38:45 Harvard Study: AI Doesn't Reduce Work, It Intensifies It
44:01 How AI Erodes Company Structure & Why Adoption Takes Years
50:14 My AI vs Your AI: Household Debates
50:43 The Safety Researcher Exodus: xAI, Anthropic, OpenAI
56:49 Final Thoughts: Are We All Still Relevant?
59:04 BONUS: Full "Is This The End?"
Diss Track
Thanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. GLM-5 is here, your productivity psychosis is valid, and the safety researchers are becoming poets. xoxo
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Jaeden and Connor discuss the latest developments from Anthropic, particularly the release of Opus 4.6. They explore the company's innovative features, including Agent Teams and a significant increase in context window size, and analyze Anthropic's growing impact on the AI market, especially in comparison to competitors like OpenAI and Gemini. The conversation highlights the unique approach Anthropic takes towards AI safety and functionality, as well as its potential future in the industry.Get the top 40+ AI Models for $20 at AI Box: https://aibox.aiConor's AI Course: https://www.ai-mindset.ai/coursesConor's AI Newsletter: https://www.ai-mindset.ai/Jaeden's AI Hustle Community: https://www.skool.com/aihustleWatch on YouTube: https://youtu.be/P01MU1AlIlUChapters00:00 Introduction to Anthropic and Opus 4.601:25 Anthropic's Rise and Unique Approach05:25 Innovations in Opus 4.6: Agent Teams and Context Windows09:14 Anthropic's Market Impact and Future Prospects See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Nick and Myron are back with an episode that lives in the gray area — where great TV, uneasy optics, and long-term questions all collide as WrestleMania season ramps up.

This week isn't about panic or praise. It's about pressure — on creative, on talent, on fans, and on the industry itself.

On This Episode:
- What caught our eye on TV, including the hooded man story, Punk vs. Balor at Elimination Chamber, Becky and AJ kicking off their Mania path, and whether Cody/Fatu/Drew already feels more personal than Punk vs. Roman
- The shock of Brody King beating MJF in just over a minute, what it sets up for Grand Slam Australia, and why Omega vs. Andrade worked on every level
- Growing Triple H fatigue
- WrestleMania 42 optics: advance sales talk, dynamic pricing, Vegas running back-to-back years, and whether WWE is reacting to perception more than reality
- The Vegas WrestleMania watch party blackout and why it feels like leverage that may come with bad optics
- Bron Breakker's injury, the scrapped Rumble plans, and how last-minute changes sometimes quietly save companies
- Cody Rhodes pitching a rethink of house shows
- A look at AEW Grand Slam Australia
- TNA's momentum
- Release Info
In an online meeting with the Chicago Ramana devotees on 25th January 2026, Michael answers various questions about the teachings of Bhagavan Ramana. This episode can be watched as a video on YouTube. A more compressed audio copy in Opus format can be downloaded from MediaFire. Songs of Sri Sadhu Om with English translations can be accessed on our Vimeo video channel. Books by Sri Sadhu Om and Michael James that are currently available on Amazon: By Sri Sadhu Om: ► The Path of Sri Ramana (English) By Michael James: ► Happiness and Art of Being (English) ► Lyckan och Varandets Konst (Swedish) ► Anma-Viddai (English) Above books are also available in other regional Amazon marketplaces worldwide. - Sri Ramana Center of Houston
Can I design a compelling youth night from scratch each week with a different order - while also creating a brand-new DYM game from scratch? Oh - And stick around to the end of the video, because I'm going to tell you how you can get this game that's not even public yet in the pipeline, FOR FREE! Let's find out! ACCESS TO FREE GAME & RECAP EPISODE https://www.patreon.com/posts/free-game-winter-150284516?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link SHOW NOTES Shownotes & Transcripts https://www.hybridministry.xyz/188 ❄️ WINTER SOCIAL MEDIA PACK https://www.patreon.com/posts/winter-seasonal-144943791?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link HYBRID HERO MEMBERS GET IT FREE! https://www.patreon.com/hybridministry YOUTH MINISTRY LEADER COHORT (It's FREE!) https://www.ymlcohort.com/
This episode is sponsored by Airia. Get started today at airia.com. Jason Howell and Jeff Jarvis break down Claude Opus 4.6's new role as a financial‑research engine, discuss how GPT‑5.3 Codex is reshaping full‑stack coding workflows, and explore Matt Shumer's warning that AI agents will touch nearly every job in just a few years. We unpack how Super Bowl AI ads are reframing public perception, examine Waymo's use of DeepMind's Genie 3 world model to train autonomous vehicles on rare edge‑case scenarios, and also cover OpenAI's ad‑baked free ChatGPT tiers, HBR's findings on how AI expands workloads instead of lightening them, and new evidence that AI mislabels medical conditions in real‑world settings.

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

Chapters:
0:00 - Start
0:01:59 - Anthropic Releases New Model That's Adept at Financial Research / Anthropic releases Opus 4.6 with new ‘agent teams'
0:10:00 - Introducing GPT-5.3-Codex
0:14:42 - Something Big Is Happening
0:33:25 - Can these Super Bowl ads make Americans love AI?
0:36:52 - Dunkin' Donuts digitally de-aged ‘90s actors and I'm terrified
0:39:47 - AI.com bought by Crypto.com founder for $70mn in biggest-ever website name deal
0:42:11 - OpenAI begins testing ads in ChatGPT, draws early attention from advertisers and analysts
0:48:27 - Waymo Says Genie 3 Simulations Can Help Boost Robotaxi Rollout
0:53:30 - AI Doesn't Reduce Work—It Intensifies It
1:02:08 - As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
1:04:48 - Meta is giving its AI slop feed an app of its own
1:06:53 - Google goes long with 100-year bond
1:09:18 - OpenAI Abandons ‘io' Branding for Its AI Hardware

Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on Tapped In, Nick and Jacked Jameson break down a busy week across the indie wrestling landscape, with plenty happening in Georgia and beyond as February heats up.

On this episode:
- A look at the top headlines, including new tag champions
- Recap and discussion from Southern Fried Live, including first-time impressions, matches, and fan interactions
- Let's Get Kraken: breaking down wild moments, standout promos, and physical performances that turned heads
- Underrated/Underappreciated: the names who don't always get the love they deserve
- Mount Rushmore of '90s hard rock
- Making the Drives (2/13-2/15/2026)

Whether you're catching up on what you missed or figuring out where to spend your wrestling weekend, this episode has you covered.
This Week In Startups is made possible by:
LinkedIn - http://linkedin.com/twist
Every.io - http://every.io/
Northwest Registered Agent - www.northwestregisteredagent.com/twist

Today's show: Sean Liu gave his OpenClaw “eyes” by hooking the AI assistant up to his Meta Ray-Bans. While Jason and Alex consider the implications… guest expert Alex Finn quickly loses interest… It's another All-Clawdbot TWiST where we debate the most essential uses for the viral agent platform. For their part, Alex Finn and Matt Van Horn are building practical skills to help founders (esp. solo founders)… Alex has a brain trust of AI agents working round the clock to improve his organization and content, while Matt designed a skill for his agent to order sushi lunches (keeping them under $20).

PLUS find out how Producer Oliver designed his “Model Council” based on Perplexity's set-up, why running everything local is “illogical” for most people in 2026, AND why Jason's next Founder U cohort will all be building with OpenClaw.

Timestamps:
(00:00) Why Jason is STILL Claw-pilled! We can't get enough!
(03:17) “Mr. Clawdbot” Alex Finn of Creator Buddy joins us! He's Claw-Pilled too!
(04:50) Sean Liu (@_seanliu) added OpenClaw to his Meta Ray-Bans… Here's why Alex doesn't care!
(08:33) Lost your job to AI? Maybe AI can make you a founder!
(12:35) LinkedIn Jobs - Hire right, the first time. Post your first job and get $100 off towards your job post at http://linkedin.com/twist Terms and conditions apply.
(13:34) “Last 30 Days” skill creator Matt Van Horn (@mvanhorn) joins the show
(14:54) Why Jason wants to invest in Matt's new project.
(15:53) Meeting Alex's organization of AI agents who collaborate on his projects autonomously
(22:28) Is Alex planning to turn his OpenClaw set-up into a company?
Maybe…
(23:10) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit http://every.io/
(25:16) Using OpenClaw to create a “lights out” startup
(28:17) Deciding when to use local models vs. more powerful cloud-based models
(29:52) Northwest Registered Agent. Get more when you start your business with Northwest. In 10 clicks and 10 minutes, you can form your company and walk away with a real business identity — Learn more at www.northwestregisteredagent.com/twist
(30:58) How Producer Oliver created a “Model Council” for LAUNCH based on Perplexity's set-up
(35:52) Why it's so important to have an orchestrator that knows the right model for the right task
(39:12) Why running everything local is “illogical” for most companies in 2026
(41:15) Matt walks us through the latest and greatest incarnation of his “Last 30 Days” skill
(45:57) Why Jason thinks recursive machine learning is the solution to human laziness
(46:38) How Alex has supercharged his social media and his startup during OpenClaw Mania
(50:15) Why Founder University is going ALL OPENCLAW next cohort
(51:21) How big a jump over Opus 4.5 is Opus 4.6?
(53:35) OpenClaw's going to start scanning skills for malware… Will this help?
(57:13) No one was SUPER impressed by the AI dot com Super Bowl ad… except JCal
(1:01:11) Matt used his “Order Sushi” OpenClaw skill live on the show to get lunch
(1:04:08) Polymarket's sharps are wagering on Claude 5's hotly anticipated release

Check out all our partner offers: https://partners.launch.co/
OpenAI's twin initiatives to monetize ChatGPT's free tier through ads and launch the Frontier enterprise agent platform represent a shift in the AI provider's business model, with substantial implications for compliance and operational governance. Free and low-cost ChatGPT users will now see sponsored links unless they opt for reduced daily usage; only customers paying $20 or more per month retain an ad-free experience. OpenAI is concurrently marketing Frontier to enterprise clients such as HP, Intuit, and Uber, offering AI agent orchestration and deploying a team of consultants to support custom AI applications. The company projects enterprise revenue will constitute 50% of its income by year-end, up from 40% the prior month.

Operating in both the consumer funnel and the enterprise layer, OpenAI combines top-of-funnel data monetization with vertical integration of services. The ad-supported free tier raises compliance concerns, as user interactions become subject to additional data collection and monetization. For organizations, this means enforcement decisions around whether and how employees may use free AI tools in regulated or sensitive environments. The more consequential development, however, is the introduction of enterprise agent orchestration through Frontier, where questions persist regarding liability, governance, production stability, and how organizations are protected from errors committed by autonomous agents.

Related market movements include Anthropic's release of Claude Opus 4.6—which enables multi-agent collaboration with context windows up to 1 million tokens—and Microsoft's planned shift for Windows to a signed-by-default trust model. Anthropic's enhancements to agent functionality remain constrained by key gaps, such as conflict arbitration mechanisms, rollback procedures, and documented cost models, and the expanded context window remains limited to beta testers.
Microsoft's strategy to enforce signed apps by default mirrors iOS's approach to application trust, but its operational viability depends on how override mechanisms are managed by both users and IT administrators. Additional developments in backup, asset management, and AI governance (as seen with NinjaOne, JumpCloud, and Zoom) reflect a general trend towards increased integration and platform consolidation, though with ongoing gaps in security and compliance as AI adoption accelerates.

The practical takeaway for MSPs and IT service leaders is the need to re-evaluate policies around free AI tool usage, invest in governance and auditability for enterprise AI, and prepare operational systems for stricter software trust and exception management requirements. Structural changes in software security and AI orchestration are transferring costs and risks from incident response to ongoing policy enforcement and exception handling. Those offering AI services should prioritize model-agnostic governance and avoid reliance on a single vendor's automation layer, as vertical integration by platform providers is reducing the defensibility of narrow service offerings.

Four things to know today:
00:00 OpenAI Adds Ads to Free ChatGPT; Launches Frontier Platform for Enterprise Agents
04:07 Anthropic Ships Opus 4.6 Agent Teams; Model Found 500 Zero-Days in Testing
06:43 Microsoft Announces Signed-App-Only Mode for Windows 11; Phased Rollout Planned
10:19 NinjaOne Adds Asset Management; Zoom Launches AI Workspace Tool; JumpCloud Opens VC Arm

This is the Business of Tech. Supported by: CometBackup, IT Service Provider University
Welcome to episode 342 of The Cloud Pod, where the forecast is always cloudy! Justin, Ryan, and Matt are in the studio today to bring you all the latest in cloud and AI news this week. How do you feel about ads? How do you feel about ads while using AI? We've got options! We've got a round-up of tech Super Bowl ads, AI ads, earnings reports (from companies that frankly need the ad revenue), and a plethora of Opus 4.6 announcements, plus more. Let's get started!

Titles we almost went with this week:
- ChatGPT Goes Full Mad Men: Your AI Assistant Now Comes With Commercial Breaks
- Heroku's New Feature: No New Features
- AWS Gives EC2 Instances a Storage Growth Spurt: 22.8TB of Local NVMe Now Available
- Identity Crisis Averted: IAM Identity Center Learns to Replicate Itself
- JSON Schema Enforcement: Because Your LLM Needs Structure in Its Life
- From Zero to Admin in 480 Seconds: A Serbian Speedrun Story
- From Proof of Concept to Proof of Claw: DigitalOcean Tames AI Agent Infrastructure
- Azure's Growth Hits the Clouds: Microsoft's 39% Increase Still Not Enough for Wall Street
- One Lake to Rule Them All: Microsoft and Snowflake Finally Stop Fighting Over Your Data
- Free Lunch Officially Over: ChatGPT Learns That Servers Cost Money
- Claude Won't Sell You Anything (Except Maybe Peace of Mind)
- IAM Identity Center Goes Multi-Regional: Because One Region to Rule Them All Wasn't Enough
- Databricks Takes the Base Out of Database with Lakebase GA
- I'm a Chrome Tab Hoarder

General News

01:30 Superbowl Ads of Note
OpenAI: https://www.youtube.com/watch?v=aCN9iCXNJqQ
Microsoft CoPilot: https://www.youtube.com/watch?v=Ndj9Jk-tGKo
Base44?: https://www.youtube.com/watch?v=iKEUWtqvsis
Gemini: https://www.youtube.com/watch?v=Z1yGy9fELtE
Anthropic: https://www.youtube.com/watch?v=gmnjDLwZckA
ai.com: https://www.youtube.com/watch?v=n7I-D4YXbzg&t=3s

16:35 Justin - “If you ever want to know if there's a bubble, spending dumb money on the Super Bowl on an ad that makes no sense is probably your number one clue.”

16:53 It's
Earnings Time!

Microsoft (MSFT) Q2 earnings report 2026
Microsoft Q2 2026 earnings show Azure cloud growth slowing to 39% from 40% in the prior quarter, missing analyst expectations of 39.4% and causing shares to drop 7% in after-hours trading. The company's gross margin hit a three-year low at 68% due to substantial AI infrastructure investments totaling $37.5 billion in capital expenditures, up 66% year over year.
In an online meeting with Sean on 21st January 2026, Michael answers questions about Bhagavan Ramana's teachings. This episode can be watched as a video on YouTube. A more compressed audio copy in Opus format can be downloaded from MediaFire. Ad-free videos on the original writings of Bhagavan Ramana with explanations by Michael James can be accessed on our Vimeo video channel. Books on Bhagavan Ramana's teachings by Sri Sadhu Om and Michael James that are currently available on Amazon: By Sri Sadhu Om: ► The Path of Sri Ramana (English) By Michael James: ► Happiness and Art of Being (English) ► Lyckan och Varandets Konst (Swedish) ► Anma-Viddai (English) Above books are also available in other regional Amazon marketplaces worldwide. - Sri Ramana Center of Houston
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Microsoft Patches Four Azure Vulnerabilities (three critical)
https://msrc.microsoft.com/update-guide/vulnerability
Evaluating and mitigating the growing risk of LLM-discovered 0-days
https://red.anthropic.com/2026/zero-days/
GitLab AI Gateway Vulnerability CVE-2026-1868
https://about.gitlab.com/releases/2026/02/06/patch-release-gitlab-ai-gateway-18-8-1-released/
The hosts unpack the latest AI breakthroughs — from Opus 4.6 and AGI debates to robotics, energy innovation, and the future of AI personhood, privacy, and the workforce.

Get notified once we go live during Abundance360: https://www.abundance360.com/livestream
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360
Salim Ismail is the founder of OpenExO
Dave Blundin is the founder & GP of Link Ventures
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified

My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Connect with Peter: X, Instagram
Connect with Dave: X, LinkedIn
Connect with Salim: X, Join Salim's Workshop to build your ExO
Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads

Listen to MOONSHOTS: Apple, YouTube

*Recorded on February 6th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode James and Frank walk through the latest Copilot CLI power-ups—Autopilot loops, experimental Fleet/parallel agents, and Opus model/context updates—while demoing how they used plan mode to spin up a full MAUI pet‑insulin app end-to-end. Learn what Autopilot and Fleet actually do, how parallel agents orchestrate work, plus practical tips (watch your context window, use plan mode) for turning AI agents into fast prototypes.

Follow Us
Frank: Twitter, Blog, GitHub
James: Twitter, Blog, GitHub
Merge Conflict: Twitter, Facebook, Website, Chat on Discord

Music: Amethyst Seer - Citrine by Adventureface

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm
Waymo is training its fleet on edge case driving scenarios with DeepMind's Genie 3, and TikTok might have to change its infinite scroll behavior to address health concerns in the EU.Starring Jason Howell and Huyen Tue Dao.Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Anthropic dropped Claude Opus 4.6 and OpenAI responded with GPT 5.3 Codex just 20 minutes later — the most intense head-to-head model release we've ever seen. Here's what each model brings, how they compare, and what the first reactions are telling us. In the headlines: Google and Amazon share their capex plans, and we're about to spend 2.5 moon landings on AI.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rackspace AI Launchpad - Build, test and scale intelligent workloads faster - http://rackspace.com/ailaunchpad
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
I sit down with Morgan Linton, Cofounder/CTO of Bold Metrics, to break down the same-day release of Claude Opus 4.6 and GPT-5.3 Codex. We walk through exactly how to set up Opus 4.6 in Claude Code, explore the philosophical split between autonomous agent teams and interactive pair-programming, and then put both models to the test by having each one build a Polymarket competitor from scratch, live and unscripted. By the end, you'll know how to configure each model, when to reach for one over the other, and what happened when we let them race head-to-head.

Timestamps
00:00 – Intro
03:26 – Setting Up Opus 4.6 in Claude Code
05:16 – Enabling Agent Teams
08:32 – The Philosophical Divergence between Codex and Opus
11:11 – Core Feature Comparison (Context Window, Benchmarks, Agentic Behavior)
15:27 – Live Demo Setup: Polymarket Build Prompt Design
18:26 – Race Begins
21:02 – Best Model for Vibe Coders
22:12 – Codex Finishes in Under 4 Minutes
26:38 – Opus Agents Still Running, Token Usage Climbing
31:41 – Testing and Reviewing the Codex Build
40:25 – Opus Build Completes, First Look at Results
42:47 – Opus Final Build Reveal
44:22 – Side-by-Side Comparison: Opus Takes This Round
45:40 – Final Takeaways and Recommendations

Key Points
- Opus 4.6 and GPT-5.3 Codex dropped within 18 minutes of each other and represent two fundamentally different engineering philosophies — autonomous agents vs. interactive collaboration.
- To use Opus 4.6 properly, you must update Claude Code to version 2.1.32+, set the model in settings.json, and explicitly enable the experimental Agent Teams feature.
- Opus 4.6's standout feature is multi-agent orchestration: you can spin up parallel agents for research, architecture, UX, and testing — all working simultaneously.
- GPT-5.3 Codex's standout feature is mid-task steering: you can interrupt, redirect, and course-correct the model while it's actively building.
- In the live head-to-head, Codex finished a Polymarket competitor in under 4 minutes; Opus took significantly longer but produced a more polished UI, richer feature set, and 96 tests vs. Codex's 10.
- Agent teams multiply token usage substantially — a single Opus build can consume 150,000–250,000 tokens across all agents.

The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products: https://latecheckout.agency/
The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/

Morgan Linton
X/Twitter: https://x.com/morganlinton
Bold Metrics: https://boldmetrics.com
Personal Website: https://linton.ai
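For listeners who want to follow along with the Claude Code setup the episode describes (update the CLI, point settings.json at the new model, opt in to Agent Teams), a minimal sketch of what that file might look like is below. The episode doesn't quote the file, so the exact values are assumptions: "model" is a real Claude Code setting, but the model ID string and the Agent Teams toggle name here are hypothetical and should be checked against the current Claude Code settings documentation.

```jsonc
// ~/.claude/settings.json — illustrative sketch only, not verbatim from the episode.
// The env var name ENABLE_AGENT_TEAMS is a placeholder for whatever experimental
// flag Claude Code actually uses; verify it in the official docs before relying on it.
{
  "model": "claude-opus-4-6",
  "env": {
    "ENABLE_AGENT_TEAMS": "1"
  }
}
```

Before editing the file, confirm your CLI meets the version floor mentioned in the episode (2.1.32+), e.g. by checking the version your `claude` binary reports.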
Anthropic drops Opus 4.6. Twenty minutes later, OpenAI fires back with GPT-5.3 Codex. This is the AI agentic coding arms race and it's moving fast. Both AI models are writing code that can write itself now. OpenAI is using 5.3 to improve its own tooling. Opus 4.6 is "voicing discomfort with being a product." We tested both and break down what actually matters for people building stuff. Plus Kling 3.0 is out (and harder to prompt than you think), OpenClaw bots are hiring humans on rent-a-human.ai, Roblox launches prompt-to-3D creation, and robots are now doing 130K step challenges in negative 47 degree weather. THE MODELS ARE IMPROVING THEMSELVES NOW. EVERYTHING IS FINE.

Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Anthropic's Claude Opus 4.6
https://www.anthropic.com/news/claude-opus-4-6
Orchestrating Agents in Claude Code
https://x.com/lydiahallie/status/2019469032844587505?s=20
Opus 4.6 Beats Humans at analyzing complex human science docs
https://x.com/_simonsmith/status/2019502742209769540?s=20
OpenAI GPT-5.3 Codex
https://openai.com/index/introducing-gpt-5-3-codex/
5.3 Codex: First model instrumental in creating itself
https://x.com/deredleritt3r/status/2019475360438493597
OpenAI Frontier
https://openai.com/index/introducing-openai-frontier/
Anthropic's Superbowl Ads
https://x.com/tomwarren/status/2019039874771550516?s=20
GPT-5 connected to an autonomous lab to do experiments
https://x.com/OpenAI/status/2019488071134347605?s=20
OpenClaw
https://openclaw.ai/
Rent-A-Human
https://rentahuman.ai/bounties
Kling 3.0 = Really good model
https://x.com/Kling_ai/status/2019064918960668819?s=20
Kling 3.0 Moonlanding Mockumentary
https://x.com/Kling_ai/status/2019228615775604784?s=20
PJ Ace's Way of Kings Intro
https://x.com/PJaccetturo/status/2019072637192843463?s=20
We Are The Art | Brandon Sanderson's Keynote Speech
https://youtu.be/mb3uK-_QkOo?si=EgKBjxZf4GE4DYIJ
Gavin's Kling Fail
https://x.com/gavinpurcell/status/2019436331999588371?s=20
FIGMA VECTOR AI
https://x.com/moguzbulbul/status/2019106665732403708?s=20
Grok Imagine 1.0 Officially Launches
https://x.com/xai/status/2018164753810764061?s=20
Roblox Launches 4D Creation
https://x.com/Roblox/status/2019221624604750238
Unitree Robot Walks Across The Tundra (-47C!!)
https://x.com/War_Radar2/status/2018315065414635813?s=20
KinectIQ's Humanoid Framework
https://youtu.be/Y2DhzLPGdwY?si=iWibCGoc_h53yZz3
The LooksMaxxor
https://x.com/Gossip_Goblin/status/2018362969025884282?s=20
Midi-Survivor
https://x.com/measure_plan/status/2019082789379858577?s=20