Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... *Behind the Bunker Paintball Podcast* is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
Read the essay here.
Timestamps:
00:00:00 What are we scaling?
00:03:11 The value of human labor
00:05:04 Economic diffusion lag is cope
00:06:34 Goal-post shifting is justified
00:08:23 RL scaling
00:09:18 Broadly deployed intelligence explosion
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Send us a text
Recorded Dec 20, 2025 - Enjoy the final episode in our teaching series, "The Biblical Roots of Christmas." This week, we turn to the celebration of Christmas itself—asking not only what we celebrate, but why. We'll explore how the fulfillment found in Jesus Christ naturally gives rise to joyful remembrance, how the early people of God marked God's redemptive acts, and what Scripture teaches us about honoring Christ through meaningful celebration. Let's rediscover together why Christmas is not merely a tradition to defend or dismiss, but a gospel truth to rejoice in.
The Biblical Roots Ministries
Our website
Our YouTube Channel
Prof. Solberg's Blog
Support our Ministry (Thank you!)
What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities. They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas. The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.
(0:00) Intro
(1:51) Reflections on NeurIPS Conference
(5:14) Are AI Models Plateauing?
(11:12) Reinforcement Learning and Enterprise Adoption
(16:16) Future Research Vectors in AI
(28:40) The Role of Neo Labs
(39:35) The Myth of the Great Man Theory in Science
(41:47) OpenAI's Code Red and Market Position
(47:19) Disney and OpenAI's Strategic Partnership
(51:28) Meta's Super Intelligence Team Challenges
(54:33) US-China AI Chip Dynamics
(1:00:54) Amazon's Nova Forge and Enterprise AI
(1:03:38) End of Year Reflections and Predictions
With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn't just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google's most powerful model — what actually changed, and why the real work today is no longer "training a model," but building a full system.
We unpack the "secret recipe" idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an "infinite data" era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren't dead, but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.
From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long-context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.
Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind
Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold intro: "We're ahead of schedule" + AI is now a system
(00:58) – Oriol's "secret recipe": better pre- + post-training
(02:09) – Why AI progress still isn't slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind's advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – "Research taste": integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren't dead (but scale isn't everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can't say (and why)
(37:18) – Long context + attention: what's next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + "vibe coding"
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – "No end in sight" for progress + closing
Datawizz is pioneering continuous reinforcement learning infrastructure for AI systems that need to evolve in production, not ossify after deployment. After building and exiting RapidAPI—which served 10 million developers and had at least one team at 75% of Fortune 500 companies using and paying for the platform—Founder and CEO Iddo Gino returned to building when he noticed a pattern: nearly every AI agent pitch he reviewed as an angel investor assumed models would simultaneously get orders of magnitude better and cheaper. In a recent episode of BUILDERS, we sat down with Iddo to explore why that dual assumption breaks most AI economics, how traditional ML training approaches fail in the LLM era, and why specialized models will capture 50-60% of AI inference by 2030.
Topics Discussed
- Why running two distinct businesses under one roof—RapidAPI's developer marketplace and enterprise API hub—ultimately capped scale despite compelling synergy narratives
- The "Big Short moment" reviewing AI pitches: every business model assumed simultaneous 1-2 order of magnitude improvements in accuracy and cost
- Why companies spending 2-3 months on fine-tuning repeatedly saw frontier models (GPT-4, Claude 3) obsolete their custom work
- The continuous learning flywheel: online evaluation → suspect inference queuing → human validation → daily/weekly RL batches → deployment
- How human evaluation companies like Scale AI shift from offline batch labeling to real-time inference correction queues
- Early GTM through LinkedIn DMs to founders running serious agent production volume, working backward through less mature adopters
- ICP discovery: qualifying on whether 20% accuracy gains or 10x cost reductions would be transformational versus incremental
- The integration layer approach: orchestrating the continuous learning loop across observability, evaluation, training, and inference tools
- Why the first $10M is about selling to believers in continuous learning, not evangelizing the category
GTM Lessons For B2B Founders
- Recognize when distribution narratives mask structural incompatibility: RapidAPI had 10 million developers and teams at 75% of Fortune 500 paying for the platform—massive distribution that theoretically fed enterprise sales. The problem: Iddo could always find anecdotes where POC teams had used RapidAPI, creating a compelling story about grassroots adoption. The critical question he should have asked earlier: "Is self-service really the driver for why we're winning deals, or is it a nice-to-have contributor?" When two businesses have fundamentally different product roadmaps, cultures, and buying journeys, distribution overlap doesn't create a sustainable single company. Stop asking if synergies exist—ask if they're causal.
- Qualify on whether improvements cross phase-transition thresholds: Datawizz disqualifies prospects who acknowledge value but lack acute pain. The diagnostic questions: "If we improved model accuracy by 20%, how impactful is that?" and "If we cut your costs 10x, what does that mean?" Companies already automating human labor often respond that inference costs are rounding errors compared to savings. The ideal customers hit differently: "We need accuracy at X% to fully automate this process and remove humans from the loop. Until then, it's just AI-assisted. Getting over that line is a step-function change in how we deploy this agent." Qualify on whether your improvement crosses a threshold that changes what's possible, not just what's better.
- Use discovery to map market structure, not just validate hypotheses: Iddo validated that the most mature companies run specialized, fine-tuned models in production. The surprise: "The chasm between them and everybody else was a lot wider than I thought." This insight reshaped their entire strategy—the tooling gap, approaches to model development, and timeline to maturity differed dramatically across segments. Most founders use discovery to confirm their assumptions. Better founders use it to understand where different cohorts sit on the maturity curve, what bridges or blocks their progression, and which segments can buy versus which need multi-year evangelism.
- Target spend thresholds that indicate real commitment: Datawizz focuses on companies spending "at a minimum five to six figures a month on AI and specifically on LLM inference, using the APIs directly"—meaning they're building on top of OpenAI/Anthropic/etc., not just using ChatGPT. This filters for companies with skin in the game. Below that threshold, AI is an experiment. Above it, unit economics and quality bars matter operationally. For infrastructure plays, find the spend level that indicates your problem is a daily operational reality, not a future consideration.
- Structure discovery to extract insight, not close deals: Iddo's framework: "If I could run [a call where] 29 of 30 minutes could be us just asking questions and learning, that would be the perfect call in my mind." He compared it to "the dentist with the probe trying to touch everything and see where it hurts." The most valuable calls weren't those that converted to POCs—they came from people who approached the problem differently or had conflicting considerations. In hot markets with abundant budgets, founders easily collect false positives by selling when they should be learning. The discipline: exhaust your question list before explaining what you build. If they don't eventually ask "What do you do?" you're not surfacing real pain.
- Avoid the false-positive trap in well-funded categories: Iddo identified a specific risk in AI: "You can very easily run these calls, you think you're doing discovery, really you're doing sales, you end up getting a bunch of POCs and maybe some paying customers. So you get really good initial signs but you've never done any actual discovery. You have all the wrong indications—you're getting a lot of false positive feedback while building the completely wrong thing." When capital is abundant and your space is hot, early revenue can mask product-market misalignment. Good initial signs aren't validation if you skipped the work to understand why people bought.
// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Our 228th episode with a summary and discussion of last week's big AI news! Recorded on 12/12/2025.
Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
- OpenAI's latest model GPT-5.2 demonstrates improved performance and enhanced multi-modal capabilities but comes with increased costs and a different knowledge cutoff date.
- Disney invests $1 billion in OpenAI to generate Disney character content, creating unique licensing agreements across characters from Marvel, Pixar, and Star Wars franchises.
- The U.S. government imposes new AI chip export rules involving security reviews, while simultaneously moving to prevent states from independently regulating AI.
- DeepMind releases a paper outlining the challenges and findings in scaling multi-agent systems, highlighting the complexities of tool coordination and task performance.
Timestamps:
(00:00:00) Intro / Banter
(00:01:19) News Preview
Tools & Apps
(00:01:58) GPT-5.2 is OpenAI's latest move in the agentic AI battle | The Verge
(00:08:48) Runway releases its first world model, adds native audio to latest video model | TechCrunch
(00:11:51) Google says it will link to more sources in AI Mode | The Verge
(00:12:24) ChatGPT can now use Adobe apps to edit your photos and PDFs for free | The Verge
(00:13:05) Tencent releases Hunyuan 2.0 with 406B parameters
Applications & Business
(00:16:15) China set to limit access to Nvidia's H200 chips despite Trump export approval
(00:21:02) Disney investing $1 billion in OpenAI, will allow characters on Sora
(00:24:48) Unconventional AI confirms its massive $475M seed round
(00:29:06) Slack CEO Denise Dresser to join OpenAI as chief revenue officer | TechCrunch
(00:31:18) The state of enterprise AI
Projects & Open Source
(00:33:49) [2512.10791] The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
(00:36:27) Claude 4.5 Opus' Soul Document
Research & Advancements
(00:43:49) [2512.08296] Towards a Science of Scaling Agent Systems
(00:48:43) Evaluating Gemini Robotics Policies in a Veo World Simulator
(00:52:10) Guided Self-Evolving LLMs with Minimal Human Supervision
(00:56:08) Martingale Score: An Unsupervised Metric for Bayesian Rationality in LLM Reasoning
(01:00:39) [2512.07783] On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
(01:04:42) Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
(01:09:42) Google's AI unit DeepMind announces UK 'automated research lab'
Policy & Safety
(01:10:28) Trump Moves to Stop States From Regulating AI With a New Executive Order - The New York Times
(01:13:54) [2512.09742] Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
(01:17:57) Forecasting AI Time Horizon Under Compute Slowdowns
(01:20:46) AI Security Institute focuses on AI measurements and evaluations
(01:21:16) Nvidia AI Chips to Undergo Unusual U.S. Security Review Before Export to China
(01:22:01) U.S. Authorities Shut Down Major China-Linked AI Tech Smuggling Network
Synthetic Media & Art
(01:24:01) RSL 1.0 has arrived, allowing publishers to ask AI companies to pay to scrape content | The Verge
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We Like Shooting Episode 641
This episode of We Like Shooting is brought to you by: C&G Holsters, Midwest Industries, Gideon Optics, Primary Arms, Medical Gear Outfitters, Die Free Co., Blue Alpha, and Bowers Group
Welcome to the We Like Shooting Show, episode 641! Our cast tonight is Jeremy Pozderac, Aaron Krieger, Nick Lynch, and me Shawn Herrin, welcome to the show!
Text Dear WLS or Reviews: +1 743 500 2171
Gear Chat
Shawn - PopStop™ Review: Innovative Solutions for Shooting Enthusiasts
PopStop™ is a device designed to eliminate first round pop (FRP) in suppressors by injecting inert carbon dioxide to replace oxygen, thereby reducing impulse noise and suppressor flash. It has been shown to achieve noise reductions of up to 9 dB and can stabilize velocity standard deviations. The product is not compatible with all firearms, particularly 9mm pistols, and requires specific barrel measurements for proper use. Its introduction aims to enhance suppressor performance within the gun community.
Shawn - RL-100 Pre-Order Announcement
Cloud Defensive has announced the RL-100, a new entry-level rifle light that combines performance with affordability, priced at $149.99 for early pre-orders. Designed for reliability and ease of use, the RL-100 aims to provide a high-quality lighting option for budget-conscious users and agencies without sacrificing performance. This product's introduction may impact the gun community by offering a cost-effective alternative to higher-priced weapon lights, which could enhance accessibility for everyday users and law enforcement.
Shawn - Long Range Shooting Tips
Advanced long range shooting by Cleckner
Nick - KRG Bravo
KRG Bravo
Shawn - Hi Point's AR-15 Fun
Hi Point AR-15
Shawn - Precision Shooting Simplified
Kelbly Precision Element
Shawn - C&G Holsters News!
C&G Holsters Announcement
Jeremy - Savage 24F and Chiappa 12ga barrel inserts
Bullet Points
Chiappa 44 mag
Gun Fights
Step right up for "Gun Fights," the high-octane segment hosted by Nick Lynch, where our cast members go head-to-head in a game show-style showdown! Each contestant tries to prove their gun knowledge dominance. It's a wild ride of bids, bluffs, and banter—who will come out on top? Tune in to find out!
Agency Brief
AGENCY BRIEF: SHAYS' REBELLION
1780 – 1785: Economic Conditions
Veterans' Pay: Paid in depreciated Continental currency/IOUs.
State Policy: Massachusetts demands taxes in hard currency (gold/silver).
The Debt: Boston merchants control state debt; courts aggressively foreclose on farms and imprison debtors.
August – October 1786: Escalation
Aug 29: 1,500 "Regulators" seize the Northampton courthouse to stop debtor trials.
Sept: Armed shutdowns spread to Worcester, Concord, and Great Barrington. Captain Daniel Shays emerges as leader.
Sept 26: Shays (600 men) vs. Gen. Shepard (militia) at Springfield Supreme Judicial Court. No fire exchanged; court adjourns.
Oct 20: Continental Congress authorizes troops but lacks funds. MA passes Riot Act (arrests without bail).
January 1787: The Private Army
Jan 4: Gov. Bowdoin authorizes a private militia.
Funding: 125 Boston merchants subscribe £6,000.
Force: 3,000 mercenaries raised, led by Gen. Benjamin Lincoln.
January 25, 1787: Springfield Arsenal (The Climax)
Objective: Shays leads ~1,200 men to seize 7,000 muskets/cannons at the federal arsenal.
Defense: Gen. Shepard (900 militia) defends the arsenal.
The Engagement: Shepard fires artillery warning shots over rebels' heads. Rebels advance. Shepard fires grapeshot directly into the ranks.
Casualties: 4 rebels dead, 20 wounded. Rebels flee without firing.
February – June 1787: The Fallout
Feb 4: Gen. Lincoln marches overnight through a blizzard to Petersham, surprising retreating rebels. 150 captured; Shays escapes to Vermont.
Spring Election: Gov. Bowdoin is voted out in a landslide; John Hancock elected Governor.
June: Hancock issues broad pardons. Legislature enacts debt moratoriums and lowers taxes.
1787 – 1791: Constitutional Impact
May 1787: Constitutional Convention convenes; Washington/Madison cite Shays' Rebellion as proof the Articles of Confederation failed.
1788: Anti-Federalists demand a Bill of Rights to check the power of the proposed federal standing army.
1791: Second Amendment ratified.
Modern Parallels
Narrative: Veterans labeled "insurrectionists" for resisting economic policy.
Tactics: Use of private capital to fund state enforcement when tax revenue failed.
Legal Precedent: Establishing the "well-regulated militia" as a counter-balance to federal military power.
WLS is Lifestyle
Jelly Roll and Gun Rights
Jelly Roll wants his gun rights back to hunt after losing them for felonies.
Deadpool Unleashed
Dead pool
Machine Head Introduces 94-Proof Bourbon Whiskey
Machine Head has launched Shotgun Blast Whiskey, a 94-proof bourbon designed for fans who enjoy stronger spirits. This product aligns with the band's aggressive identity while remaining accessible as a traditional bourbon. The whiskey emphasizes classic bourbon flavors and is marketed as a lifestyle product, mirroring a trend of music collaborations in the spirits industry.
Aaron's Alley
Going Ballistic
Manhunt Madness: Another Day, Another Gun Control Fail (no summary available)
More Giffords Nonsense: Gun Control Before Facts (no summary available)
When "Gun Control" Meets Reality: The Brown University Attack Details (no summary available)
Gun Control: An Epic Fail at Bondi Beach (no summary available)
"Legal Gun Ownership: The Unintended Target of Gun Control Fanatics" (no summary available)
When Antique Gun Ownership Becomes a Crime: UK Cops Confiscate 129 Legal Firearms (no summary available)
New Jersey's Carry Ban: Lawsuit Showdown or Just Another Dance with Gun Control? (no summary available)
Traveling with NFA to get easier?
Reviews
⭐⭐⭐⭐⭐ - from TwinDadARguy - Great show, been listening for about 4 or so years. Just heard the convo about Aaron's weird ability to pull interest from the fairer sex. You couldn't come up with a good word for it - I'm here to help. The perfect word is conFAUXdence. You're welcome.
⭐⭐⭐⭐⭐ - from Devin K - Where is the damn squares button!? Love this show and all the antics that come along with it. Lever action debate that would be fun to listen to. What's your favorite lever action caliber for whitetail hunting? What would be the one you would take if you needed to defend that SSB. #171, #fuckthethumb.
⭐⭐⭐⭐⭐ - from System AI - A review and comparison to bring us all back to Dungeon Crawler Carl. Let's pair each cast member to a Character from DCC. First, Shawn, obviously he's Carl. He's the main character. He's powerful. He's the reason we are all here. There may or may not be a Cat that led him here. He likely has someone obsessed with his feet and definitely only has heart boxers on behind his desk. Second, Aaron, he's Prepotene. Smart and powerful. Sometimes on the team, sometimes in the way, sometimes nowhere to be seen. Probably rides a Goat. Screams nonsense from time to time. Would be dead without the rest of the team. Third, Jeremy. Jeremy is Quasar.
Swears constantly. Hates the leader/rulers of the galaxy and game. Is there every time we need him. Will likely be the reason the rest end up in a prison. Fourth, Savage. He's JuiceBox. Extremely smart. AI generated. Self aware. Playing the same game but may have a different motive. Likely to lead to the downfall of the show. Last, Nick. Nick is Samantha. Much more powerful than he's willing to let on. Always growing in power. A very important member to keep the show running. Would be dangerous if all his organs worked correctly. And Shawn has definitely been inside him. These comparisons can not be altered. Debate will result in acceleration. Thanks for your attention to this matter. Signed, Gary/System AI. #nonotes
Before we let you go - Join Gun Owners of America
Tell your friends about the show and get backstage access by joining the Gun Cult at theguncult.com. No matter how tough your battle is today, we want you here to fight with us tomorrow. Don't struggle in silence; you can contact the suicide prevention line by dialing 988 from your phone. Remember - Always prefer Dangerous Freedom over peaceful slavery. We'll see you next time!
Nick - @busbuiltsystems | Bus Built Systems
Jeremy - @ret_actual | Rivers Edge Tactical
Aaron - @machinegun_moses
Savage - @savage1r
Shawn - @dangerousfreedomyt | @camorado.cam | Camorado
The hosts dive into paintball basics for beginners, breaking down the sport into approachable steps. They explain the essential gear (marker/gun, hopper, tank, protective mask) and highlight what newcomers often overlook—like the importance of a well-fitting mask and reliable loader system.
Next, they cover the fundamental rules and game formats: capture the flag, elimination, scenario play. They emphasise safety protocols (never removing your mask on the field, always chronograph your marker to legal FPS, clear communication). They also stress field etiquette—don't move thrown bunkers, call your hits honestly, and respect referees.
They then shift into strategy tips: how to pick your playing style (aggressive front-player vs. back-field support), coordinate with teammates, and use the available bunkers/cover effectively. A good tip: keep your body low, pop out for shots, and always move quickly between cover to avoid being an easy target.
The hosts share some common rookie mistakes—shooting wildly rather than taking aimed bursts, failing to reload/have backup paint, focusing too much on your own play instead of good team positioning. They recommend new players practice first in low-pressure games, watch experienced players, and ask for feedback.
Finally, they talk about choosing your first marker: set budget, reliability, ease of maintenance, and the local field's rental gear—sometimes starting with rentals is a smart move until you know you're invested. They wrap up by encouraging listeners to get out on the field, learn by doing, and enjoy the camaraderie and fun of paintball rather than stressing perfect play.
Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... *Behind the Bunker Paintball Podcast* is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
Send us a text
Recorded Dec 13, 2025 - Enjoy episode 3 of our 4-week teaching series, The Biblical Roots of Christmas. This week, we turn from promise to fulfillment, exploring how the hopes of Israel find their "Yes" in Jesus Christ. We'll examine how the birth of Christ fulfills the Law and the Prophets, why the Incarnation stands at the center of God's redemptive plan, and what it means to say that in Jesus, the long-awaited Messiah has finally come.
The Biblical Roots Ministries
Our website
Our YouTube Channel
Prof. Solberg's Blog
Support our Ministry (Thank you!)
What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice?
In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, cofounder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human-style generalization remains far ahead. Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI-driven economy could look like.
Resources:
Transcript: https://www.dwarkesh.com/p/ilya-sutsk...
Apple Podcasts: https://podcasts.apple.com/us/podcast...
Spotify: https://open.spotify.com/episode/7naO...
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Previous: 2024, 2022
"Our greatest fear should not be of failure, but of succeeding at something that doesn't really matter." –attributed to DL Moody[1]
1. Background & threat model
The main threat model I'm working to address is the same as it's been since I was hobby-blogging about AGI safety in 2019. Basically, I think that: The "secret sauce" of human intelligence is a big uniform-ish learning algorithm centered around the cortex; This learning algorithm is different from and more powerful than LLMs; Nobody knows how it works today; Someone someday will either reverse-engineer this learning algorithm, or reinvent something similar; And then we'll have Artificial General Intelligence (AGI) and superintelligence (ASI). I think that, when this learning algorithm is understood, it will be easy to get it to do powerful and impressive things, and to make money, as long as it's weak enough that humans can keep it under control. But past that stage, we'll be relying on the AGIs to have good motivations, and not be egregiously misaligned and scheming to take over the world and wipe out humanity. Alas, I claim that the latter kind of motivation is what we should expect to occur, in [...]
---
Outline:
(00:26) 1. Background & threat model
(02:24) 2. The theme of 2025: trying to solve the technical alignment problem
(04:02) 3. Two sketchy plans for technical AGI alignment
(07:05) 4. On to what I've actually been doing all year!
(07:14) Thrust A: Fitting technical alignment into the bigger strategic picture
(09:46) Thrust B: Better understanding how RL reward functions can be compatible with non-ruthless-optimizers
(12:02) Thrust C: Continuing to develop my thinking on the neuroscience of human social instincts
(13:33) Thrust D: Alignment implications of continuous learning and concept extrapolation
(14:41) Thrust E: Neuroscience odds and ends
(16:21) Thrust F: Economics of superintelligence
(17:18) Thrust G: AGI safety miscellany
(17:41) Thrust H: Outreach
(19:13) 5. Other stuff
(20:05) 6. Plan for 2026
(21:03) 7. Acknowledgements
The original text contained 7 footnotes which were omitted from this narration.
---
First published: December 11th, 2025
Source: https://www.lesswrong.com/posts/CF4Z9mQSfvi99A3BR/my-agi-safety-research-2025-review-26-plans
---
Narrated by TYPE III AUDIO.
What does AI look like on the ground in California? Fresh from a week on the US West Coast, Investment Manager Tom Miedema shares what he heard in boardrooms and labs – from data centre build-outs and custom chips to the realities of software adoption and monetisation. This episode cuts through the hype to explore who is likely to win, where supply chains are straining, and why some AI milestones may be further away than headlines suggest.
Higher margin, lower risk – why there's a lot to like about Microsoft and Amazon's data centre model
Winning the "daily knife fight" – why experience matters in the AI infrastructure build out
AI is disrupting the software-as-a-service industry, but many incumbents are learning fast
Monetisation of AI has been a slow burn, but an inflection point may be coming
How "the march of the nines" explains why some AI progress might be further away than we think
Listen to the podcast to learn more.
Resources
Dwarkesh Podcast
Andrej Karpathy — AGI is still a decade away
Ilya Sutskever – We're moving from the age of scaling to the age of research
Satya Nadella — How Microsoft is preparing for AGI
Richard Sutton – Father of RL thinks LLMs are a dead end
Artificial meat is harder than artificial intelligence — Lewis Bollard
The podcast is intended for investment professionals only and should not be construed as investment advice or a recommendation. Any stock examples discussed are given in the context of the theme being explored, and the views expressed are those of the presenters at the time of the recording.
Hey everyone, December started strong and does NOT want to slow down!? OpenAI showed us their response to the Code Red and it's GPT 5.2, which doesn't feel like a .1 upgrade! We got it literally as breaking news at the end of the show, and oh boy! The new kind of LLMs is here. GPT, then Gemini, then Opus and now GPT again... Who else feels like we're on a trippy AI rollercoaster? Just me?
Send us a text
Recorded Dec 6, 2025. The 2nd episode of our 4-week teaching series, "The Biblical Roots of Christmas." This week, we turn to the great storyline of Scripture to examine the promises and prophecies that set the stage for the birth of Christ. From Eden to Abraham to the prophets of Israel, we trace the unfolding hope of a coming Redeemer and explore how the Incarnation fulfills God's ancient covenant promises. Bring your Bibles and your questions, and let's rediscover together how the long-awaited Messiah entered history in the fullness of time.
The Biblical Roots Ministries
Our website
Our YouTube Channel
Prof. Solberg's Blog
Support our Ministry (Thank you!)
What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
Edwin Chen is the founder and CEO of Surge AI, the company that teaches AI what's good vs. what's bad, powering frontier labs with elite data, environments, and evaluations. Surge surpassed $1 billion in revenue with under 100 employees last year, completely bootstrapped—the fastest company in history to reach this milestone. Before founding Surge, Edwin was a research scientist at Google, Facebook, and Twitter and studied mathematics, computer science, and linguistics at MIT.
We discuss:
1. How Surge reached over $1 billion in revenue with fewer than 100 people by obsessing over quality
2. The story behind how Claude Code got so good at coding and writing
3. The problems with AI benchmarks and why they're pushing AI in the wrong direction
4. How RL environments are the next frontier in AI training
5. Why Edwin believes we're still a decade away from AGI
6. Why taste and human judgment shape which AI models become industry leaders
7. His contrarian approach to company building that rejects Silicon Valley's "pivot and blitzscale" playbook
8. How AI models will become increasingly differentiated based on the values of the companies building them
Brought to you by:
Vanta—Automate compliance. Simplify security.
WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
Coda—The all-in-one collaborative workspace
Transcript: https://www.lennysnewsletter.com/p/surge-ai-edwin-chen
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180055059/my-biggest-takeaways-from-this-conversation
Where to find Edwin Chen:
• X: https://x.com/echen
• LinkedIn: https://www.linkedin.com/in/edwinzchen
• Surge's blog: https://surgehq.ai/blog
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Edwin Chen
(04:48) AI's role in business efficiency
(07:08) Building a contrarian company
(08:55) An explanation of what Surge AI does
(09:36) The importance of high-quality data
(13:31) How Claude Code has stayed ahead
(17:37) Edwin's skepticism toward benchmarks
(21:54) AGI timelines and industry trends
(28:33) The Silicon Valley machine
(33:07) Reinforcement learning and future AI training
(39:37) Understanding model trajectories
(41:11) How models have advanced and will continue to advance
(42:55) Adapting to industry needs
(44:39) Surge's research approach
(48:07) Predictions for the next few years in AI
(50:43) What's underhyped and overhyped in AI
(52:55) The story of founding Surge AI
(01:02:18) Lightning round and final thoughts
Referenced:
• Surge: https://surgehq.ai
• Surge's product page: https://surgehq.ai/products
• Claude Code: https://www.claude.com/product/claude-code
• Gemini 3: https://aistudio.google.com/models/gemini-3
• Sora: https://openai.com/sora
• Terrence Rohan on LinkedIn: https://www.linkedin.com/in/terrencerohan
• Richard Sutton—Father of RL thinks LLMs are a dead end: https://www.dwarkesh.com/p/richard-sutton
• The Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
• Reinforcement learning: https://en.wikipedia.org/wiki/Reinforcement_learning
• Grok: https://grok.com
• Warren Buffett on X: https://x.com/WarrenBuffett
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Brian Armstrong on LinkedIn: https://www.linkedin.com/in/barmstrong
• Interstellar on Prime Video: https://www.amazon.com/Interstellar-Matthew-McConaughey/dp/B00TU9UFTS
• Arrival on Prime Video: https://www.amazon.com/Arrival-Amy-Adams/dp/B01M2C4NP8
• Travelers on Netflix: https://www.netflix.com/title/80105699
• Waymo: https://waymo.com
• Soda versus pop: https://flowingdata.com/2012/07/09/soda-versus-pop-on-twitter
Recommended books:
• Stories of Your Life and Others: https://www.amazon.com/Stories-Your-Life-Others-Chiang/dp/1101972122
• The Myth of Sisyphus: https://www.amazon.com/Myth-Sisyphus-Vintage-International/dp/0525564454
• Le Ton Beau de Marot: In Praise of the Music of Language: https://www.amazon.com/dp/0465086454
• Gödel, Escher, Bach: An Eternal Golden Braid: https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Hello beautiful people, and welcome to another edition of The Soul Damn 'Phisticated! Radio Show — where like-minded people meet groove, melody meets meaning, and soul music gets to breathe. I bring you two hours of the finest selections… curated with intention and delivered with love. We have the usual features, The Funk Drop and Catch The Beat as well as music from Ledisi & Omar, Deniece Williams, Em Cee, Tanya Nolan and much more besides. Stop on by and hear what you've been missing . . .
Welcome To The Soul Show (Soundsational)
Got Me A Feeling - Misty Oldland Feat. Omar
How'd I Know That Love Would Slip Away - Deniece Williams
Nobody Better (Soulpersona Smooth Mix) - Tanya Nolan
Spinning Around - Kristo
Vibe with You - Will Downing
What You Won't Do For Love - Roy Ayers
Nights Like This - Donae'o, Omar, Lemar & House Gospel Choir
Somatic Tick (BiG AL Remix) - Kerrier Collective, Polly Meyrick
Better For Me (Original Mix) - Richard Earnshaw & Sabrina Chyld
Making A Way - Ledisi
Shaft In Africa - Johnny Pate
Rest Of Your Life - Cee Lo Green
If That's Your Boyfriend (He Wasn't Last Night) - Me'Shell Ndegéocello
Stare at Me - JANE HANDCOCK & Anderson .Paak
Feel Good! - Em-Cee
Bad Weather - The Supremes
My Part Of Town - Tuff Crew
Street Games - The Blackbyrds
Play This Song (feat. Anderson .Paak) - Mariah Carey
Ghost - TheARTI$T
You'll Like It Too - Funkadelic
Movin' - Brass Construction
Body Music - The Strikers
Now I'm Fine - Grey & Hanks
Naked (Remix) - Case, RL & Raheem Devaughn
Love Is Like (feat. India.Arie) - OMAR
Alive - Otis Kane & CASASANTI
Thank You Thank You - Roy Ayers
Hello beautiful people, welcome to another edition of The Soul Damn 'Phisticated! Radio Show — where the like-minded meet groove, melody meets meaning, and soul music gets to breathe. I'll be bringing you two hours of the finest selections… curated with intention and delivered with love. On this week's journey we have the usual features, The Funk Drop and Catch The Beat as well as music from Lemar, Marcus Miller, Will Downing, Kayta Burgess and much more besides. Stop on by and hear what you've been missing . . .
Welcome To The Soul Show (Soundsational)
How I Do (Feat. Cy Young & Dj Jazzy Jeff) - Shawn Stockman Of Boyz II Men
Now You Know - NV Remix - Kayta Burgess
So Caught Up - Shaun Escoffery
You're Gonna Win - Lady Wray
Naked (Remix) - Case, RL & Raheem Devaughn
Back to Us - Will Downing
Days Like These (feat. Cherri V) - Incognito
Just Make a Move (And Be Yourself) - Tower Of Power
The Bottle (feat. OMAR) - Brian Jackson, Kenny Dope & Louie Vega
Oh What A Lonely Feeling (Delfonic Rework) - Roy Ayers
All I Ever Do - Tashan
Cruel Summers Bring FIRE LIFE!! - De La Soul & Yukimi
You're The Sweetest One - Luther Vandross
You Don't Know - Natalie Williams
50/50 - Lemar
Much Too Much - Marcus Miller
Love In The Middle - Kevin Ross
There's Much Love In The World - Omar
I Love My Life With You (feat. Brian McKnight) - Jeff Lorber
House Party - Fred Wesley
A Funky Song - Ripple
Mr. Big Stuff - Jean Knight
The Ghetto - Donny Hathaway
They Don't Know (Soulshock & Karlin Club Mix) - Jon B
Next Move - Jarrod Lawson, Raquel Rodriguez
Thank You Thank You - Roy Ayers
From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs. We sat down with Pim to dig into why game highlights are “episodic memory for simulation” (and how Medal's privacy-first action labels became a world-model goldmine https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features), what it takes to build fully vision-based agents that just see frames and output actions in real time, how General Intuition transfers from games to real-world video and then into robotics, why world models and LLMs are complementary rather than rivals, what founders with proprietary datasets should know before selling or licensing to labs, and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world. We discuss: How Medal's 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models Building fully vision-based agents that only see frames and output actions yet play like (and sometimes better than) humans Transferring from arcade-style games to realistic games to real-world video using the same perception–action recipe Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. “just” pretty video generation Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players Pim's path from RuneScape private servers, Tourette's, and reverse engineering to leading a frontier world-model lab How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent GI's first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a “frames in, actions out” API Using Medal clips as “episodic memory of simulation” to move from imitation learning to RL via world models and negative events The 2030 vision: spatial–temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world — Pim X: https://x.com/PimDeWitte LinkedIn: https://www.linkedin.com/in/pimdw/ Where to find Latent Space X: https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction and Medal's Gaming Data Advantage 00:02:08 Exclusive Demo: Vision-Based Gaming Agents 00:06:17 Action Prediction and Real-World Video Transfer 00:08:41 World Models: Interactive Video Generation 00:13:42 From Runescape to AI: Pim's Founder Journey 00:16:45 The Research Foundations: Diamond, Genie, and SEMA 00:33:03 Vinod Khosla's Largest Seed Bet Since OpenAI 00:35:04 Data Moats and Why GI Stayed Independent 00:38:42 Self-Teaching AI Fundamentals: The Francois Fleuret Course 00:40:28 Defining World Models vs Video Generation 00:41:52 Why Simulation Complexity Favors World Models 00:43:30 World Labs, Yann LeCun, and the Spatial Intelligence Race 00:50:08 Business Model: APIs, Agents, and Game Developer Partnerships 00:58:57 From Imitation Learning to RL: Making Clips 
Playable 01:00:15 Open Research, Academic Partnerships, and Hiring 01:02:09 2030 Vision: 80 Percent of Atoms-to-Atoms AI Interactions
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
In just over three years, Harvey has not only scaled to nearly one thousand customers, including Walmart, PwC, and other giants of the Fortune 500, but fundamentally transformed how legal work is delivered. Sarah Guo and Elad Gil are joined by Harvey's co-founder and president Gabe Pereyra to discuss why the future of legal AI isn't only about individual productivity, but also about putting together complex client matters to make law firms more profitable. They also talk about how Harvey analyzes complex tasks like fund formation or M&A and deploys agents to handle research and drafting, the strategic reasoning behind enabling law firms rather than competing with them, and why AI won't replace partners but will change law firm leverage models and training for associates. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @gabepereyra | @Harvey Chapters: 00:00 – Gabe Pereyra Introduction 00:09 – Introduction to Harvey 02:04 – Expanding Harvey's Reach 03:22 – Understanding Legal Workflows 06:20 – Agentic AI Applications in Law 09:06 – The Future Evolution of Law Firms 13:36 – RL in Law 19:46 – Deploying Harvey and Customization 23:46 – Adoption and Customer Success 25:28– Why Harvey Isn't Building a Law Firm 27:25 – Challenges and Opportunities in Legal Tech 29:26 – Building a Company During the Rise of Gen AI 37:24 – Hiring at Harvey 40:19 – Future Predictions 44:17 – Conclusion
TDC 076: Worldviews from Viewers: Real Perspectives On How to Make Sense of this Post-AI World...
World views from readers reveal what's really shaping how thoughtful people navigate today's chaos.
Episode Summary
In this special episode of The Digital Contrarian, host Ryan Levesque shares thought-provoking reader responses to last week's worldview challenge. You'll discover seven diverse principles shaping how people make sense of this moment in history, explore frameworks for navigating complexity, and hear perspectives that might challenge your own assumptions.
Question of the Day
We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time.
Łukasz Kaiser is one of the co-authors of "Attention Is All You Need," the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to "train the thinking process" with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book.
We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will "eat" most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.
OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI
Łukasz Kaiser
LinkedIn - https://www.linkedin.com/in/lukaszkaiser/
X/Twitter - https://x.com/lukaszkaiser
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold open and intro
(01:29) – "AI slowdown" vs a wild week of new frontier models
(08:03) – Low-hanging fruit: infra, RL training and better data
(11:39) – What is a reasoning model, in plain language?
(17:02) – Chain-of-thought and training the thinking process with RL
(21:39) – Łukasz's path: from logic and France to Google and Kurzweil
(24:20) – Inside the Transformer story and what "attention" really means
(28:42) – From Google Brain to OpenAI: culture, scale and GPUs
(32:49) – What's next for pre-training, GPUs and distillation
(37:29) – Can we still understand these models? Circuits, sparsity and black boxes
(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed
(42:40) – Post-training, safety and teaching GPT-5.1 different tones
(46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities
(47:43) – The five-year-old's dot puzzle that still breaks frontier models
(52:22) – Generalization, child-like learning and whether reasoning is enough
(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks
(56:10) – GPT-5.1 Codex Max, long-running agents and compaction
(1:00:06) – Will foundation models eat most apps? The translation analogy and trust
(1:02:34) – What still needs to be solved, and where AI might go next
Ilya & I discuss SSI's strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
Watch on YouTube; read the transcript.
Sponsors
* Gemini 3 is the first model I've used that can find connections I haven't anticipated. I recently wrote a blog post on RL's information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google
* Labelbox helped me create a tool to transcribe our episodes! I've struggled with transcription in the past because I don't just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh
* Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user's risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – Explaining model jaggedness
(00:09:39) – Emotions and value functions
(00:18:49) – What are we scaling?
(00:25:13) – Why humans generalize better than models
(00:35:45) – SSI's plan to straight-shot superintelligence
(00:46:47) – SSI's model will learn from deployment
(00:55:07) – How to think about powerful AGIs
(01:18:13) – "We are squarely an age of research company"
(01:20:23) – Self-play and multi-agent
(01:32:42) – Research taste
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A young guitarist disappears for months—and returns playing like no human ever could. They say Robert Johnson met the Devil at a lonely Mississippi crossroads—trading his soul for the sound that birthed the blues. But what really happened that night? Was it a deal, a myth, or something darker still? Join us as we journey into the Delta, where music, magic, and the supernatural collide.
SOURCES (for show notes)
https://www.openculture.com/2020/10/the-legend-of-how-bluesman-robert-johnson-sold-his-soul-to-the-devil-at-the-crossroads.html
https://entertainment.howstuffworks.com/devil-and-robert-johnson.htm?utm_source=chatgpt.com
https://nashvilleghosts.com/the-crossroads-the-king-of-delta-blues-the-devil/?utm_source=chatgpt.com
https://www.thevintagenews.com/2018/04/05/crossroads/?utm_source=chatgpt.com
https://genius.com/artists/Robert-johnson
https://www.britannica.com/biography/Robert-Johnson-American-musician
https://blackpast.org/african-american-history/johnson-robert-1911-1938/
https://www.vialma.com/en/articles/266/Niccolo-Paganini-The-Devils-Violinist
https://www.gutenberg.org/files/14591/14591-h/14591-h.htm
Biographies and historical accounts
Up Jumped the Devil: The Real Life of Robert Johnson by Bruce Conforth and Gayle Dean Wardlow: A comprehensive look at the legendary bluesman's life.
Searching for Robert Johnson by Peter Guralnick: Explores the myth and reality of Johnson's life and career.
Escaping the Delta: Robert Johnson and the Invention of the Blues by Elijah Wald: Analyzes Johnson's music and its impact on the blues genre.
Biography of a Phantom: A Robert Johnson Blues Odyssey by Robert Mack McCormick: A biographical exploration of Johnson's life.
Robert Johnson: Lost and Found by Barry Lee Pearson: A scholarly account that delves into the details of Johnson's life.
Personal memoirs and graphic novels
Brother Robert: Growing Up with Robert Johnson by Annye C. Anderson: A firsthand account of Johnson's life from his niece's perspective.
Love in Vain: Robert Johnson, 1911–1938 by Mezzo and J.M. Dupont: A graphic novel that tells the story of Johnson's life through illustrations.
RL's Dream by Walter Mosley: A fictional novel inspired by the legend of Robert Johnson
TDC 075: How To Craft Your Worldview With AIWhat I learned at Tony Robbins and Dean Graziosi's $250,000 private mastermind this week.Episode SummaryIn this episode of The Digital Contrarian, host Ryan Levesque dives into building a comprehensive worldview and why it's your hidden operating system.You'll learn how to surface your existing beliefs, discover the three levels of reality that shape decisions, and explore a six-step AI-assisted process for crafting worldviews that drive real results.Question of the Day
This episode is in collaboration with Probi.We meet Caroline Montelius, researcher and science communicator at Probi, for a conversation about gut health and why your gut may be the key to both well-being and a long life.Caroline explains how up to 90% of our lifestyle diseases, such as diabetes, cancer, obesity, and joint disorders, can be linked to an imbalanced gut flora. We talk about leaky gut, inflammation, IBS, and why so many people today suffer from recurring stomach problems without understanding the cause. She shares the latest research on probiotics and how good bacteria affect everything from the immune system to the brain. You'll also get concrete advice on how to improve your gut health through the right diet, more fiber, and deliberate choices of probiotics with clinically documented effects.We also discuss how gut health is connected to stress, menopause, athletic performance, and longevity. This is an episode for anyone who wants to understand the body in depth and take control of their health, from the inside out.Follow Probi here.Read more about Probi here.Explore Framgångsakademin's courses.Order "Mitt Framgångsår".Follow Alexander Pärleros on Instagram.Follow Alexander Pärleros on Tiktok.The best tips from the episode in the newsletter.In collaboration with Convendum. Hosted on Acast. See acast.com/privacy for more information.
On this episode of Drunken Book Club we live read the Tales to Give You Goosebumps short story The Cat's Tale to pair with The Barking Ghost. It's time to find out if RL can write a decent scary story around an animal. Follow the linktree here and find where you can listen to and follow us! https://linktr.ee/drunkenbookclub Support us on https://www.patreon.com/dbcanddmm All of the content is $1! Make sure to check out our Patrons 1. Trey 2. Weese https://www.youtube.com/user/pikidoo1
In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from Ai2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering “open weights” and calling it a day, AI2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It's an unusually transparent look into the inner machinery of a modern frontier-class model.Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you've been trying to truly understand the “RL + reasoning” era of LLMs, this is the clearest explanation you'll hear.We widen the lens to the global picture: why Meta's retreat from open source created a “vacuum of influence,” how Chinese labs like Qwen, DeepSeek, Kimi, and Moonshot surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single “AGI moment.” If you care about the future of open models and the global AI landscape, this is an essential conversation.Allen Institute for AI (AI2)Website - https://allenai.orgX/Twitter - https://x.com/allen_aiNathan LambertBlog - https://www.interconnects.aiLinkedIn - https://www.linkedin.com/in/natolambert/X/Twitter - https://x.com/natolambertLuca SoldainiBlog - https://soldaini.netLinkedIn - https://www.linkedin.com/in/soldni/X/Twitter - https://x.com/soldniFIRSTMARKWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCapMatt Turck (Managing Director)Blog - https://mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturck(00:00) – Cold Open(00:39) – Welcome & today's big announcement(01:18) – Introducing the Olmo 3 model family(02:07) – What “base models” really are (and why they matter)(05:51) – Dolma 3: the data behind Olmo 3(08:06) – Performance vs Qwen, Gemma, DeepSeek(10:28) – What true open source means (and why it's rare)(12:51) – Intermediate checkpoints, transparency, and why AI2 publishes everything(16:37) – Why Qwen is everywhere (including U.S. startups)(18:31) – Why Chinese labs go open source (and why U.S. labs don't)(20:28) – Inside ATOM: the U.S. 
response to China's model surge(22:13) – The rise of “thinking models” and inference-time scaling(35:58) – The full Olmo pipeline, explained simply(46:52) – Pre-training: data, scale, and avoiding catastrophic spikes(50:27) – Mid-training (tail patching) and avoiding test leakage(52:06) – Why long-context training matters(55:28) – SFT: building the foundation for reasoning(1:04:53) – Preference tuning & why DPO still works(1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain(1:13:59) – Why RL is so technically brutal(1:18:17) – Complexity tax vs AGI hype(1:21:58) – How everyone can contribute to the future of AI(1:27:26) – Closing thoughts
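The preference-tuning stage mentioned above centers on DPO, so here is a minimal sketch of the standard DPO objective for readers who want to see what "preference tuning" actually computes. This is the textbook formula (Rafailov et al., 2023), not Ai2's OLMo 3 training code; the tensor names and toy numbers are illustrative assumptions only.

```python
# Minimal sketch of the standard DPO objective -- not Ai2's actual training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a tensor of summed log-probs for a batch of
    (prompt, chosen, rejected) preference pairs."""
    # How far the policy has moved from the frozen reference model on each response
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Increase the margin between chosen and rejected responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up log-probabilities
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)  # scalar; smaller when the chosen response is favored more than under the reference
```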
My fellow pro-growth/progress/abundance Up Wingers in America and around the world:What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.In This Episode* Making human minds (1:43)* Theory to reality (6:45)* The world with automated research (10:59)* Considering constraints (16:30)* Worries and what-ifs (19:07)Below is a lightly edited transcript of our conversation. Making human minds (1:43). . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas, rising incomes, but that loop broke after the demographic transition in the late-19th century but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? “How quick and big would a software intelligence explosion be?”The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout the long run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips. 
In fact, you don't have to do anything at all in the physical world.It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.I think they're all getting paid a billion dollars a person, too.Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again, you haven't built any more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say, it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks: They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging. It's some of the hardest work in the world to do, so I wouldn't say it's narrow, but it's not everything. 
It's some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.Theory to reality (6:45)I think it's a much smaller gap for AI research than it is for many other parts of the economy.I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close?Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It's impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. So we had the start of RL training a couple of years ago, became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI, is a nice way of putting it — OpenAI doesn't employ any humans anymore, they've just got AIs there.There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, versus an AI model that can do a theoretical version of the lab to actually be incorporated in a real laboratory?It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge and all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So also that already means it's a lot less messy. You get a lot less of that kind of messy world reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test? 
If not, try something else or do a gradient set update.That said, there's still a lot of messiness here, as any coder will know, when you're writing good code, it's not just about whether it does the function that you've asked it to do, it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked it to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that.So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years?Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, there's still a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.The world with AI research automation (10:59). . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like?Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to. And then there's, of course, the risk that OpenAI's lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end. 
And, in terms of economic impacts, I personally think that that again could happen much more quickly than people think, and we can get into that.In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth, instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?I speak to economists a lot, and —They hate those kinds of predictions, by the way.Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's like it's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, “Yep, this checks out. I think that's what's going to happen.” And I've had conversation with them where they're like, “Yeah, I think this is going to happen.” But the really loud, dominant view where I think people are a little bit scared to speak out against is they're like, “Obviously this is sci-fi.”One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400, imagine we had an economics professor who'd been studying the rate of economic growth, and they've been like, “Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher.” And then there was some kind of futurist economist rogue that said, “Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth.” And then all the other economists laugh at them, tell them they're insane – that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth.So I think it can be useful to try and challenge economists and say, “Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it.” And I think being in that mindset can encourage people to be like, “Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible.”But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. And so you can look at actually biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? And if you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats. 
So that's an example of here's a physical system — ultimately, everything's made of physics — a physical system that has some intelligence that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks.Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans just doubling itself every few weeks. If that happens, the amount of stuff we're able to produce as a civilization is doubling on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities, we shouldn't fully rule them out.Considering constraints (16:30)I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities.If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?Briefly, the ones you've mentioned, people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble, then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, fine if no one wants to work.Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of the year, you go from not having any telephones at all in the world, to everyone's on their smartphones and social media and all the apps. That's a transition that took decades. If that happened in a year, that would be very disconcerting.Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one that we might as a society choose, "Actually, this is insane. 
We're going to go slower than we could.” That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.Worries and what-ifs (19:07)If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they do worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. It's a source of income for all of us, employment, but also, it's a source of pride, it's a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I think people aren't just going to be down to just do it. I think people are scared about three AI companies literally now taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it.I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's a lot easier to tax labor than capital already when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.Then there's the problem of some people want to stop this now because they're worried about AI killing everyone. Their literally worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today so it can kind of help us figure out what to do about all of this crazy stuff that's coming.On what side of that line is AI as an AI researcher?That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: When we're within a few spits distance — not spitting distance, but if you did that three times, and we can see we're almost at that AI automating OpenAI — then you pause, because you're not going to accidentally then go all the way. It is actually still a little bit a fair distance away, but it's actually still, at that point, probably a very powerful AI that can really help.Then you pause and do what?Great question. 
So then you pause, and you use your AI systems to help you firstly solve the problem of AI alignment, make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals.Secondly, you solve the problem of, how are we going to make sure that no one person in government or no one CEO of an AI company ensures that this whole AI army is loyal to them, personally? How are we going to ensure that everyone, the whole world gets influenced over what this AI is ultimately programmed to do? That's the second problem.And then there's just a whole host of other things: unemployment that we've talked about, competition between different countries, US and China, there's a whole host of other things that I think you want to research on, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.What else should we be working on? What are you working on next?One problem I'm excited about is people have historically worried about AI having its own goals. We need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious, “loyalty to humanity” is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for AI, some organizations have employee handbooks: Here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but you're going super detailed, exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person, probably following the law, probably loads of other things. I think basically designing what is the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.Maybe you have no interest in science fiction, but is there any film, TV, book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.I think there's this great post called “AI 2027,” which lays out a concrete scenario for how AI could go wrong or how maybe it could go right. I would recommend that. I think that's the only thing that's coming top of mind. I often read a lot of the stuff I read is I read a lot of LessWrong, to be honest. There's a lot of stuff from there that I don't love, but a lot of new ideas, interesting content there.Any fiction?I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But I mean, yeah, there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
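To make the two numerical claims in the interview above concrete, here is a toy back-of-the-envelope calculation. It is purely illustrative, not Davidson's model: the starting pool of roughly a few hundred researcher-equivalents and the conservative 10x-per-year efficiency figure are assumptions taken loosely from the conversation.

```python
# Toy arithmetic for the interview's two quantitative claims -- an illustration only.

# 1) The software feedback loop: if algorithmic efficiency improves ~10x per year
#    (the conservative end of the Epoch estimate quoted above), the same hardware
#    runs ~10x as many automated researchers each year, before any compounding
#    from those researchers speeding up algorithmic progress themselves.
researchers = 300          # assumed starting pool of researcher-equivalents
efficiency_gain_per_year = 10
for year in range(1, 4):
    researchers *= efficiency_gain_per_year
    print(f"year {year}: ~{researchers:,} researcher-equivalents on fixed hardware")

# 2) Doubling-time arithmetic: a system that doubles every 6 weeks grows by
#    2**(52/6), roughly 400x, in a year, far beyond "30 percent growth".
weeks_per_doubling = 6
annual_factor = 2 ** (52 / weeks_per_doubling)
print(f"6-week doubling time -> ~{annual_factor:.0f}x growth per year")
```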
Outline00:00 – Intro07:22 – Anatomy of a feedback loop15:12 – A brief historical recap on the history of feedback23:40 – Inventing the negative feedback amplifier34:28 – Feedback in biology, economics, society, and ... board games!52:44 – Negative vs positive feedback59:15 – Feedback, causality, and the arrow of time1:06:22 – Classics: fundamental limitations, uncertainty, robustness1:21:30 – Adaptive control: learning in the loop1:29:50 – Modern AI feedback loops (RL, social media, alignment)1:40:40 – OutroLinksWatt's flyball governor: https://tinyurl.com/ne5nene3Maxwell - "On Governors": https://tinyurl.com/2a7cxj7m Black - "Inventing the negative-feedback amplifier": https://tinyurl.com/yevsemdpNyquist Criterion: https://tinyurl.com/33hfbw8mBode's integral: https://tinyurl.com/53sxkdzuWiener - "Cybernetics": https://tinyurl.com/yta899ayApoptosis: https://tinyurl.com/mcxjycka Predator–prey dynamics (Lotka–Volterra): https://tinyurl.com/5cvx33tn Bird migration cues (photoperiodism): https://tinyurl.com/y2e7t22v Neuron action potentials: https://tinyurl.com/2wemwdn4Economic equilibrium & feedback: https://tinyurl.com/nhdx7r3sEcho chambers: https://tinyurl.com/4v8yk7e8 Game design: https://tinyurl.com/bdhdhv38Gap metric (Vinnicombe): https://tinyurl.com/y9nw3yveGeorgiou, Smith - "Feedback Control and the Arrow of Time": https://tinyurl.com/5xvj76jrAnnaswamy, Fradkov - "A Historical Perspective of Adaptive Control and Learning": https://tinyurl.com/4nfew8vm Algorithmic trading flash crash (2010): https://tinyurl.com/2dsrs8j2AI alignment: https://tinyurl.com/yvs3wnj8Support the showPodcast infoPodcast website: https://www.incontrolpodcast.com/Apple Podcasts: https://tinyurl.com/5n84j85jSpotify: https://tinyurl.com/4rwztj3cRSS: https://tinyurl.com/yc2fcv4yYoutube: https://tinyurl.com/bdbvhsj6Facebook: https://tinyurl.com/3z24yr43Twitter: https://twitter.com/IncontrolPInstagram: https://tinyurl.com/35cu4kr4Acknowledgments and sponsorsThis episode was supported by the National Centre of Competence in Research on «Dependable, ubiquitous automation» and the IFAC Activity fund. The podcast benefits from the help of an incredibly talented and passionate team. Special thanks to L. Seward, E. Cahard, F. Banis, F. Dörfler, J. Lygeros, ETH studio and mirrorlake . Music was composed by A New Element.
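For listeners who want a hands-on feel for the "negative vs positive feedback" segment in the outline above, here is a minimal sketch of a first-order system with a proportional feedback term. All parameter values are arbitrary illustrations, not anything from the episode.

```python
# Minimal sketch: first-order plant x' = a*x + u with proportional feedback
# u = -k*(x - setpoint), integrated with a simple Euler step.
def simulate(k, a=0.5, x=5.0, setpoint=1.0, dt=0.01, steps=1000):
    for _ in range(steps):
        u = -k * (x - setpoint)   # feedback term
        x += dt * (a * x + u)     # Euler step of the closed loop
    return x

print(simulate(k=3.0))    # negative feedback: settles near (not exactly at) the
                          # setpoint, the classic steady-state error of pure P control
print(simulate(k=-0.5))   # positive feedback: reinforces deviations and the state runs away
```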
TDC 074: The First Trillion Dollar Thought Leader: Being Known for How You Think, Not What You ConsumeWhy being known for how you think beats influence every time.Episode SummaryIn this episode of The Digital Contrarian, host Ryan Levesque dives into the critical distinction between influencers and thought leaders in the AI era.You'll learn why chasing followers is the wrong game, how thought leadership transforms ideas into equity, and discover the unsexy immediate next step to start building your own trillion-dollar personal brand.Question of the Day
TDC: The Efficiency vs. Resiliency DilemmaWhat's the biggest vulnerability in your business right now?Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque explores the dangerous trade-off between efficiency and resiliency in business.You'll learn why single-channel dependency threatens your business survival, discover how the Irish Potato Famine reveals critical marketing insights, and explore how to build an antifragile strategic content ecosystem that can weather any storm.Question of the Day
Frontier AI is colliding with real-world infrastructure. Eiso Kant (Co-CEO & Co-Founder, Poolside) joins the MAD Podcast to unpack Project Horizon— a multi-gigawatt West Texas build—and why frontier labs must own energy, compute, and intelligence to compete. We map token economics, cloud-style margins, and the staged 250 MW rollout using 2.5 MW modular skids.Then we get operational: the CoreWeave anchor partnership, environmental choices (SCR, renewables + gas + batteries), community impact, and how Poolside plans to bring capacity online quickly without renting away margin—plus the enterprise motion (defense to Fortune 500) powered by forward deployed research engineers.Finally, we go deep on training. Eiso lays out RL2L (Reinforcement Learning to Learn)— aimed at reverse-engineering the web's thoughts and actions— why intelligence may commoditize, what that means for agents, and how coding served as a proxy for long-horizon reasoning before expanding to broader knowledge work.PoolsideWebsite - https://poolside.aiX/Twitter - https://x.com/poolsideaiEiso KantLinkedIn - https://www.linkedin.com/in/eisokant/X/Twitter - https://x.com/eisokantFIRSTMARKWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCapMatt Turck (Managing Director)Blog - https://www.mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturck(00:00) Cold open – “Intelligence becomes a commodity”(00:23) Host intro – Project Horizon & RL2L(01:19) Why Poolside exists amid frontier labs(04:38) Project Horizon: building one of the largest US data center campuses(07:20) Why own infra: scale, cost, and avoiding “cosplay”(10:06) Economics deep dive: $8B for 250 MW, capex/opex, margins(16:47) CoreWeave partnership: anchor tenant + flexible scaling(18:24) Hiring the right tail: building a physical infra org(30:31) RL today → agentic RL and long-horizon tasks(37:23) RL2L revealed: reverse-engineering the web's thoughts & actions(39:32) Continuous learning and the “hot stove” limitation(43:30) Agents debate: thin wrappers, differentiation, and model collapse(49:10) “Is AI plateauing?”—chip cycles, scale limits, and new axes(53:49) Why software was the proxy; expanding to enterprise knowledge work(55:17) Model status: Malibu → Laguna (small/medium/large)(57:31) Poolside's Commercial Reality today: defense; Fortune 500; FDRE (1:02:43) Global team, avoiding the echo chamber(1:04:34) Next 12–18 months: frontier models + infra scale(1:05:52) Closing
Learn more about Michael Wenderoth, Executive Coach: www.changwenderoth.comThere's a huge chance you're being passed over for top jobs – and you're not even aware of it. How has AI changed recruiting and job search, and what does that mean for you? In this episode of 97% Effective, host Michael Wenderoth speaks with Nick Day, founder and CEO of JGA Recruitment Group, a leading global recruiter and payroll advisory based in the UK. Nick provides sharp, practical advice on how to be visible, stand out, and land your dream job in an increasingly competitive talent market, where recruiters and other job seekers are increasingly using AI. He talks about the two versions of your CV that you most need, suggests you answer problems that aren't being advertised, and explains why visibility is the currency of credibility in today's job market. You'll leave this episode with a strong understanding of AI's impact on job seekers – and a much deeper appreciation for the human touch that will get you the best results.SHOW NOTES:What Nick's social post about Costa Rica this summer reveals about him – and JGA“Done Lists” and how Nick sets his intentions each dayHow fear disguises itself as wisdom: Nick's hard truth about AI and the current job marketWhat's the right depth and place to “jump in” learning about AIWhy great job candidates are being left in the coldTip#1: Optimize yourself for the algorithms – and produce two versions of your resume.What?! How excellent candidates with high level strategic resumes are getting rejectedTip#2: Make sure to add that personal element to your CV, because everyone's submitting the “perfect” resumeTip#3: The 3 necessary approaches to getting your dream jobHow “easy apply” is overwhelming recruiters and ensures top candidates never get the lookTip#4: Go back to basics and cater your CV to a position – and tell a story that shows your value“Answer a problem that isn't being advertised”Your CV/resume is the most important document you will ever writeWhy most CV/resume services are a big waste of moneyTip#5: Have an achievement section at the top of your CV – don't wait for the recruiter to find them on page 2Michael's highlight: How Nick's job search tips are also best practice that help you get promoted“The important bits” that Nick says we should save for the interview (and not put on the resume)The importance of generating connectionTip#6: Treat your job as a campaign, not as a checkboxWhere Nick sees AI systems doing more harm than goodTip#7: Change your resume for the position, but also change your persona for the person that's interviewing youHow to be creative – but not lie – in your resume, to helps you work with the algorithm, even if don't have the exact experienceGetting over imposter syndrome to become your biggest advocateTip#8: The most underrated skill in business is storytelling“Visibility is the currency of credibility”Reaching Nick: No AI, no PA's. Nick responds personally!BIO AND LINKS:Nick Day is a globally recognised HR and payroll authority with over 20 years of experience leading the sector through innovation, insight and influence. As CEO and founder of JGA Recruitment Group, Nick has built one of the most respected payroll and HR talent consultancies in the UK and abroad. 
Nick's voice reaches tens of thousands of professionals through his acclaimed platforms: The Payroll Podcast, the HR L&D Podcast, and the Mindful Paths Podcast.Nick on LinkedIn: https://www.linkedin.com/in/nickday/?originalSubdomain=ukJGA Recruitment Group: https://jgarecruitment.comNick's post on his daughter, in Costa Rica: https://tinyurl.com/2s3f3n7jWalk the Camino de Santiago: https://en.wikipedia.org/wiki/Camino_de_Santiago"Life Moves Pretty Fast.." (The epic line, from Ferris Bueller): https://tinyurl.com/bdns8pa7Nick's Mindful Paths Podcast: https://podcasts.apple.com/us/podcast/mindful-paths-podcast/id1682002299Done Lists: https://tinyurl.com/2s3hrdfdIs an Algorithm Blocking Your Job Search? (WSJ Podcast): https://tinyurl.com/ujy6yttnBohemian Rhapsody Flashmob in France: https://www.youtube.com/watch?v=rfUEstWJUkAMichael on Nick's HR L&D Podcast, "Mastering Office Politics: Power, Promotion & Playing to Win": https://www.youtube.com/watch?v=ARVsf7dFOyYMichael's Award-Winning book, Get Promoted: What You're Really Missing at Work That's Holding You Back https://tinyurl.com/453txk74Watch this episode on video, the 97% Effective Youtube channel: https://www.youtube.com/@97PercentEffectiveAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
This is the latest in a series of essays on AI Scaling. You can find the others on my site. Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains come from allowing LLMs to productively use longer chains of thought, so they can think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you might have thought, and more of it will come from inference scaling (which has different effects on the world). That lengthens timelines and affects strategies for AI governance and safety. The current era of improving AI capabilities using reinforcement learning (from verifiable rewards) involves two key types of scaling: (1) scaling the amount of compute used for RL during training, and (2) scaling [...] ---Outline:(09:12) How do these compare to pre-training scaling?(13:42) Conclusion --- First published: October 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/TysuCdgwDnQjH3LyY/how-well-does-rl-scale --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Reader beware, you're in for a scare! If you were a kid, tween, or teen at all in the '90s, you knew Goosebumps books, and you were obsessed with those colorful and creepy covers! Mark welcomes the original Goosebumps cover illustrator Tim Jacobus on the show to talk about his career, the process of making all those iconic covers, and what cover designs are most special to him. This is Part 1 of a Goosebumps celebration, and Part 2 will focus on the premiere episodes of the Fox TV series for its 30th anniversary. Follow Tim @timjacobus on Instagram, check out his work at jacobusstudios.com, and say hi to him at future conventions, where he sells prints of his work and gives a portion of the income to help feed those who are in need.
TDC 072: How to Become A "Category of One" Thought Leader.Three questions that separate thought leaders from everyone else in their market.Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque reveals the Category of One framework that's made his consulting practice oversubscribed and generated 24 speaking invitations in 18 months.You'll learn the three critical questions that establish thought leadership positioning, discover why timing and novelty matter more than credentials, and understand how to make all roads lead naturally to you.Question of the Day
My guest today on the Online for Authors podcast is RL Carpentier III, author of the book Our Lady of the Overlook. Rodney is a lifelong writer of stories and poems and songs and now, novels. As a kid he and his mom would try to plot whodunits on Sunday afternoons; in middle school he and his friends tried to develop their own superhero comic book; and in college he wrote song lyrics for his pop-punk band Gone Ashley. But he's had a novel stuck in his head for the last 20 years and it's about time for him to let it all out. Professionally, he has been a storyteller throughout his long law enforcement career. He has written factually about the mundane to the maniacal. He has told his peers, his bosses, and juries about what he has seen and done, what he's investigated and what he's able to prove. It is a procedural and clinical style; something he has brought over to his fiction writing. So, at this crossroad of his life and career, well into his 40's and in the downswing toward retirement, he is going to live out his dream of being a writer. He has his wife and daughter in his corner and the whole world for an audience. In my book review, I stated Our Lady of the Overlook is the first in a trilogy of murder suspense novels. And talk about suspense! From the beginning, RL makes the reader wonder who did it and why. We meet Mike Ellis, a man who has come home to be a cop - but remains in his dead father's shadow, a man who was Police Chief of the town before his heart attack. Before long, he is sucked into a murder investigation that is quite like his father's first murder case and one that has long since gone cold. Despite no evidence, Mike believes that the two have to be related. But with a 40-year span between them, will he ever be able to prove his theory? But more importantly, will he get the chance? There are those on the force who want to set him up for failure - retribution against his father. And his personal ghosts work against him as well. This novel is full of intrigue, half-clues, secrets, and trauma, and the ending will leave you wanting more! Subscribe to Online for Authors to learn about more great books! https://www.youtube.com/@onlineforauthors?sub_confirmation=1 Join the Novels N Latte Book Club community to discuss this and other books with like-minded readers: https://www.facebook.com/groups/3576519880426290 You can follow Author RL Carpentier III Website: https://www.rlcarpentierwriter.com/ Social media: FB: @R.L. Carpentier - Debut Novelist IG: @rlcarpentier Purchase Our Lady of the Overlook on Amazon: Paperback: https://amzn.to/4lW3uBe Ebook: https://amzn.to/4mePSA Teri M Brown, Author and Podcast Host: https://www.terimbrown.com FB: @TeriMBrownAuthor IG: @terimbrown_author X: @terimbrown1 Want to be a guest on Online for Authors? Send Teri M Brown a message on PodMatch, here: https://www.podmatch.com/member/onlineforauthors #rlcarpentier #ourladyoftheoverlook #thriller #mystery #terimbrownauthor #authorpodcast #onlineforauthors #characterdriven #researchjunkie #awardwinningauthor #podcasthost #podcast #readerpodcast #bookpodcast #writerpodcast #author #books #goodreads #bookclub #fiction #writer #bookreview *As an Amazon Associate I earn from qualifying purchases.
A16z Podcast: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns — and if “good enough” AI might actually block progress toward true general intelligence. Resources:Follow Amjad on X: https://x.com/amasadFollow Marc on X: https://x.com/pmarcaFollow Erik on X: https://x.com/eriktorenberg Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
TDC 071: How to Get Your Website to Show Up in ChatGPT…The 2025 version of ranking #1 on Google—master these AI concepts before your competition does.Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque dives into Answer Engine Optimization (AEO) and the advanced AI concepts determining which brands get recommended by ChatGPT.You'll learn how to optimize for vector embeddings instead of keywords, discover entity graph gap analysis techniques, and master information gain rate principles that make AI systems choose your content over competitors'.Question of the Day
2 sections: the third (and final) sub-conversation between RL and RE regarding the inherited korban's ownership (and kapara-ability) for the deceased or the inheritors, and a discussion of whether a korban slaughtered with the wrong intent achieves kapara for its owner
Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns — and if “good enough” AI might actually block progress toward true general intelligence. Resources:Follow Amjad on X: https://x.com/amasadFollow Marc on X: https://x.com/pmarcaFollow Erik on X: https://x.com/eriktorenberg Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Are we failing to understand the exponential, again? My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post ("Failing to Understand the Exponential, again") and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science—including Julian's timeline for when AI could produce Nobel-level breakthroughs.
We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what "RL from scratch" gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think "Golden Gate Claude"), and how safety & alignment actually surface in Anthropic's launch process.
Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.
Julian Schrittwieser
Blog - https://www.julian.ac
X/Twitter - https://x.com/mononofu
Viral post: Failing to understand the exponential, again (9/27/2025)
Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/anthropicai
Matt Turck (Managing Director)
Blog - https://www.mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Cold open — "We're not seeing any slowdown."
(00:32) Intro — who Julian is & what we cover
(01:09) The "exponential" from inside frontier labs
(04:46) 2026–2027: agents that work a full day; expert-level breadth
(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
(10:26) Move 37 — what actually happened and why it mattered
(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
(16:25) Discontinuity vs smooth progress (and warning signs)
(19:08) Does pre-training + RL get us there? (AGI debates aside)
(20:55) Sutton's "RL from scratch"? Julian's take
(23:03) Julian's path: Google → DeepMind → Anthropic
(26:45) AlphaGo (learn + search) in plain English
(30:16) AlphaGo Zero (no human data)
(31:00) AlphaZero (one algorithm: Go, chess, shogi)
(31:46) MuZero (planning with a learned world model)
(33:23) Lessons for today's agents: search + learning at scale
(34:57) Do LLMs already have implicit world models?
(39:02) Why RL on LLMs took time (stability, feedback loops)
(41:43) Compute & scaling for RL — what we see so far
(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
(44:36) RL training data & the "flywheel" (and why quality matters)
(48:02) RL & Agents 101 — why RL unlocks robustness
(50:51) Should builders use RL-as-a-service? Or just tools + prompts?
(52:18) What's missing for dependable agents (capability vs engineering)
(53:51) Evals & Goodhart — internal vs external benchmarks
(57:35) Mechanistic interpretability & "Golden Gate Claude"
(1:00:03) Safety & alignment at Anthropic — how it shows up in practice
(1:03:48) Jobs: human–AI complementarity (comparative advantage)
(1:06:33) Inequality, policy, and the case for 10× productivity → abundance
(1:09:24) Closing thoughts
TDC 070: Seven "Non-Obvious" Email Lessons I've Learned Writing This Email Newsletter Each Week.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque reveals the seven most impactful lessons from writing 70 consecutive weekly newsletters by hand. You'll learn how less AI usage led to higher engagement, why creative volume beats perfectionism, and which custom dashboard metrics actually matter for email success.
Question of the Day
In this deep dive with Kyle Corbitt, co-founder and CEO of OpenPipe (recently acquired by CoreWeave), we explore the evolution of fine-tuning in the age of AI agents and the critical shift from supervised fine-tuning to reinforcement learning. Kyle shares his journey from leading YC's Startup School to building OpenPipe, initially focused on distilling expensive GPT-4 workflows into smaller, cheaper models before pivoting to RL-based agent training as frontier model prices plummeted. The conversation reveals why 90% of AI projects remain stuck in proof-of-concept purgatory - not due to capability limitations, but reliability issues that Kyle believes can be solved through continuous learning from real-world experience. He discusses the breakthrough of RULER (Relative Universal LLM-Elicited Rewards), which uses LLMs as judges to rank agent behaviors relatively rather than absolutely, making RL training accessible without complex reward engineering (a rough sketch of this relative-ranking idea follows the topic list below). Kyle candidly assesses the challenges of building realistic training environments for agents, explaining why GRPO (despite its advantages) may be a dead end due to its requirement for perfectly reproducible parallel rollouts. He shares insights on why LoRAs remain underrated for production deployments, why JAPA and prompt optimization haven't lived up to the hype in his testing, and why the hardest part of deploying agents isn't the AI - it's sandboxing real-world systems with all their bugs and edge cases intact. The discussion also covers OpenPipe's acquisition by CoreWeave, the launch of their serverless reinforcement learning platform, and Kyle's vision for a future where every deployed agent continuously learns from production experience. He predicts that solving the reliability problem through continuous RL could unlock 10x more AI inference demand from projects currently stuck in development, fundamentally changing how we think about agent deployment and maintenance.
Key Topics:
• The rise and fall of fine-tuning as a business model
• Why 90% of AI projects never reach production
• RULER: Making RL accessible through relative ranking
• The environment problem: Why sandboxing is harder than training
• GRPO vs PPO and the future of RL algorithms
• LoRAs: The underrated deployment optimization
• Why JAPA and prompt optimization disappointed in practice
• Building world models as synthetic training environments
• The $500B Stargate bet and OpenAI's potential crypto play
• Continuous learning as the path to reliable agents
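For readers who want a concrete picture of the relative-ranking idea described above, here is a minimal sketch in Python. It is not OpenPipe's actual RULER implementation or API; the judge call, prompt format, and rank-to-reward mapping are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch of a RULER-style relative reward signal:
# an LLM judge ranks a group of candidate agent trajectories against each
# other instead of scoring each one in isolation. `call_judge_llm`, the
# prompt format, and the rank-to-reward mapping are illustrative
# assumptions, not OpenPipe's actual implementation.
from typing import List


def call_judge_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client returning the judge's reply."""
    raise NotImplementedError("wire up your preferred LLM client here")


def relative_rewards(task: str, trajectories: List[str]) -> List[float]:
    """Ask the judge to rank trajectories, then map ranks to rewards in [0, 1]."""
    numbered = "\n\n".join(
        f"Candidate {i}:\n{t}" for i, t in enumerate(trajectories)
    )
    prompt = (
        f"Task: {task}\n\n{numbered}\n\n"
        "Rank the candidates from best to worst at completing the task. "
        "Reply with only the candidate indices, comma-separated."
    )
    ranking = [int(x) for x in call_judge_llm(prompt).split(",")]

    n = len(trajectories)
    rewards = [0.0] * n
    for rank, idx in enumerate(ranking):
        # Best-ranked candidate gets 1.0, worst gets 0.0. Because the scores
        # are relative within the group, no hand-built absolute reward
        # function is needed to drive a policy-gradient update.
        rewards[idx] = (n - 1 - rank) / (n - 1) if n > 1 else 1.0
    return rewards
```

In a group-relative training setup such as GRPO, scores like these would stand in for the advantage estimates, which is what makes the "rank, don't score" framing attractive when no programmatic reward exists.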
What does it really mean when GPT-5 "thinks"? In this conversation, OpenAI's VP of Research Jerry Tworek explains how modern reasoning models work in practice—why pretraining and reinforcement learning (RL/RLHF) are both essential, what that on-screen "thinking" actually does, and when extra test-time compute helps (or doesn't). We trace the evolution from O1 (a tech demo good at puzzles) to O3 (the tool-use shift) to GPT-5 (Jerry calls it "O3.1-ish"), and talk through verifiers, reward design, and the real trade-offs behind "auto" reasoning modes (a toy sketch of the verifier-reward idea follows this entry).
We also go inside OpenAI: how research is organized, why collaboration is unusually transparent, and how the company ships fast without losing rigor. Jerry shares the backstory on competitive-programming results like ICPC, what they signal (and what they don't), and where agents and tool use are genuinely useful today. Finally, we zoom out: could pretraining + RL be the path to AGI? This is the MAD Podcast — AI for the 99%. If you're curious about how these systems actually work (without needing a PhD), this episode is your map to the current AI frontier.
OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI
Jerry Tworek
LinkedIn - https://www.linkedin.com/in/jerry-tworek-b5b9aa56
X/Twitter - https://x.com/millionint
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) Intro
(01:01) What Reasoning Actually Means in AI
(02:32) Chain of Thought: Models Thinking in Words
(05:25) How Models Decide Thinking Time
(07:24) Evolution from O1 to O3 to GPT-5
(11:00) Before OpenAI: Growing up in Poland, Dropping out of School, Trading
(20:32) Working on Robotics and Rubik's Cube Solving
(23:02) A Day in the Life: Talking to Researchers
(24:06) How Research Priorities Are Determined
(26:53) Collaboration vs IP Protection at OpenAI
(29:32) Shipping Fast While Doing Deep Research
(31:52) Using OpenAI's Own Tools Daily
(32:43) Pre-Training Plus RL: The Modern AI Stack
(35:10) Reinforcement Learning 101: Training Dogs
(40:17) The Evolution of Deep Reinforcement Learning
(42:09) When GPT-4 Seemed Underwhelming at First
(45:39) How RLHF Made GPT-4 Actually Useful
(48:02) Unsupervised vs Supervised Learning
(49:59) GRPO and How DeepSeek Accelerated US Research
(53:05) What It Takes to Scale Reinforcement Learning
(55:36) Agentic AI and Long-Horizon Thinking
(59:19) Alignment as an RL Problem
(1:01:11) Winning ICPC World Finals Without Specific Training
(1:05:53) Applying RL Beyond Math and Coding
(1:09:15) The Path from Here to AGI
(1:12:23) Pure RL vs Language Models
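As a companion to the verifier discussion above, here is a toy illustration of a verifiable reward: the reward comes from programmatically checking the model's final answer rather than from a learned preference model. The "Answer: <value>" extraction convention is an assumption for this sketch, not OpenAI's actual setup.

```python
# Toy, hypothetical illustration of a verifier-based reward for RL on
# reasoning tasks: check the model's final answer programmatically instead
# of scoring it with a learned preference model. The "Answer: <value>"
# convention is an assumption made for this sketch, not OpenAI's setup.
import re


def extract_final_answer(completion: str) -> str | None:
    """Assume the model ends its chain of thought with 'Answer: <value>'."""
    match = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)\s*$", completion.strip())
    return match.group(1) if match else None


def verifier_reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 when the extracted answer matches the reference, else 0.0."""
    answer = extract_final_answer(completion)
    return 1.0 if answer is not None and answer == ground_truth else 0.0


# The long "thinking" text contributes nothing directly; only the checkable
# final answer feeds the reward signal used during RL.
sample = "Let me think step by step... 17 * 3 = 51. Answer: 51"
print(verifier_reward(sample, "51"))  # prints 1.0
```

The appeal of rewards like this is that they are cheap, unambiguous, and hard to game compared with learned reward models, which is one reason math and coding have been the early strongholds of RL on language models.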
TDC 069: The Circular AI Money Shell Game?? | What To Do About The AI Bubble That May Soon Be Coming To an End
The AI bubble may be closer to popping than you think—and the evidence is hiding in plain sight.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque digs into the current state of the AI bubble and where we might be on the hype cycle curve. You'll learn why enterprise AI adoption is actually declining, discover the circular money shell game happening between major AI players, and uncover a potential opportunity hiding in OpenAI's recent job postings that could rival the early days of Facebook Ads.
Question of the Day
Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... *Behind the Bunker Paintball Podcast* is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
A big part of the podcast's appeal comes from its interactive nature. Viewers and listeners are encouraged to join in live chats, submit questions, and even share their own paintball stories and gear setups. The hosts often highlight community input, creating a sense of connection between fans and the broader paintball scene. This makes the podcast not just a source of information, but also a hub where players can feel part of a larger community that shares their enthusiasm for the sport.
Beyond news and discussions, *Behind the Bunker* mixes in humor, playful banter, and fun segments that give the show a casual and friendly vibe. Whether it's debating the best gear setups, laughing over paintball fails, or breaking down the history of the game, the podcast manages to balance informative content with entertainment. This blend has made it one of the most recognizable and trusted voices in the paintball community, continuing to grow and evolve alongside the sport itself.
All over the world, for all of human history – and probably going back to our earliest hominid ancestors – people have found ways to try to keep themselves clean. But how did soap come about? Research: “Soap, N. (1), Etymology.” Oxford English Dictionary, Oxford UP, June 2025, https://doi.org/10.1093/OED/1115187665. American Cleaning Institute. “Soaps & Detergents History.” https://www.cleaninginstitute.org/understanding-products/why-clean/soaps-detergents-history Beckmann, John. “History of Inventions, Discoveries and Origins.” William Johnston, translator. Bosart, L.W. “The Early History of the Soap Industry.” The American Oil Chemists' Society. Journal of Oil & Fat Industries 1924-10: Vol 1 Iss 2. Cassidy, Cody. “Who Discovered Soap? What to Know About the Origins of the Life-Saving Substance.” Time. 5/5/2020. https://time.com/5831828/soap-origins/ Ciftyurek, Muge, and Kasim Ince. "Selahattin Okten Soap Factory in Antakya and an Evaluation on Soap Factory Plan Typology/Antakya'da Bulunan Selahattin Okten Sabunhanesi ve Sabunhane Plan Tipolojisi Uzerine Bir Degerlendirme." Art-Sanat, no. 19, Jan. 2023, pp. 133+. Gale Academic OneFile, dx.doi.org/10.26650/artsanat.2023.19.1106544. Accessed 18 Aug. 2025. Costa, Albert B. “Michel-Eugène Chevreul.” Encyclopedia Britannica. https://www.britannica.com/biography/Michel-Eugene-Chevreul Curtis, Valerie A. “Dirt, disgust and disease: a natural history of hygiene.” Journal of epidemiology and community health vol. 61,8 (2007): 660-4. doi:10.1136/jech.2007.062380 Dijkstra, Albert J. “How Chevreul (1786-1889) based his conclusions on his analytical results.” OCL. Vol. 16, No. 1. January-February 2009. Gibbs, F.W. “The History and Manufacture of Soap.” Annals of Science. 1939. Koeppel, Dan. “The History of Soap.” 4/15/2020. https://www.nytimes.com/wirecutter/blog/history-of-soap/ List, Gary, and Michael Jackson. “Giants of the Past: The Battle Over Hydrogenation (1903-1920).” https://www.ars.usda.gov/research/publications/publication/?seqNo115=210614 Maniatis, George C. “Guild Organized Soap Manufacturing Industry in Constantinople: Tenth-Twelfth Centuries.” Byzantion, 2010, Vol. 80 (2010). https://www.jstor.org/stable/44173107 National Museum of American History. “Bathing (Body Soaps and Cleansers).” https://americanhistory.si.edu/collections/object-groups/health-hygiene-and-beauty/bathing-body-soaps-and-cleansers New Mexico Historic Sites. “Making Soap from the Leaves of the Soaptree Yucca.” https://nmhistoricsites.org/assets/files/selden/Virtual%20Classroom_Soaptree%20Yucca%20Soap%20Making.pdf “The history of soapmaking.” 8/30/2019. https://www.open.edu/openlearn/history-the-arts/history/history-science-technology-and-medicine/history-science/the-history-soapmaking Pliny the Elder. “The Natural History of Pliny. Translated, With Copious Notes and Illustrations.” Vol. 5. John Bostock, translator. https://www.gutenberg.org/files/60688/60688-h/60688-h.htm Pointer, Sally. “An Experimental Exploration of the Earliest Soapmaking.” EXARC Journal. 2024/3. 8/22/2024. https://exarc.net/issue-2024-3/at/experimental-exploration-earliest-soapmaking Ridner, Judith. “The dirty history of soap.” The Conversation. 5/12/2020. https://theconversation.com/the-dirty-history-of-soap-136434 Routh, Hirak Behari et al. “Soaps: From the Phoenicians to the 20th Century - A Historical Review.” Clinics in Dermatology. Vol. No. 3. 1996. Smith, Cyril Stanley, and John G. Hawthorne. 
“Mappae Clavicula: A Little Key to the World of Medieval Techniques.” Transactions of the American Philosophical Society, vol. 64, no. 4, 1974, pp. 1–128. JSTOR, https://doi.org/10.2307/1006317. Accessed 18 Aug. 2025. Timilsena, Yakindra Prasad et al. “Perspectives on Saponins: Food Functionality and Applications.” International journal of molecular sciences vol. 24,17 13538. 31 Aug. 2023, doi:10.3390/ijms241713538 “Craftsmanship of Aleppo Ghar soap.” https://ich.unesco.org/en/RL/craftsmanship-of-aleppo-ghar-soap-02132 “Tradition of Nabulsi soap making in Palestine.” https://ich.unesco.org/en/RL/tradition-of-nabulsi-soap-making-in-palestine-02112 “Soaps.” https://www.fs.usda.gov/wildflowers/ethnobotany/soaps.shtml van Dijk, Kees. “Soap is the onset of civilization.” From Cleanliness and Culture. Kees van Dijk and Jean Gelman Taylor, eds. Brill. 2011. https://www.jstor.org/stable/10.1163/j.ctvbnm4n9.4 Wei, Huang. “The Sordid, Sudsy Rise of Soap in China.” Sixth Tone. 8/11/2020. https://www.sixthtone.com/news/1006041 See omnystudio.com/listener for privacy information.