In this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Aurelio Gialluca, an economist and full stack data professional who works across finance, retail, and AI as both a data engineer and machine learning developer, while also exploring human consciousness and psychology. Their wide-ranging conversation covers the intersection of science and psychology, the unique cultural characteristics that make Argentina a haven for eccentrics (drawing parallels to the United States), and how Argentine culture has produced globally influential figures from Borges to Maradona to Che Guevara. They explore the current AI landscape as a "centralizing force" creating cultural homogenization (particularly evident in LinkedIn's cookie-cutter content), discuss the potential futures of AI development from dystopian surveillance states to anarchic chaos, and examine how Argentina's emotionally mature, non-linear communication style might offer insights for navigating technological change. The conversation concludes with Gialluca describing his ambitious project to build a custom water-cooled workstation with industrial-grade processors for his quantitative hedge fund, highlighting the practical challenges of heat management and the recent tripling of RAM prices due to market consolidation.

Timestamps
00:00 Exploring the Intersection of Psychology and Science
02:55 Cultural Eccentricity: Argentina vs. the United States
05:36 The Influence of Religion on National Identity
08:50 The Unique Argentine Cultural Landscape
11:49 Soft Power and Cultural Influence
14:48 Political Figures and Their Cultural Impact
17:50 The Role of Sports in Shaping National Identity
20:49 The Evolution of Argentine Music and Subcultures
23:41 AI and the Future of Cultural Dynamics
26:47 Navigating the Chaos of AI in Culture
33:50 Equilibrating Society for a Sustainable Future
35:10 The Patchwork Age: Decentralization and Society
35:56 The Impact of AI on Human Connection
38:06 Individualism vs. Collective Rules in Society
39:26 The Future of AI and Global Regulations
40:16 Biotechnology: The Next Frontier
42:19 Building a Personal AI Lab
45:51 Tiers of AI Labs: From Personal to Industrial
48:35 Mathematics and AI: The Foundation of Innovation
52:12 Stochastic Models and Predictive Analytics
55:47 Building a Supercomputer: Hardware Insights

Key Insights
1. Argentina's Cultural Exceptionalism and Emotional Maturity: Argentina stands out globally for allowing eccentrics to flourish and having a non-linear communication style that Gialluca describes as "non-monotonous systems." Argentines can joke profoundly and be eccentric while simultaneously being completely organized and straightforward, demonstrating high emotional intelligence and maturity that comes from their unique cultural blend of European romanticism and Latino lightheartedness.
2. Argentina as an Underrecognized Cultural Superpower: Despite being introverted about their achievements, Argentina produces an enormous amount of global culture through music, literature, and iconic figures like Borges, Maradona, Messi, and Che Guevara. These cultural exports have shaped entire generations worldwide, with Argentina "stealing the thunder" from other nations and creating lasting soft power influence that people don't fully recognize as Argentine.
3. AI's Cultural Impact Follows Oscillating Patterns: Culture operates as a dynamic system that oscillates between centralization and decentralization like a sine wave. AI currently represents a massive centralizing force, as seen in LinkedIn's homogenized content, but this will inevitably trigger a decentralization phase. The speed of this cultural transformation has accelerated dramatically, with changes that once took generations now happening in years.
4. The Coming Bifurcation of AI Futures: Gialluca identifies two extreme possible endpoints for AI development: complete centralized control (the "Mordor" scenario with total surveillance) or complete chaos where everyone has access to dangerous capabilities like creating weapons or viruses. Finding a middle path between these extremes is essential for society's survival, requiring careful equilibrium between accessibility and safety.
5. Individual AI Labs Are Becoming Democratically Accessible: Gialluca outlines a tier system for AI capabilities, where individuals can now build "tier one" labs capable of fine-tuning models and processing massive datasets for tens of thousands of dollars. This democratization means that capabilities once requiring teams of PhD scientists can now be achieved by dedicated individuals, fundamentally changing the landscape of AI development and access.
6. Hardware Constraints Are the New Limiting Factor: While AI capabilities are rapidly advancing, practical implementation is increasingly constrained by hardware availability and cost. RAM prices have tripled in recent months, and the challenge of managing enormous heat output from powerful processors requires sophisticated cooling systems. These physical limitations are becoming the primary bottleneck for individual AI development.
7. Data Quality Over Quantity Is the Critical Challenge: The main bottleneck for AI advancement is no longer energy or GPUs, but high-quality data for training. Early data labeling efforts produced poor results because labelers lacked domain expertise. The future lies in reinforcement learning (RL) environments where AI systems can generate their own high-quality training data, representing a fundamental shift in how AI systems learn and develop.
At 22, Brendan Foody is both the youngest Conversations with Tyler guest ever and the youngest unicorn founder on record. His company Mercor hires the experts who train frontier AI models—from poets grading verse to economists building evaluation frameworks—and has become one of the fastest-growing startups in history. Tyler and Brendan discuss why Mercor pays poets $150 an hour, why AI labs need rubrics more than raw text, whether we should enshrine the aesthetic standards of past eras rather than current ones, how quickly models are improving at economically valuable tasks, how long until AI can stump Cass Sunstein, the coming shift toward knowledge workers building RL environments instead of doing repetitive analysis, how to interview without falling for vibes, why nepotism might make a comeback as AI optimizes everyone's cover letters, scaling the Thiel Fellowship 100,000X, what his 8th-grade donut empire taught him about driving out competition, the link between dyslexia and entrepreneurship, dining out and dating in San Francisco, Mercor's next steps, and more. Read a full transcript enhanced with helpful links, or watch the full video on the new dedicated Conversations with Tyler channel. Recorded October 16th, 2025. Other ways to connect Follow us on X and Instagram Follow Tyler on X Follow Brendan on X Sign up for our newsletter Join our Discord Email us: cowenconvos@mercatus.gmu.edu Learn more about Conversations with Tyler and other Mercatus Center podcasts here. Timestamps 00:00:00 - Hiring poets to teach AI 00:05:29 - Measuring real-world AI progress 00:13:25 - Why rubrics are the new oil 00:18:44 - Enshrining taste in LLMs 00:22:38 - Turning society into one giant RL machine 00:26:37 - When AI will stump experts 00:30:46 - AI and employment 00:35:05 - Why vibes-based hiring fails 00:39:55 - Solving labor market matching problems 00:45:01 - Scaling the Thiel Fellowship 00:48:11 - A hypothetical gap year 00:50:31 - Donuts, debates, and dyslexia 00:56:15 - Dating and dining out 00:59:01 - Mercor's next steps
Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... Behind the Bunker Paintball Podcast is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
Physical Intelligence's Karol Hausman and Tobi Springenberg believe that robotics has been held back not by hardware limitations, but by an intelligence bottleneck that foundation models can solve. Their end-to-end learning approach combines vision, language, and action into models like π0 and π*0.6, enabling robots to learn generalizable behaviors rather than task-specific programs. The team prioritizes real-world deployment and uses RL from experience to push beyond what imitation learning alone can achieve. Their philosophy—that a single general-purpose model can handle diverse physical tasks across different robot embodiments—represents a fundamental shift in how we think about building intelligent machines for the physical world. Hosted by Alfred Lin and Sonya Huang, Sequoia Capital
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training super intelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

Timestamps
00:00 Introduction to AI and Reinforcement Learning
03:12 The Evolution of AI Training Data
05:59 Gaming Engines and AI Development
08:51 Virtual Reality and Robotics Training
11:52 The Future of Robotics and AI Collaboration
14:55 Building Applications with AI Tools
17:57 The Philosophical Implications of AI
20:49 Real-World Workflows and RL Environments
26:35 The Impact of Technology on Human Cognition
28:36 Cultural Resistance to AI and Data Collection
31:12 The Bottleneck of High-Quality Data in AI
32:57 Philosophical Perspectives on Data
35:43 The Future of AI Training and Human Collaboration
39:09 The Role of Subject Matter Experts in Data Quality
43:20 The Evolution of Work in the Age of AI
46:48 Convergence of AI and Human Experience

Key Insights
1. Reinforcement Learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment.
2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.
3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.
4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.
5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation. Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.
6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train models more similarly to how human children learn through constant environmental observation, rather than traditional batch-processing approaches.
7. The convergence of simulation, robotics, and AI is creating unprecedented philosophical and practical challenges. As synthetic worlds become indistinguishable from reality and AI agents gain autonomy, we're entering a phase where the boundaries between digital and physical, human and artificial intelligence, become increasingly blurred, requiring careful consideration of dependency, agency, and the preservation of human capabilities.
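For readers who want the verification loop from the first key insight above in concrete form, here is a minimal, self-contained Python sketch. It is an illustration, not Turing's actual tooling; the task generator, agent stand-in, and verifier below are hypothetical. The point is the shape of the loop: attempt a task, check it programmatically, and let feedback on failures drive iteration until the success rate stabilizes.

```python
import random

def make_task():
    """Hypothetical enterprise-workflow task: fill a CRM record correctly."""
    return {"account": f"ACME-{random.randint(100, 999)}", "stage": "closed-won"}

def agent_attempt(task, skill):
    """Stand-in for an agent's policy: succeeds with probability `skill`,
    otherwise corrupts a field, mimicking a failed workflow step."""
    attempt = dict(task)
    if random.random() > skill:
        attempt["stage"] = "open"
    return attempt

def verify(task, attempt):
    """The environment's verifier: a binary, automatically checkable reward."""
    return attempt == task

def run_environment(episodes=2000, skill=0.3, lift=0.001):
    """Attempt, verify, and let failure feedback nudge the policy while
    tracking the overall success rate."""
    successes = 0
    for _ in range(episodes):
        task = make_task()
        ok = verify(task, agent_attempt(task, skill))
        successes += ok
        if not ok:  # feedback on failures is what drives learning here
            skill = min(1.0, skill + lift)
    return successes / episodes

if __name__ == "__main__":
    random.seed(0)
    print(f"success rate over the run: {run_environment():.1%}")
```

A real RL environment replaces the toy `skill` variable with model weights updated by a reinforcement learning algorithm, but the attempt-verify-iterate structure is the same.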
From undergraduate research seminars at Princeton to winning Best Paper award at NeurIPS 2025, Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzcinski, Benjamin Eysenbach defied conventional wisdom by scaling reinforcement learning networks to 1,000 layers deep—unlocking performance gains that the RL community thought impossible. We caught up with the team live at NeurIPS to dig into the story behind RL1000: why deep networks have worked in language and vision but failed in RL for over a decade (spoiler: it's not just about depth, it's about the objective), how they discovered that self-supervised RL (learning representations of states, actions, and future states via contrastive learning) scales where value-based methods collapse, the critical architectural tricks that made it work (residual connections, layer normalization, and a shift from regression to classification), why scaling depth is more parameter-efficient than scaling width (linear vs. quadratic growth), how Jax and GPU-accelerated environments let them collect hundreds of millions of transitions in hours (the data abundance that unlocked scaling in the first place), the "critical depth" phenomenon where performance doesn't just improve—it multiplies once you cross 15M+ transitions and add the right architectural components, why this isn't just "make networks bigger" but a fundamental shift in RL objectives (their code doesn't have a line saying "maximize rewards"—it's pure self-supervised representation learning), how deep teacher, shallow student distillation could unlock deployment at scale (train frontier capabilities with 1000 layers, distill down to efficient inference models), the robotics implications (goal-conditioned RL without human supervision or demonstrations, scaling architecture instead of scaling manual data collection), and their thesis that RL is finally ready to scale like language and vision—not by throwing compute at value functions, but by borrowing the self-supervised, representation-learning paradigms that made the rest of deep learning work. We discuss: The self-supervised RL objective: instead of learning value functions (noisy, biased, spurious), they learn representations where states along the same trajectory are pushed together, states along different trajectories are pushed apart—turning RL into a classification problem Why naive scaling failed: doubling depth degraded performance, doubling again with residual connections and layer norm suddenly skyrocketed performance in one environment—unlocking the "critical depth" phenomenon Scaling depth vs. 
width: depth grows parameters linearly, width grows quadratically—depth is more parameter-efficient and sample-efficient for the same performance The Jax + GPU-accelerated environments unlock: collecting thousands of trajectories in parallel meant data wasn't the bottleneck, and crossing 15M+ transitions was when deep networks really paid off The blurring of RL and self-supervised learning: their code doesn't maximize rewards directly, it's an actor-critic goal-conditioned RL algorithm, but the learning burden shifts to classification (cross-entropy loss, representation learning) instead of TD error regression Why scaling batch size unlocks at depth: traditional RL doesn't benefit from larger batches because networks are too small to exploit the signal, but once you scale depth, batch size becomes another effective scaling dimension — RL1000 Team (Princeton) 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities: https://openreview.net/forum?id=s0JVsx3bx1 Chapters 00:00:00 Introduction: Best Paper Award and NeurIPS Poster Experience 00:01:11 Team Introductions and Princeton Research Origins 00:03:35 The Deep Learning Anomaly: Why RL Stayed Shallow 00:04:35 Self-Supervised RL: A Different Approach to Scaling 00:05:13 The Breakthrough Moment: Residual Connections and Critical Depth 00:07:15 Architectural Choices: Borrowing from ResNets and Avoiding Vanishing Gradients 00:07:50 Clarifying the Paper: Not Just Big Networks, But Different Objectives 00:08:46 Blurring the Lines: RL Meets Self-Supervised Learning 00:09:44 From TD Errors to Classification: Why This Objective Scales 00:11:06 Architecture Details: Building on Braw and SymbaFowl 00:12:05 Robotics Applications: Goal-Conditioned RL Without Human Supervision 00:13:15 Efficiency Trade-offs: Depth vs Width and Parameter Scaling 00:15:48 JAX and GPU-Accelerated Environments: The Data Infrastructure 00:18:05 World Models and Next State Classification 00:22:37 Unlocking Batch Size Scaling Through Network Capacity 00:24:10 Compute Requirements: State-of-the-Art on a Single GPU 00:21:02 Future Directions: Distillation, VLMs, and Hierarchical Planning 00:27:15 Closing Thoughts: Challenging Conventional Wisdom in RL Scaling
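To make the objective concrete, the NumPy sketch below illustrates the contrastive, classification-style loss described above. It is a toy rendering of the idea under assumed shapes, not the RL1000 codebase. The encoder comment also spells out the depth-versus-width arithmetic: each added d x d residual block grows parameters linearly with depth, while widening a layer grows its parameter count quadratically.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, layers):
    # Toy residual encoder with per-row normalization (a stand-in for the
    # layer norm used in the paper). Each extra d x d layer adds parameters
    # linearly in depth; doubling the width would add ~4x per layer.
    for w in layers:
        h = np.maximum(x @ w, 0.0)                                   # ReLU block
        x = x + h                                                    # residual connection
        x = x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)   # normalization
    return x

def contrastive_rl_loss(states, futures, layers):
    """Self-supervised RL objective as classification: a state and a future
    state drawn from the same trajectory form the positive pair; the other
    futures in the batch serve as negatives (InfoNCE-style cross-entropy).
    Note there is no term that maximizes reward directly."""
    z_s = embed(states, layers)                  # (B, d)
    z_f = embed(futures, layers)                 # (B, d)
    logits = z_s @ z_f.T                         # (B, B) similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal

# Toy batch: "futures" are noisy neighbours of their states, standing in for
# pairs sampled from the same trajectory.
B, d, depth = 64, 32, 8
layers = [rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(depth)]
states = rng.normal(size=(B, d))
futures = states + 0.1 * rng.normal(size=(B, d))
print(f"contrastive loss: {contrastive_rl_loss(states, futures, layers):.3f}")
```

Nothing in this loss maximizes reward; the learning signal is purely which future state belongs to which state, which is the classification framing the team credits for scaling.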
From pre-training data curation to shipping GPT-4o, o1, o3, and now GPT-5 thinking and the shopping model, Josh McGrath has lived through the full arc of OpenAI's post-training evolution—from the PPO vs DPO debates of 2023 to today's RLVR era, where the real innovation isn't optimization methods but data quality, signal trust, and token efficiency. We sat down with Josh at NeurIPS 2025 to dig into the state of post-training heading into 2026: why RLHF and RLVR are both just policy gradient methods (the difference is the input data, not the math), how GRPO from DeepSeek Math was underappreciated as a shift toward more trustworthy reward signals (math answers you can verify vs. human preference you can't), why token efficiency matters more than wall-clock time (GPT-5 to 5.1 bumped evals and slashed tokens), how Codex has changed his workflow so much he feels "trapped" by 40-minute design sessions followed by 15-minute agent sprints, the infrastructure chaos of scaling RL ("way more moving parts than pre-training"), why long context will keep climbing but agents + graph walks might matter more than 10M-token windows, the shopping model as a test bed for interruptability and chain-of-thought transparency, why personality toggles (Anton vs Clippy) are a real differentiator users care about, and his thesis that the education system isn't producing enough people who can do both distributed systems and ML research—the exact skill set required to push the frontier when the bottleneck moves every few weeks. We discuss: Josh's path: pre-training data curation → post-training researcher at OpenAI, shipping GPT-4o, o1, o3, GPT-5 thinking, and the shopping model Why he switched from pre-training to post-training: "Do I want to make 3% compute efficiency wins, or change behavior by 40%?" The RL infrastructure challenge: way more moving parts than pre-training (tasks, grading setups, external partners), and why babysitting runs at 12:30am means jumping into unfamiliar code constantly How Codex has changed his workflow: 40-minute design sessions compressed into 15-minute agent sprints, and the strange "trapped" feeling of waiting for the agent to finish The RLHF vs RLVR debate: both are policy gradient methods, the real difference is data quality and signal trust (human preference vs. 
verifiable correctness) Why GRPO (from DeepSeek Math) was underappreciated: not just an optimization trick, but a shift toward reward signals you can actually trust (math answers over human vibes) The token efficiency revolution: GPT-5 to 5.1 bumped evals and slashed tokens, and why thinking in tokens (not wall-clock time) unlocks better tool-calling and agent workflows Personality toggles: Anton (tool, no warmth) vs Clippy (friendly, helpful), and why Josh uses custom instructions to make his model "just a tool" The router problem: having a router at the top (GPT-5 thinking vs non-thinking) and an implicit router (thinking effort slider) creates weird bumps, and why the abstractions will eventually merge Long context: climbing Graph Blocks evals, the dream of 10M+ token windows, and why agents + graph walks might matter more than raw context length Why the education system isn't producing enough people who can do both distributed systems and ML research, and why that's the bottleneck for frontier labs The 2026 vision: neither pre-training nor post-training is dead, we're in the fog of war, and the bottleneck will keep moving (so emotional stability helps) — Josh McGrath OpenAI: https://openai.com https://x.com/j_mcgraph Chapters 00:00:00 Introduction: Josh McGrath on Post-Training at OpenAI 00:04:37 The Shopping Model: Black Friday Launch and Interruptability 00:07:11 Model Personality and the Anton vs Clippy Divide 00:08:26 Beyond PPO vs DPO: The Data Quality Spectrum in RL 00:01:40 Infrastructure Challenges: Why Post-Training RL is Harder Than Pre-Training 00:13:12 Token Efficiency: The 2D Plot That Matters Most 00:03:45 Codex Max and the Flow Problem: 40 Minutes of Planning, 15 Minutes of Waiting 00:17:29 Long Context and Graph Blocks: Climbing Toward Perfect Context 00:21:23 The ML-Systems Hybrid: What's Hard to Hire For 00:24:50 Pre-Training Isn't Dead: Living Through Technological Revolution
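The claim that RLHF and RLVR share the same policy-gradient machinery, with GRPO's contribution being group-relative advantages computed over more trustworthy rewards, can be sketched in a few lines. The function names and toy data below are hypothetical illustrations, not OpenAI's or DeepSeek's code.

```python
import math

def verifiable_reward(answer: str, ground_truth: str) -> float:
    """RLVR-style signal: an automatically checkable answer (e.g. a math result)."""
    return 1.0 if answer.strip() == ground_truth else 0.0

def preference_reward(answer: str, reward_model) -> float:
    """RLHF-style signal: a learned scalar score standing in for human preference."""
    return reward_model(answer)

def grpo_advantages(rewards):
    """GRPO-style group-relative advantages: each sampled completion for the
    same prompt is scored against its group's mean and standard deviation,
    so no separate value network is needed."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards)) + 1e-6
    return [(r - mean) / std for r in rewards]

# Toy usage: four sampled answers to one math prompt, graded by the verifier.
answers = ["42", "41", "42", "7"]
rewards = [verifiable_reward(a, "42") for a in answers]
print(grpo_advantages(rewards))  # correct answers receive positive advantages

# Swapping `verifiable_reward` for `preference_reward` changes only where the
# reward numbers come from; the advantage computation and the policy-gradient
# update downstream stay the same, which is the point made in the episode.
```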
From Berkeley robotics and OpenAI's 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI's reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn't change the world when o1 actually achieved it, how RL doesn't generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017–2022) and why most of it didn't pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory—where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity. We discuss: Ashvin's path: Berkeley robotics PhD → OpenAI 2017 intern (Dota era) → o1/o3 reasoning team → Cursor ML lead in three months Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman's take) The IOI Gold paradox: "If you told me we'd achieve IOI Gold in 2022, I'd assume we could all go on vacation—AI solved, no point working anymore. But life is still the same." The RL research era (2017–2022) and why most of it didn't pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize Inside the o1 origin story: a dozen people, conviction from Ilya and Jakob Pachocki that RL would work, small-scale prototypes producing "surprisingly accurate reasoning traces" on math, and first-principles belief that scaled The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, DataDog) living in the product Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20–25 person ML team with high-taste co-founders who code daily The next paradigm shift: continual learning with infinite memory—models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room) Why off-policy RL is unstable (Ashvin's favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice — Ashvin Nair Cursor: https://cursor.com X: https://x.com/ashvinnair_ Chapters 00:00:00 Introduction: From Robotics to Cursor via OpenAI 00:01:58 The Robotics to LLM Agent Transition: Why Code Won 00:09:11 RL Research 
Winter and Academic Overfitting 00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI 00:21:30 OpenAI's Reasoning Journey: From Codex to O1 00:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance 00:22:39 RL for Reasoning: The O-Series Conviction and Scaling 00:25:47 O1 to O3: Smooth Internal Progress vs External Hype Cycles 00:33:07 Why Cursor: Co-Designing Products and Models for Real Work 00:34:14 Composer and the Future: Online Learning Every Two Hours 00:35:15 Continual Learning: The Missing Paradigm Shift 00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable
From investing through the modern data stack era (DBT, Fivetran, and the analytics explosion) to now investing at the frontier of AI infrastructure and applications at Amplify Partners, Sarah Catanzaro has spent years at the intersection of data, compute, and intelligence—watching categories emerge, merge, and occasionally disappoint. We caught up with Sarah live at NeurIPS 2025 to dig into the state of AI startups heading into 2026: why $100M+ seed rounds with no near-term roadmap are now the norm (and why that terrifies her), what the DBT-Fivetran merger really signals about the modern data stack (spoiler: it's not dead, just ready for IPO), how frontier labs are using DBT and Fivetran to manage training data and agent analytics at scale, why data catalogs failed as standalone products but might succeed as metadata services for agents, the consumerization of AI and why personalization (memory, continual learning, K-factor) is the 2026 unlock for retention and growth, why she thinks RL environments are a fad and real-world logs beat synthetic clones every time, and her thesis for the most exciting AI startups: companies that marry hard research problems (RAG, rule-following, continual learning) with killer applications that were simply impossible before. We discuss: The DBT-Fivetran merger: not the death of the modern data stack, but a path to IPO scale (targeting $600M+ combined revenue) and a signal that both companies were already winning their categories How frontier labs use data infrastructure: DBT and Fivetran for training data curation, agent analytics, and managing increasingly complex interactions—plus the rise of transactional databases (RocksDB) and efficient data loading (Vortex) for GPU-bound workloads Why data catalogs failed: built for humans when they should have been built for machines, focused on discoverability when the real opportunity was governance, and ultimately subsumed as features inside Snowflake, DBT, and Fivetran The $100M+ seed phenomenon: raising massive rounds at billion-dollar valuations with no 6-month roadmap, seven-day decision windows, and founders optimizing for signal ("we're a unicorn") over partnership or dilution discipline Why world models are overhyped but underspecified: three competing definitions, unclear generalization across use cases (video games ≠ robotics ≠ autonomous driving), and a research problem masquerading as a product category The 2026 theme: consumerization of AI via personalization—memory management, continual learning, and solving retention/churn by making products learn skills, preferences, and adapt as the world changes (not just storing facts in cursor rules) Why RL environments are a fad: labs are paying 7–8 figures for synthetic clones when real-world logs, traces, and user activity (à la Cursor) are richer, cheaper, and more generalizable Sarah's investment thesis: research-driven applications that solve hard technical problems (RAG for Harvey, rule-following for Sierra, continual learning for the next killer app) and unlock experiences that were impossible before Infrastructure bets: memory, continual learning, stateful inference, and the systems challenges of loading/unloading personalized weights at scale Why K-factor and growth fundamentals matter again: AI felt magical in 2023–2024, but as the magic fades, retention and virality are back—and most AI founders have never heard of K-factor — Sarah Catanzaro X: https://x.com/sarahcat21 Amplify Partners: https://amplifypartners.com/ Where to find Latent Space X: 
https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction: Sarah Catanzaro's Journey from Data to AI 00:01:02 The DBT-Fivetran Merger: Not the End of the Modern Data Stack 00:05:26 Data Catalogs and What Went Wrong 00:08:16 Data Infrastructure at AI Labs: Surprising Insights 00:10:13 The Crazy Funding Environment of 2024-2025 00:17:18 World Models: Hype, Confusion, and Market Potential 00:18:59 Memory Management and Continual Learning: The Next Frontier 00:23:27 Agent Environments: Just a Fad? 00:25:48 The Perfect AI Startup: Research Meets Application 00:28:02 Closing Thoughts and Where to Find Sarah
Adam Marblestone is CEO of Convergent Research. He's had a very interesting past life: he was a research scientist at Google DeepMind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.

In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya's question: how does the genome encode abstract reward functions? Turns out, they're all the same question.

Watch on YouTube; read the transcript.

Sponsors
* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn't have investigated this question without it. Try Gemini 3 Pro today gemini.google.com
* Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox's network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps
(00:00:00) – The brain's secret sauce is the reward functions, not the architecture
(00:22:20) – Amortized inference and what the genome actually stores
(00:42:42) – Model-based vs model-free RL in the brain
(00:50:31) – Is biological hardware a limitation or an advantage?
(01:03:59) – Why a map of the human brain is important
(01:23:28) – What value will automating math have?
(01:38:18) – Architecture of the brain

Further reading
Intro to Brain-Like-AGI Safety - Steven Byrnes's theory of the learning vs steering subsystem; referenced throughout the episode.
A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI
Adam's blog, and Convergent Research's blog on essential technologies.
A Tutorial on Energy-Based Learning by Yann LeCun
What Does It Mean to Understand a Neural Network? - Kording & Lillicrap
E11 Bio and their brain connectomics approach
Sam Gershman on what dopamine is doing in the brain
Gwern's proposal on training models on the brain's hidden states

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... *Behind the Bunker Paintball Podcast* is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
Read the essay here.

Timestamps
00:00:00 What are we scaling?
00:03:11 The value of human labor
00:05:04 Economic diffusion lag is cope
00:06:34 Goal-post shifting is justified
00:08:23 RL scaling
00:09:18 Broadly deployed intelligence explosion

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Send us a text

Recorded Dec 20, 2025 - Enjoy the final episode in our teaching series, "The Biblical Roots of Christmas." This week, we turn to the celebration of Christmas itself—asking not only what we celebrate, but why. We'll explore how the fulfillment found in Jesus Christ naturally gives rise to joyful remembrance, how the early people of God marked God's redemptive acts, and what Scripture teaches us about honoring Christ through meaningful celebration.

Let's rediscover together why Christmas is not merely a tradition to defend or dismiss, but a gospel truth to rejoice in.

The Biblical Roots Ministries
Our website
Our YouTube Channel
Prof. Solberg's Blog
Support our Ministry (Thank you!)

What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities.

They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas.

The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.

(0:00) Intro
(1:51) Reflections on NeurIPS Conference
(5:14) Are AI Models Plateauing?
(11:12) Reinforcement Learning and Enterprise Adoption
(16:16) Future Research Vectors in AI
(28:40) The Role of Neo Labs
(39:35) The Myth of the Great Man Theory in Science
(41:47) OpenAI's Code Red and Market Position
(47:19) Disney and OpenAI's Strategic Partnership
(51:28) Meta's Super Intelligence Team Challenges
(54:33) US-China AI Chip Dynamics
(1:00:54) Amazon's Nova Forge and Enterprise AI
(1:03:38) End of Year Reflections and Predictions

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn't just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google's most powerful model — what actually changed, and why the real work today is no longer "training a model," but building a full system.

We unpack the "secret recipe" idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an "infinite data" era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren't dead, but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.

From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long-context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.

Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind

Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) – Cold intro: "We're ahead of schedule" + AI is now a system
(00:58) – Oriol's "secret recipe": better pre- + post-training
(02:09) – Why AI progress still isn't slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind's advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – "Research taste": integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren't dead (but scale isn't everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can't say (and why)
(37:18) – Long context + attention: what's next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + "vibe coding"
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – "No end in sight" for progress + closing
Datawizz is pioneering continuous reinforcement learning infrastructure for AI systems that need to evolve in production, not ossify after deployment. After building and exiting RapidAPI—which served 10 million developers and had at least one team at 75% of Fortune 500 companies using and paying for the platform—Founder and CEO Iddo Gino returned to building when he noticed a pattern: nearly every AI agent pitch he reviewed as an angel investor assumed models would simultaneously get orders of magnitude better and cheaper. In a recent episode of BUILDERS, we sat down with Iddo to explore why that dual assumption breaks most AI economics, how traditional ML training approaches fail in the LLM era, and why specialized models will capture 50-60% of AI inference by 2030. Topics Discussed Why running two distinct businesses under one roof—RapidAPI's developer marketplace and enterprise API hub—ultimately capped scale despite compelling synergy narratives The "Big Short moment" reviewing AI pitches: every business model assumed simultaneous 1-2 order of magnitude improvements in accuracy and cost Why companies spending 2-3 months on fine-tuning repeatedly saw frontier models (GPT-4, Claude 3) obsolete their custom work The continuous learning flywheel: online evaluation → suspect inference queuing → human validation → daily/weekly RL batches → deployment How human evaluation companies like Scale AI shift from offline batch labeling to real-time inference correction queues Early GTM through LinkedIn DMs to founders running serious agent production volume, working backward through less mature adopters ICP discovery: qualifying on whether 20% accuracy gains or 10x cost reductions would be transformational versus incremental The integration layer approach: orchestrating the continuous learning loop across observability, evaluation, training, and inference tools Why the first $10M is about selling to believers in continuous learning, not evangelizing the category GTM Lessons For B2B Founders Recognize when distribution narratives mask structural incompatibility: RapidAPI had 10 million developers and teams at 75% of Fortune 500 paying for the platform—massive distribution that theoretically fed enterprise sales. The problem: Iddo could always find anecdotes where POC teams had used RapidAPI, creating a compelling story about grassroots adoption. The critical question he should have asked earlier: "Is self-service really the driver for why we're winning deals, or is it a nice-to-have contributor?" When two businesses have fundamentally different product roadmaps, cultures, and buying journeys, distribution overlap doesn't create a sustainable single company. Stop asking if synergies exist—ask if they're causal. Qualify on whether improvements cross phase-transition thresholds: Datawizz disqualifies prospects who acknowledge value but lack acute pain. The diagnostic questions: "If we improved model accuracy by 20%, how impactful is that?" and "If we cut your costs 10x, what does that mean?" Companies already automating human labor often respond that inference costs are rounding errors compared to savings. The ideal customers hit differently: "We need accuracy at X% to fully automate this process and remove humans from the loop. Until then, it's just AI-assisted. Getting over that line is a step-function change in how we deploy this agent." Qualify on whether your improvement crosses a threshold that changes what's possible, not just what's better. 
Use discovery to map market structure, not just validate hypotheses: Iddo validated that the most mature companies run specialized, fine-tuned models in production. The surprise: "The chasm between them and everybody else was a lot wider than I thought." This insight reshaped their entire strategy—the tooling gap, approaches to model development, and timeline to maturity differed dramatically across segments. Most founders use discovery to confirm their assumptions. Better founders use it to understand where different cohorts sit on the maturity curve, what bridges or blocks their progression, and which segments can buy versus which need multi-year evangelism. Target spend thresholds that indicate real commitment: Datawizz focuses on companies spending "at a minimum five to six figures a month on AI and specifically on LLM inference, using the APIs directly"—meaning they're building on top of OpenAI/Anthropic/etc., not just using ChatGPT. This filters for companies with skin in the game. Below that threshold, AI is an experiment. Above it, unit economics and quality bars matter operationally. For infrastructure plays, find the spend level that indicates your problem is a daily operational reality, not a future consideration. Structure discovery to extract insight, not close deals: Iddo's framework: "If I could run [a call where] 29 of 30 minutes could be us just asking questions and learning, that would be the perfect call in my mind." He compared it to "the dentist with the probe trying to touch everything and see where it hurts." The most valuable calls weren't those that converted to POCs—they came from people who approached the problem differently or had conflicting considerations. In hot markets with abundant budgets, founders easily collect false positives by selling when they should be learning. The discipline: exhaust your question list before explaining what you build. If they don't eventually ask "What do you do?" you're not surfacing real pain. Avoid the false-positive trap in well-funded categories: Iddo identified a specific risk in AI: "You can very easily run these calls, you think you're doing discovery, really you're doing sales, you end up getting a bunch of POCs and maybe some paying customers. So you get really good initial signs but you've never done any actual discovery. You have all the wrong indications—you're getting a lot of false positive feedback while building the completely wrong thing." When capital is abundant and your space is hot, early revenue can mask product-market misalignment. Good initial signs aren't validation if you skipped the work to understand why people bought. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
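The continuous learning flywheel mentioned above (online evaluation, suspect inference queuing, human validation, periodic RL batches, deployment) can be expressed as a schematic loop. The sketch below is a hypothetical illustration of that flow, not Datawizz's implementation; every function name is a stand-in.

```python
import random
from collections import deque

def online_evaluate(inference):
    """Hypothetical online evaluator: flag low-confidence ('suspect') inferences."""
    return inference["confidence"] < 0.7

def human_validate(inference):
    """Stand-in for a human rater correcting or approving a suspect inference."""
    inference["label"] = "corrected" if inference["confidence"] < 0.4 else "approved"
    return inference

def rl_update(batch):
    """Placeholder for a daily/weekly RL fine-tuning job on the validated batch."""
    pass

def flywheel(traffic, batch_size=8):
    review_queue, training_batch, version = deque(), [], 0
    for inference in traffic:
        if online_evaluate(inference):            # 1. online evaluation
            review_queue.append(inference)        # 2. suspect inference queuing
        while review_queue:                       # 3. human validation drains the queue
            training_batch.append(human_validate(review_queue.popleft()))
        if len(training_batch) >= batch_size:     # 4. periodic RL batch
            rl_update(training_batch)             # 5. train, then redeploy the new version
            version += 1
            training_batch.clear()
    return f"model-v{version}"

random.seed(0)
traffic = [{"id": i, "confidence": random.random()} for i in range(200)]
print(flywheel(traffic))
```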
Our 228th episode with a summary and discussion of last week's big AI news!

Recorded on 12/12/2025

Hosted by Andrey Kurenkov and Jeremie Harris

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
OpenAI's latest model GPT-5.2 demonstrates improved performance and enhanced multi-modal capabilities but comes with increased costs and a different knowledge cutoff date.
Disney invests $1 billion in OpenAI to generate Disney character content, creating unique licensing agreements across characters from Marvel, Pixar, and Star Wars franchises.
The U.S. government imposes new AI chip export rules involving security reviews, while simultaneously moving to prevent states from independently regulating AI.
DeepMind releases a paper outlining the challenges and findings in scaling multi-agent systems, highlighting the complexities of tool coordination and task performance.

Timestamps:
(00:00:00) Intro / Banter
(00:01:19) News Preview

Tools & Apps
(00:01:58) GPT-5.2 is OpenAI's latest move in the agentic AI battle | The Verge
(00:08:48) Runway releases its first world model, adds native audio to latest video model | TechCrunch
(00:11:51) Google says it will link to more sources in AI Mode | The Verge
(00:12:24) ChatGPT can now use Adobe apps to edit your photos and PDFs for free | The Verge
(00:13:05) Tencent releases Hunyuan 2.0 with 406B parameters

Applications & Business
(00:16:15) China set to limit access to Nvidia's H200 chips despite Trump export approval
(00:21:02) Disney investing $1 billion in OpenAI, will allow characters on Sora
(00:24:48) Unconventional AI confirms its massive $475M seed round
(00:29:06) Slack CEO Denise Dresser to join OpenAI as chief revenue officer | TechCrunch
(00:31:18) The state of enterprise AI

Projects & Open Source
(00:33:49) [2512.10791] The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
(00:36:27) Claude 4.5 Opus' Soul Document

Research & Advancements
(00:43:49) [2512.08296] Towards a Science of Scaling Agent Systems
(00:48:43) Evaluating Gemini Robotics Policies in a Veo World Simulator
(00:52:10) Guided Self-Evolving LLMs with Minimal Human Supervision
(00:56:08) Martingale Score: An Unsupervised Metric for Bayesian Rationality in LLM Reasoning
(01:00:39) [2512.07783] On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
(01:04:42) Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
(01:09:42) Google's AI unit DeepMind announces UK 'automated research lab'

Policy & Safety
(01:10:28) Trump Moves to Stop States From Regulating AI With a New Executive Order - The New York Times
(01:13:54) [2512.09742] Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
(01:17:57) Forecasting AI Time Horizon Under Compute Slowdowns
(01:20:46) AI Security Institute focuses on AI measurements and evaluations
(01:21:16) Nvidia AI Chips to Undergo Unusual U.S. Security Review Before Export to China
(01:22:01) U.S. Authorities Shut Down Major China-Linked AI Tech Smuggling Network

Synthetic Media & Art
(01:24:01) RSL 1.0 has arrived, allowing publishers to ask AI companies pay to scrape content | The Verge

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We Like Shooting Episode 641 This episode of We Like Shooting is brought to you by: C&G Holsters, Midwest Industries, Gideon Optics, Primary Arms, Medical Gear Outfitters, Die Free Co., Blue Alpha, and Bowers Group Welcome to the We Like Shooting Show, episode 641! Our cast tonight is Jeremy Pozderac, Aaron Krieger, Nick Lynch, and me Shawn Herrin, welcome to the show! Text Dear WLS or Reviews. +1 743 500 2171 - Gear Chat Shawn - PopStop™ Review: Innovative Solutions for Shooting Enthusiasts PopStop™ is a device designed to eliminate first round pop (FRP) in suppressors by injecting inert carbon dioxide to replace oxygen, thereby reducing impulse noise and suppressor flash. It has been shown to achieve noise reductions of up to 9 dB and can stabilize velocity standard deviations. The product is not compatible with all firearms, particularly 9mm pistols, and requires specific barrel measurements for proper use. Its introduction aims to enhance suppressor performance within the gun community. Shawn - RL-100 Pre-Order Announcement Cloud Defensive has announced the RL-100, a new entry-level rifle light that combines performance with affordability, priced at $149.99 for early pre-orders. Designed for reliability and ease of use, the RL-100 aims to provide a high-quality lighting option for budget-conscious users and agencies without sacrificing performance. This product's introduction may impact the gun community by offering a cost-effective alternative to higher-priced weapon lights, which could enhance accessibility for everyday users and law enforcement. Shawn - Long Range Shooting Tips Advanced long range shooting by Cleckner Nick - KRG Bravo KRG Bravo Shawn - Hi Point's AR-15 Fun Hi Point AR-15 Shawn - Precision Shooting Simplified Kelbly Precision Element Shawn - C&G Holsters News! C&G Holsters Announcement Jeremy - Savage 24F and Chiappa 12ga barrel inserts Bullet Points Chiappa 44 mag Gun Fights Step right up for "Gun Fights," the high-octane segment hosted by Nick Lynch, where our cast members go head-to-head in a game show-style showdown! Each contestant tries to prove their gun knowledge dominance. It's a wild ride of bids, bluffs, and banter—who will come out on top? Tune in to find out! Agency Brief AGENCY BRIEF: SHAYS' REBELLION 1780 – 1785: Economic Conditions Veterans' Pay: Paid in depreciated Continental currency/IOUs. State Policy: Massachusetts demands taxes in hard currency (gold/silver). The Debt: Boston merchants control state debt; courts aggressively foreclose on farms and imprison debtors. August – October 1786: Escalation Aug 29: 1,500 "Regulators" seize the Northampton courthouse to stop debtor trials. Sept: Armed shutdowns spread to Worcester, Concord, and Great Barrington. Captain Daniel Shays emerges as leader. Sept 26: Shays (600 men) vs. Gen. Shepard (militia) at Springfield Supreme Judicial Court. No fire exchanged; court adjourns. Oct 20: Continental Congress authorizes troops but lacks funds. MA passes Riot Act (arrests without bail). January 1787: The Private Army Jan 4: Gov. Bowdoin authorizes a private militia. Funding: 125 Boston merchants subscribe £6,000. Force: 3,000 mercenaries raised, led by Gen. Benjamin Lincoln. January 25, 1787: Springfield Arsenal (The Climax) Objective: Shays leads ~1,200 men to seize 7,000 muskets/cannons at the federal arsenal. Defense: Gen. Shepard (900 militia) defends the arsenal. The Engagement: Shepard fires artillery warning shots over rebels' heads. Rebels advance. Shepard fires grapeshot directly into the ranks. 
Casualties: 4 rebels dead, 20 wounded. Rebels flee without firing. February – June 1787: The Fallout Feb 4: Gen. Lincoln marches overnight through a blizzard to Petersham, surprising retreating rebels. 150 captured; Shays escapes to Vermont. Spring Election: Gov. Bowdoin is voted out in a landslide; John Hancock elected Governor. June: Hancock issues broad pardons. Legislature enacts debt moratoriums and lowers taxes. 1787 – 1791: Constitutional Impact May 1787: Constitutional Convention convenes; Washington/Madison cite Shays' Rebellion as proof the Articles of Confederation failed. 1788: Anti-Federalists demand a Bill of Rights to check the power of the proposed federal standing army. 1791: Second Amendment ratified. Modern Parallels Narrative: Veterans labeled "insurrectionists" for resisting economic policy. Tactics: Use of private capital to fund state enforcement when tax revenue failed. Legal Precedent: Establishing the "well-regulated militia" as a counter-balance to federal military power. WLS is Lifestyle Jelly Roll and Gun Rights Jelly Roll wants his gun rights back to hunt after losing them for felonies. Deadpool Unleashed Dead pool Machine Head Introduces 94-Proof Bourbon Whiskey Machine Head has launched Shotgun Blast Whiskey, a 94-proof bourbon designed for fans who enjoy stronger spirits. This product aligns with the band's aggressive identity while remaining accessible as a traditional bourbon. The whiskey emphasizes classic bourbon flavors and is marketed as a lifestyle product, mirroring a trend of music collaborations in the spirits industry. Aaron's Alley Going Ballistic Manhunt Madness: Another Day, Another Gun Control Fail (no summary available) More Giffords Nonsense: Gun Control Before Facts (no summary available) When "Gun Control" Meets Reality: The Brown University Attack Details (no summary available) Gun Control: An Epic Fail at Bondi Beach (no summary available) "Legal Gun Ownership: The Unintended Target of Gun Control Fanatics" (no summary available) When Antique Gun Ownership Becomes a Crime: UK Cops Confiscate 129 Legal Firearms (no summary available) New Jersey's Carry Ban: Lawsuit Showdown or Just Another Dance with Gun Control? (no summary available) Traveling with NFA to get easier? Reviews ⭐⭐⭐⭐⭐ - from TwinDadARguy - Great show, been listening for about 4 or so years. Just heard the convo about Aaron's weird ability to pull interest from the fairer sex. You couldn't come up with a good word for it - I'm here to help. The perfect word is conFAUXdence. You're welcome. ⭐⭐⭐⭐⭐ - from Devin K - Where is the damn squares button!? Love this show and all the antics that come along with it. Lever action debate that would be fun to listen too. What's your favorite lever action caliber for whitetail hunting? What would be the one you would take if you needed to defend that SSB. #171, #fuckthethumb. ⭐⭐⭐⭐⭐ - from System AI - A review and comparison to bring us all back to Dungeon Crawler Carl. Let's pair each cast member to a Character from DCC. First, Shawn, obviously he's Carl. He's the main character. He's powerful. He's the reason we are all here. There may or may not be a Cat that led him here. He likely has someone obsessed with his feet and definitely only has heart boxers on behind his desk. Second, Aaron, he's Prepotene. Smart and powerful. Sometimes on the team, sometimes in the way, sometimes nowhere to be seen. Probably rides a Goat. Screams nonsense from time to time. Would be dead without the rest of the team. Third, Jeremy. Jeremy is Quasar. 
Swears constantly. Hates the leader/rulers of the galaxy and game. Is there every time we need him. Will likely be the reason the rest end up in a prison. Fourth, Savage. He's JuiceBox. Extremely smart. AI generated. Self aware. Playing the same game but may have a different motive. Likely to lead to the downfall of the show. Last, Nick. Nick is Samantha. Much more powerful than he's willing to let on. Always growing in power. A very important member to keep the show running. Would be dangerous if all his organs worked correctly. And Shawn has definitely been inside him. These comparisons cannot be altered. Debate will result in acceleration. Thanks for your attention to this matter. Signed, Gary/System AI. #nonotes Before we let you go - Join Gun Owners of America Tell your friends about the show and get backstage access by joining the Gun Cult at theguncult.com. No matter how tough your battle is today, we want you here to fight with us tomorrow. Don't struggle in silence; you can contact the suicide prevention line by dialing 988 from your phone. Remember - Always prefer Dangerous Freedom over peaceful slavery. We'll see you next time! Nick - @busbuiltsystems | Bus Built Systems Jeremy - @ret_actual | Rivers Edge Tactical Aaron - @machinegun_moses Savage - @savage1r Shawn - @dangerousfreedomyt | @camorado.cam | Camorado
The hosts dive into paintball basics for beginners, breaking down the sport into approachable steps. They explain the essential gear (marker/gun, hopper, tank, protective mask) and highlight what newcomers often overlook—like the importance of a well-fitting mask and reliable loader system. Next, they cover the fundamental rules and game formats: capture the flag, elimination, scenario play. They emphasise safety protocols (never removing your mask on the field, always chronograph your marker to legal FPS, clear communication). They also stress field etiquette—don't move bunkers, call your hits honestly, and respect referees. They then shift into strategy tips: how to pick your playing style (aggressive front-player vs. back-field support), coordinate with teammates, and use the available bunkers/cover effectively. A good tip: keep your body low, pop out for shots, and always move quickly between cover to avoid being an easy target. The hosts share some common rookie mistakes—shooting wildly rather than taking aimed bursts, failing to reload/have backup paint, focusing too much on your own play instead of good team positioning. They recommend new players practice first in low-pressure games, watch experienced players, and ask for feedback. Finally, they talk about choosing your first marker: budget, reliability, ease of maintenance, and the local field's rental gear—sometimes starting with rentals is a smart move until you know you're invested. They wrap up by encouraging listeners to get out on the field, learn by doing, and enjoy the camaraderie and fun of paintball rather than stressing perfect play. Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL... *Behind the Bunker Paintball Podcast* is a long-running weekly show dedicated to everything paintball. Hosted by passionate players and industry veterans, the podcast dives into the latest happenings in the sport, from new gear releases and product reviews to updates on tournaments and events around the world. It has built a loyal audience by combining serious paintball discussion with a lighthearted, entertaining approach that keeps both new players and seasoned veterans engaged.
Send us a text. Recorded Dec 13, 2025 - Enjoy episode 3 of our 4-week teaching series, The Biblical Roots of Christmas. This week, we turn from promise to fulfillment, exploring how the hopes of Israel find their “Yes” in Jesus Christ. We'll examine how the birth of Christ fulfills the Law and the Prophets, why the Incarnation stands at the center of God's redemptive plan, and what it means to say that in Jesus, the long-awaited Messiah has finally come. The Biblical Roots Ministries | Our website | Our YouTube Channel | Prof. Solberg's Blog | Support our Ministry (Thank you!) What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice? In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, cofounder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human-style generalization remains far ahead. Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI-driven economy could look like. Resources: Transcript: https://www.dwarkesh.com/p/ilya-sutsk... Apple Podcasts: https://podcasts.apple.com/us/podcast... Spotify: https://open.spotify.com/episode/7naO... Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends! Find a16z on X: https://x.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711 Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Send us a text. Recorded Dec 6, 2025. The 2nd episode of our 4-week teaching series, "The Biblical Roots of Christmas." This week, we turn to the great storyline of Scripture to examine the promises and prophecies that set the stage for the birth of Christ. From Eden to Abraham to the prophets of Israel, we trace the unfolding hope of a coming Redeemer and explore how the Incarnation fulfills God's ancient covenant promises. Bring your Bibles and your questions, and let's rediscover together how the long-awaited Messiah entered history in the fullness of time. The Biblical Roots Ministries | Our website | Our YouTube Channel | Prof. Solberg's Blog | Support our Ministry (Thank you!) What if Christmas felt sacred again? Full of Grace and Truth, the new book from award-winning author R. L. Solberg, invites you to rediscover the biblical story at the heart of the season. Available now in paperback and Kindle, with all proceeds supporting The Biblical Roots Ministries. Get your copy today on Amazon.com.
Edwin Chen is the founder and CEO of Surge AI, the company that teaches AI what's good vs. what's bad, powering frontier labs with elite data, environments, and evaluations. Surge surpassed $1 billion in revenue with under 100 employees last year, completely bootstrapped—the fastest company in history to reach this milestone. Before founding Surge, Edwin was a research scientist at Google, Facebook, and Twitter and studied mathematics, computer science, and linguistics at MIT.We discuss:1. How Surge reached over $1 billion in revenue with fewer than 100 people by obsessing over quality2. The story behind how Claude Code got so good at coding and writing3. The problems with AI benchmarks and why they're pushing AI in the wrong direction4. How RL environments are the next frontier in AI training5. Why Edwin believes we're still a decade away from AGI6. Why taste and human judgment shape which AI models become industry leaders7. His contrarian approach to company building that rejects Silicon Valley's “pivot and blitzscale” playbook8. How AI models will become increasingly differentiated based on the values of the companies building them—Brought to you by:Vanta—Automate compliance. Simplify security.WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUsCoda—The all-in-one collaborative workspace—Transcript: https://www.lennysnewsletter.com/p/surge-ai-edwin-chen—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180055059/my-biggest-takeaways-from-this-conversation—Where to find Edwin Chen:• X: https://x.com/echen• LinkedIn: https://www.linkedin.com/in/edwinzchen• Surge's blog: https://surgehq.ai/blog—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Edwin Chen(04:48) AI's role in business efficiency(07:08) Building a contrarian company(08:55) An explanation of what Surge AI does(09:36) The importance of high-quality data(13:31) How Claude Code has stayed ahead(17:37) Edwin's skepticism toward benchmarks(21:54) AGI timelines and industry trends(28:33) The Silicon Valley machine(33:07) Reinforcement learning and future AI training(39:37) Understanding model trajectories(41:11) How models have advanced and will continue to advance(42:55) Adapting to industry needs(44:39) Surge's research approach(48:07) Predictions for the next few years in AI(50:43) What's underhyped and overhyped in AI(52:55) The story of founding Surge AI(01:02:18) Lightning round and final thoughts—Referenced:• Surge: https://surgehq.ai• Surge's product page: https://surgehq.ai/products• Claude Code: https://www.claude.com/product/claude-code• Gemini 3: https://aistudio.google.com/models/gemini-3• Sora: https://openai.com/sora• Terrence Rohan on LinkedIn: https://www.linkedin.com/in/terrencerohan• Richard Sutton—Father of RL thinks LLMs are a dead end: https://www.dwarkesh.com/p/richard-sutton• The Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html• Reinforcement learning: https://en.wikipedia.org/wiki/Reinforcement_learning• Grok: https://grok.com• Warren Buffett on X: https://x.com/WarrenBuffett• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): 
https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next• Brian Armstrong on LinkedIn: https://www.linkedin.com/in/barmstrong• Interstellar on Prime Video: https://www.amazon.com/Interstellar-Matthew-McConaughey/dp/B00TU9UFTS• Arrival on Prime Video: https://www.amazon.com/Arrival-Amy-Adams/dp/B01M2C4NP8• Travelers on Netflix: https://www.netflix.com/title/80105699• Waymo: https://waymo.com• Soda versus pop: https://flowingdata.com/2012/07/09/soda-versus-pop-on-twitter—Recommended books:• Stories of Your Life and Others: https://www.amazon.com/Stories-Your-Life-Others-Chiang/dp/1101972122• The Myth of Sisyphus: https://www.amazon.com/Myth-Sisyphus-Vintage-International/dp/0525564454• Le Ton Beau de Marot: In Praise of the Music of Language: https://www.amazon.com/dp/0465086454• Gödel, Escher, Bach: An Eternal Golden Braid: https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
TDC 077: The 4 Types of Driven Entrepreneurs (And Why You're Secretly Seeking Validation). Four archetypes, one painful realization, and 14 words that could change everything. Episode Summary: In this episode of The Digital Contrarian, host Ryan Levesque shares a vulnerable story from Front Row Dads Live about a moment he's not proud of as a father. You'll learn the difference between your True Self and Strategic Self, discover which of the four archetypal patterns drives your behavior, and uncover how childhood wounds show up in your business today. Question of the Day
From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs. We sat down with Pim to dig into why game highlights are “episodic memory for simulation” (and how Medal's privacy-first action labels became a world-model goldmine https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features), what it takes to build fully vision-based agents that just see frames and output actions in real time, how General Intuition transfers from games to real-world video and then into robotics, why world models and LLMs are complementary rather than rivals, what founders with proprietary datasets should know before selling or licensing to labs, and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world. We discuss: How Medal's 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models Building fully vision-based agents that only see frames and output actions yet play like (and sometimes better than) humans Transferring from arcade-style games to realistic games to real-world video using the same perception–action recipe Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. “just” pretty video generation Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players Pim's path from RuneScape private servers, Tourette's, and reverse engineering to leading a frontier world-model lab How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent GI's first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a “frames in, actions out” API Using Medal clips as “episodic memory of simulation” to move from imitation learning to RL via world models and negative events The 2030 vision: spatial–temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world — Pim X: https://x.com/PimDeWitte LinkedIn: https://www.linkedin.com/in/pimdw/ Where to find Latent Space X: https://x.com/latentspacepod Substack: https://www.latent.space/ Chapters 00:00:00 Introduction and Medal's Gaming Data Advantage 00:02:08 Exclusive Demo: Vision-Based Gaming Agents 00:06:17 Action Prediction and Real-World Video Transfer 00:08:41 World Models: Interactive Video Generation 00:13:42 From Runescape to AI: Pim's Founder Journey 00:16:45 The Research Foundations: Diamond, Genie, and SEMA 00:33:03 Vinod Khosla's Largest Seed Bet Since OpenAI 00:35:04 Data Moats and Why GI Stayed Independent 00:38:42 Self-Teaching AI Fundamentals: The Francois Fleuret Course 00:40:28 Defining World Models vs Video Generation 00:41:52 Why Simulation Complexity Favors World Models 00:43:30 World Labs, Yann LeCun, and the Spatial Intelligence Race 00:50:08 Business Model: APIs, Agents, and Game Developer Partnerships 00:58:57 From Imitation Learning to RL: Making Clips 
Playable 01:00:15 Open Research, Academic Partnerships, and Hiring 01:02:09 2030 Vision: 80 Percent of Atoms-to-Atoms AI Interactions
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
In just over three years, Harvey has not only scaled to nearly one thousand customers, including Walmart, PwC, and other giants of the Fortune 500, but fundamentally transformed how legal work is delivered. Sarah Guo and Elad Gil are joined by Harvey's co-founder and president Gabe Pereyra to discuss why the future of legal AI isn't only about individual productivity, but also about putting together complex client matters to make law firms more profitable. They also talk about how Harvey analyzes complex tasks like fund formation or M&A and deploys agents to handle research and drafting, the strategic reasoning behind enabling law firms rather than competing with them, and why AI won't replace partners but will change law firm leverage models and training for associates. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @gabepereyra | @Harvey Chapters: 00:00 – Gabe Pereyra Introduction 00:09 – Introduction to Harvey 02:04 – Expanding Harvey's Reach 03:22 – Understanding Legal Workflows 06:20 – Agentic AI Applications in Law 09:06 – The Future Evolution of Law Firms 13:36 – RL in Law 19:46 – Deploying Harvey and Customization 23:46 – Adoption and Customer Success 25:28– Why Harvey Isn't Building a Law Firm 27:25 – Challenges and Opportunities in Legal Tech 29:26 – Building a Company During the Rise of Gen AI 37:24 – Hiring at Harvey 40:19 – Future Predictions 44:17 – Conclusion
TDC 076: Worldviews from Viewers: Real Perspectives On How to Make Sense of this Post-AI World... Worldviews from readers reveal what's really shaping how thoughtful people navigate today's chaos. Episode Summary: In this special episode of The Digital Contrarian, host Ryan Levesque shares thought-provoking reader responses to last week's worldview challenge. You'll discover seven diverse principles shaping how people make sense of this moment in history, explore frameworks for navigating complexity, and hear perspectives that might challenge your own assumptions. Question of the Day
We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time. Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book. We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will “eat” most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie. OpenAI: Website - https://openai.com | X/Twitter - https://x.com/OpenAI. Łukasz Kaiser: LinkedIn - https://www.linkedin.com/in/lukaszkaiser/ | X/Twitter - https://x.com/lukaszkaiser. FIRSTMARK: Website - https://firstmark.com | X/Twitter - https://twitter.com/FirstMarkCap. Matt Turck (Managing Director): Blog - https://mattturck.com | LinkedIn - https://www.linkedin.com/in/turck/ | X/Twitter - https://twitter.com/mattturck (00:00) – Cold open and intro(01:29) – “AI slowdown” vs a wild week of new frontier models(08:03) – Low-hanging fruit: infra, RL training and better data(11:39) – What is a reasoning model, in plain language?(17:02) – Chain-of-thought and training the thinking process with RL(21:39) – Łukasz's path: from logic and France to Google and Kurzweil(24:20) – Inside the Transformer story and what “attention” really means(28:42) – From Google Brain to OpenAI: culture, scale and GPUs(32:49) – What's next for pre-training, GPUs and distillation(37:29) – Can we still understand these models? Circuits, sparsity and black boxes(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed(42:40) – Post-training, safety and teaching GPT-5.1 different tones(46:16) – How long should GPT-5.1 think?
Reasoning tokens and jagged abilities(47:43) – The five-year-old's dot puzzle that still breaks frontier models(52:22) – Generalization, child-like learning and whether reasoning is enough(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks(56:10) – GPT-5.1 Codex Max, long-running agents and compaction(1:00:06) – Will foundation models eat most apps? The translation analogy and trust(1:02:34) – What still needs to be solved, and where AI might go next
Ilya & I discuss SSI's strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.Watch on YouTube; read the transcript.Sponsors* Gemini 3 is the first model I've used that can find connections I haven't anticipated. I recently wrote a blog post on RL's information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google* Labelbox helped me create a tool to transcribe our episodes! I've struggled with transcription in the past because I don't just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh* Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user's risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkeshTo sponsor a future episode, visit dwarkesh.com/advertise.Timestamps(00:00:00) – Explaining model jaggedness(00:09:39) - Emotions and value functions(00:18:49) – What are we scaling?(00:25:13) – Why humans generalize better than models(00:35:45) – SSI's plan to straight-shot superintelligence(00:46:47) – SSI's model will learn from deployment(00:55:07) – How to think about powerful AGIs(01:18:13) – “We are squarely an age of research company”(01:20:23) – Self-play and multi-agent(01:32:42) – Research taste Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
A young guitarist disappears for months—and returns playing like no human ever could. They say Robert Johnson met the Devil at a lonely Mississippi crossroads—trading his soul for the sound that birthed the blues. But what really happened that night? Was it a deal, a myth, or something darker still? Join us as we journey into the Delta, where music, magic, and the supernatural collide. SOURCES (for show notes): https://www.openculture.com/2020/10/the-legend-of-how-bluesman-robert-johnson-sold-his-soul-to-the-devil-at-the-crossroads.html https://entertainment.howstuffworks.com/devil-and-robert-johnson.htm?utm_source=chatgpt.com https://nashvilleghosts.com/the-crossroads-the-king-of-delta-blues-the-devil/?utm_source=chatgpt.com https://www.thevintagenews.com/2018/04/05/crossroads/?utm_source=chatgpt.com https://genius.com/artists/Robert-johnson https://www.britannica.com/biography/Robert-Johnson-American-musician https://blackpast.org/african-american-history/johnson-robert-1911-1938/ https://www.vialma.com/en/articles/266/Niccolo-Paganini-The-Devils-Violinist https://www.gutenberg.org/files/14591/14591-h/14591-h.htm Biographies and historical accounts: Up Jumped the Devil: The Real Life of Robert Johnson by Bruce Conforth and Gayle Dean Wardlow: A comprehensive look at the legendary bluesman's life. Searching for Robert Johnson by Peter Guralnick: Explores the myth and reality of Johnson's life and career. Escaping the Delta: Robert Johnson and the Invention of the Blues by Elijah Wald: Analyzes Johnson's music and its impact on the blues genre. Biography of a Phantom: A Robert Johnson Blues Odyssey by Robert Mack McCormick: A biographical exploration of Johnson's life. Robert Johnson: Lost and Found by Barry Lee Pearson: A scholarly account that delves into the details of Johnson's life. Personal memoirs and graphic novels: Brother Robert: Growing Up with Robert Johnson by Annye C. Anderson: A firsthand account of Johnson's life from his niece's perspective. Love in Vain: Robert Johnson, 1911–1938 by Mezzo and J.M. Dupont: A graphic novel that tells the story of Johnson's life through illustrations. RL's Dream by Walter Mosley: A fictional novel inspired by the legend of Robert Johnson
TDC 075: How To Craft Your Worldview With AI. What I learned at Tony Robbins and Dean Graziosi's $250,000 private mastermind this week. Episode Summary: In this episode of The Digital Contrarian, host Ryan Levesque dives into building a comprehensive worldview and why it's your hidden operating system. You'll learn how to surface your existing beliefs, discover the three levels of reality that shape decisions, and explore a six-step AI-assisted process for crafting worldviews that drive real results. Question of the Day
This episode is in partnership with Probi. We meet Caroline Montelius, researcher and science communicator at Probi, for a conversation about gut and digestive health, and why your gut may be the key to both well-being and a long life. Caroline explains how up to 90% of our lifestyle diseases – such as diabetes, cancer, obesity, and joint disorders – can be linked to an imbalanced gut flora. We talk about leaky gut, inflammation, IBS, and why so many people today suffer from recurring stomach problems without understanding the cause. She shares the latest research on probiotics and how good bacteria affect everything from the immune system to the brain. You also get concrete advice on how to improve your gut health – through the right diet, more fiber, and deliberate choices of probiotics with clinically documented effects. We also talk about how gut health is connected to stress, menopause, athletic performance, and longevity. This is an episode for anyone who wants to understand the body in depth and take control of their health – from the inside out. Follow Probi here. Read more about Probi here. Check out Framgångsakademin's courses. Order "Mitt Framgångsår". Follow Alexander Pärleros on Instagram. Follow Alexander Pärleros on TikTok. The best tips from the episode in the Newsletter. In partnership with Convendum. Hosted on Acast. See acast.com/privacy for more information.
On this episode of Drunken Book Club we live read the Tales to Give You Goosebumps short story The Cat's Tale to pair with The Barking Ghost. It's time to find out if RL can write a decent scary story around an animal. Follow the linktree here and find where you can listen to and follow us! https://linktr.ee/drunkenbookclub Support us on https://www.patreon.com/dbcanddmm All of the content is $1! Make sure to check out our Patrons 1. Trey 2. Weese https://www.youtube.com/user/pikidoo1
In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from Ai2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering “open weights” and calling it a day, AI2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It's an unusually transparent look into the inner machinery of a modern frontier-class model.Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you've been trying to truly understand the “RL + reasoning” era of LLMs, this is the clearest explanation you'll hear.We widen the lens to the global picture: why Meta's retreat from open source created a “vacuum of influence,” how Chinese labs like Qwen, DeepSeek, Kimi, and Moonshot surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single “AGI moment.” If you care about the future of open models and the global AI landscape, this is an essential conversation.Allen Institute for AI (AI2)Website - https://allenai.orgX/Twitter - https://x.com/allen_aiNathan LambertBlog - https://www.interconnects.aiLinkedIn - https://www.linkedin.com/in/natolambert/X/Twitter - https://x.com/natolambertLuca SoldainiBlog - https://soldaini.netLinkedIn - https://www.linkedin.com/in/soldni/X/Twitter - https://x.com/soldniFIRSTMARKWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCapMatt Turck (Managing Director)Blog - https://mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturck(00:00) – Cold Open(00:39) – Welcome & today's big announcement(01:18) – Introducing the Olmo 3 model family(02:07) – What “base models” really are (and why they matter)(05:51) – Dolma 3: the data behind Olmo 3(08:06) – Performance vs Qwen, Gemma, DeepSeek(10:28) – What true open source means (and why it's rare)(12:51) – Intermediate checkpoints, transparency, and why AI2 publishes everything(16:37) – Why Qwen is everywhere (including U.S. startups)(18:31) – Why Chinese labs go open source (and why U.S. labs don't)(20:28) – Inside ATOM: the U.S. 
response to China's model surge(22:13) – The rise of “thinking models” and inference-time scaling(35:58) – The full Olmo pipeline, explained simply(46:52) – Pre-training: data, scale, and avoiding catastrophic spikes(50:27) – Mid-training (tail patching) and avoiding test leakage(52:06) – Why long-context training matters(55:28) – SFT: building the foundation for reasoning(1:04:53) – Preference tuning & why DPO still works(1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain(1:13:59) – Why RL is so technically brutal(1:18:17) – Complexity tax vs AGI hype(1:21:58) – How everyone can contribute to the future of AI(1:27:26) – Closing thoughts
My fellow pro-growth/progress/abundance Up Wingers in America and around the world: What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario. Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality. Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute. In This Episode: * Making human minds (1:43) * Theory to reality (6:45) * The world with automated research (10:59) * Considering constraints (16:30) * Worries and what-ifs (19:07) Below is a lightly edited transcript of our conversation. Making human minds (1:43) . . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world. Pethokoukis: A few years ago, you wrote a paper called "Could Advanced AI Drive Explosive Economic Growth?," which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas and rising incomes, but that loop broke after the demographic transition in the late-19th century. You suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? "How quick and big would a software intelligence explosion be?" The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout the long run of history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked. This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper. The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips.
In fact, you don't have to do anything at all in the physical world. It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software. Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms. I think they're all getting paid a billion dollars a person, too. Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again, you haven't built any more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop. [A toy numerical sketch of this compounding loop appears at the end of this episode's notes.] In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research. It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say; it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks: They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging. It's some of the hardest work in the world to do, so I wouldn't say it's narrow, but it's not everything.
It's some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything. Theory to reality (6:45) I think it's a much smaller gap for AI research than it is for many other parts of the economy. I think people who are cautiously optimistic about AI will say something like, "Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there." Is that true, or are we actually getting pretty close? Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, "It's impossible, this will never happen." So my best guess is that we do need a couple of fairly non-trivial breakthroughs. So we had the start of RL training a couple of years ago, which became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size. We're not talking about a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI. A nice way of putting it is that OpenAI doesn't employ any humans anymore; they've just got AIs there. There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, between an AI model that can do a theoretical version of the lab and one that is actually incorporated in a real laboratory? It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge in all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars. For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So also that already means it's a lot less messy. You get a lot less of that kind of messy real-world stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clear metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test?
If not, try something else or do a gradient descent update. That said, there's still a lot of messiness here, as any coder will know: when you're writing good code, it's not just about whether it does the function that you've asked it to do, it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked it to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that. So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years? Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat. Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, there's still a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50. The world with AI research automation (10:59) . . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like? Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to. And then there's, of course, the risk that OpenAI's lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end.
And, in terms of economic impacts, I personally think that that again could happen much more quickly than people think, and we can get into that. In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth, instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about? I speak to economists a lot, and — They hate those kinds of predictions, by the way. Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's like it's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, "Yep, this checks out. I think that's what's going to happen." And I've had conversations with them where they're like, "Yeah, I think this is going to happen." But the really loud, dominant view where I think people are a little bit scared to speak out against is they're like, "Obviously this is sci-fi." One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400, imagine we had an economics professor who'd been studying the rate of economic growth, and they've been like, "Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher." And then there was some kind of futurist economist rogue that said, "Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth." And then all the other economists laugh at them, tell them they're insane – that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth. So I think it can be useful to try and challenge economists and say, "Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it." And I think being in that mindset can encourage people to be like, "Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible." But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. And so you can actually look at biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? And if you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats.
So that's an example of here's a physical system — ultimately, everything's made of physics — a physical system that has some intelligence that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks. Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans just doubling itself every few weeks. If that happens, then the amount of stuff we're able to produce as a civilization is doubling again on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities seriously; we shouldn't fully rule them out. Considering constraints (16:30) I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . . There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities. If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else? Briefly, the ones you've mentioned, people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble, then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, fine if no one wants to work. Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of the year, you go from not having any telephones at all in the world, to everyone's on their smartphones and social media and all the apps. That's a transition that took decades. If that happened in a year, that would be very disconcerting. Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one that we might as a society choose, "Actually, this is insane.
We're going to go slower than we could." That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there. Worries and what-ifs (19:07) If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they do worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things? I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. It's a source of income for all of us, employment, but also, it's a source of pride, it's a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I think people aren't just going to be down to just do it. I think people are scared about three AI companies literally now taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it. I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's a lot easier to tax labor than capital already when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable. Then there's the problem that some people want to stop this now because they're worried about AI killing everyone. Their literal worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today so it can kind of help us figure out what to do about all of this crazy stuff that's coming. On what side of that line is AI as an AI researcher? That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: When we're within a few spits distance — not spitting distance, but if you did that three times, and we can see we're almost at that AI automating OpenAI — then you pause, because you're not going to accidentally then go all the way. It is actually still a little bit a fair distance away, but it's actually still, at that point, probably a very powerful AI that can really help. Then you pause and do what? Great question.
So then you pause, and you use your AI systems to help you firstly solve the problem of AI alignment, make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals. Secondly, you solve the problem of, how are we going to make sure that no one person in government or no one CEO of an AI company ensures that this whole AI army is loyal to them, personally? How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem. And then there's just a whole host of other things: unemployment that we've talked about, competition between different countries, US and China, there's a whole host of other things that I think you want to research on, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way. What else should we be working on? What are you working on next? One problem I'm excited about is people have historically worried about AI having its own goals. We need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious, "loyalty to humanity" is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for AI, some organizations have employee handbooks: Here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but you're going super detailed, exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person, probably following the law, probably loads of other things. I think basically designing what is the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems. Maybe you have no interest in science fiction, but is there any film, TV, book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering. I think there's this great post called "AI 2027," which lays out a concrete scenario for how AI could go wrong or how maybe it could go right. I would recommend that. I think that's the only thing that's coming to mind. A lot of the stuff I read is LessWrong, to be honest. There's a lot of stuff from there that I don't love, but a lot of new ideas, interesting content there. Any fiction? I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But I mean, yeah, there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun. On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
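The compounding dynamic Davidson describes (a fixed stock of chips, algorithms that keep getting cheaper to run, and the freed-up capacity spent on more automated researchers) can be made concrete with a small toy simulation in Python. This is an illustrative sketch only: the starting pool of 100 researchers, the baseline of one efficiency doubling per year, the square-root returns to extra researchers, and the runaway cutoff are all assumed parameters, not figures from the episode or from Davidson's papers.

```python
# Toy sketch of the "software intelligence explosion" feedback loop discussed above:
# compute is held fixed, automated researchers make the algorithms cheaper to run,
# and the freed-up capacity is spent running even more automated researchers.
# Every number here is an illustrative assumption, not a figure from the episode.

FIXED_COMPUTE = 1.0        # chip stock held constant: no new fabs, no new data centers
BASELINE_POOL = 100.0      # assumed starting pool of automated researchers
RETURNS_EXPONENT = 0.5     # assumed diminishing returns: 100x researchers -> 10x faster progress
BASE_DOUBLINGS = 1.0       # assumed efficiency doublings per year for the baseline pool

efficiency = 1.0           # how cheaply one researcher-equivalent can be run
researchers = BASELINE_POOL

for month in range(1, 49):
    # Algorithmic progress this month scales sub-linearly with the researcher pool.
    doublings_per_year = BASE_DOUBLINGS * (researchers / BASELINE_POOL) ** RETURNS_EXPONENT
    efficiency *= 2 ** (doublings_per_year / 12.0)
    # Same chips, cheaper algorithms: more researcher-equivalents run in parallel.
    researchers = BASELINE_POOL * FIXED_COMPUTE * efficiency
    if month % 6 == 0 or researchers > 1e9:
        print(f"month {month:2d}: efficiency x{efficiency:.3g}, researchers ~{researchers:.3g}")
    if researchers > 1e9:
        print("runaway regime reached; stopping the toy loop here")
        break

# For scale: anything that doubles every six weeks (the rat example) grows by
# 2 ** (52 / 6), roughly 400x, in a year -- far beyond 30 percent annual growth.
print(f"six-week doubling sustained for a year: x{2 ** (52 / 6):.0f}")
```

With these made-up numbers the loop idles along for a couple of years and then runs away, which is the qualitative shape of the argument; flatter returns assumptions (a smaller exponent) keep it from ever taking off, which is roughly where the skeptical economists land.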
Outline00:00 – Intro07:22 – Anatomy of a feedback loop15:12 – A brief historical recap on the history of feedback23:40 – Inventing the negative feedback amplifier34:28 – Feedback in biology, economics, society, and ... board games!52:44 – Negative vs positive feedback59:15 – Feedback, causality, and the arrow of time1:06:22 – Classics: fundamental limitations, uncertainty, robustness1:21:30 – Adaptive control: learning in the loop1:29:50 – Modern AI feedback loops (RL, social media, alignment)1:40:40 – OutroLinksWatt's flyball governor: https://tinyurl.com/ne5nene3Maxwell - "On Governors": https://tinyurl.com/2a7cxj7m Black - "Inventing the negative-feedback amplifier": https://tinyurl.com/yevsemdpNyquist Criterion: https://tinyurl.com/33hfbw8mBode's integral: https://tinyurl.com/53sxkdzuWiener - "Cybernetics": https://tinyurl.com/yta899ayApoptosis: https://tinyurl.com/mcxjycka Predator–prey dynamics (Lotka–Volterra): https://tinyurl.com/5cvx33tn Bird migration cues (photoperiodism): https://tinyurl.com/y2e7t22v Neuron action potentials: https://tinyurl.com/2wemwdn4Economic equilibrium & feedback: https://tinyurl.com/nhdx7r3sEcho chambers: https://tinyurl.com/4v8yk7e8 Game design: https://tinyurl.com/bdhdhv38Gap metric (Vinnicombe): https://tinyurl.com/y9nw3yveGeorgiou, Smith - "Feedback Control and the Arrow of Time": https://tinyurl.com/5xvj76jrAnnaswamy, Fradkov - "A Historical Perspective of Adaptive Control and Learning": https://tinyurl.com/4nfew8vm Algorithmic trading flash crash (2010): https://tinyurl.com/2dsrs8j2AI alignment: https://tinyurl.com/yvs3wnj8Support the showPodcast infoPodcast website: https://www.incontrolpodcast.com/Apple Podcasts: https://tinyurl.com/5n84j85jSpotify: https://tinyurl.com/4rwztj3cRSS: https://tinyurl.com/yc2fcv4yYoutube: https://tinyurl.com/bdbvhsj6Facebook: https://tinyurl.com/3z24yr43Twitter: https://twitter.com/IncontrolPInstagram: https://tinyurl.com/35cu4kr4Acknowledgments and sponsorsThis episode was supported by the National Centre of Competence in Research on «Dependable, ubiquitous automation» and the IFAC Activity fund. The podcast benefits from the help of an incredibly talented and passionate team. Special thanks to L. Seward, E. Cahard, F. Banis, F. Dörfler, J. Lygeros, ETH studio and mirrorlake . Music was composed by A New Element.
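To make the episode's negative-vs-positive feedback contrast concrete, here is a minimal, purely illustrative Python sketch (not material from the episode): a proportional loop drives a simple first-order system toward a setpoint when the error is fed back with a corrective sign, and the same loop runs away when the sign reinforces the error.

```python
# Purely illustrative sketch (not from the episode): negative vs. positive feedback
# acting on a simple first-order system dx/dt = u, integrated with explicit Euler steps.

def simulate(gain: float, setpoint: float = 1.0, steps: int = 50, dt: float = 0.1) -> list[float]:
    """Run the loop u = gain * (setpoint - x); gain > 0 corrects the error (negative feedback)."""
    x, trajectory = 0.0, []
    for _ in range(steps):
        error = setpoint - x
        u = gain * error        # control action proportional to the error
        x += u * dt             # Euler step of dx/dt = u
        trajectory.append(x)
    return trajectory

negative = simulate(gain=2.0)   # error fed back with a corrective sign: settles at the setpoint
positive = simulate(gain=-2.0)  # error reinforced instead of corrected: the state runs away
print(f"negative feedback, final x = {negative[-1]:.3f}")  # close to 1.0
print(f"positive feedback, final x = {positive[-1]:.3e}")  # grows in magnitude without bound
```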
TDC 074: The First Trillion Dollar Thought Leader: Being Known for How You Think, Not What You ConsumeWhy being known for how you think beats influence every time.Episode SummaryIn this episode of The Digital Contrarian, host Ryan Levesque dives into the critical distinction between influencers and thought leaders in the AI era.You'll learn why chasing followers is the wrong game, how thought leadership transforms ideas into equity, and discover the unsexy immediate next step to start building your own trillion-dollar personal brand.Question of the Day
TDC: The Efficiency vs. Resiliency DilemmaWhat's the biggest vulnerability in your business right now?Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque explores the dangerous trade-off between efficiency and resiliency in business.You'll learn why single-channel dependency threatens your business survival, discover how the Irish Potato Famine reveals critical marketing insights, and explore how to build an antifragile strategic content ecosystem that can weather any storm.Question of the Day
Frontier AI is colliding with real-world infrastructure. Eiso Kant (Co-CEO & Co-Founder, Poolside) joins the MAD Podcast to unpack Project Horizon— a multi-gigawatt West Texas build—and why frontier labs must own energy, compute, and intelligence to compete. We map token economics, cloud-style margins, and the staged 250 MW rollout using 2.5 MW modular skids.Then we get operational: the CoreWeave anchor partnership, environmental choices (SCR, renewables + gas + batteries), community impact, and how Poolside plans to bring capacity online quickly without renting away margin—plus the enterprise motion (defense to Fortune 500) powered by forward deployed research engineers.Finally, we go deep on training. Eiso lays out RL2L (Reinforcement Learning to Learn)— aimed at reverse-engineering the web's thoughts and actions— why intelligence may commoditize, what that means for agents, and how coding served as a proxy for long-horizon reasoning before expanding to broader knowledge work.PoolsideWebsite - https://poolside.aiX/Twitter - https://x.com/poolsideaiEiso KantLinkedIn - https://www.linkedin.com/in/eisokant/X/Twitter - https://x.com/eisokantFIRSTMARKWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCapMatt Turck (Managing Director)Blog - https://www.mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturck(00:00) Cold open – “Intelligence becomes a commodity”(00:23) Host intro – Project Horizon & RL2L(01:19) Why Poolside exists amid frontier labs(04:38) Project Horizon: building one of the largest US data center campuses(07:20) Why own infra: scale, cost, and avoiding “cosplay”(10:06) Economics deep dive: $8B for 250 MW, capex/opex, margins(16:47) CoreWeave partnership: anchor tenant + flexible scaling(18:24) Hiring the right tail: building a physical infra org(30:31) RL today → agentic RL and long-horizon tasks(37:23) RL2L revealed: reverse-engineering the web's thoughts & actions(39:32) Continuous learning and the “hot stove” limitation(43:30) Agents debate: thin wrappers, differentiation, and model collapse(49:10) “Is AI plateauing?”—chip cycles, scale limits, and new axes(53:49) Why software was the proxy; expanding to enterprise knowledge work(55:17) Model status: Malibu → Laguna (small/medium/large)(57:31) Poolside's Commercial Reality today: defense; Fortune 500; FDRE (1:02:43) Global team, avoiding the echo chamber(1:04:34) Next 12–18 months: frontier models + infra scale(1:05:52) Closing
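As a rough back-of-the-envelope on the figures quoted in this description (an $8B first stage of 250 MW built from 2.5 MW modular skids), the implied blended costs work out as in the sketch below; the per-MW and per-skid numbers are simple averages derived from those two figures, not amounts disclosed in the episode.

```python
# Back-of-the-envelope sketch using only figures from the episode description
# ($8B capex for a 250 MW stage, built from 2.5 MW modular skids).
# The per-MW and per-skid numbers below are implied averages, not disclosed figures.

capex_usd = 8e9          # stated capex for the first stage
stage_mw = 250           # stated stage capacity
skid_mw = 2.5            # stated modular skid size

skids = stage_mw / skid_mw            # how many skids one stage implies
cost_per_mw = capex_usd / stage_mw    # blended $/MW across compute, power, and facility
cost_per_skid = capex_usd / skids     # blended cost attributable to one skid

print(f"skids per 250 MW stage: {skids:.0f}")              # 100
print(f"implied cost per MW:   ${cost_per_mw / 1e6:.0f}M")  # ~$32M
print(f"implied cost per skid: ${cost_per_skid / 1e6:.0f}M")  # ~$80M
```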
Learn more about Michael Wenderoth, Executive Coach: www.changwenderoth.comThere's a huge chance you're being passed over for top jobs – and you're not even aware of it. How has AI changed recruiting and job search, and what does that mean for you? In this episode of 97% Effective, host Michael Wenderoth speaks with Nick Day, founder and CEO of JGA Recruitment Group, a leading global recruiter and payroll advisory based in the UK. Nick provides sharp, practical advice on how to be visible, stand out, and land your dream job in an increasingly competitive talent market, where recruiters and other job seekers are increasingly using AI. He talks about the two versions of your CV that you most need, suggests you answer problems that aren't being advertised, and explains why visibility is the currency of credibility in today's job market. You'll leave this episode with a strong understanding of AI's impact on job seekers – and a much deeper appreciation for the human touch that will get you the best results.SHOW NOTES:What Nick's social post about Costa Rica this summer reveals about him – and JGA“Done Lists” and how Nick sets his intentions each dayHow fear disguises itself as wisdom: Nick's hard truth about AI and the current job marketWhat's the right depth and place to “jump in” learning about AIWhy great job candidates are being left in the coldTip#1: Optimize yourself for the algorithms – and produce two versions of your resume.What?! How excellent candidates with high-level strategic resumes are getting rejectedTip#2: Make sure to add that personal element to your CV, because everyone's submitting the “perfect” resumeTip#3: The 3 necessary approaches to getting your dream jobHow “easy apply” is overwhelming recruiters and ensures top candidates never get a lookTip#4: Go back to basics and cater your CV to a position – and tell a story that shows your value“Answer a problem that isn't being advertised”Your CV/resume is the most important document you will ever writeWhy most CV/resume services are a big waste of moneyTip#5: Have an achievement section at the top of your CV – don't wait for the recruiter to find them on page 2Michael's highlight: How Nick's job search tips are also best practices that help you get promoted“The important bits” that Nick says we should save for the interview (and not put on the resume)The importance of generating connectionTip#6: Treat your job as a campaign, not as a checkboxWhere Nick sees AI systems doing more harm than goodTip#7: Change your resume for the position, but also change your persona for the person that's interviewing youHow to be creative – but not lie – in your resume, to help you work with the algorithm, even if you don't have the exact experienceGetting over imposter syndrome to become your biggest advocateTip#8: The most underrated skill in business is storytelling“Visibility is the currency of credibility”Reaching Nick: No AI, no PA's. Nick responds personally!BIO AND LINKS:Nick Day is a globally recognised HR and payroll authority with over 20 years of experience leading the sector through innovation, insight and influence. As CEO and founder of JGA Recruitment Group, Nick has built one of the most respected payroll and HR talent consultancies in the UK and abroad.
Nick's voice reaches tens of thousands of professionals through his acclaimed platforms: The Payroll Podcast, the HR L&D Podcast, and the Mindful Paths Podcast.Nick on LinkedIn: https://www.linkedin.com/in/nickday/?originalSubdomain=ukJGA Recruitment Group: https://jgarecruitment.comNick's post on his daughter, in Costa Rica: https://tinyurl.com/2s3f3n7jWalk the Camino Santiago: https://en.wikipedia.org/wiki/Camino_de_Santiago“Life Moves Pretty Fast..” (The epic line, from Ferris Bueller): https://tinyurl.com/bdns8pa7Nick's Mindful Paths Podcast: https://podcasts.apple.com/us/podcast/mindful-paths-podcast/id1682002299Done Lists: https://tinyurl.com/2s3hrdfdIs an Algorithm Blocking Your Job Search? (WSJ Podcast): https://tinyurl.com/ujy6yttnBohemian Rhapsody Flashmob in France: https://www.youtube.com/watch?v=rfUEstWJUkAMichael on Nick's HR L&D Podcast, “Mastering Office Politics: Power, Promotion & Playing to Win”: https://www.youtube.com/watch?v=ARVsf7dFOyYMichael's Award-Winning book, Get Promoted: What You're Really Missing at Work That's Holding You Back: https://tinyurl.com/453txk74Watch this episode on video on the 97% Effective YouTube channel: https://www.youtube.com/@97PercentEffectiveAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
Reader beware, you're in for a scare! If you were a kid, tween, or teen at all in the '90s, you knew Goosebumps books, and you were obsessed with those colorful and creepy covers! Mark welcomes the original Goosebumps cover illustrator Tim Jacobus on the show to talk about his career, the process of making all those iconic covers, and what cover designs are most special to him. This is Part 1 of a Goosebumps celebration, and Part 2 will focus on the premiere episodes of the Fox TV series for its 30th anniversary. Follow Tim @timjacobus on Instagram, check out his work at jacobusstudios.com, and say hi to him at future conventions, where he sells prints of his work and gives a portion of the income to help feed those who are in need.
TDC 072: How to Become A "Category of One" Thought Leader.Three questions that separate thought leaders from everyone else in their market.Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque reveals the Category of One framework that's made his consulting practice oversubscribed and generated 24 speaking invitations in 18 months.You'll learn the three critical questions that establish thought leadership positioning, discover why timing and novelty matter more than credentials, and understand how to make all roads lead naturally to you.Question of the Day
My guest today on the Online for Authors podcast is RL Carpentier III, author of the book Our Lady of the Overlook. Rodney is a lifelong writer of stories and poems and songs and now, novels. As a kid he and his mom would try to plot whodunits on Sunday afternoons; in middle school he and his friends tried to develop their own superhero comic book; and in college he wrote song lyrics for his pop-punk band Gone Ashley. But he's had a novel stuck in his head for the last 20 years and it's about time for him to let it all out. Professionally, he has been a storyteller throughout his long law enforcement career. He has written factually about the mundane to the maniacal. He has told his peers, his bosses, and juries about what he has seen and done, what he's investigated and what he's able to prove. It is a procedural and clinical style; something he has brought over to his fiction writing. So, at this crossroad of his life and career, well into his 40's and in the downswing toward retirement, he is going to live out his dream of being a writer. He has his wife and daughter in his corner and the whole world for an audience. In my book review, I stated Our Lady of the Overlook is the first in a trilogy of murder suspense novels. And talk about suspense! From the beginning, RL makes the reader wonder who did it and why. We meet Mike Ellis, a man who has come home to be a cop - but remains in his dead father's shadow, a man who was Police Chief of the town before his heart attack. Before long, he is sucked into a murder investigation that is quite like his father's first murder case and one that has long since gone cold. Despite no evidence, Mike believes that the two have to be related. But with a 40-year span between them, will he ever be able to prove his theory? But more importantly, will he get the chance? There are those on the force who want to set him up for failure - retribution against his father. And his personal ghosts work against him as well. This novel is full of intrigue, half-clues, secrets, and trauma, and the ending will leave you wanting more! Subscribe to Online for Authors to learn about more great books! https://www.youtube.com/@onlineforauthors?sub_confirmation=1 Join the Novels N Latte Book Club community to discuss this and other books with like-minded readers: https://www.facebook.com/groups/3576519880426290 You can follow Author RL Carpentier III Website: https://www.rlcarpentierwriter.com/ Social media: FB: @R.L. Carpentier - Debut Novelist IG: @rlcarpentier Purchase Our Lady of the Overlook on Amazon: Paperback: https://amzn.to/4lW3uBe Ebook: https://amzn.to/4mePSA Teri M Brown, Author and Podcast Host: https://www.terimbrown.com FB: @TeriMBrownAuthor IG: @terimbrown_author X: @terimbrown1 Want to be a guest on Online for Authors? Send Teri M Brown a message on PodMatch, here: https://www.podmatch.com/member/onlineforauthors #rlcarpentier #ourladyoftheoverlook #thriller #mystery #terimbrownauthor #authorpodcast #onlineforauthors #characterdriven #researchjunkie #awardwinningauthor #podcasthost #podcast #readerpodcast #bookpodcast #writerpodcast #author #books #goodreads #bookclub #fiction #writer #bookreview *As an Amazon Associate I earn from qualifying purchases.
A16z Podcast: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns — and if “good enough” AI might actually block progress toward true general intelligence. Resources:Follow Amjad on X: https://x.com/amasadFollow Marc on X: https://x.com/pmarcaFollow Erik on X: https://x.com/eriktorenberg Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
TDC 071: How to Get Your Website to Show Up in ChatGPT…The 2025 version of ranking #1 on Google—master these AI concepts before your competition does.Episode Summary:In this episode of The Digital Contrarian, host Ryan Levesque dives into Answer Engine Optimization (AEO) and the advanced AI concepts determining which brands get recommended by ChatGPT.You'll learn how to optimize for vector embeddings instead of keywords, discover entity graph gap analysis techniques, and master information gain rate principles that make AI systems choose your content over competitors'.Question of the Day
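As a toy illustration of "optimizing for vector embeddings instead of keywords", the sketch below compares raw keyword overlap with cosine similarity between embedding vectors; the three-dimensional vectors are invented for the example, and a real AEO pipeline would get them from an actual embedding model rather than hand-written numbers.

```python
# Toy illustration of matching on meaning (embedding vectors) vs. matching on keywords.
# The 3-dimensional vectors below are invented for the example; a real pipeline would
# obtain them from an embedding model.
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

query = {"text": "best CRM for a small agency", "vec": [0.9, 0.1, 0.3]}
pages = [
    {"text": "CRM pricing comparison for boutique firms", "vec": [0.8, 0.2, 0.4]},
    {"text": "best CRM for students on a budget", "vec": [0.5, 0.9, 0.1]},
]

query_words = set(query["text"].lower().split())
for page in pages:
    overlap = len(query_words & set(page["text"].lower().split()))
    sim = cosine(query["vec"], page["vec"])
    print(f"{page['text']!r}: keyword overlap={overlap}, cosine={sim:.2f}")
# Keyword overlap favors the second page (more shared words), while the embedding
# similarity favors the first page, whose meaning is closer to the query.
```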
2 sections - the third (and final) sub-conversation between RL and RE regarding the inherited korban's ownership (and kapara-ability) for the deceased or the inheritors, and a discussion of whether a korban slaughtered with the wrong intent achieves kapara for its owner
Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns — and if “good enough” AI might actually block progress toward true general intelligence. Resources:Follow Amjad on X: https://x.com/amasadFollow Marc on X: https://x.com/pmarcaFollow Erik on X: https://x.com/eriktorenberg Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Are we failing to understand the exponential, again?My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science—including Julian's timeline for when AI could produce Nobel-level breakthroughs.We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic's launch process.Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.Julian SchrittwieserBlog - https://www.julian.acX/Twitter - https://x.com/mononofuViral post: Failing to understand the exponential, again (9/27/2025)AnthropicWebsite - https://www.anthropic.comX/Twitter - https://x.com/anthropicaiMatt Turck (Managing Director)Blog - https://www.mattturck.comLinkedIn - https://www.linkedin.com/in/turck/X/Twitter - https://twitter.com/mattturckFIRSTMARKWebsite - https://firstmark.comX/Twitter - https://twitter.com/FirstMarkCap(00:00) Cold open — “We're not seeing any slowdown.”(00:32) Intro — who Julian is & what we cover(01:09) The “exponential” from inside frontier labs(04:46) 2026–2027: agents that work a full day; expert-level breadth(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value(10:26) Move 37 — what actually happened and why it mattered(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?(16:25) Discontinuity vs smooth progress (and warning signs)(19:08) Does pre-training + RL get us there? (AGI debates aside)(20:55) Sutton's “RL from scratch”? Julian's take(23:03) Julian's path: Google → DeepMind → Anthropic(26:45) AlphaGo (learn + search) in plain English(30:16) AlphaGo Zero (no human data)(31:00) AlphaZero (one algorithm: Go, chess, shogi)(31:46) MuZero (planning with a learned world model)(33:23) Lessons for today's agents: search + learning at scale(34:57) Do LLMs already have implicit world models?(39:02) Why RL on LLMs took time (stability, feedback loops)(41:43) Compute & scaling for RL — what we see so far(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards(44:36) RL training data & the “flywheel” (and why quality matters)(48:02) RL & Agents 101 — why RL unlocks robustness(50:51) Should builders use RL-as-a-service? 
Or just tools + prompts?(52:18) What's missing for dependable agents (capability vs engineering)(53:51) Evals & Goodhart — internal vs external benchmarks(57:35) Mechanistic interpretability & “Golden Gate Claude”(1:00:03) Safety & alignment at Anthropic — how it shows up in practice(1:03:48) Jobs: human–AI complementarity (comparative advantage)(1:06:33) Inequality, policy, and the case for 10× productivity → abundance(1:09:24) Closing thoughts
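As a quick sanity check on the "task length doubles every 3-4 months" claim discussed in this episode, the arithmetic below shows how few doublings separate a one-hour task from a full working day; the one-hour starting point and the 3.5-month doubling time are illustrative assumptions, not figures quoted by Julian.

```python
# Small arithmetic sketch of the "task length doubles every 3-4 months" claim.
# The 1-hour starting point and the 3.5-month doubling time are illustrative
# assumptions, not figures quoted in the episode.
from math import log2

start_hours = 1.0        # assumed task length an agent can handle today
target_hours = 8.0       # a "full working day" of autonomous work
doubling_months = 3.5    # midpoint of the 3-4 month doubling claim

doublings_needed = log2(target_hours / start_hours)   # 3 doublings: 1h -> 2h -> 4h -> 8h
months_needed = doublings_needed * doubling_months    # ~10.5 months at that pace

print(f"doublings needed: {doublings_needed:.1f}")
print(f"months to a full-day agent at this pace: {months_needed:.1f}")
```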
TDC 070: Seven "Non-Obvious" Email Lessons I've Learned Writing This Email Newsletter Each Week.Episode SummaryIn this episode of The Digital Contrarian, host Ryan Levesque reveals the seven most impactful lessons from writing 70 consecutive weekly newsletters by hand.You'll learn how less AI usage led to higher engagement, why creative volume beats perfectionism, and discover the custom dashboard metrics that actually matter for email success.Question of the Day
All over the world, for all of human history – and probably going back to our earliest hominid ancestors – people have found ways to try to keep themselves clean. But how did soap come about? Research: “Soap, N. (1), Etymology.” Oxford English Dictionary, Oxford UP, June 2025, https://doi.org/10.1093/OED/1115187665. American Cleaning Institute. “Soaps & Detergents History.” https://www.cleaninginstitute.org/understanding-products/why-clean/soaps-detergents-history Beckmann, John. “History of Inventions, Discoveries and Origins.” William Johnston, translator. Bosart, L.W. “The Early History of the Soap Industry.” The American Oil Chemists' Society. Journal of Oil & Fat Industries 1924-10: Vol 1 Iss 2. Cassidy, Cody. “Who Discovered Soap? What to Know About the Origins of the Life-Saving Substance.” Time. 5/5/2020. https://time.com/5831828/soap-origins/ Ciftyurek, Muge, and Kasim Ince. "Selahattin Okten Soap Factory in Antakya and an Evaluation on Soap Factory Plan Typology/Antakya'da Bulunan Selahattin Okten Sabunhanesi ve Sabunhane Plan Tipolojisi Uzerine Bir Degerlendirme." Art-Sanat, no. 19, Jan. 2023, pp. 133+. Gale Academic OneFile, dx.doi.org/10.26650/artsanat.2023.19.1106544. Accessed 18 Aug. 2025. Costa, Albert B. “Michel-Eugène Chevreul.” Encyclopedia Britannica. https://www.britannica.com/biography/Michel-Eugene-Chevreul Curtis, Valerie A. “Dirt, disgust and disease: a natural history of hygiene.” Journal of epidemiology and community health vol. 61,8 (2007): 660-4. doi:10.1136/jech.2007.062380 Dijkstra, Albert J. “How Chevreul (1786-1889) based his conclusions on his analytical results.” OCL. Vol. 16, No. 1. January-February 2009. Gibbs, F.W. “The History and Manufacture of Soap.” Annals of Science. 1939. Koeppel, Dan. “The History of Soap.” 4/15/2020. https://www.nytimes.com/wirecutter/blog/history-of-soap/ List, Gary, and Michael Jackson. “Giants of the Past: The Battle Over Hydrogenation (1903-1920).” https://www.ars.usda.gov/research/publications/publication/?seqNo115=210614 Maniatis, George C. “Guild Organized Soap Manufacturing Industry in Constantinople: Tenth-Twelfth Centuries.” Byzantion, 2010, Vol. 80 (2010). https://www.jstor.org/stable/44173107 National Museum of American History. “Bathing (Body Soaps and Cleansers).” https://americanhistory.si.edu/collections/object-groups/health-hygiene-and-beauty/bathing-body-soaps-and-cleansers New Mexico Historic Sites. “Making Soap from the Leaves of the Soaptree Yucca.” https://nmhistoricsites.org/assets/files/selden/Virtual%20Classroom_Soaptree%20Yucca%20Soap%20Making.pdf “The history of soapmaking.” 8/30/2019. https://www.open.edu/openlearn/history-the-arts/history/history-science-technology-and-medicine/history-science/the-history-soapmaking Pliny the Elder. “The Natural History of Pliny. Translated, With Copious Notes and Illustrations.” Vol. 5. John Bostock, translator. https://www.gutenberg.org/files/60688/60688-h/60688-h.htm Pointer, Sally. “An Experimental Exploration of the Earliest Soapmaking.” EXARC Journal. 2024/3. 8/22/2024. https://exarc.net/issue-2024-3/at/experimental-exploration-earliest-soapmaking Ridner, Judith. “The dirty history of soap.” The Conversation. 5/12/2020. https://theconversation.com/the-dirty-history-of-soap-136434 Routh, Hirak Behari et al. “Soaps: From the Phoenicians to the 20th Century - A Historical Review.” Clinics in Dermatology. Vol. No. 3. 1996. Smith, Cyril Stanley, and John G. Hawthorne. 
“Mappae Clavicula: A Little Key to the World of Medieval Techniques.” Transactions of the American Philosophical Society, vol. 64, no. 4, 1974, pp. 1–128. JSTOR, https://doi.org/10.2307/1006317. Accessed 18 Aug. 2025. Timilsena, Yakindra Prasad et al. “Perspectives on Saponins: Food Functionality and Applications.” International journal of molecular sciences vol. 24,17 13538. 31 Aug. 2023, doi:10.3390/ijms241713538 “Craftsmanship of Aleppo Ghar soap.” https://ich.unesco.org/en/RL/craftsmanship-of-aleppo-ghar-soap-02132 “Tradition of Nabulsi soap making in Palestine.” https://ich.unesco.org/en/RL/tradition-of-nabulsi-soap-making-in-palestine-02112 “Soaps.” https://www.fs.usda.gov/wildflowers/ethnobotany/soaps.shtml van Dijk, Kees. “Soap is the onset of civilization.” From Cleanliness and Culture. Kees van Dijk and Jean Gelman Taylor, eds. Brill. 2011. https://www.jstor.org/stable/10.1163/j.ctvbnm4n9.4 Wei, Huang. “The Sordid, Sudsy Rise of Soap in China.” Sixth Tone. 8/11/2020. https://www.sixthtone.com/news/1006041 See omnystudio.com/listener for privacy information.