This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian's perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch). The complete show notes for this episode can be found at https://twimlai.com/go/762.
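For readers unfamiliar with the self-consistency technique the episode discusses, here is a minimal, illustrative sketch: sample several independent reasoning paths at nonzero temperature and majority-vote on the final answer. The `sample_answer` stub is a hypothetical stand-in for a real LLM call, not code from the episode.

```python
import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    # Hypothetical stand-in for one temperature>0 LLM sample: a real
    # implementation would decode a full chain of thought and return
    # only the final answer it arrives at.
    return random.choice(["42", "42", "42", "41", "43"])

def self_consistency(prompt: str, n_samples: int = 16) -> str:
    # Sample several independent reasoning paths, then majority-vote on
    # the final answers; agreement across paths is the consistency signal.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

print(self_consistency("What is 6 x 7?"))  # usually "42"
```

The same voting idea underlies the verifiable-reward setups mentioned above: in math and coding, the "vote" can be replaced by an automatic checker that scores each sampled path.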
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Listen to Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-development-weekly-news-rundown-the/id1684415169?i=1000750869091
The text for today's study is Luke 23:44-49. The post Belief That Transcends Reasoning appeared first on Living in God's Presence.
Professor Richard Epstein of the Civitas Institute analyzes the constitutional limits of presidential authority to fire independent agency officials, discussing historical precedents like Humphrey's Executor and critiquing the legal reasoning behind maintaining quasi-judicial independence within the executive branch.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Listen to Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-development-daily-news-rundown-gemini/id1684415169?i=1000750740639
In this episode of Good Is In The Details, hosts Gwendolyn Dolske and Rudy Salo are joined by philosopher and author Bill Tomlinson to explore the foundations of critical thinking and the practice of philosophy. Drawing from his book Dialogues with Artificial Intelligence: On the Tools of Philosophy, the conversation offers an accessible introduction to how philosophers think, and how anyone can develop clearer, more rigorous reasoning. What is philosophy, and how do philosophers approach complex questions? What is the difference between inductive and deductive reasoning? How do definitions, distinctions, and paradoxes shape philosophical thinking? This episode addresses these commonly asked questions while guiding listeners through the essential tools used in philosophical inquiry. The discussion also explores a timely question: can artificial intelligence support critical thinking rather than replace it? Tomlinson explains how students, educators, and curious learners can engage with AI as a tool for reflection, questioning, and deeper reasoning, without surrendering the work of thinking itself.
Listeners will explore:
- what philosophy is and how philosophical thinking works
- the foundations of critical thinking and clear reasoning
- inductive vs. deductive reasoning explained
- what a paradox is and why paradoxes matter in philosophy
- how making distinctions improves understanding and argument
- how educators and students can use AI to strengthen, not replace, thinking
Blending philosophy, education, and accessible explanation, this episode offers a clear introduction to philosophical inquiry while inviting listeners to think more carefully about how they reason, question, and understand the world.
Get your copy of Bill's book: Dialogues with Artificial Intelligence: On The Tools of Philosophy
Support the pod and join our community on Patreon: https://www.patreon.com/c/GoodIsInTheDetails
Get your copy of Interview With Intention
Get in touch! Questions, Partnership opportunities, Speaking Inquiries: https://www.goodisinthedetails.com
******Support the channel******
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao
******Follow me on******
Website: https://www.thedissenter.net/
The Dissenter Goodreads list: https://shorturl.at/7BMoB
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://x.com/TheDissenterYT
This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/
Dr. Hanna Schleihauf is Assistant Professor in the Department for Developmental Psychology at Utrecht University. She studies the roots of human diversity: although each of us shares 99.99% of our genes with every other human on the planet, there is massive variation in our socio-cultural practices, driven by our ability to learn from and interact with others. Her research investigates the socio-cognitive underpinnings of cultural learning, focusing on how cultural novices, children during early and middle childhood, grow into proficient cultural beings. In this episode, we first talk about when and how children start considering other people's beliefs, the kinds of beliefs people care about, fact-based beliefs and value-based beliefs, and intuitions about people's control over their own beliefs. We then talk about belief revision and how it develops in children. We discuss what people consider to be good reasoning. Finally, we talk about recent exciting findings that suggest that chimpanzees respond to higher-order evidence.
A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, BERNARDO SEIXAS, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, PHIL KAVANAGH, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D.
LEE JR., OLD HERRINGBONE, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, KIMBERLY JOHNSON, JESSICA NOWICKI, LINDA BRANDIN, VALENTIN STEINMANN, ALEXANDER HUBBARD, BR, JONAS HERTNER, URSULA GOODENOUGH, DAVID PINSOF, SEAN NELSON, MIKE LAVIGNE, JOS KNECHT, LUCY, MANVIR SINGH, PETRA WEIMANN, CAROLA FEEST, MAURO JÚNIOR, 航 豊川, TONY BARRETT, NIKOLAI VISHNEVSKY, STEVEN GANGESTAD, TED FARRIS, HUGO B., JAMES, JORDAN MANSFIELD, AND CHARLOTTE ALLEN!A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, NICK GOLDEN, CHRISTINE GLASS, IGOR NIKIFOROVSKI, AND PER KRAULIS!AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, ROSEY, AND GREGORY HASTINGS!
What is a logical fallacy? What is a sound argument? And why does it matter for Christians today? In this episode of the Bible and Theology Matters podcast, Dr. Paul Weaver is joined by the co-authors of Talking About World Views: A Conversational Introduction to Thinking Philosophically for a powerful discussion on logic, logical fallacies, and sound reasoning in a confused culture.
Featuring:
- Michael Jones – Professor of Philosophy & Religion, Liberty University
- Mark J. Farnham – Professor of Apologetics, Lancaster Bible College
- David Saxon – Professor of Church History, Maranatha Baptist University
Together, they explore:
✔️ What a worldview is, and why everyone has one
✔️ Why logic is foundational for defending the Christian faith
✔️ The difference between factual mistakes and logical fallacies
✔️ Common fallacies like ad hominem, straw man, appeal to authority, appeal to pity, genetic fallacy, red herring, and self-referential incoherence
✔️ How to construct sound deductive and inductive arguments
✔️ Why truth corresponds to reality
✔️ How Christians can argue graciously, clearly, and persuasively
In a world shaped by expressive individualism, emotional reasoning, and intellectual shortcuts, this conversation equips believers to think critically and biblically. As C.S. Lewis famously emphasized, Christianity is not something we would have invented; it confronts us with reality. Whether you're a pastor, Bible teacher, seminary student, homeschool parent, or simply someone who wants to strengthen your reasoning skills, this episode will sharpen your thinking and deepen your confidence in the Christian worldview.
When we disagree with someone, it's tempting to assume the problem is simple: they're irrational, biased, or misinformed. But what if human reasoning doesn't work the way we think it does? What if reasoning isn't primarily about finding the truth on our own, but about exchanging arguments with others? In this episode of TrustTalk, we speak with cognitive scientist Hugo Mercier of the CNRS in Paris and co-author of The Enigma of Reason. He explains why humans may be better at reasoning than we assume, why disagreement often turns on trust rather than logic, and what this means for science communication, polarization, and our ability to reason together. Hugo Mercier also reflects on how confirmation bias can serve a useful function in group deliberation, why personal and local relationships often succeed where institutional messaging fails, and why, despite everything, he remains cautiously optimistic about our collective capacity to reason well.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Listen to Full Audio at https://podcasts.apple.com/us/podcast/gemini-3-1-pros-reasoning-leap-microsofts-10-000-year/id1684415169?i=1000750546704
Dr. Jackie Cheung is an Associate Professor at McGill University, where he co-directs the Reasoning and Learning Lab. He is also an Associate Scientific Director at Mila-Quebec Artificial Intelligence Institute. He and his team are developing computational models to improve the reliability, pragmatics, and evaluation of large language models to ensure they are contextually appropriate and factually grounded.
Jackie has worked as a consultant researcher with Microsoft Research. Before his current appointments, he earned his PhD and MSc in Computer Science from the University of Toronto, focusing on computational linguistics, and his BSc from the University of British Columbia.
00:00:00 Highlight & Introduction
00:02:04 Entrypoint in AI & NLP
00:04:47 Academia vs. Industry: Career choices
00:09:48 Language Revitalization using AI
00:12:24 Addressing Biases & Data sovereignty in language revitalization
00:15:49 Evaluating LLMs as Judges
00:17:14 Validity and reliability in LLM evaluation
00:25:11 Evidence-centered benchmark design (ECBD) framework
00:30:38 Gaps in LLM benchmarks and meaning of "general purpose" AI
00:35:24 General purpose intelligence vs reasoning
00:40:16 Safety as an undefined bundle in LLMs
00:51:45 Stochastic chameleons: how LLMs generalize and hallucinate
01:03:02 Potential & Biases of agentic frameworks for research
01:05:52 Evaluating LLMs for summarization
01:11:43 Scaling large language models
01:16:33 Advice to beginners entering AI in 2026
01:20:33 Pitfalls to avoid in AI research & development
More about Jackie & his research: https://www.cs.mcgill.ca/~jcheung/
About the Host: Jay is a Machine Learning Engineer III at PathAI working on improving AI for medical diagnosis and prognosis.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://jaygshah.github.io/ for any queries.
Stay tuned for upcoming webinars!
***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
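As a rough illustration of the "LLMs as judges" and reliability themes in the episode, here is a hedged sketch, not Cheung's evaluation code: `call_llm` is a hypothetical stand-in (returning a canned score so the sketch runs), and re-scoring the same output several times is one simple probe of a judge's reliability.

```python
import statistics

def call_llm(prompt: str) -> str:
    # Hypothetical model call; returns a canned score here so the sketch runs.
    return "4"

def judge(candidate: str, reference: str) -> int:
    # A rubric-style judging prompt; the judge replies with a single integer.
    rubric = (
        "Score the candidate from 1 (poor) to 5 (excellent) for factual "
        f"consistency with the reference.\nReference: {reference}\n"
        f"Candidate: {candidate}\nReply with one integer only."
    )
    return int(call_llm(rubric).strip())

def score_spread(candidate: str, reference: str, trials: int = 5) -> float:
    # Re-judging the same pair probes reliability: a judge whose scores
    # swing across identical trials is not measuring anything stable.
    scores = [judge(candidate, reference) for _ in range(trials)]
    return statistics.pstdev(scores)

print(score_spread("Paris is in France.", "Paris is the capital of France."))
```

Validity, the other half of the episode's framing, is harder: a perfectly stable judge can still be stably measuring the wrong thing.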
In this episode of the Mr. Barton Maths podcast, Craig is joined by Chris Shaw, a mathematics educator with nearly 30 years of experience. They discuss Chris's transition from secondary school teaching to a full-time role at Loughborough University, where he is involved in teacher training and research. The conversation delves into the importance of effective explanations in mathematics education, the challenges of pursuing a PhD, and the role of sense-making in teaching. Chris shares insights from his research on what constitutes a good mathematical explanation and the significance of example selection in teaching. The episode concludes with reflections on the complexities of teaching and the ongoing quest for effective educational practices. Read the show notes here: podcast.mrbartonmaths.com/212-research-in-action-29-explanations-and-reasoning-with-chris-shore
The fallout continues from the release of the Epstein files. On Saturday, the Justice Department sent a letter to Congress that included a list of names of "politically exposed persons" mentioned in the files of convicted sex offender Jeffrey Epstein. Justice correspondent Ali Rogin reports. PBS News is supported by - https://www.pbs.org/newshour/about/funders. Hosted on Acast. See acast.com/privacy
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Hosted by Pastor Ed Taylor. Resource mentioned was Reasoning from the Scriptures with Muslims by Ron Rhodes.
Calvary Live is an outreach ministry of GraceFM at Calvary Church in Aurora, Colorado.
Pastor Ed Taylor is the Senior Pastor of Calvary Church – you can find more about him at edtaylor.org.
If you like what you hear on Calvary Live, don't forget to follow us and share it with your friends and family!
Ereli Eran is the Founding Engineer at 7AI, where he's focused on building and scaling the company's agentic AI-driven cybersecurity platform, developing autonomous AI agents that triage alerts, investigate threats, enrich security data, and enable end-to-end automated security operations so human teams can focus on higher-value strategic work.
Software Engineering in the Age of Coding Agents: Testing, Evals, and Shipping Safely at Scale // MLOps Podcast #361 with Ereli Eran, Founding Engineer at 7AI
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide
// Abstract
A conversation on how AI coding agents are changing the way we build and operate production systems. We explore the practical boundaries between agentic and deterministic code, strategies for shared responsibility across models, engineering teams, and customers, and how to evaluate agent performance at scale. Topics include production quality gates, safety and cost tradeoffs, managing long-tail failures, and deployment patterns that let you ship agents with confidence.
// Bio
Ereli Eran is a founding engineer at 7AI, where he builds agentic AI systems for security operations and the production infrastructure that powers them. His work spans the full stack, from designing experiment frameworks for LLM-based alert investigation to architecting secure multi-tenant systems with proper authentication boundaries. Previously, he worked in data science and software engineering roles at Stripe and VMware Carbon Black, and was an early employee of Ravelin and Normalyze.
// Related Links
Website: https://7ai.com/
Coding Agents Conference: https://luma.com/codingagents
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Ereli on LinkedIn: /erelieran/
Timestamps:
[00:00] Language Sensitivity in Reasoning
[00:25] Value of Claude Code
[01:54] AI in Security Workflows
[06:21] Agentic Systems Failures
[12:50] Progressive Disclosure in Voice Agents
[16:39] LLM vs Classic ML
[19:44] Hybrid Approach to Fraud
[25:58] Debugging with User Feedback
[33:52] Prompts as Code
[42:07] LLM Security Workflow
[45:10] Shared Memory in Security
[49:11] Common Agent Failure Modes
[53:34] Wrap up
Programming is not a template.
Children, like adults, use one of two strategies to solve mental rotation problems, says Amanda Ruggeri. Read the article on BOLD.
Stay up to date with all the latest research on child development and learning at boldscience.org.
Join the conversation on Facebook, Instagram, LinkedIn.
Subscribe to BOLD's newsletter.
The Reasoning | No Behaviour Episode 305 ft Ras Nado by Margs & Loons
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Full Audio at https://podcasts.apple.com/us/podcast/product-showdown-gpt-5-3-codex-vs-claude-opus-4-6-the/id1684415169?i=1000748609126
In this rapid-fire "Product Showdown," we test drive the two hottest coding models on the planet: OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6. Are they direct competitors, or do they serve completely different masters? We break down the strengths of each: Codex for "execution" and Opus for "reasoning." If you are a developer trying to decide which subscription to keep, this 2-minute breakdown is for you.
Key Takeaways:
GPT-5.3 Codex: Best for fast iteration, terminal workflows, and shipping code quickly.
Claude Opus 4.6: Best for deep reasoning, long-context architecture, and complex problem-solving.
The Verdict: Stop looking for a winner. Use Codex for doing and Opus for thinking.
Keywords: GPT-5.3 Codex, Claude Opus 4.6, AI Coding Benchmarks, Dev Tools, Agentic Workflows, OpenAI vs Anthropic
Timestamps:
00:00 – Intro: The Ultimate Product Showdown: GPT-5.3 Codex vs. Claude Opus 4.6
01:08 – Round 1: "The Doer" – GPT-5.3 Codex (Speed, Execution, & The Terminal)
02:40 – Round 2: "The Thinker" – Claude Opus 4.6 (Depth, Reasoning, & Architecture)
03:54 – Round 3: Real-World Use Case – The "Swarm CRM" (Orchestrating Agents)
05:01 – Round 4: The Vibe Check – "Vibe Coding" vs. "Strict Engineering"
06:21 – The Verdict: Why You Need Both (The Doer to Build, The Thinker to Plan)
07:22 – Outro: The Era of Orchestration (Don't Just Code, Manage)
In this episode, Andreas Klinger, founder and General Partner of PROTOTYPE, joins as a guest. Andreas brings deep experience from the US tech scene (including AngelList, Product Hunt, and On Deck) and today focuses on investments in Europe's DeepTech sector. He talks about the challenges of the European startup ecosystem, the need for a pan-European company structure (EU Inc.), the most exciting technologies in robotics and manufacturing, and why now is the best time to found a robotics startup. Andreas also shares insights into his investment approach, Europe's biggest problems, and why he considers political engagement essential to making the European tech ecosystem competitive in the long run.
What you'll take away from the episode:
Europe's startup challenges: why fragmented markets, missing standards, and weak capital structures hold back growth.
EU Inc. as a solution: Andreas explains how a pan-European company structure could revolutionize founding and investing in Europe.
Why DeepTech is Europe's strength: with a focus on robotics, manufacturing, and frontier tech, Europe has the opportunity to take on a global leadership role.
Tech trends of the future: from autonomous tractors to small robotic cells for production, Andreas shows how advances in computer vision, reasoning, and hardware are changing industry.
Why 2026 is the ideal time for robotics startups: thanks to technological breakthroughs in AI and manufacturing, now is the perfect time to get into robotics.
The potential of hardware startups: despite higher upfront costs, hardware startups often offer greater long-term competitive advantages and larger market opportunities.
Andreas' appeal to founders: focus on innovative and unconventional ideas that have become possible through technological progress.
EVERYTHING ABOUT UNICORN BAKERY: https://stan.store/fabiantausch
More about Andreas:
LinkedIn: https://de.linkedin.com/in/andreasklinger
Website: https://www.prototypecap.com/
Join our Founder Tactics Newsletter: twice a week, you get the tactics of the world's best founders delivered straight to your inbox: https://www.tactics.unicornbakery.de/
Chapters:
(00:00:00) Intro: Europe's role in a global tech world
(00:02:37) The challenges of the European startup ecosystem
(00:04:49) Why pan-European standards are missing and how EU Inc. aims to change that
(00:09:19) EU Inc.: how a unified European company structure could foster innovation
(00:13:00) Europe vs. the USA: what does the US do better?
(00:17:27) Political engagement: why Andreas advocates for EU Inc.
(00:20:59) PROTOTYPE: focus on DeepTech, robotics, and manufacturing
(00:26:28) Why 2026 is the best time to found a robotics startup
(00:32:12) How PROTOTYPE supports and finances hardware startups
(00:37:16) Sunrise, Voltrack, and Sensmoor: examples of exciting DeepTech startups
(00:44:17) Breakthroughs in robotics: from computer vision to autonomous machines
(00:51:29) The biggest differences between software and hardware startups
(00:56:48) Why Europe's fragmentation remains the biggest obstacle
(01:00:00) Closing: opportunities for European startups and Andreas' appeal to founders
The Rush Hour is back with a loaded afternoon update. We break down new information in the Blake Lively vs. Justin Baldoni legal battle that could seriously tilt the case in Baldoni's favor—and why the momentum may be shifting fast. Plus, new detention centers are going up, and we explain why this is a full-blown five-alarm fire from a humanitarian perspective, what's being overlooked, and why the alarm bells should be ringing right now. We also unpack the National Prayer Breakfast, where Donald Trump escalates his attacks—this time going after Congressman Thomas Massie, Democrats, and voter ID laws—before being publicly confronted by a reverend to his face in a moment that's now going viral. Fast-moving news, legal analysis, politics, and culture—this is the Rush Hour afternoon update. Sponsored by Gobymeds. Go to gobymeds dot com code rushhour for $50 off
Blitzy founders Brian and Sid break down how their “infinite code context” system lets AI autonomously complete over 80% of major enterprise software projects in days. They dive into their dynamic agent architecture, how they choose and cross-check different models, and why they prioritize advances in AI memory over fine-tuning. The conversation also covers their 20¢/line pricing model, the path to 99%+ autonomous project completion, and what this all means for the future software engineering job market. Sponsors: Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive CHAPTERS: (00:00) About the Episode (03:02) AGI effects without AGI (07:07) Domain-specific context engineering (16:54) Dynamic harness and evals (Part 1) (17:00) Sponsors: Blitzy | Tasklet (20:00) Dynamic harness and evals (Part 2) (30:42) Graphs, RAG, and memory (Part 1) (30:49) Sponsor: Serval (32:26) Graphs, RAG, and memory (Part 2) (41:17) Model zoo and memory (50:07) Planning, scaling, and parallelism (56:13) Pricing, onboarding, and autonomy (01:04:24) Closing the last 20% (01:12:34) Strange behaviors and judges (01:22:23) Reasoning budgets and autonomy (01:33:36) Fine-tuning, benchmarks, and training (01:42:31) Securing AI-generated code (01:49:52) Future of software work (01:57:05) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Hosted by Pastor Ed Taylor. Resource mentioned was "Reasoning from the Scriptures with Jehovah's Witnesses," by Ron Rhodes.
Calvary Live is an outreach ministry of GraceFM at Calvary Church in Aurora, Colorado.
Pastor Ed Taylor is the Senior Pastor of Calvary Church – you can find more about him at edtaylor.org.
If you like what you hear on Calvary Live, don't forget to follow us and share it with your friends and family!
Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility? In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City, St George's University of London, and one of the early pioneers of neurosymbolic AI. Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with: if scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems?
Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors.
We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world.
A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again. This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands.
We also look ahead. From domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems. If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next, and what kind of AI do we actually want to live with?
Useful Links:
Neurosymbolic AI (NeSy) Association website
Artur's personal webpage on the City, St George's University of London page
Co-authored book titled "Neural-Symbolic Learning Systems"
The article about neurosymbolic AI and the road to AGI
The Accountability in AI article
Reasoning in Neurosymbolic AI
Neurosymbolic Deep Learning Semantics
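To make the neural-plus-symbolic division of labor concrete, here is a toy sketch in a hypothetical loan-screening setting: a learned scorer proposes, while explicit symbolic rules veto with human-readable reasons. Everything here is an illustrative assumption; it shows the general pattern only, not Garcez's systems or the NeSy cycle itself.

```python
def neural_score(application: dict) -> float:
    # Stand-in for a learned model's approval probability.
    return 0.5 + (0.3 if application["income"] > 50_000 else 0.0)

RULES = [
    # Explicit, auditable constraints; each returns a reason when it vetoes.
    lambda a: "applicant is a minor" if a["age"] < 18 else None,
    lambda a: "consent not given" if not a["consent"] else None,
]

def decide(application: dict) -> tuple[bool, str]:
    # Symbolic layer first: any rule violation is a hard, explainable veto.
    for rule in RULES:
        reason = rule(application)
        if reason is not None:
            return False, f"rejected by rule: {reason}"
    # Otherwise the neural component decides, and we can report its score.
    score = neural_score(application)
    return score > 0.7, f"neural score {score:.2f} drove the decision"

print(decide({"age": 30, "consent": True, "income": 60_000}))
print(decide({"age": 16, "consent": True, "income": 90_000}))
```

The point of the hybrid is visible in the return value: every decision carries either a rule citation or a score, so the system can be questioned and audited rather than treated as a black box.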
This week's podcast is all about autonomous systems, advanced machine intelligence, and how Palladyne AI is revolutionizing how robots and drones perceive and interact with the world. My guest is Palladyne AI Chief Technology Officer and Co-Founder Dr. Denis Garagić! Denis and I discuss the secrets behind their proprietary technology, what sets them apart from other AI solutions that power robots and drones, their unique approach to AI challenges, and how Palladyne AI approaches the ethical considerations of deploying its AI technology.
In this episode, I go through Hebrews 13:5-6, which says (ESV), "5 Keep your life free from love of money, and be content with what you have, for he has said, 'I will never leave you nor forsake you.' 6 So we can confidently say, 'The Lord is my helper; I will not fear; what can man do to me?'"
There are three big takeaways from this text that I discuss:
(1) We should make sure that our life and character are free from the love of money.
(2) Instead of having a love for money and more money, we should be content with what we have, with what God has blessed us with.
(3) As a remedy for the love of money, we should trust in the Lord, who has promised never to forsake us.
I hope this episode, which was inspired by a Sunday school lesson I recently taught, will be a blessing to you.
#Christian #Bible #biblestudy #Scripture #ChristianTeaching #faith #christianlife #christianpodcast #money
--------------------------------LINKS---------------------------------
Science Faith & Reasoning podcast link: https://podcasters.spotify.com/pod/show/science-faith-reasoning
Coffee with John Calvin podcast link (an SFR+ production hosted by Daniel Faucett): https://open.spotify.com/show/5UWb8SavK17HO8ERorHPYN
Learning the Fundaments (an SFR+ production hosted by Shepard Merritt): https://creators.spotify.com/pod/profile/shep304/
-----------------------------CONNECT------------------------------
https://www.scifr.com
Instagram: https://www.instagram.com/sciencefaithandreasoning
X: https://twitter.com/SFRdaily
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin's recent work on making small language models reason more effectively. We discuss how high-quality, diverse data plays a central role in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and its impacts on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to “think” before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment—ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year. The complete show notes for this episode can be found at https://twimlai.com/go/761.
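As a loose illustration of the diversity-filtering idea behind synthetic data generation: the sketch below substitutes a plain embedding-similarity filter for the gradient-based method the episode describes, so it conveys only the general idea of rejecting overrepresented samples. The toy `embed` function is an assumption for self-containment, not the "Prismatic Synthesis" method itself.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy deterministic bag-of-words "embedding"; real use would call a
    # sentence encoder. Only here so the sketch is self-contained.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def diversity_filter(samples: list[str], threshold: float = 0.9) -> list[str]:
    # Greedily keep a sample only if it is not a near-duplicate of anything
    # already kept, so overrepresented regions of the data get thinned out.
    kept, kept_vecs = [], []
    for s in samples:
        v = embed(s)
        if all(cosine(v, u) < threshold for u in kept_vecs):
            kept.append(s)
            kept_vecs.append(v)
    return kept

print(diversity_filter(["add x and x", "add x and x", "integrate x squared"]))
```

Filters like this connect directly to the "Artificial Hivemind" concern above: if training data collapses toward a few overrepresented modes, model outputs tend to collapse with it.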
Metaphors matter. They enliven our speech and our prose; they animate our arguments and stir our passions. Some metaphors power political movements; others propel scientific revolutions. These little figures of speech delight, provoke, captivate, shock, amuse, and galvanize us. In one way or another, metaphors just seem to help us make sense of a messy world. But how do they do all this? Whence their peculiar powers? What does it say about the human mind that we just can't escape our metaphors, and frankly don't want to?
My guest today is Dr. Stephen Flusberg. Steve is an Assistant Professor of Psychology at Vassar College, where he directs the Framing, Reasoning, And Metaphor (FRAME) Lab. Here, Steve and I talk about what metaphors are and why we're so drawn to them. We discuss some of the misleading ideas about metaphor you may remember from middle school literature class. We consider why some metaphors work and others flop. We talk about the metaphors we use for climate change and the prevalence and potency of war metaphors across different realms of public discourse. We consider how metaphor operates in science and in scientific theorizing. Finally, we talk about the question of whether there are some ideas that we simply can't grasp literally, concepts we can only approach through metaphor.
Along the way, Steve and I talk about: "aura farming"; nautical metaphors and textile metaphors; the outmoded idea that metaphors are mere adornments; metaphor versus analogy; dead metaphors and how to resuscitate them; shadows and footprints; Dan Dennett's technique of metaphorical triangulation; and the brain-as-computer metaphor, and whether it is actually a metaphor.
Alright, friends, this is a fun one. Steve has spent his entire career exploring this fascinating terrain, and, as you'll see, he's a lively and affable guide. Hope you enjoy it as much as I did. Without further ado, here's my conversation with Dr. Steve Flusberg.
Notes
3:00 – For more on "beige flags," see here. For more on "aura farming," see here.
8:00 – For an overview of metaphor in communication and thought, see here for an article by Dr. Flusberg and co-authors.
18:00 – The "life is a journey" (or "career is a journey") metaphor, as well as other examples we discuss, are treated at length in the classic book, Metaphors We Live By.
24:00 – For a detailed academic treatment of the relationship between metaphor and analogy, see here.
32:00 – Some of the best-studied "orientational metaphors" are those found in the domain of time. See here and here.
37:00 – For more on metaphors used in discussions of environmental issues, see a paper by Dr. Flusberg and a colleague here.
42:00 – For more on the idea of the "climate shadow," see here.
46:00 – The study by Dr. Flusberg and colleagues comparing the effects of race and war metaphors for climate change.
55:00 – The article by Dr. Flusberg and colleagues on the role of war metaphors across different areas of public discourse.
1:04:00 – For an influential discussion of the role of metaphors and analogies in science, see here. For Kensy's take on Darwin's metaphors for natural selection, see here. For discussion of whether the "brain-as-computer" metaphor is actually a metaphor, see here and here.
1:12:00 – For more on the history of metaphors in the English language, including analyses of which source domains have historically been the most fruitful, see here.
1:14:00 – For discussion of the (disputed) idea of "dead metaphors," see here and here.
1:17:00 – The idea of "theory-constitutive metaphors" in science is discussed in a chapter by Richard Boyd in this book.
1:19:00 – For a preview of Dr. Flusberg's in-progress paper on the philosopher Daniel Dennett and his technique of "metaphorical triangulation," see here.
1:33:00 – For the (extremely short) Borges story on a map that is too accurate to be useful, see here.
Recommendations
Metaphors We Live By, by George Lakoff & Mark Johnson
Consciousness Explained, by Daniel Dennett
Three Sheets to the Wind, by Cynthia Barrett
Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd.
Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!
We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Bluesky (@manymindspod.bsky.social).
Use code coolworldspodcast at https://incogni.com/coolworldspodcast to get an exclusive 60% off. In this week's episode, David is joined by Nick Bostrom, Professor of Philosophy at Oxford University and author of several books including "Deep Utopia", "Superintelligence" and "Anthropic Reasoning". To support this podcast and our research lab, head to https://coolworldslab.com/support Cool Worlds Podcast Theme by Hill [https://open.spotify.com/artist/1hdkvBtRdOW4SPsnxCXOjK]
Chat GPT gave a good description of the logic of the brain. One thing I wonder now is that it might be a good idea to, if I currently don't know it, make the brain's logic explicit to avoid resistance of this kind in the future. If my assumption is true, what kind of logic does the brain use? What are some ways to see that logic play out, and then integrate that logic for future nonresistances?

Your assumption is true, and you're also right that making the brain's logic explicit enough (not rigid, not dogmatic) can prevent future resistance. The trick is to make it descriptive, not prescriptive. You're not giving the brain rules to obey; you're learning the rules it already follows. Below is a clean, usable model that stays grounded in observation rather than theory.

1. The brain's core logic (plainly stated)
At the most fundamental level, the brain operates by this logic: reduce surprise while conserving energy, using the best available model. Everything else flows from that. This means the brain prioritizes:
1. Prediction over truth
2. Relevance over completeness
3. Speed over accuracy (unless accuracy matters)
4. Safety over comfort
5. Energy efficiency over elegance
Once you accept that as the objective function, behavior that once seemed "irrational" becomes consistent.

2. The brain is a prediction engine, not a reasoning engine
The brain doesn't ask "What is true?", "What is correct?", or "What is logically valid?" It asks "What usually happens next?", "What matters right now?", and "What would reduce uncertainty fastest?" Reasoning is a tool it uses sometimes, not its default mode. This explains why habits persist, why emotions override logic, why suppression backfires, why explanation can interfere with perception, and why assumptions dominate when unseen.

3. The brain's actual logical primitives
Instead of syllogisms, the brain works with:
A. Prediction error
• The difference between what was expected and what happened.
• Large error = attention + learning
• Small error = stability
You've seen this when novelty grabs attention, repetition fades, and surprises stick in memory.
B. Salience weighting
• What is tagged as important gets resources.
• Importance ≠ truth
• Importance = relevance to goals, safety, identity, or uncertainty
This explains why suppression increases intensity, why fear dominates attention, and why unseen assumptions control perspective.
C. Energy minimization
• The brain prefers the cheapest model that works.
• It resists unnecessary updates.
• It defaults to habits unless proven costly.
This explains resistance to change, why insight doesn't instantly alter behavior, and why repetition works better than force.
D. Context sensitivity
• The brain does not store rules globally.
• It stores them conditionally.
• Same stimulus, different context → different response.
This explains why personality labels fail, why techniques work sometimes but not always, and why local/global confusion creates error.

4. How resistance happens (mechanically)
Resistance appears when conscious thought violates the brain's logic. Examples:
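One way to see the prediction-error primitive (section 3A) play out is the classic delta rule from learning theory, sketched below purely as an illustration of the chat's own description: updates are proportional to surprise, so big prediction errors drive big revisions while well-predicted events change almost nothing. This is an added sketch, not part of the original conversation.

```python
def update_expectation(expected: float, observed: float, rate: float = 0.3) -> float:
    # Delta rule: revise the expectation in proportion to the surprise.
    prediction_error = observed - expected
    return expected + rate * prediction_error

expectation = 0.0
for outcome in [1.0, 1.0, 1.0, 1.0, 0.0]:  # a repeated event, then a surprise
    expectation = update_expectation(expectation, outcome)
    print(f"expectation: {expectation:.2f}")
```

Run it and you see the pattern from the chat: the expectation converges as repetition makes the event predictable, then the final surprising outcome produces the largest single update.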
Dane D at his best on The Gunmetal Armory talking ambush tactics. Become a supporter of this podcast: https://www.spreaker.com/podcast/prepper-broadcasting-network--3295097/support
BECOME A SUPPORTER FOR AD FREE PODCASTS, EARLY ACCESS & TONS OF MEMBERS ONLY CONTENT!
Get Prepared with Our Incredible Sponsors!
Survival Bags, kits, gear: www.limatangosurvival.com
The Prepper's Medical Handbook: Build Your Medical Cache – Welcome PBN Family
The All In One Disaster Relief Device! www.hydronamis.com
Join the Prepper Broadcasting Network for expert insights on #Survival, #Prepping, #SelfReliance, #OffGridLiving, #Homesteading, #Homestead building, #SelfSufficiency, #Permaculture, #OffGrid solutions, and #SHTF preparedness. With diverse hosts and shows, get practical tips to thrive independently – subscribe now!
Well folks, who doesn't love some haunted hospital stories on a snowy dark night? Oh, you do? Then you're in luck this week. I am being joined by Jenn Johnson: nurse, author, and paranormal experiencer. Jennifer wanted to come on and tell us about how she deals with spirits in the hospital, especially in the ER. We talked about patients seeing God, hospital policies, ghost children, being an empath, and much more. Come enjoy the hospitality of this fun episode!
Jenn's Website: https://www.nursejenn.ca/
Uncensored, Untamed & Unapologetic U^3 Podcast Collective: https://www.facebook.com/groups/545827736965770/?ref=share
Tiktok: https://www.tiktok.com/@juggalobastardpodcasts?is_from_webapp=1&sender_device=pc
YouTube: https://www.youtube.com/channel/UC8xJ2KnRBKlYvyo8CMR7jMg
Robots aren't just software. They're AI in the physical world. And that changes everything.
In this episode of TechFirst, host John Koetsier sits down with Ali Farhadi, CEO of the Allen Institute for AI, to unpack one of the biggest debates in robotics today: is data enough, or do robots need structured reasoning to truly understand the world?
Ali explains why physical AI demands more than massive datasets, how concepts like reasoning in space and time differ from language-based chain-of-thought, and why transparency is essential for safety, trust, and human-robot collaboration. We dive deep into MolmoAct, an open model designed to make robot decision-making visible, steerable, and auditable, and talk about why open research may be the fastest path to scalable robotics.
This conversation also explores:
• Why reasoning looks different in the physical world
• How robots can project intent before acting
• The limits of "data-only" approaches
• Trust, safety, and transparency in real-world robotics
• Edge vs cloud AI for physical systems
• Why open-source models matter for global AI progress
If you're interested in robotics, embodied AI, or the future of intelligent machines operating alongside humans, this episode is a must-watch.
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning: watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more!
We discuss:
- Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
- The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
- Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
- On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience; "humans learn by making mistakes, not by copying"
- Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
- The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
- Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
- Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix; "the model is better than me at this"
- The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
- DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
- Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different: you hit the shuttlecock and hear glass shatter, cause and effect are too far apart"
- The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
- Why ideas still matter: "the last five years weren't just blind scaling: transformers, pre-training, RL, self-consistency, all had to play well together to get us here"
- Gemini Singapore: hiring for RL and reasoning researchers, looking for track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier
– Yi Tay
Google DeepMind: https://deepmind.google
X: https://x.com/YiTayML
Chapters
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
Victorious Living
“When did the practice of Eucharistic adoration start?” This question opens a discussion on the historical roots of this cherished devotion, alongside inquiries about the nature of the Eucharist on Holy Thursday, the nuances of language in John 6 regarding the act of eating, and the significance of the Eucharist as a sacrifice.
Join the Catholic Answers Live Club Newsletter
Invite our apologists to speak at your parish! Visit Catholicanswersspeakers.com
Questions Covered:
03:34 – When did the practice of Eucharistic adoration start?
13:35 – Is the Eucharist given on Holy Thursday the same as what we have at Mass now? Because on Holy Thursday he had not yet died and risen, so how could it be the same?
17:23 – If John 6 uses two different words for eat, one of which indicates chewing or gnawing, why don't we see that in the English translations?
29:23 – The English word “this” in the words of institution seems vague to me. Why isn't there a more specific word? Shouldn't the words indicate exactly what “this” is?
36:29 – Can you explain the importance of the Eucharist as a sacrifice?
45:40 – Wouldn't Jesus' body have to be omnipresent to be able to be really present at Masses all around the world? I read this question in the book “Reasoning from the Scriptures with Catholics” and am wondering how to answer.
51:48 – Why do some parishes not distribute the blood of Jesus at Communion?
Episode 144
Happy New Year! This is one of my favorite episodes of the year: for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year's State of AI Report.
If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you.
You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com.
Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
Outline
* (00:00) Intro
* (00:44) Air Street Capital and Nathan world
* Nathan's path from cancer research and bioinformatics to AI investing
* The "evergreen thesis" of AI from niche to ubiquitous
* Portfolio highlights: Eleven Labs, Synthesia, Crusoe
* (03:44) Geographic flexibility: Europe vs. the US
* Why SF isn't always the best place for original decisions
* Industry diversity in New York vs. San Francisco
* The Munich Security Conference and Europe's defense pivot
* Playing macro games from a European vantage point
* (07:55) VC investment styles and the "solo GP" approach
* Taste as the determinant of investments
* SF as a momentum game with small information asymmetry
* Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering
* Finding entrepreneurs who "can't do anything else"
* (10:44) State of AI progress in 2025
* Momentous progress in writing, research, computer use, image, and video
* We're in the "instruction manual" phase
* The scale of investment: private markets, public markets, and nation states
* (13:21) Range of outcomes and what "going bad" looks like
* Today's systems are genuinely useful; worst case is a valuation problem
* Financialization of AI buildouts and GPUs
* (14:55) DeepSeek and China closing the capability gap
* Seven-month lag analysis (Epoch AI)
* Benchmark skepticism and consumer preferences ("Coca-Cola vs. Pepsi")
* Hedonic adaptation: humans reset expectations extremely quickly
* Bifurcation of model companies toward specific product bets
* (18:29) Export controls and the "evolutionary pressure" argument
* Selective pressure breeds innovation
* Chinese companies rushing to public markets (Minimax, ZAI)
* (21:30) Reasoning models and test-time compute
* Chain of thought faithfulness questions
* Monitorability tax: does observability reduce quality?
* User confusion about when models should "think"
* AI for science: literature agents, hypothesis generation
* (23:53) Chain of thought interpretability and safety
* Anthropomorphization concerns
* Alignment faking and self-preservation behaviors
* Cybersecurity as a bigger risk than existential risk
* Models as payloads injected into critical systems
* (27:26) Commercial traction and AI adoption data
* Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)
* Average contract values up to $530K from $39K
* State of AI survey: 92% report productivity gains
* The "slow takeoff" consensus and human inertia
* Use cases: meeting notes, content generation, brainstorming, coding, financial analysis
* (32:53) The industrial era of AI
* Stargate and XAI data centers
* Energy infrastructure: gas turbines and grid investment
* Labs need to own models, data, compute, and power
* Poolside's approach to owning infrastructure
* (35:40) Venture capital in the age of massive GPU capex
* The GP lives in the present, the entrepreneur in the future, the LP in the past
* Generality vs. specialism narratives
* "Two or 20": management fees vs. carried interest
* Scaling funds to match entrepreneur ambitions
* (40:10) NVIDIA challengers and returns analysis
* Chinese challengers: 6x return vs. 26x on NVIDIA
* US challengers: 2x return vs. 12x on NVIDIA
* Grok acquired for $20B; Samba Nova markdown to $1.6B
* "The tide is lifting all boats": demand exceeds supply
* (44:06) The hardware lottery and architecture convergence
* Transformer dominance and custom ASICs making a comeback
* NVIDIA still 90–95% of published AI research
* (45:49) AI regulation: Trump agenda and the EU AI Act
* Domain-specific regulators vs. blanket AI policy
* State-level experimentation creates stochasticity
* EU AI Act: "born before GPT-4, takes effect in a world shaped by GPT-7"
* Only three EU member states compliant by late 2025
* (50:14) Sovereign AI: what it really means
* True sovereignty requires energy, compute, data, talent, chip design, and manufacturing
* The US is sovereign; the UK by itself is not
* Form alliances or become world-class at one level of the stack
* ASML and the Netherlands as an example
* (52:33) Open weight safety and containment
* Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance
* "Pandora's box is open": containment on distribution, not weights
* Leak risk: the most vulnerable link is often human
* Developer–policymaker communication and regulator upskilling
* (55:43) China's AI safety approach
* Matt Sheehan's work on Chinese AI regulation
* Safety summits and China's participation
* New Chinese policies: minor modes, mental health intervention, data governance
* UK's rebrand from "safety" to "security" institutes
* (58:34) Prior predictions and patterns
* Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games
* (59:43) 2026 Predictions
* A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)
* Data center NIMBYism influencing midterm politics
* (01:01:01) Closing
Links and Resources
Nathan / Air Street Capital
* Air Street Capital
* State of AI Report 2025
* Air Street Press – essays, analysis, and the Guide to AI newsletter
* Nathan on Substack
* Nathan on Twitter/X
* Nathan on LinkedIn
From Air Street Press (mentioned in episode)
* Is the EU AI Act Actually Useful? – by Max Cutler and Nathan Benaich
* China Has No Place at the UK AI Safety Summit (2023) – by Alex Chalmers and Nathan Benaich
Research & Analysis
* Epoch AI: Chinese AI Models Lag US by 7 Months – the analysis referenced on the US-China capability gap
* Sara Hooker: The Hardware Lottery – the essay on how hardware determines which research ideas succeed
* Matt Sheehan: China's AI Regulations and How They Get Made – Carnegie Endowment
Companies Mentioned
* Eleven Labs – AI voice synthesis (Air Street portfolio)
* Synthesia – AI video generation (Air Street portfolio)
* Crusoe – clean compute infrastructure (Air Street portfolio)
* Poolside – AI for code (Air Street portfolio)
* DeepSeek – Chinese AI lab
* Minimax – Chinese AI company
* ASML – semiconductor equipment
Other Resources
* Search Engine Podcast: Data Centers (Part 1 & 2) – PJ Vogt's two-part series on XAI data centers and the AI financing boom
* RAAIS Foundation – Nathan's AI research and education charity
Get full access to The Gradient at thegradientpub.substack.com/subscribe
Nathan Benaich is the founder of Air Street Capital and author of the State of AI Report. Now in its eighth year, the report is a year-long effort to track the biggest things happening in AI across research, industry, politics, and safety.

This episode covers the biggest takeaways from the latest report: the rise of reasoning, the surge in China's open source models, where AI is working in practice, the rise of sovereign AI, where he thinks value will actually accrue over the long term, whether we're in an AI bubble, and how he's investing in AI today at Air Street.

Thanks to Nico at Adjacent and Dan at Michigan for helping brainstorm topics for Nathan.

Try Numeral, the end-to-end platform for sales tax and compliance: https://www.numeral.com
Sign up for Flex Elite with code TURNER, get $1,000: https://form.typeform.com/to/Rx9rTjFz

Timestamps:
(3:39) State of AI 2025
(6:22) Takeaway #1: Reasoning & tool calling
(13:01) Takeaway #2: Rise of Chinese open source
(15:25) Open vs. closed source models
(26:46) Takeaway #3: AI revenue is real
(27:51) Takeaway #4: Sovereign AI
(36:44) Are we in an AI bubble?
(59:23) Starting Air Street Capital
(1:05:18) Raising Fund 1
(1:16:20) Air Street portfolio strategy
(1:25:15) When and in whom Nathan decides to invest
(1:35:04) How important are AI benchmarks?
(1:39:31) When to train your own models
(1:45:56) Rise of European defense tech
(2:01:43) Nathan's personal AI stack
(2:07:32) Is niching down too risky?
(2:16:12) Nadal vs. Federer

Referenced:
State of AI Report: https://www.stateof.ai
The Thinking Game Documentary: https://www.youtube.com/watch?v=d95J8yzvjbQ
V7: https://www.v7labs.com

Follow Nathan
Twitter: https://x.com/nathanbenaich
LinkedIn: https://www.linkedin.com/in/nathanbenaich

Follow Turner
Twitter: https://twitter.com/TurnerNovak
LinkedIn: https://www.linkedin.com/in/turnernovak

Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/
What happens when the AI race stops being about size and starts being about sense? In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road.

Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles.

What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses.

There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure.

We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection.

So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means?

Useful Links
Connect with Wade Myers
Learn More About MythWorx

Thanks to our sponsors, Alcor, for supporting the show.
An important topic in Christian apologetics concerns miracles. Christianity is based on miraculous claims. And miraculous claims are still being made today by Christians. Is that good? Are they the same as the miracles of Bible times?

Feel free to email us with any questions at info@servingbb.org, or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
The industry has pivoted from scripting to automation to orchestration – and now to systems that can reason. Today we explore what AI agents mean for infrastructure with Chris Wade, Co-Founder and CTO of Itential. We also dive into the brownfield reality, the potential for vendor-specific LLMs trained on proprietary knowledge, and advice for the...
Hate to say it: you're not using ChatGPT right.