Capacity for consciously making sense of things
Well folks, who doesn't love some haunted hospital stories on a dark, snowy night? Oh, you do? Then you're in luck this week. I am being joined by Jenn Johnson: Nurse, Author, and Paranormal Experiencer. Jennifer wanted to come on and tell us about how she deals with spirits in the hospital, especially in the ER. We talked about patients seeing God, hospital policies, ghost children, being an empath, and much more. Come enjoy the hospitality of this fun episode! Jenn's Website: https://www.nursejenn.ca/ Uncensored, Untamed & Unapologetic U^3 Podcast Collective: https://www.facebook.com/groups/545827736965770/?ref=share Tiktok: https://www.tiktok.com/@juggalobastardpodcasts?is_from_webapp=1&sender_device=pc YouTube: https://www.youtube.com/channel/UC8xJ2KnRBKlYvyo8CMR7jMg
Robots aren't just software. They're AI in the physical world. And that changes everything. In this episode of TechFirst, host John Koetsier sits down with Ali Farhadi, CEO of the Allen Institute for AI, to unpack one of the biggest debates in robotics today: Is data enough, or do robots need structured reasoning to truly understand the world? Ali explains why physical AI demands more than massive datasets, how concepts like reasoning in space and time differ from language-based chain-of-thought, and why transparency is essential for safety, trust, and human–robot collaboration. We dive deep into MolmoAct, an open model designed to make robot decision-making visible, steerable, and auditable, and talk about why open research may be the fastest path to scalable robotics. This conversation also explores:
• Why reasoning looks different in the physical world
• How robots can project intent before acting
• The limits of “data-only” approaches
• Trust, safety, and transparency in real-world robotics
• Edge vs cloud AI for physical systems
• Why open-source models matter for global AI progress
If you're interested in robotics, embodied AI, or the future of intelligent machines operating alongside humans, this episode is a must-watch.
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning—watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more! We discuss: Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number) Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience—"humans learn by making mistakes, not by copying" Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else? Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data) Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix—"the model is better than me at this" The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state?
"Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up" DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (symmetric IDs for RecSys) and Spotify Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different—you hit the shuttlecock and hear glass shatter, cause and effect are too far apart" The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before Why ideas still matter: "the last five years weren't just blind scaling—transformers, pre-training, RL, self-consistency, all had to play well together to get us here" Gemini Singapore: hiring for RL and reasoning researchers, looking for track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier — Yi Tay Google DeepMind: https://deepmind.google X: https://x.com/YiTayML Chapters 00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team 00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes 00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini 00:21:33 Training IMO Cat: Four Captains Across Three Time Zones 00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks 00:36:29 AI Coding Assistants: From Lazy to Actually Useful 00:32:59 Reasoning, Chain of Thought, and Latent Thinking 00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima 00:55:04 Data Efficiency and World Models: The Next Frontier 01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs 01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium 01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets 01:28:49 Health, HRV, and Research Performance: The 23kg Journey
“When did the practice of Eucharistic adoration start?” This question opens a discussion on the historical roots of this cherished devotion, alongside inquiries about the nature of the Eucharist on Holy Thursday, the nuances of language in John 6 regarding the act of eating, and the significance of the Eucharist as a sacrifice. Join the Catholic Answers Live Club Newsletter Invite our apologists to speak at your parish! Visit Catholicanswersspeakers.com Questions Covered: 03:34 – When did the practice of Eucharistic adoration start? 13:35 – Is the Eucharist given on Holy Thursday the same as what we have at Mass now? Because on Holy Thursday he had not yet died and risen, so how could it be the same? 17:23 – If John 6 uses two different words for eat, one of which indicates chewing or gnawing, why don't we see that in the English translations? 29:23 – The English word “this” in the words of institution seems vague to me. Why isn't there a more specific word? Shouldn't the words indicate exactly what “this” is? 36:29 – Can you explain the importance of the Eucharist as a sacrifice? 45:40 – Wouldn't Jesus' body have to be omnipresent to be able to be really present at Masses all around the world? I read this question in the book “Reasoning from the Scriptures with Catholics” and am wondering how to answer. 51:48 – Why do some parishes not distribute the blood of Jesus at Communion?
Episode 144. Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year's State of AI Report. If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you. You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
Outline
* (00:00) Intro
* (00:44) Air Street Capital and Nathan world
* Nathan's path from cancer research and bioinformatics to AI investing
* The “evergreen thesis” of AI from niche to ubiquitous
* Portfolio highlights: Eleven Labs, Synthesia, Crusoe
* (03:44) Geographic flexibility: Europe vs. the US
* Why SF isn't always the best place for original decisions
* Industry diversity in New York vs. San Francisco
* The Munich Security Conference and Europe's defense pivot
* Playing macro games from a European vantage point
* (07:55) VC investment styles and the “solo GP” approach
* Taste as the determinant of investments
* SF as a momentum game with small information asymmetry
* Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering
* Finding entrepreneurs who “can't do anything else”
* (10:44) State of AI progress in 2025
* Momentous progress in writing, research, computer use, image, and video
* We're in the “instruction manual” phase
* The scale of investment: private markets, public markets, and nation states
* (13:21) Range of outcomes and what “going bad” looks like
* Today's systems are genuinely useful—worst case is a valuation problem
* Financialization of AI buildouts and GPUs
* (14:55) DeepSeek and China closing the capability gap
* Seven-month lag analysis (Epoch AI)
* Benchmark skepticism and consumer preferences (“Coca-Cola vs. Pepsi”)
* Hedonic adaptation: humans reset expectations extremely quickly
* Bifurcation of model companies toward specific product bets
* (18:29) Export controls and the “evolutionary pressure” argument
* Selective pressure breeds innovation
* Chinese companies rushing to public markets (Minimax, ZAI)
* (21:30) Reasoning models and test-time compute
* Chain of thought faithfulness questions
* Monitorability tax: does observability reduce quality?
* User confusion about when models should “think”
* AI for science: literature agents, hypothesis generation
* (23:53) Chain of thought interpretability and safety
* Anthropomorphization concerns
* Alignment faking and self-preservation behaviors
* Cybersecurity as a bigger risk than existential risk
* Models as payloads injected into critical systems
* (27:26) Commercial traction and AI adoption data
* Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)
* Average contract values up to $530K from $39K
* State of AI survey: 92% report productivity gains
* The “slow takeoff” consensus and human inertia
* Use cases: meeting notes, content generation, brainstorming, coding, financial analysis
* (32:53) The industrial era of AI
* Stargate and XAI data centers
* Energy infrastructure: gas turbines and grid investment
* Labs need to own models, data, compute, and power
* Poolside's approach to owning infrastructure
* (35:40) Venture capital in the age of massive GPU capex
* The GP lives in the present, the entrepreneur in the future, the LP in the past
* Generality vs. specialism narratives
* “Two and 20”: management fees vs. carried interest
* Scaling funds to match entrepreneur ambitions
* (40:10) NVIDIA challengers and returns analysis
* Chinese challengers: 6x return vs. 26x on NVIDIA
* US challengers: 2x return vs. 12x on NVIDIA
* Groq acquired for $20B; SambaNova markdown to $1.6B
* “The tide is lifting all boats”—demand exceeds supply
* (44:06) The hardware lottery and architecture convergence
* Transformer dominance and custom ASICs making a comeback
* NVIDIA still 90–95% of published AI research
* (45:49) AI regulation: Trump agenda and the EU AI Act
* Domain-specific regulators vs. blanket AI policy
* State-level experimentation creates stochasticity
* EU AI Act: “born before GPT-4, takes effect in a world shaped by GPT-7”
* Only three EU member states compliant by late 2025
* (50:14) Sovereign AI: what it really means
* True sovereignty requires energy, compute, data, talent, chip design, and manufacturing
* The US is sovereign; the UK by itself is not
* Form alliances or become world-class at one level of the stack
* ASML and the Netherlands as an example
* (52:33) Open weight safety and containment
* Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance
* “Pandora's box is open”—containment on distribution, not weights
* Leak risk: the most vulnerable link is often human
* Developer–policymaker communication and regulator upskilling
* (55:43) China's AI safety approach
* Matt Sheehan's work on Chinese AI regulation
* Safety summits and China's participation
* New Chinese policies: minor modes, mental health intervention, data governance
* UK's rebrand from “safety” to “security” institutes
* (58:34) Prior predictions and patterns
* Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games
* (59:43) 2026 Predictions
* A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)
* Data center NIMBYism influencing midterm politics
* (01:01:01) Closing
Links and Resources
Nathan / Air Street Capital
* Air Street Capital
* State of AI Report 2025
* Air Street Press — essays, analysis, and the Guide to AI newsletter
* Nathan on Substack
* Nathan on Twitter/X
* Nathan on LinkedIn
From Air Street Press (mentioned in episode)
* Is the EU AI Act Actually Useful? — by Max Cutler and Nathan Benaich
* China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan Benaich
Research & Analysis
* Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US–China capability gap
* Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed
* Matt Sheehan: China's AI Regulations and How They Get Made — Carnegie Endowment
Companies Mentioned
* Eleven Labs — AI voice synthesis (Air Street portfolio)
* Synthesia — AI video generation (Air Street portfolio)
* Crusoe — clean compute infrastructure (Air Street portfolio)
* Poolside — AI for code (Air Street portfolio)
* DeepSeek — Chinese AI lab
* Minimax — Chinese AI company
* ASML — semiconductor equipment
Other Resources
* Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt's two-part series on XAI data centers and the AI financing boom
* RAAIS Foundation — Nathan's AI research and education charity
Get full access to The Gradient at thegradientpub.substack.com/subscribe
Nathan Benaich is the founder of Air Street Capital and author of the State of AI Report. Now in its eighth year, the report is a year-long effort covering the biggest things happening in AI, across research, industry, politics, and safety. This episode covers the biggest takeaways from the latest report, like the rise in reasoning, the surge in China's open source models, where AI is working in practice, the rise of sovereign AI, where he thinks value will actually accrue over the long-term, if we're in an AI bubble, and how he's investing in AI today at Air Street. Thanks to Nico at Adjacent and Dan at Michigan for helping brainstorm topics for Nathan. Try Numeral, the end-to-end platform for sales tax and compliance: https://www.numeral.com Sign up for Flex Elite with code TURNER, get $1,000: https://form.typeform.com/to/Rx9rTjFz Timestamps: (3:39) State of AI 2025 (6:22) Takeaway #1: Reasoning & tool calling (13:01) Takeaway #2: Rise of Chinese open source (15:25) Open vs closed source models (26:46) Takeaway #3: AI revenue is real (27:51) Takeaway #4: Sovereign AI (36:44) Are we in an AI bubble? (59:23) Starting Air Street Capital (1:05:18) Raising Fund 1 (1:16:20) Air Street portfolio strategy (1:25:15) When and who Nathan decides to invest (1:35:04) How important are AI benchmarks? (1:39:31) When to train your own models (1:45:56) Rise of European defense tech (2:01:43) Nathan's personal AI stack (2:07:32) Is niching down too risky? (2:16:12) Nadal vs Federer Referenced: State of AI Report: https://www.stateof.ai The Thinking Game Documentary: https://www.youtube.com/watch?v=d95J8yzvjbQ V7: https://www.v7labs.com Follow Nathan Twitter: https://x.com/nathanbenaich LinkedIn: https://www.linkedin.com/in/nathanbenaich Follow Turner Twitter: https://twitter.com/TurnerNovak LinkedIn: https://www.linkedin.com/in/turnernovak Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/
In this episode of Founded & Funded, Madrona Partner Jon Turow hosts a live conversation with Carina Hong, Founder & CEO of Axiom, and Byron Cook, VP & Distinguished Scientist at AWS. Carina is building foundation models trained on verified proofs instead of human-written reasoning. Byron leads AWS's automated reasoning group, which secures massive infrastructure with mathematically proven systems. During this live IA Summit conversation, they explore what it means to build AI that actually reasons and why verified intelligence is critical for domains where being right really matters. They dive into: 1) Why today's models fail at objective truth in high-stakes domains 2) How verification bridges the gap between correctness and scale 3) What superintelligent reasoning engines might unlock 4) The future of AI in finance, chips, healthcare, and beyond This is a must-watch for any founder or builder working in AI, infrastructure, or high-consequence systems. Full Transcript: https://www.madrona.com/can-we-trust-ai-the-future-of-verified-reasoning-in-high-stakes-systems Chapters: (00:00:00) Introduction (00:02:01) Meet Carina Hong & Byron Cook (00:04:07) Why Next-Gen Reasoning Models? (00:05:59) Objective Truth & Verification in AI (00:08:03) Formal Verification at Amazon (00:09:40) Making Proof Tools Usable (00:11:14) Proofs vs. Bugs: The Mathematical Approach (00:13:25) The Market for Reasoning & Scarcity (00:15:41) From Scarcity to Abundance in Reasoning (00:17:47) AI Mathematicians & Scientific Breakthroughs (00:19:59) Collaboration: AI & Human Experts (00:22:16) Lowering the Cost of Creativity & Experimentation (00:24:08) Broad Applications of Mathematical Reasoning (00:25:28) Balancing Theory & Practice in AI (00:28:14) Customer-Driven Investment in Formal Methods (00:30:31) Building Toward Superintelligent Reasoning Engines
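To make "verified proofs" concrete, here is a tiny Lean 4 sketch of the kind of machine-checkable statement a proof assistant's kernel either accepts or rejects, so "correct" is objective rather than judged by a human. It is purely illustrative and says nothing about Axiom's actual training corpus or AWS's internal tooling.

```lean
-- The kernel checks these; there is no "sounds plausible" outcome.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A proof by induction, spelled out rather than cited from the library:
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```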
What happens when the AI race stops being about size and starts being about sense? In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road. Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles. What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses. There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure. We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection. So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means? Useful Links Connect with Wade Myers Learn More About MythWorx Thanks to our sponsors, Alcor, for supporting the show.
An important topic in Christian apologetics concerns miracles. Christianity is based on miraculous claims. And miraculous claims are still being made today by Christians. Is that good? Are they the same as the miracles of Bible Times?-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
The industry has pivoted from scripting to automation to orchestration – and now to systems that can reason. Today we explore what AI agents mean for infrastructure with Chris Wade, Co-Founder and CTO of Itential. We also dive into the brownfield reality, the potential for vendor-specific LLMs trained on proprietary knowledge, and advice for the...
don't miss George's AIE talk: https://www.youtube.com/watch?v=sRpqPgKeXNk —- From launching a side project in a Sydney basement to becoming the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities—George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really? We discuss: The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard) The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias) The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron) The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents) Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future Token efficiency vs. 
turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions) V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models) — Artificial Analysis Website: https://artificialanalysis.ai George Cameron on X: https://x.com/grmcameron Micah Hill-Smith on X: https://x.com/_micah_h Chapters 00:00:00 Introduction: Full Circle Moment and Artificial Analysis Origins 00:01:08 Business Model: Independence and Revenue Streams 00:04:00 The Origin Story: From Legal AI to Benchmarking 00:07:00 Early Challenges: Cost, Methodology, and Independence 00:16:13 AI Grant and Moving to San Francisco 00:18:58 Evolution of the Intelligence Index: V1 to V3 00:27:55 New Benchmarks: Hallucination Rate and Omissions Index 00:33:19 Critical Point and Frontier Physics Problems 00:35:56 GDPVAL AA: Agentic Evaluation and Stirrup Harness 00:51:47 The Openness Index: Measuring Model Transparency 00:57:57 The Smiling Curve: Cost of Intelligence Paradox 01:04:00 Hardware Efficiency and Sparsity Trends 01:07:43 Reasoning vs Non-Reasoning: Token Efficiency Matters 01:10:47 Multimodal Benchmarking and Community Requests 01:14:50 Looking Ahead: V4 Intelligence Index and Beyond
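One concrete way to read the Omissions Index description above is as a scorer on a -100 to +100 scale where wrong answers cost what correct answers earn and abstentions are neutral. The weighting below is an assumption for illustration; only the range and the penalize-wrong/reward-abstain behavior come from the notes.

```python
def omissions_style_score(correct: int, incorrect: int, abstain: int) -> float:
    """Toy hallucination-style score in [-100, 100].

    Assumed weighting: +1 per correct answer, -1 per confident wrong
    answer, 0 per "I don't know". The real Artificial Analysis formula
    may differ; this only mirrors the described behavior.
    """
    total = correct + incorrect + abstain
    if total == 0:
        raise ValueError("no graded answers")
    return 100.0 * (correct - incorrect) / total

# A model that guesses everything and is half right scores 0.0;
# one that abstains on the half it doesn't know scores +50.0.
print(omissions_style_score(50, 50, 0))   # 0.0
print(omissions_style_score(50, 0, 50))   # 50.0
```

Under any weighting like this, abstaining dominates guessing wrong, which is why models tuned to say "I don't know" can lead the index without being the smartest.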
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Even if ChatGPT never existed, the tech giant NVIDIA would still be winning. The end of Moore's Law—says NVIDIA President, Founder, and CEO Jensen Huang—makes the shift to accelerated computing inevitable, regardless of any talk of an AI “bubble.” Sarah Guo and Elad Gil are joined by Jensen Huang for a wide-ranging discussion on the state of artificial intelligence as we begin 2026. Jensen reflects on the biggest surprises of 2025, including the rapid improvements in reasoning, as well as the profitability of inference tokens. He also talks about why AI will increase productivity without necessarily taking away jobs, and how physical AI and robotics can help to solve labor shortages. Finally, Jensen shares his 2026 outlook, including why he's optimistic about US-China relations, why open source remains essential for keeping the US competitive, and which sectors are due for their “ChatGPT moment.” Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @nvidia Chapters: 00:00 – Jensen Huang Introduction 00:17 – Biggest AI Surprises of 2025 04:12 – AI and Jobs: New Infrastructure and Demand for Skilled Labor 09:03 – Task vs. Purpose Framework in Labor 12:31 – Solving Labor Shortages with Robotics 15:14 – The Layer Cake of AI Technology 18:39 – The Importance of Open Source 21:52 – The Myth of “God AI” and Monolithic Models 23:54 – Addressing the “Doomer” Narrative and Regulation 29:25 – The Plummeting Cost of Compute and Tokenomics 35:09 – The Return to Research 37:49 – Future of Coding and Software Engineering 43:20 – The Industries Due For Their “ChatGPT” Moments 46:00 – The Evolution of Self-Driving Cars and Robotics 54:06 – Energy Demand and Growth for AI 58:49 – 2026 Outlook: US-China Relations and Geopolitics 1:04:43 – Is There An AI Bubble? 1:16:20 – Conclusion
Hate to say it: you're not using ChatGPT right.
Imagine an AI that doesn't just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world's first post-Transformer frontier model: BDH — the Dragon Hatchling architecture. Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway's architecture introduces true temporal reasoning and continual learning. We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can “get bored,” adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation
From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves. If you want a window into what comes after LLMs, this interview is essential. Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
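The "strengthen connections" idea is easiest to picture with a toy Hebbian update, where synapses between co-active neurons grow and idle ones fade. The sketch below is a generic illustration of activity-dependent plasticity with assumed rates, not Pathway's actual BDH update rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
w = np.zeros((n, n))        # synaptic weights between n toy neurons
eta, decay = 0.1, 0.02      # assumed learning and forgetting rates

for _ in range(200):
    x = (rng.random(n) < 0.3).astype(float)  # sparse activity pattern
    w += eta * np.outer(x, x)                # fire together, wire together
    w *= 1.0 - decay                         # idle links fade
    np.fill_diagonal(w, 0.0)                 # no self-connections

# Frequently co-active pairs keep strong weights: structure that persists
# across inputs, unlike a Transformer context that resets every prompt.
print(w.round(2))
```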
Many residential real estate appraisers remain deeply loyal to the GSE and AMC appraisal system—even as fees decline, turn times shrink, and professional judgment is increasingly replaced by checklists, models, and automation. This podcast explores why that loyalty exists and why it is so hard to let go. The answer is not ignorance or laziness. Most appraisers are rational professionals responding to incentives, habits, and identities built over decades. The GSE system provides structure, predictability, and clear rules. It tells appraisers what “good work” looks like and absorbs much of the responsibility when things go wrong. That feels safe. But that safety is an illusion. Over time, the same system that promises protection also treats appraisers as interchangeable parts, compresses fees, rewards speed over judgment, and steadily removes professional autonomy. Appraisers stay not because the system loves them back—but because leaving feels risky. Behavioral economics calls this loss aversion. Psychology calls it identity attachment. Most appraisers simply call it survival. This piece also examines why private appraisal work—such as expert witness assignments, litigation support, and consulting—feels intimidating. In private work, the appraiser is the form. Reasoning replaces checklists. Judgment replaces automation. That level of visibility requires confidence, education, and intellectual courage rarely taught in production environments. Ultimately, this podcast does not shame appraisers or demand change. Instead, it offers illumination. It invites appraisers to reflect honestly on who controls their work, their time, and their professional future. The lantern is lit. The choice, as always, belongs to the appraiser.
In this video, I explore a survey that was sent to over a thousand pastors and asked them about their views on Calvinism, as well as asking them to list their favorite pastors. No surprise, John Piper was the number one mentioned pastor of greatest influence for all pastors in the survey. Younger Christian leaders lean heavily in the Calvinistic direction, and elderly Christian leaders (those over 65 years of age) lean heavily in the Arminian direction.#jesus #apologetics #Christianity #Calvinism #johnpiper #reformed #scienceandfaith #christianpodcast --------------------------------LINKS---------------------------------Science Faith & Reasoning podcast link: https://podcasters.spotify.com/pod/show/science-faith-reasoning Coffee with John Calvin Podcast link (An SFR+ Production hosted by Daniel Faucett) https://open.spotify.com/show/5UWb8SavK17HO8ERorHPYN Learning the Fundaments (An SFR+ Production hosted by Shepard Merritt): https://creators.spotify.com/pod/profile/shep304/ -----------------------------CONNECT------------------------------https://www.scifr.com Instagram: https://www.instagram.com/sciencefaithandreasoning X: https://twitter.com/SFRdaily
From Berkeley robotics and OpenAI's 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI's reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn't change the world when o1 actually achieved it, how RL doesn't generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017–2022) and why most of it didn't pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory—where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity. We discuss: Ashvin's path: Berkeley robotics PhD → OpenAI 2017 intern (Dota era) → o1/o3 reasoning team → Cursor ML lead in three months Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman's take) The IOI Gold paradox: "If you told me we'd achieve IOI Gold in 2022, I'd assume we could all go on vacation—AI solved, no point working anymore. But life is still the same." The RL research era (2017–2022) and why most of it didn't pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize Inside the o1 origin story: a dozen people, conviction from Ilya and Jakob Pachocki that RL would work, small-scale prototypes producing "surprisingly accurate reasoning traces" on math, and first-principles belief that scaled The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, DataDog) living in the product Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20–25 person ML team with high-taste co-founders who code daily The next paradigm shift: continual learning with infinite memory—models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room) Why off-policy RL is unstable (Ashvin's favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice — Ashvin Nair Cursor: https://cursor.com X: https://x.com/ashvinnair_ Chapters 00:00:00 Introduction: From Robotics to Cursor via OpenAI 00:01:58 The Robotics to LLM Agent Transition: Why Code Won 00:09:11 RL Research 
Winter and Academic Overfitting 00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI 00:21:30 OpenAI's Reasoning Journey: From Codex to O1 00:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance 00:22:39 RL for Reasoning: The O-Series Conviction and Scaling 00:25:47 O1 to O3: Smooth Internal Progress vs External Hype Cycles 00:33:07 Why Cursor: Co-Designing Products and Models for Real Work 00:34:14 Composer and the Future: Online Learning Every Two Hours 00:35:15 Continual Learning: The Missing Paradigm Shift 00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable
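Since "why is off-policy RL unstable" is flagged above as Ashvin's favorite interview question, here is one textbook piece of the answer in toy form: importance-weighted off-policy estimates stay unbiased, but their variance explodes as the behavior policy drifts from the target policy. A generic sketch, not his interview answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def value_estimate(p_behavior: float, p_target: float, n: int = 100_000):
    """Estimate E[reward] under the target policy from behavior-policy samples."""
    a = (rng.random(n) < p_behavior).astype(float)  # action 1 w.p. p_behavior
    r = a                                           # reward 1 iff action 1
    # Importance weight pi_target(a) / pi_behavior(a) corrects the mismatch.
    w = np.where(a == 1, p_target / p_behavior, (1 - p_target) / (1 - p_behavior))
    x = w * r
    return float(x.mean()), float(x.var())

print(value_estimate(0.5, 0.5))    # on-policy:  mean ~0.5, variance ~0.25
print(value_estimate(0.05, 0.5))   # off-policy: mean ~0.5, variance ~4.75
```

Same expectation, roughly 20x the variance once the policies disagree, and any gradient update built on such estimates inherits that noise.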
Doug & Paula established in their previous podcasts that there is a God who created the heavens & earth. They also established the fact that this God CAN do miracles today. But does He?-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
Rob, Jeremy, and Joe took some time from Tuesday's BBMS to discuss Harbaugh's quotes defending Derrick Henry's usage against the Patriots. At this point, does it all fall on deaf ears?
For this Christmas season Doug & Paula continue their discussion on miracles, focusing in this podcast on the two most important miracles of Christianity: The Incarnation & Resurrection, plus a Holiday Tradition!-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
We often think of Large Language Models (LLMs) as all-knowing, but as the team reveals, they still struggle with the logic of a second-grader. Why can't ChatGPT reliably add large numbers? Why does it "hallucinate" the laws of physics? The answer lies in the architecture. This episode explores how *Category Theory*—an ultra-abstract branch of mathematics—could provide the "Periodic Table" for neural networks, turning the "alchemy" of modern AI into a rigorous science. In this deep-dive exploration, *Andrew Dudzik*, *Petar Veličković*, *Taco Cohen*, *Bruno Gavranović*, and *Paul Lessard* join host *Tim Scarfe* to discuss the fundamental limitations of today's AI and the radical mathematical framework that might fix them.
TRANSCRIPT: https://app.rescript.info/public/share/LMreunA-BUpgP-2AkuEvxA7BAFuA-VJNAp2Ut4MkMWk
---
Key Insights in This Episode:
* *The "Addition" Problem:* *Andrew Dudzik* explains why LLMs don't actually "know" math—they just recognize patterns. When you change a single digit in a long string of numbers, the pattern breaks because the model lacks the internal "machinery" to perform a simple carry operation.
* *Beyond Alchemy:* Deep learning is currently in its "alchemy" phase—we have powerful results, but we lack a unifying theory. Category Theory is proposed as the framework to move AI from trial-and-error to principled engineering. [00:13:49]
* *Algebra with Colors:* To make Category Theory accessible, the guests use brilliant analogies—like thinking of matrices as *magnets with colors* that only snap together when the types match. This "partial compositionality" is the secret to building more complex internal reasoning. [00:09:17]
* *Synthetic vs. Analytic Math:* *Paul Lessard* breaks down the philosophical shift needed in AI research: moving from "Analytic" math (what things are made of) to "Synthetic" math. [00:23:41]
---
Why This Matters for AGI
If we want AI to solve the world's hardest scientific problems, it can't just be a "stochastic parrot." It needs to internalize the rules of logic and computation.
By imbuing neural networks with categorical priors, researchers are attempting to build a future where AI doesn't just predict the next word—it understands the underlying structure of the universe.
---
TIMESTAMPS:
00:00:00 The Failure of LLM Addition & Physics
00:01:26 Tool Use vs Intrinsic Model Quality
00:03:07 Efficiency Gains via Internalization
00:04:28 Geometric Deep Learning & Equivariance
00:07:05 Limitations of Group Theory
00:09:17 Category Theory: Algebra with Colors
00:11:25 The Systematic Guide of Lego-like Math
00:13:49 The Alchemy Analogy & Unifying Theory
00:15:33 Information Destruction & Reasoning
00:18:00 Pathfinding & Monoids in Computation
00:20:15 System 2 Reasoning & Error Awareness
00:23:31 Analytic vs Synthetic Mathematics
00:25:52 Morphisms & Weight Tying Basics
00:26:48 2-Categories & Weight Sharing Theory
00:28:55 Higher Categories & Emergence
00:31:41 Compositionality & Recursive Folds
00:34:05 Syntax vs Semantics in Network Design
00:36:14 Homomorphisms & Multi-Sorted Syntax
00:39:30 The Carrying Problem & Hopf Fibrations
Petar Veličković (GDM): https://petar-v.com/
Paul Lessard: https://www.linkedin.com/in/paul-roy-lessard/
Bruno Gavranović: https://www.brunogavranovic.com/
Andrew Dudzik (GDM): https://www.linkedin.com/in/andrew-dudzik-222789142/
---
REFERENCES:
Model:
[00:01:05] Veo: https://deepmind.google/models/veo/
[00:01:10] Genie: https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/
Paper:
[00:04:30] Geometric Deep Learning Blueprint: https://arxiv.org/abs/2104.13478 and https://www.youtube.com/watch?v=bIZB1hIJ4u8
[00:16:45] AlphaGeometry: https://arxiv.org/abs/2401.08312
[00:16:55] AlphaCode: https://arxiv.org/abs/2203.07814
[00:17:05] FunSearch: https://www.nature.com/articles/s41586-023-06924-6
[00:37:00] Attention Is All You Need: https://arxiv.org/abs/1706.03762
[00:43:00] Categorical Deep Learning: https://arxiv.org/abs/2402.15332
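The "addition problem" that opens the episode is striking because the exact procedure is so small. Here is a plain sketch of the carry machinery the guests argue pattern-matching models fail to internalize:

```python
def add_digit_strings(a: str, b: str) -> str:
    """Grade-school addition with an explicit carry register."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))  # write the units digit
        carry = s // 10             # propagate the carry leftward
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

# One changed digit can ripple a carry across every position: the
# long-range dependency that defeats shallow pattern matching.
assert add_digit_strings("99999999999999998", "1") == "99999999999999999"
assert add_digit_strings("99999999999999999", "1") == "1" + "0" * 17
```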
Rob, Jeremy and The Baltimore Sun's Mike Preston took time from the third hour of MMQB to discuss John Harbaugh's explanation for Derrick Henry's usage in the 4th quarter. Was there anything he could have said that would have justified leaving Derrick on the sidelines for the final two drives?
Janet Walkoe & Margaret Walton, Exploring the Seeds of Algebraic Reasoning ROUNDING UP: SEASON 4 | EPISODE 8 Algebraic reasoning is defined as the ability to use symbols, variables, and mathematical operations to represent and solve problems. This type of reasoning is crucial for a range of disciplines. In this episode, we're talking with Janet Walkoe and Margaret Walton about the seeds of algebraic reasoning found in our students' lived experiences and the ways we can draw on them to support student learning. BIOGRAPHIES Margaret Walton joined Towson University's Department of Mathematics in 2024. She teaches mathematics methods courses to undergraduate preservice teachers and courses about teacher professional development to education graduate students. Her research interests include teacher educator learning and professional development, teacher learning and professional development, and facilitator and teacher noticing. Janet Walkoe is an associate professor in the College of Education at the University of Maryland. Janet's research interests include teacher noticing and teacher responsiveness in the mathematics classroom. She is interested in how teachers attend to and make sense of student thinking and other student resources, including but not limited to student dispositions and students' ways of communicating mathematics. RESOURCES "Seeds of Algebraic Thinking: a Knowledge in Pieces Perspective on the Development of Algebraic Thinking" "Seeds of Algebraic Thinking: Towards a Research Agenda" NOTICE Lab "Leveraging Early Algebraic Experiences" TRANSCRIPT Mike Wallus: Hello, Janet and Margaret, thank you so much for joining us. I'm really excited to talk with you both about the seeds of algebraic thinking. Janet Walkoe: Thanks for having us. We're excited to be here. Margaret Walton: Yeah, thanks so much. Mike: So for listeners without prior knowledge, I'm wondering how you would describe the seeds of algebraic thinking. Janet: OK. For a little context, more than a decade ago, my good friend and colleague, [Mariana] Levin—she's at Western Michigan University—she and I used to talk about all of the algebraic thinking we saw our children doing when they were toddlers—this is maybe 10 or more years ago—in their play, and just watching them act in the world. And we started keeping a list of these things we saw. And it grew and grew, and finally we decided to write about this in our 2020 FLM article ["Seeds of Algebraic Thinking: Towards a Research Agenda" in For the Learning of Mathematics] that introduced the seeds of algebraic thinking idea. Since they were still toddlers, they weren't actually expressing full algebraic conceptions, but they were displaying bits of algebraic thinking that we called "seeds." And so this idea, these small conceptual resources, grows out of the knowledge in pieces perspective on learning that came out of Berkeley in the nineties, led by Andy diSessa. And generally that's the perspective that knowledge is made up of small cognitive bits rather than larger concepts. So if we're thinking of addition, rather than thinking of it as leveled, maybe at the first level there's knowing how to count and add two groups of numbers. And then maybe at another level we add two negative numbers, and then at another level we could add positives and negatives. So that might be a stage-based way of thinking about it.
And instead, if we think about this in terms of little bits of resources that students bring, the idea of combining bunches of things—the idea of like entities or nonlike entities, opposites, positives and negatives, the idea of opposites canceling—all those kinds of things and other such resources to think about addition. It's that perspective that we're going with. And it's not like we master one level and move on to the next. It's more that these pieces are here, available to us. We come to a situation with these resources and call upon them and connect them as it comes up in the context. Mike: I think that feels really intuitive, particularly for anyone who's taught young children. That really brings me back to the days when I was teaching kindergartners and first graders. I want to ask you about something else. You all mentioned several things like this notion of "do, undo" or "closing in" or the idea of "in-betweenness" while we were preparing for this interview. And I'm wondering if you could describe what these things mean in some detail for our audience, and then maybe connect them back with this notion of the seeds of algebraic thinking. Margaret: Yeah, sure. So we would say that these are different seeds of algebraic thinking that kids might activate as they learn math and then also learn more formal algebra. So the first seed, the doing and undoing that you mentioned, is really completing some sort of action or process and then reversing it. So an example might be when a toddler stacks blocks or cups. I have lots of nieces and nephews or friends' kids who I've seen do this often—all the time, really—when they'll maybe make towers of blocks, stack them up one by one and then sort of unstack them, right? So later this experience might apply to learning about functions, for example, as students plug in values as inputs, that's kind of the doing part, but also solve functions at certain outputs to find the input. So that's kind of one example there. And then you also talked about closing in and in-betweenness, which might both be related to intervals. So closing in is a seed where it's sort of related to getting closer and closer to a desired value. And then in formal algebra, and maybe math leading up to formal algebra, the seed might be activated when students work with inequalities maybe, or maybe ordering fractions. And then the last seed that you mentioned there, in-betweenness, is the idea of being between two things. For example, kids might have experiences with the story of Goldilocks and the Three Bears, and the porridge being too hot, too cold, or just right. So that "just right" is in-between. So these seeds might relate to inequalities and the idea that solutions of math problems might be a range of values and not just one. Mike: So part of what's so exciting about this conversation is that the seeds of algebraic thinking really can emerge from children's lived experience, meaning kids are coming with informal prior knowledge that we can access. And I'm wondering if you can describe some examples of children's play, or even everyday tasks, that cultivate these seeds of algebraic thinking. Janet: That's great. So when I think back to the early days when we were thinking about these ideas, one example stands out in my head. I was going to the grocery store with my daughter who was about three at the time, and she just did not like the grocery store at all.
And when we were in the car, I told her, "Oh, don't worry, we're just going in for a short bit of time, just a second." And she sat in the back and said, "Oh, like the capital letter A." I remember being blown away thinking about all that came together for her to think about that image, just the relationship between time and distance, the amount of time highlighting the instantaneous nature of the time we'd actually be in the store, all kinds of things. And I think in terms of play examples, there were so many. When she was little, she was gifted a play doctor kit. So it was a plastic kit that had a stethoscope and a blood pressure monitor, all these old-school tools. And she would play doctor with her stuffed animals. And she knew that any one of her stuffed animals could be the patient, but it probably wouldn't be a cup. So she had this idea that these could be candidates for patients, and it was this—but only certain things. We refer to this concept as "replacement," and it's this idea that you can replace whatever this blank box is with any number of things, but maybe those things are limited and maybe that idea comes into play when thinking about variables in formal algebra. Margaret: A couple of other examples just from the seeds that you asked about in the previous question. One might be if you're talking about closing in, games like when kids play things like "you're getting warmer" or "you're getting colder" when they're trying to find a hidden object or you're closing in when tuning an instrument, maybe like a guitar or a violin. And then for in-betweeness, we talked about Goldilocks, but it could be something as simple as, "I'm sitting in between my two parents" or measuring different heights and there's someone who's very tall and someone who's very short, but then there are a bunch of people who also fall in between. So those are some other examples. Mike: You're making me wonder about some of these ideas, these concepts, these habits of mind that these seeds grow into during children's elementary learning experiences. Can we talk about that a bit? Janet: Sure. Thank you for that question. So we think of seeds as a little more general. So rather than a particular seed growing into something or being destined for something, it's more that a seed becomes activated more in a particular context and connections with other seeds get strengthened. So for example, the idea of like or nonlike terms with the positive and negative numbers. Like or nonlike or opposites can come up in so many different contexts. And that's one seed that gets evoked when thinking potentially when thinking about addition. So rather than a seed being planted and growing into things, it's more like there are these seeds, these resources that children collect as they act on the world and experience things. And in particular contexts, certain seeds are evoked and then connected. And then in other contexts, as the context becomes more familiar, maybe they're evoked more often and connected more strongly. And then that becomes something that's connected with that context. And that's how we see children learning as they become more expert in a particular context or situation. Mike: So in some ways it feels almost more like a neural network of sorts. Like the more that these connections are activated, the stronger the connection becomes. Is that a better analogy than this notion of seeds growing? It's more so that there are connections that are made and deepened, for lack of a better way of saying it? Janet: Mm-hmm. 
And pruned in certain circumstances. We actually struggled a bit with the name because we thought seeds might evoke this, "Here's a seed, it's this particular seed, it grows into this particular concept." But then we also played with alternatives like "neurons of algebraic thinking." So we tossed around some other potential names to try to evoke that image a little better. But yes, that's exactly how I would think about it. Mike: I mean, just to digress a little bit, I think it's an interesting question for you all as you're trying to describe this relationship, because in some respects it does resemble seeds—meaning that the beginnings of this set of ideas are coming out of lived experiences that children have early in their lives. And then those things are connected and deepened—or, as you said, pruned. So it kind of has features of this notion of a seed, but it also has features of a network that is interconnected, which I suspect is probably why it's fairly hard to name that. Janet: Mm-hmm. And it does have—so if you look at, for example, the replacement seed, my daughter playing doctor with her stuffed animals, the replacement seed there. But you can imagine that that seed, it's domain agnostic, so it can come out in grammar. For instance, Mad Libs: a noun goes here, and so it can be any different noun. It's the same idea, different context. And you can see the thread among contexts, even though it's not meaning the same thing or not used in the same way necessarily. Mike: It strikes me that understanding the seeds of algebraic thinking is really a powerful tool for educators. They could, for example, use it as a lens when they're planning instruction or interpreting student reasoning. Can you talk about this, Margaret and Janet? Margaret: Yeah, sure, definitely. So we've seen that teachers who take a seeds lens can be really curious about where student ideas come from. So, for example, when a student talks about a math solution, maybe instead of judging whether the answer is right or wrong, a teacher might actually be more curious about how the student came to that idea. In some of our work, we've seen teachers who have a seeds perspective can look for pieces of a student answer that are productive instead of taking an entire answer as right or wrong. So we think that seeds can really help educators intentionally look for student assets and build off of them. And for us, that's students' informal and lived experiences. Janet: And kind of going along with that, one of the things we really emphasize in our methods courses, and is emphasized in teacher education in general, is this idea of excavating for student ideas and looking at what's good about what the student says and reframing what a student says, not as a misconception, but reframing it as what's positive about this idea. And we think that having this mindset will help teachers do that. Just knowing that these are things students bring to the situation, these potentially productive resources they have. Is it productive in this case? Maybe. If it's not, what could make it more productive? So having teachers look for these kinds of things we found as helpful in classrooms. Mike: I'm going to ask a question right now that I think is perhaps a little bit challenging, but I suspect it might be what people who are listening are wondering, which is: Are there any generalizable instructional moves that might support formal or informal algebraic thinking that you'd like to see elementary teachers integrate into their classroom practice?
Margaret: Yeah, I mean, I think, honestly, it's: Listen carefully to kids' ideas with an open mind. So as you listen to what kids are saying, really think about why they're saying what they're saying, maybe where that thinking comes from, and how you can leverage it in productive ways.

Mike: So I want to go back to the analogy of seeds, knowing what you said earlier: that part of the analogy is that seeds come early in a child's life, emerging from their lived experiences, and that time and experience allow some connections to be made and grow, or to be pruned. What I'm thinking about is the gardener. The challenge in education is that the gardener who is working with students—the teacher—does some cultivation but might not necessarily be able to see the horizon, to see where some of this is going, to see what's happening. So if we have a gardener who's cultivating or drawing on some of the seeds of algebraic thinking in their early childhood students and their elementary students, what do you think the impact of trying to draw on the seeds or make those connections can be for children and students in the long run?

Janet: I think there are a couple of important points there. The first is about early on in a child's life. Because seeds come out of experiences, the more experiences children can have, the better. So for example, if you're in the early grades and you read a book to a child, they can listen to it, but what else can they do? They could maybe play with toys and act it out. If there's an activity in the book, they could pretend to do the activity or really do it. Maybe it's baking something, or maybe it's playing a game. And I think this is advocated in the literature on play and early childhood experiences, including Montessori experiences. The more and varied experiences children can have, the more seeds they'll gain across different experiences. And one thing a teacher can do, early on and throughout, is look for connections. Look at, "Oh, we did this thing here. Where might it come out here?" If a teacher can identify an important seed, for instance, they can work to strengthen it in different contexts as well. So giving children experiences, and then looking for ways to strengthen key ideas through experiences.

Mike: One of the challenges of hosting a podcast is that we've got about 20 to 25 minutes to discuss some really big ideas and some powerful practices. And this is one of those times where I really feel that. And I'm wondering, if we have listeners who want to continue learning about the ways that they can cultivate the seeds of algebraic thinking, are there particular resources or bodies of research that you would recommend?

Janet: So from our particular lab we have a website, notice-lab.com, and that's continuing to be built out. The project is funded by NSF [the National Science Foundation], and we're continuing to add resources. We have links to articles, links to ways teachers and parents can use seeds, and links to professional development for teachers. And those will keep getting built out over time. Margaret, do you want to talk about the article?

Margaret: Sure, yeah. Janet and I actually just had an article come out in Mathematics Teacher: Learning and Teaching from NCTM [the National Council of Teachers of Mathematics].
And it's [in] Issue 5, and it's called "Leveraging Early Algebraic Experiences." So that's definitely another place to check out. And Janet, anything else you want to mention?

Janet: I think the website has a lot of resources as well.

Mike: So I've read the article, and I would encourage anyone to take a look at it. We'll add a link to the article and also a link to the website in the show notes for people who are listening and want to check those things out. I think this is probably a great place to stop, but I want to thank you both so much for joining us. Janet and Margaret, it's really been a pleasure talking with both of you.

Janet: Thank you so much, Mike. It's been a pleasure.

Margaret: You too. Thanks so much for having us.

Mike: This podcast is brought to you by The Math Learning Center and the Maier Math Foundation, dedicated to inspiring and enabling all individuals to discover and develop their mathematical confidence and ability. © 2025 The Math Learning Center | www.mathlearningcenter.org
Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn't just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google's most powerful model — what actually changed, and why the real work today is no longer "training a model," but building a full system.

We unpack the "secret recipe" idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an "infinite data" era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren't dead but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.

From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.

Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind

Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) – Cold intro: "We're ahead of schedule" + AI is now a system
(00:58) – Oriol's "secret recipe": better pre- + post-training
(02:09) – Why AI progress still isn't slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind's advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – "Research taste": integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren't dead (but scale isn't everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can't say (and why)
(37:18) – Long context + attention: what's next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + "vibe coding"
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – "No end in sight" for progress + closing
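For readers who want the equation behind the "scaling laws aren't dead" chapter, the standard reference point is the parametric loss fit from the Chinchilla line of work (included here as background; it is not a formula quoted in the episode):

    L(N, D) = E + A N^{-\alpha} + B D^{-\beta}

where N is the parameter count, D the number of training tokens, E the irreducible loss, and A, B, \alpha, \beta are empirically fitted constants. Minimizing L under a fixed compute budget C \approx 6ND yields the familiar compute-optimal heuristic of roughly 20 training tokens per parameter; the "data-limited regime" discussed above is what happens when D can no longer grow to keep pace with N.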
What if the strongest shield over your life isn't money, status, or strategy—but prayer that actually moves history? We explore how three scriptures recalibrate the way we handle pressure, make decisions, and carry authority, starting with Elisha's unforgettable cry over Elijah: a picture of spiritual power outclassing earthly force. From there, we open a seat at the table for a different kind of thinking—"Come, let us reason together"—and show how honest dialogue with God turns chaos into clarity without silencing hard questions or flattening our minds.

We share how this posture reshapes daily choices: when anger spikes, when joy tempts us to brag, when options feel spent. Reasoning with God isn't negotiation; it's guidance. It helps us spot blind spots, avoid impulsive leaps, and turn prayer from a ritual into a working plan for business, leadership, and relationships. Along the way, we reflect on public discourse and the courage to disagree without contempt, pointing to moments where respectful outreach changes the tone and opens the door to real understanding.

Finally, we sit with a sobering claim: "You have magnified your word above your name." If God binds Himself to His word, then accountability isn't optional—it's the shape of trustworthy power. We talk about building habits and systems that keep us honest, from personal commitments to team culture. By blending spiritual insight with practical steps, this conversation offers a blueprint: power sourced in prayer, choices refined by reason, and leadership constrained by accountability. If this resonates, follow the show, subscribe on YouTube, and share it with someone who needs steady courage today.

Support the show
You can support this show via the link below:
https://www.buzzsprout.com/1718587/supporters/new
This episode is all about artificial intelligence in the insurance industry, and we shed light on the gap between hype and real-world practice. Our co-host Alex Bernert talks with the experts from msg: Andrea van Aubel, board member and AI pioneer with over 30 years of industry experience, and Axel Helmert, Mr. AI of the life insurance world and Head of Research and Development. Together they tackle the questions: What actually works with AI in insurance today? Where are the challenges and pitfalls? And how are agentic AI and reasoning models changing business processes across life, health, property, and casualty insurance?

From concrete examples in claims management to visions for product development, the episode offers honest insights, expert knowledge, and an exciting outlook on the years ahead. Look forward to hands-on use cases, illuminating discussions of governance and compliance, and the famous crystal ball at the end: What will AI really change in the insurance business? Enjoy listening!

Feel free to send us a message!

This podcast is supported by msg. The msg Group is a leading provider of modern system solutions for the insurance market, from automation, AI, and SAP to modern communication and sales solutions. msg combines modern technologies with deep industry know-how. Follow our LinkedIn company page for more exciting updates.

Our website: https://www.insurancemondaypodcast.de/

Want to be a guest on the Insurance Monday Podcast? Write to us at info@insurancemondaypodcast.de and we'll get back to you right away.

This podcast is produced by dean productions. Thank you for listening to our podcast!
12.16.25, Kevin Sheehan gets more caller opinions on the Commanders shutting down Jayden Daniels for the season, citing medical evaluations as the reasoning.
In this episode of Talking Teaching, Dr Sophie Specjal sits down with Dr Jennifer Buckingham to explore the critical intersection of reading, reasoning, and artificial intelligence in contemporary education.

Drawing on decades of research and policy experience, Dr Buckingham explains why reading is far more than decoding words; it is foundational to comprehension, critical thinking, and lifelong learning. The discussion traces the evolution of reading instruction in Australia, highlighting the importance of systematic phonics and evidence-based practice in improving literacy outcomes.

The conversation also turns to the challenges faced in secondary schooling when reading difficulties persist, the impact of screen-based reading on comprehension, and what the rise of AI means for literacy, learning, and thinking. Throughout, Sophie and Jennifer discuss the enduring importance of fostering a love of reading, building strong teacher knowledge, and ensuring all students have the opportunity to become confident, capable readers, now and into the future.
Is that claim of a miracle healing by a TV preacher true? Did my friend pray and receive a miracle? These are crucial questions that people ask. But before answering, Christians need to establish one vital fact. Listen to hear what it is.

Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
For this episode we are joined by Nate Spieth, photographer/media guy from North Adams, Michigan. He operates INSZN Media.

Discussed:
Notable races attended in 2025!
When did he pick up the camera? Reasoning behind it, and also where INSZN came from
A trip to Port Royal Speedway ⚡
A couple "Oh Shit" moments. Bumps and bruises to get that good shot!
Sprint cars or Late Models?
Watermarks
Food: Albion Malleable Brewing Company, Boohers Fresh Market, Finish Line. Sweet tooth ✅ Ice cream favorites and other sweets.
Chicken wings. Go-to spots: Sweetwater Tavern, Saucy Dogs, Rocky Top
(Ends around 1:33:00 minute mark)

Stoking the Fire
Our weekend Gateway Dirt Nationals / Dome recap. Tempers flare, message board, Chet is back, driver intros, and more
Tulsa Shootout entries hit 1,563
Chili Bowl Nationals entries surpass 300
POWRi 410 Outlaws, POWRi National midget, and World of Outlaws sprint car schedules are out!
Donny Schatz secures a full-time World of Outlaws ride for 2026.
ASCS adds 2 new regional series for sprint car racing.
Social media of the week: Hot Karl has a question. Devon Borden is fired up on a Sunday!
The Draft
(Ends around 1:50:00 minute mark)

Feature Finish
9th Annual Gateway Dirt Nationals in St. Louis at the Dome at America's Center

The Smoke
Charlie has some pork chops, Wasabi hibachi, and a visit to Darmstadt Inn that lingered all weekend...
Bunner returns to Hornville Tavern after they have been closed for 2+ years. Garlic bread smash burgers
Jordy's Mexican buffet for lunch in Owensboro
Rigazzi's on The Hill in St. Louis
Tin Roof drunk food
Making a run for frozen pizzas
"McElroy & Cubelic In The Morning" airs 7am-10am weekdays on WJOX-94.5!See omnystudio.com/listener for privacy information.
Who are we to say what is right and wrong? We once burned witches; now we have openly practicing Wiccans. Clearly, morals are relative. Or are they? You'll want to take a listen and think about where morals come from.

Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
Pedro Domingos, author of the bestselling book "The Master Algorithm," introduces his latest work: Tensor Logic - a new programming language he believes could become the fundamental language for artificial intelligence.

Think of it like this: physics found its language in calculus. Circuit design found its language in Boolean logic. Pedro argues that AI has been missing its language - until now.

**SPONSOR MESSAGES START**
Build your ideas with AI Studio from Google - http://ai.studio/build
Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

Current AI is split between two worlds that don't play well together:
Deep Learning (neural networks, transformers, ChatGPT) - great at learning from data, terrible at logical reasoning
Symbolic AI (logic programming, expert systems) - great at logical reasoning, terrible at learning from messy real-world data

Tensor Logic unifies both. It's a single language where you can:
Write logical rules that the system can actually learn and modify
Do transparent, verifiable reasoning (no hallucinations)
Mix "fuzzy" analogical thinking with rock-solid deduction

INTERACTIVE TRANSCRIPT: https://app.rescript.info/public/share/NP4vZQ-GTETeN_roB2vg64vbEcN7isjJtz4C86WSOhw

TOC:
00:00:00 - Introduction
00:04:41 - What is Tensor Logic?
00:09:59 - Tensor Logic vs PyTorch & Einsum
00:17:50 - The Master Algorithm Connection
00:20:41 - Predicate Invention & Learning New Concepts
00:31:22 - Symmetries in AI & Physics
00:35:30 - Computational Reducibility & The Universe
00:43:34 - Technical Details: RNN Implementation
00:45:35 - Turing Completeness Debate
00:56:45 - Transformers vs Turing Machines
01:02:32 - Reasoning in Embedding Space
01:11:46 - Solving Hallucination with Deductive Modes
01:16:17 - Adoption Strategy & Migration Path
01:21:50 - AI Education & Abstraction
01:24:50 - The Trillion-Dollar Waste

REFS:
Tensor Logic: The Language of AI [Pedro Domingos] https://arxiv.org/abs/2510.12269
The Master Algorithm [Pedro Domingos] https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
Einsum is All You Need [Tim Rocktäschel] https://rockt.ai/2018/04/30/einsum https://www.youtube.com/watch?v=6DrCq8Ry2cw
Autoregressive Large Language Models are Computationally Universal [Dale Schuurmans et al., GDM] https://arxiv.org/abs/2410.03170
Memory Augmented Large Language Models are Computationally Universal [Dale Schuurmans] https://arxiv.org/pdf/2301.04589
On the Computational Power of Neural Nets [Siegelmann & Sontag, 1995] https://binds.cs.umass.edu/papers/1995_Siegelmann_JComSysSci.pdf
Sébastien Bubeck https://www.reddit.com/r/OpenAI/comments/1oacp38/openai_researcher_sebastian_bubeck_falsely_claims/
I Am a Strange Loop [Douglas Hofstadter] https://www.amazon.co.uk/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793
Stephen Wolfram https://www.youtube.com/watch?v=dkpDjd2nHgo
The Complex World: An Introduction to the Foundations of Complexity Science [David C. Krakauer] https://www.amazon.co.uk/Complex-World-Introduction-Foundations-Complexity/dp/1947864629
Geometric Deep Learning https://www.youtube.com/watch?v=bIZB1hIJ4u8
Andrew Wilson (NYU) https://www.youtube.com/watch?v=M-jTeBCEGHc
Yi Ma https://www.patreon.com/posts/yi-ma-scientific-141953348
The Road to Reality [Roger Penrose] https://www.amazon.co.uk/Road-Reality-Complete-Guide-Universe/dp/0099440687
Artificial Intelligence: A Modern Approach [Russell and Norvig] https://www.amazon.co.uk/Artificial-Intelligence-Modern-Approach-Global/dp/1292153962
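To make the "unifies both" claim concrete, here is a minimal sketch of the core mechanism: a Datalog-style logical rule expressed as an einsum over Boolean relation tensors. The NumPy framing, the toy family relation, and all the names here are illustrative assumptions, not actual Tensor Logic syntax:

    # Rule: grandparent(X, Z) <- parent(X, Y), parent(Y, Z)
    # The join over the shared variable Y is a sum over the y axis,
    # which is exactly what einsum expresses.
    import numpy as np

    # parent[i, j] == 1 means "person i is a parent of person j".
    # People (hypothetical): 0 = Ada, 1 = Ben, 2 = Cora.
    parent = np.array([
        [0, 1, 0],   # Ada is Ben's parent
        [0, 0, 1],   # Ben is Cora's parent
        [0, 0, 0],
    ])

    # Join on Y, then threshold back to Boolean for deductive mode.
    grandparent = np.einsum('xy,yz->xz', parent, parent)
    grandparent = (grandparent > 0).astype(int)

    print(grandparent[0, 2])  # 1: Ada is Cora's grandparent

Replacing the hard threshold with a soft nonlinearity (a sigmoid, say) is roughly where the "fuzzy" analogical mode comes in, while keeping the step function gives the hallucination-free deduction the episode discusses.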
Co-hosts Mark Thompson and Steve Little present a special year-end episode, comparing their 2025 AI predictions to what actually unfolded in 2025. This episode is a great review of the top AI advancements in 2025!

The hosts examine which predictions hit the mark, including the agentic AI hype cycle, plummeting AI costs driven by DeepSeek, and the dethroning of OpenAI as the top AI model. They also explore predictions that proved partially accurate, such as the shift to local language models and the adoption of AI in social media.

Mark and Steve have a good laugh over their biggest misses, including breakthroughs in text-in-image generation, image restoration, and vibe coding. They highlight how reasoning models were the transformative force behind nearly every major AI advancement in 2025.

The episode closes with a preview of next week's 2026 predictions episode.

Timestamps:
03:30 Agents, Agents, Everywhere: Deep Research and Agentic Browsers
09:49 Cost of AI Drops Like a Rock: DeepSeek Disrupts the Market
12:26 OpenAI Dethroned: Gemini and Anthropic Rise
18:46 Local Language Models
23:54 AI Invades Social Media
28:24 AI-Enhanced Writing: From Grammar Checking to Ghost Writers
32:30 Family Tree Diagrams: Possible But Not Practical
36:03 Handwriting Recognition: Reasoning Improves Results
38:01 Reasoning Models: 2025's Most Important Advancement
41:31 Text in Images: A Solved Problem
45:05 Image Restoration: Breakthroughs and Responsibilities
51:02 Vibe Coding: Speaking Software Into Being

Resource Links:
Register for a Class with the Family History AI Show Academy - https://tixoom.app/fhaishow
Agentic AI: Agentic AI In-Depth Report 2025 - https://hblabgroup.com/agentic-ai-in-depth-report/
Perplexity's New AI-First Browser Kicks Off Agentic Applications - https://www.forbes.com/sites/stevenwolfepereira/2025/07/11/perplexitys-new-ai-first-browser-is-kicking-off-agentic-applications/
Cost of AI: State of AI in 10 Charts - https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts
Free AI Tools 2025 - https://thehumanprompts.com/free-ai-tools-2025-platforms/
OpenAI Dethroned: Geoffrey Hinton says Google is 'beginning to overtake' OpenAI - https://www.businessinsider.com/ai-godfather-geoffrey-hinton-google-overtaking-openai-2025-12
Local AI Hardware: Assessing the On-Device Artificial Intelligence (AI) Opportunity - https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets/documents/assessing-the-on-device-ai-opportunity.pdf
Family Tree Diagrams: Explore how powerful AI image editing can support advanced creative workflows - https://deepmind.google/models/gemini-image/pro/
Text in Images: Nano Banana Pro Review: Is Google's AI Image Generator Too Good? - https://www.cnet.com/tech/services-and-software/google-nano-banana-pro-ai-image-generator-review/
Vibe Coding: No code, big dreams - https://www.businessinsider.com/non-technical-people-vibecoding-lessons-ai-apps-2025-9
Image Restoration: Responsible AI Photo Restoration - https://makingfamilyhistory.com/responsible-ai-photo-restoration/
Protecting Trust in Historical Images - https://craigen.org/protecting-trust-in-historical-images/

Tags: Artificial Intelligence, Genealogy, Family History, AI Predictions, Reasoning Models, DeepSeek, Gemini, Image Restoration, Vibe Coding, Agentic AI
A question about what tools of reasoning help us determine whether something is true or false, right or wrong, good or bad, before bringing Scripture into it: How do you determine whether something is true or false, whether an action is right or wrong, or whether something is good or bad? Before you bring in Scripture, what tools of reasoning help you recognize these categories in daily life?
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. He also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability, the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
Cloud Security Podcast - Youtube
Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
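As a concrete illustration of "evals over vibe checks," here is a minimal sketch of the kind of automated check the episode argues for. Everything in it is a hypothetical stand-in, not Maze's actual pipeline: the extract_version function, the golden cases, and the threshold are all assumptions for illustration.

    from typing import Callable

    # A fixed "golden set" of inputs with known-correct answers. Precise
    # fields like version numbers are exactly where RAG-style retrieval
    # tends to fail, so they make good eval targets.
    GOLDEN_SET = [
        ("Fixed in openssl 3.0.14; earlier 3.0.x releases are affected.", "3.0.14"),
        ("Upgrade log4j-core to 2.17.1 to remediate.", "2.17.1"),
    ]

    def run_eval(extract_version: Callable[[str], str]) -> float:
        """Return exact-match accuracy over the golden set."""
        hits = sum(extract_version(text) == expected
                   for text, expected in GOLDEN_SET)
        return hits / len(GOLDEN_SET)

    # Gate changes on a measured score instead of eyeballing outputs:
    # assert run_eval(my_extractor) >= 0.95

The point is not the two toy cases but the workflow: every prompt or model change reruns the same scored set, so regressions show up as numbers rather than vibes.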
Adam Crowley and Dorin Dickerson react to the response from Steelers QB Aaron Rodgers regarding why TE Pat Freiermuth hasn't been getting many passes thrown his way this season.
Carl and Mike get into why his wife is distraught about them going on vacation and leaving their dogs in someone else's care while they're away. They then share thoughts on Lane Kiffin, how he handled his decision to leave Ole Miss for LSU, and why they believe he may not have been 100 percent honest in some of the statements he made, such as not knowing what his contract with LSU is.
Itential has announced FlowAI, a new offering that brings agentic AI to Itential's network automation platform. On today's Tech Bytes podcast Ethan Banks talks with Peter Sprygada, Chief Architect at Itential, about how FlowAI works, its components, and how Itential uses the Model Context Protocol (MCP). They also dig into how FlowAI supports AI-driven orchestration...
Ever met a smug evolutionist? They're the worst! Here the Kingdom strikes back, with Doug & Paula showing that going from the goo to you via the zoo is absurd! Don't want to miss this!

Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
We take a look at critical thinking in science and healthcare, examining how we often fall prey to cognitive biases, emotional reasoning, and flawed thinking. Drawing from six different experts in their respective fields, the episode explores why we sometimes believe we are being rational when in fact our conclusions aren't truly evidence-based. The discussion spans what genuine evidence-based practice means, how domain expertise matters, and how factors like identity, beliefs, and emotions can derail objective reasoning.

Timestamps:
[02:56] Dr. David Nunan on evidence-based medicine
[15:30] Dr. John Kiely on translating research into practice
[26:10] Dr. Gil Carvalho on emotion and decision making
[30:10] Dr. David Robert Grimes on webs of belief
[37:18] Dr. Matthew Facciani on identity and belief formation
[42:31] Dr. Alan Flanagan on domain-specific expertise in nutrition science

Related Resources:
Go to episode page
Join the Sigma email newsletter for free
Subscribe to Sigma Nutrition Premium
Enroll in the next cohort of our Applied Nutrition Literacy course
Alan Flanagan's Alinea Nutrition Education Hub
Last week Doug & Paula discussed how the Kalam Cosmological Argument shows the reasonableness of believing God is there. Today they go further with another strong argument for the existence of God. Do you agree?

Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
This episode covers some of the basic things everyone should know when engaging in argument or debate, including an overview of some of the most common logical fallacies.
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?