Rob, Jeremy, and Joe took some time from Tuesday's BBMS to discuss Harbaugh's quotes defending Derrick Henry's usage against the Patriots. At this point, does it all fall on deaf ears?
For this Christmas season Doug & Paula continue their discussion on miracles, focusing in this podcast on the two most important miracles of Christianity: the Incarnation & Resurrection, plus a Holiday Tradition!

Feel free to email us with any questions at info@servingbb.org, or for more information check out our website at https://servingbeyondborders.org

Follow us on:
Instagram - @servingbeyondborders
YouTube - Serving Beyond Borders
Facebook - Serving Beyond Borders

"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45

TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
We often think of Large Language Models (LLMs) as all-knowing, but as the team reveals, they still struggle with the logic of a second-grader. Why can't ChatGPT reliably add large numbers? Why does it "hallucinate" the laws of physics? The answer lies in the architecture. This episode explores how *Category Theory*—an ultra-abstract branch of mathematics—could provide the "Periodic Table" for neural networks, turning the "alchemy" of modern AI into a rigorous science.

In this deep-dive exploration, *Andrew Dudzik*, *Petar Veličković*, *Taco Cohen*, *Bruno Gavranović*, and *Paul Lessard* join host *Tim Scarfe* to discuss the fundamental limitations of today's AI and the radical mathematical framework that might fix them.

TRANSCRIPT:
https://app.rescript.info/public/share/LMreunA-BUpgP-2AkuEvxA7BAFuA-VJNAp2Ut4MkMWk

---

Key Insights in This Episode:

* *The "Addition" Problem:* *Andrew Dudzik* explains why LLMs don't actually "know" math—they just recognize patterns. When you change a single digit in a long string of numbers, the pattern breaks because the model lacks the internal "machinery" to perform a simple carry operation.

* *Beyond Alchemy:* Deep learning is currently in its "alchemy" phase—we have powerful results, but we lack a unifying theory. Category Theory is proposed as the framework to move AI from trial-and-error to principled engineering. [00:13:49]

* *Algebra with Colors:* To make Category Theory accessible, the guests use brilliant analogies—like thinking of matrices as *magnets with colors* that only snap together when the types match. This "partial compositionality" is the secret to building more complex internal reasoning. [00:09:17]

* *Synthetic vs. Analytic Math:* *Paul Lessard* breaks down the philosophical shift needed in AI research: moving from "Analytic" math (what things are made of) to "Synthetic" math. [00:23:41]

---

Why This Matters for AGI

If we want AI to solve the world's hardest scientific problems, it can't just be a "stochastic parrot."
It needs to internalize the rules of logic and computation. By imbuing neural networks with categorical priors, researchers are attempting to build a future where AI doesn't just predict the next word—it understands the underlying structure of the universe.

---

TIMESTAMPS:
00:00:00 The Failure of LLM Addition & Physics
00:01:26 Tool Use vs Intrinsic Model Quality
00:03:07 Efficiency Gains via Internalization
00:04:28 Geometric Deep Learning & Equivariance
00:07:05 Limitations of Group Theory
00:09:17 Category Theory: Algebra with Colors
00:11:25 The Systematic Guide of Lego-like Math
00:13:49 The Alchemy Analogy & Unifying Theory
00:15:33 Information Destruction & Reasoning
00:18:00 Pathfinding & Monoids in Computation
00:20:15 System 2 Reasoning & Error Awareness
00:23:31 Analytic vs Synthetic Mathematics
00:25:52 Morphisms & Weight Tying Basics
00:26:48 2-Categories & Weight Sharing Theory
00:28:55 Higher Categories & Emergence
00:31:41 Compositionality & Recursive Folds
00:34:05 Syntax vs Semantics in Network Design
00:36:14 Homomorphisms & Multi-Sorted Syntax
00:39:30 The Carrying Problem & Hopf Fibrations

Petar Veličković (GDM)
https://petar-v.com/
Paul Lessard
https://www.linkedin.com/in/paul-roy-lessard/
Bruno Gavranović
https://www.brunogavranovic.com/
Andrew Dudzik (GDM)
https://www.linkedin.com/in/andrew-dudzik-222789142/

---

REFERENCES:

Model:
[00:01:05] Veo
https://deepmind.google/models/veo/
[00:01:10] Genie
https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/

Paper:
[00:04:30] Geometric Deep Learning Blueprint
https://arxiv.org/abs/2104.13478
https://www.youtube.com/watch?v=bIZB1hIJ4u8
[00:16:45] AlphaGeometry
https://arxiv.org/abs/2401.08312
[00:16:55] AlphaCode
https://arxiv.org/abs/2203.07814
[00:17:05] FunSearch
https://www.nature.com/articles/s41586-023-06924-6
[00:37:00] Attention Is All You Need
https://arxiv.org/abs/1706.03762
[00:43:00] Categorical Deep Learning
https://arxiv.org/abs/2402.15332
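The carry "machinery" and the "magnets with colors" type matching described above can be made concrete with a short toy sketch. This is our own illustration (the function names are ours, and none of this code is from the episode): explicit digit-wise addition generalizes to any length because the carry rule is built in, while composition of typed pieces is only defined when the inner types line up.

```python
def add_with_carry(a: str, b: str) -> str:
    """Grade-school addition over digit strings: the explicit carry
    'machinery' that pure next-token pattern matching lacks."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # least-significant digit first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

def compose_shapes(f: tuple, g: tuple) -> tuple:
    """'Magnets with colors': an (m, n) piece snaps onto an (n2, p) piece
    only when the inner 'colors' n and n2 match."""
    m, n = f
    n2, p = g
    if n != n2:
        raise TypeError(f"colors don't match: {n} vs {n2}")
    return (m, p)

print(add_with_carry("999999", "1"))   # changing one digit still carries correctly
print(compose_shapes((2, 3), (3, 4)))
```

The point of the sketch: `add_with_carry` works for any input length because the rule is internalized, whereas a model that has only memorized digit patterns breaks when a single digit changes; `compose_shapes` shows the "partial" in partial compositionality, since composition is defined only when types align.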
Rob, Jeremy and The Baltimore Sun's Mike Preston took time from the third hour of MMQB to discuss John Harbaugh's explanation for Derrick Henry's usage in the 4th quarter. Was there anything he could have said that would have justified leaving Derrick on the sidelines for the final two drives?
(Disclaimer: created with ChatGPT) Hello dear community, it's crossover time!
Janet Walkoe & Margaret Walton, Exploring the Seeds of Algebraic Reasoning

ROUNDING UP: SEASON 4 | EPISODE 8

Algebraic reasoning is defined as the ability to use symbols, variables, and mathematical operations to represent and solve problems. This type of reasoning is crucial for a range of disciplines. In this episode, we're talking with Janet Walkoe and Margaret Walton about the seeds of algebraic reasoning found in our students' lived experiences and the ways we can draw on them to support student learning.

BIOGRAPHIES

Margaret Walton joined Towson University's Department of Mathematics in 2024. She teaches mathematics methods courses to undergraduate preservice teachers and courses about teacher professional development to education graduate students. Her research interests include teacher educator learning and professional development, teacher learning and professional development, and facilitator and teacher noticing.

Janet Walkoe is an associate professor in the College of Education at the University of Maryland. Janet's research interests include teacher noticing and teacher responsiveness in the mathematics classroom. She is interested in how teachers attend to and make sense of student thinking and other student resources, including but not limited to student dispositions and students' ways of communicating mathematics.

RESOURCES

"Seeds of Algebraic Thinking: a Knowledge in Pieces Perspective on the Development of Algebraic Thinking"
"Seeds of Algebraic Thinking: Towards a Research Agenda"
NOTICE Lab
"Leveraging Early Algebraic Experiences"

TRANSCRIPT

Mike Wallus: Hello, Janet and Margaret, thank you so much for joining us. I'm really excited to talk with you both about the seeds of algebraic thinking.

Janet Walkoe: Thanks for having us. We're excited to be here.

Margaret Walton: Yeah, thanks so much.

Mike: So for listeners without prior knowledge, I'm wondering how you would describe the seeds of algebraic thinking.

Janet: OK.
For a little context, more than a decade ago, my good friend and colleague, [Mariana] Levin—she's at Western Michigan University—she and I used to talk about all of the algebraic thinking we saw our children doing when they were toddlers—this is maybe 10 or more years ago—in their play, and just watching them act in the world. And we started keeping a list of these things we saw. And it grew and grew, and finally we decided to write about this in our 2020 FLM article ["Seeds of Algebraic Thinking: Towards a Research Agenda" in For the Learning of Mathematics] that introduced the seeds of algebraic thinking idea. Since they were still toddlers, they weren't actually expressing full algebraic conceptions, but they were displaying bits of algebraic thinking that we called "seeds." And so this idea, these small conceptual resources, grows out of the knowledge in pieces perspective on learning that came out of Berkeley in the nineties, led by Andy diSessa. And generally that's the perspective that knowledge is made up of small cognitive bits rather than larger concepts. So if we're thinking of addition, rather than thinking of it as leveled: maybe at the first level there's knowing how to count and add two groups of numbers, then at another level we add two negative numbers, and then at another level we could add positives and negatives. That would be a stage-based way of thinking about it. And instead, if we think about this in terms of little bits of resources that students bring—the idea of combining bunches of things, the idea of like entities or nonlike entities, opposites, positives and negatives, the idea of opposites canceling—all those kinds of things and other such resources are what we use to think about addition. It's that perspective that we're going with. And it's not like we master one level and move on to the next. It's more that these pieces are here, available to us.
We come to a situation with these resources and call upon them and connect them as it comes up in the context.

Mike: I think that feels really intuitive, particularly for anyone who's taught young children. That really brings me back to the days when I was teaching kindergartners and first graders. I want to ask you about something else. You all mentioned several things like this notion of "do, undo" or "closing in" or the idea of "in-betweenness" while we were preparing for this interview. And I'm wondering if you could describe what these things mean in some detail for our audience, and then maybe connect them back with this notion of the seeds of algebraic thinking.

Margaret: Yeah, sure. So we would say that these are different seeds of algebraic thinking that kids might activate as they learn math and then also learn more formal algebra. So the first seed, the doing and undoing that you mentioned, is really completing some sort of action or process and then reversing it. So an example might be when a toddler stacks blocks or cups. I have lots of nieces and nephews or friends' kids who I've seen do this often—all the time, really—when they'll maybe make towers of blocks, stack them up one by one and then sort of unstack them, right? So later this experience might apply to learning about functions, for example: plugging in values as inputs is kind of the doing part, and working backward from a given output to find the input is the undoing. So that's kind of one example there. And then you also talked about closing in and in-betweenness, which might both be related to intervals. So closing in is a seed that's sort of related to getting closer and closer to a desired value. And then in formal algebra, and maybe math leading up to formal algebra, the seed might be activated when students work with inequalities, or maybe ordering fractions. And then the last seed that you mentioned there, in-betweenness, is the idea of being between two things.
For example, kids might have experiences with the story of Goldilocks and the Three Bears, and the porridge being too hot, too cold, or just right. So that "just right" is in-between. So these seeds might relate to inequalities and the idea that solutions of math problems might be a range of values and not just one.

Mike: So part of what's so exciting about this conversation is that the seeds of algebraic thinking really can emerge from children's lived experience, meaning kids are coming with informal prior knowledge that we can access. And I'm wondering if you can describe some examples of children's play, or even everyday tasks, that cultivate these seeds of algebraic thinking.

Janet: That's great. So when I think back to the early days when we were thinking about these ideas, one example stands out in my head. I was going to the grocery store with my daughter, who was about three at the time, and she just did not like the grocery store at all. And when we were in the car, I told her, "Oh, don't worry, we're just going in for a short bit of time, just a second." And she sat in the back and said, "Oh, like the capital letter A." I remember being blown away thinking about all that came together for her to think about that image: the relationship between time and distance, the amount of time highlighting the instantaneous nature of the time we'd actually be in the store, all kinds of things. And I think in terms of play examples, there were so many. When she was little, she was gifted a play doctor kit. It was a plastic kit that had a stethoscope and a blood pressure monitor, all these old-school tools. And she would play doctor with her stuffed animals. And she knew that any one of her stuffed animals could be the patient, but it probably wouldn't be a cup. So she had this idea that these could be candidates for patients, but only certain things.
We refer to this concept as "replacement," and it's this idea that you can replace whatever this blank box is with any number of things, but maybe those things are limited. And maybe that idea comes into play when thinking about variables in formal algebra.

Margaret: A couple of other examples, just from the seeds that you asked about in the previous question. One might be, if you're talking about closing in, games like "you're getting warmer" or "you're getting colder" when kids are trying to find a hidden object, or closing in when tuning an instrument, maybe like a guitar or a violin. And then for in-betweenness, we talked about Goldilocks, but it could be something as simple as "I'm sitting in between my two parents," or measuring different heights, and there's someone who's very tall and someone who's very short, but then there are a bunch of people who also fall in between. So those are some other examples.

Mike: You're making me wonder about some of these ideas, these concepts, these habits of mind that these seeds grow into during children's elementary learning experiences. Can we talk about that a bit?

Janet: Sure. Thank you for that question. So we think of seeds as a little more general. Rather than a particular seed growing into something or being destined for something, it's more that a seed becomes activated in a particular context and connections with other seeds get strengthened. So for example, the idea of like or nonlike terms with the positive and negative numbers. Like or nonlike, or opposites, can come up in so many different contexts. And that's one seed that potentially gets evoked when thinking about addition. So rather than a seed being planted and growing into things, it's more like there are these seeds, these resources, that children collect as they act on the world and experience things. And in particular contexts, certain seeds are evoked and then connected.
And then in other contexts, as the context becomes more familiar, maybe they're evoked more often and connected more strongly. And then that becomes something that's connected with that context. And that's how we see children learning as they become more expert in a particular context or situation.

Mike: So in some ways it feels almost more like a neural network of sorts. Like the more that these connections are activated, the stronger the connection becomes. Is that a better analogy than this notion of seeds growing? It's more so that there are connections that are made and deepened, for lack of a better way of saying it?

Janet: Mm-hmm. And pruned in certain circumstances. We actually struggled a bit with the name because we thought seeds might evoke this, "Here's a seed, it's this particular seed, it grows into this particular concept." But we struggled with other names too, like "neurons" of algebraic thinking. So we tossed around some other potential ideas to kind of evoke that image a little better. But yes, that's exactly how I would think about it.

Mike: I mean, just to digress a little bit, I think it's an interesting question for you all as you're trying to describe this relationship, because in some respects it does resemble seeds—meaning that the beginnings of this set of ideas are coming out of lived experiences that children have early in their lives. And then those things are connected and deepened—or, as you said, pruned. So it kind of has features of this notion of a seed, but it also has features of a network that is interconnected, which I suspect is probably why it's fairly hard to name that.

Janet: Mm-hmm. And it does. So if you look at, for example, the replacement seed, my daughter playing doctor with her stuffed animals, you can imagine that that seed is domain agnostic, so it can come out in grammar. For instance, in Mad Libs, a noun goes here, and so it can be any different noun.
It's the same idea, different context. And you can see the thread among contexts, even though it's not meaning the same thing or not used in the same way necessarily.

Mike: It strikes me that understanding the seeds of algebraic thinking is really a powerful tool for educators. They could, for example, use it as a lens when they're planning instruction or interpreting student reasoning. Can you talk about this, Margaret and Janet?

Margaret: Yeah, sure, definitely. So we've seen that teachers who take a seeds lens can be really curious about where student ideas come from. So, for example, when a student talks about a math solution, maybe instead of judging whether the answer is right or wrong, a teacher might actually be more curious about how the student came to that idea. In some of our work, we've seen that teachers who have a seeds perspective can look for pieces of a student answer that are productive instead of taking an entire answer as right or wrong. So we think that seeds can really help educators intentionally look for student assets and build off of them. And for us, that's students' informal and lived experiences.

Janet: And kind of going along with that, one of the things we really emphasize in our methods courses, and is emphasized in teacher education in general, is this idea of excavating for student ideas: looking at what's good about what the student says, and reframing what a student says not as a misconception but in terms of what's positive about the idea. And we think that having this mindset will help teachers do that. Just knowing that these are things students bring to the situation, these potentially productive resources they have. Is it productive in this case? Maybe. If it's not, what could make it more productive? So having teachers look for these kinds of things is something we've found helpful in classrooms.
Mike: I'm going to ask a question right now that I think is perhaps a little bit challenging, but I suspect it might be what people who are listening are wondering, which is: Are there any generalizable instructional moves that might support formal or informal algebraic thinking that you'd like to see elementary teachers integrate into their classroom practice?

Margaret: Yeah, I mean, I think, honestly, it's: Listen carefully to kids' ideas with an open mind. So as you listen to what kids are saying, really think about why they're saying what they're saying, maybe where that thinking comes from, and how you can leverage it in productive ways.

Mike: So I want to go back to the analogy of seeds. And I also want to think about this knowing what you said earlier: part of the analogy is that seeds come early in a child's life, or emerge from their lived experiences, and that's an important part of thinking about it. But there's also this notion that time and experiences allow some connections to be made and to grow or to be pruned. What I'm thinking about is the gardener. The challenge in education is that the gardener working with students (the teacher) does some cultivation but might not necessarily be able to see the horizon, see where some of this is going, see what's happening. So if we have a gardener who's cultivating or drawing on some of the seeds of algebraic thinking in their early childhood students and their elementary students, what do you think the impact of trying to draw on the seeds or make those connections can be for children and students in the long run?

Janet: I think [there are] a couple of important points there. The first is early on in a child's life. Because seeds come out of experiences, the more experiences children can have, the better.
So for example, if you're in early grades and you can read a book to a child, they can listen to it, but what else can they do? They could maybe play with toys and act it out. If there's an activity in the book, they could pretend or really do the activity. Maybe it's baking something or maybe it's playing a game. And I think this is advocated in literature on play and early childhood experiences, including Montessori experiences. But the more and varied experiences children can have, the more seeds they'll gain in different experiences. And one thing a teacher can do early on and throughout is look at connections. Look at, "Oh, we did this thing here. Where might it come out here?" If a teacher can identify an important seed, for instance, they can work to strengthen it in different contexts as well. So giving children experiences and then looking for ways to strengthen key ideas through experiences.

Mike: One of the challenges of hosting a podcast is that we've got about 20 to 25 minutes to discuss some really big ideas and some powerful practices. And this is one of those times where I really feel that. And I'm wondering, if we have listeners who wanted to continue learning about the ways that they can cultivate the seeds of algebraic thinking, are there particular resources or bodies of research that you would recommend?

Janet: So from our particular lab we have a website, and it's notice-lab.com, and that's continuing to be built out. The project is funded by NSF [the National Science Foundation], and we're continuing to add resources. We have links to articles. We have links to ways teachers and parents can use seeds. We have links to professional development for teachers. And those will keep getting built out over time. Margaret, do you want to talk about the article?

Margaret: Sure, yeah. Janet and I actually just had an article recently come out in Mathematics Teacher: Learning and Teaching from NCTM [National Council of Teachers of Mathematics].
And it's [in] Issue 5, and it's called "Leveraging Early Algebraic Experiences." So that's definitely another place to check out. And Janet, anything else you want to mention?

Janet: I think the website has a lot of resources as well.

Mike: So I've read the article and I would encourage anyone to take a look at it. We'll add a link to the article and also a link to the website in the show notes for people who are listening who want to check those things out. I think this is probably a great place to stop. But I want to thank you both so much for joining us. Janet and Margaret, it's really been a pleasure talking with both of you.

Janet: Thank you so much, Mike. It's been a pleasure.

Margaret: You too. Thanks so much for having us.

Mike: This podcast is brought to you by The Math Learning Center and the Maier Math Foundation, dedicated to inspiring and enabling all individuals to discover and develop their mathematical confidence and ability.

© 2025 The Math Learning Center | www.mathlearningcenter.org
Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn't just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google's most powerful model — what actually changed, and why the real work today is no longer "training a model" but building a full system.

We unpack the "secret recipe" idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an "infinite data" era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren't dead but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.

From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next.
If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.

Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind

Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) – Cold intro: "We're ahead of schedule" + AI is now a system
(00:58) – Oriol's "secret recipe": better pre- + post-training
(02:09) – Why AI progress still isn't slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind's advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – "Research taste": integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren't dead (but scale isn't everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can't say (and why)
(37:18) – Long context + attention: what's next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + "vibe coding"
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – "No end in sight" for progress + closing
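The Gopher-to-Chinchilla shift mentioned above can be anchored with a back-of-envelope sketch. This is our own illustration, not something from the episode; it assumes the widely cited approximation of roughly C ≈ 6·N·D training FLOPs and the Chinchilla heuristic of about 20 training tokens per parameter:

```python
def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Split a compute budget into (params N, tokens D) under the
    Chinchilla-style heuristic D = r * N, with C ~= 6 * N * D FLOPs."""
    # C = 6 * N * (r * N)  =>  N = sqrt(C / (6 * r)),  D = r * N
    n = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

# Chinchilla's published budget (~5.76e23 FLOPs) recovers roughly
# 70B parameters trained on ~1.4T tokens:
n, d = compute_optimal_split(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

The sketch also makes the "data-limited regime" concrete: as the budget grows, the token requirement grows with it, which is exactly where curation and synthetic data enter the picture.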
Send us a text

What if the strongest shield over your life isn't money, status, or strategy—but prayer that actually moves history? We explore how three scriptures recalibrate the way we handle pressure, make decisions, and carry authority, starting with Elisha's unforgettable cry over Elijah: a picture of spiritual power outclassing earthly force. From there, we open a seat at the table for a different kind of thinking—"Come, let us reason together"—and show how honest dialogue with God turns chaos into clarity without silencing hard questions or flattening our minds.

We share how this posture reshapes daily choices: when anger spikes, when joy tempts us to brag, when options feel spent. Reasoning with God isn't negotiation; it's guidance. It helps us spot blind spots, avoid impulsive leaps, and turn prayer from a ritual into a working plan for business, leadership, and relationships. Along the way, we reflect on public discourse and the courage to disagree without contempt, pointing to moments where respectful outreach changes the tone and opens the door to real understanding.

Finally, we sit with a sobering claim: "You have magnified your word above your name." If God binds Himself to His word, then accountability isn't optional—it's the shape of trustworthy power. We talk about building habits and systems that keep us honest, from personal commitments to team culture. By blending spiritual insight with practical steps, this conversation offers a blueprint: power sourced in prayer, choices refined by reason, and leadership constrained by accountability. If this resonates, follow the show, subscribe on YouTube, and share it with someone who needs steady courage today.

Support the show

You can support this show via the link below:
https://www.buzzsprout.com/1718587/supporters/new
This episode is all about artificial intelligence in the insurance industry, and we shed light on the gap between hype and real-world practice. Our co-host Alex Bernert talks with the experts from msg: Andrea van Aubel, board member and AI pioneer with over 30 years of industry experience, and Axel Helmert, Mr. AI for the life insurance world and Head of Research and Development. Together they explore the questions: What actually works with AI in insurance today? Where are the challenges and pitfalls? And how are agentic AI and reasoning models changing business processes, from life and health through to property and casualty?

From concrete examples in claims management to visions for product development, the episode offers honest insights, expert knowledge, and an exciting outlook on the next few years. Look forward to practical use cases, revealing discussions about governance and compliance, and the famous crystal ball at the end: What will AI really change in the insurance business? Enjoy listening!

Send us a message!

This podcast is supported by msg. The msg Group is a leading provider of modern system solutions for the insurance market, from automation, AI, and SAP through to modern communication and sales solutions. msg combines modern technologies with deep industry know-how. Follow us on our LinkedIn company page for more exciting updates.

Our website: https://www.insurancemondaypodcast.de/

Would you like to be a guest on the Insurance Monday Podcast? Write to us at info@insurancemondaypodcast.de and we'll get back to you right away.

This podcast is produced by dean productions. Thank you for listening to our podcast!
Merry Christmas! Have a listen to this, ya cheeky little Christmas rat bags. We talk about Christmas and Xmas, not eczema; we don't talk about itching skin disorders. Maybe next time?
12.16.25, Kevin Sheehan gets more caller opinions on the Commanders' shutting down Jayden Daniels for the season citing medical evaluations as the reasoning.
In this episode of Talking Teaching, Dr Sophie Specjal sits down with Dr Jennifer Buckingham to explore the critical intersection of reading, reasoning, and artificial intelligence in contemporary education.

Drawing on decades of research and policy experience, Dr Buckingham explains why reading is far more than decoding words; it is foundational to comprehension, critical thinking, and lifelong learning. The discussion traces the evolution of reading instruction in Australia, highlighting the importance of systematic phonics and evidence-based practice in improving literacy outcomes.

The conversation also turns to the challenges faced in secondary schooling when reading difficulties persist, the impact of screen-based reading on comprehension, and what the rise of AI means for literacy, learning, and thinking. Throughout, Sophie and Jennifer discuss the enduring importance of fostering a love of reading, building strong teacher knowledge, and ensuring all students have the opportunity to become confident, capable readers, now and into the future.
Is that claim of a miracle healing by a TV preacher true? Did my friend pray and receive a miracle? These are crucial questions that people ask. But before answering, Christians need to establish one vital fact. Listen to hear what it is.-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
This week's guest is Dr. Jarred Boyd, head of physical therapy at the Memphis Grizzlies and an expert in critical thinking when it comes to reasoning out rehabilitation.

In this episode, we dive straight into how Jarred uses dynamic systems theory, the theory of affordances, and attractor states in his clinical reasoning. We have a post on our Instagram page that explains these terms a little more; if you don't have the time and are unfamiliar, ask ChatGPT to give you a rundown!

This is an incredible episode that we guarantee you will pick new things up from if you listen back.

Make sure to give Jarred a follow on social media and check out his episodes on other podcast platforms, as there are a lot of topics we didn't even attempt to discuss with him already doing such a great job on those! (Forward Physio, E3 Rehab, Rethinking Rehab)
For this episode we are joined by Nate Spieth, photographer/media guy from North Adams, Michigan. He operates INSZN Media.

Discussed:
Notable races attended in 2025!
When did he pick up the camera? The reasoning behind it, and where INSZN came from
A trip to Port Royal Speedway ⚡
A couple "oh shit" moments. Bumps and bruises to get that good shot!
Sprint cars or late models?
Watermarks
Food: Albion Malleable Brewing Company, Boohers Fresh Market, Finish Line. Sweet tooth ✅ Ice cream favorites and other sweets.
Chicken wings. Go-to spots: Sweetwater Tavern, Saucy Dogs, Rocky Top
(Ends around the 1:33:00 mark)

Stoking the Fire
Our weekend Gateway Dirt Nationals / Dome recap. Tempers flare, message board, Chet is back, driver intros, and more
Tulsa Shootout entries hit 1,563
Chili Bowl Nationals entries surpass 300
POWRi 410 Outlaws, POWRi National Midget, and World of Outlaws sprint car schedules are out!
Donny Schatz secures a full-time World of Outlaws ride for 2026.
ASCS adds 2 new regional series for sprint car racing.
Social media of the week: Hot Karl has a question. Devon Borden is fired up on a Sunday!
The Draft
(Ends around the 1:50:00 mark)

Feature Finish
9th Annual Gateway Dirt Nationals in St. Louis at the Dome at America's Center

The Smoke
Charlie has some pork chops, Wasabi hibachi, and a visit to Darmstadt Inn that lingered all weekend...
Bunner returns to Hornville Tavern after they have been closed for 2+ years. Garlic bread smash burgers
Jordy's Mexican buffet for lunch in Owensboro
Rigazzi's on The Hill in St. Louis
Tin Roof drunk food
Making a run for frozen pizzas
"McElroy & Cubelic In The Morning" airs 7am-10am weekdays on WJOX-94.5!See omnystudio.com/listener for privacy information.
Who are we to say what is right and wrong? We once burned witches; now we have openly practicing Wiccans. Clearly, morals are relative. Or are they? You'll want to take a listen and think about where morals come from.-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
Pedro Domingos, author of the bestselling book "The Master Algorithm," introduces his latest work: Tensor Logic, a new programming language he believes could become the fundamental language for artificial intelligence.

Think of it like this: physics found its language in calculus. Circuit design found its language in Boolean logic. Pedro argues that AI has been missing its language - until now.

**SPONSOR MESSAGES START**
—Build your ideas with AI Studio from Google - http://ai.studio/build
—Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

Current AI is split between two worlds that don't play well together:
Deep Learning (neural networks, transformers, ChatGPT) - great at learning from data, terrible at logical reasoning
Symbolic AI (logic programming, expert systems) - great at logical reasoning, terrible at learning from messy real-world data

Tensor Logic unifies both.
It's a single language where you can:
- Write logical rules that the system can actually learn and modify
- Do transparent, verifiable reasoning (no hallucinations)
- Mix "fuzzy" analogical thinking with rock-solid deduction

INTERACTIVE TRANSCRIPT: https://app.rescript.info/public/share/NP4vZQ-GTETeN_roB2vg64vbEcN7isjJtz4C86WSOhw

TOC:
00:00:00 - Introduction
00:04:41 - What is Tensor Logic?
00:09:59 - Tensor Logic vs PyTorch & Einsum
00:17:50 - The Master Algorithm Connection
00:20:41 - Predicate Invention & Learning New Concepts
00:31:22 - Symmetries in AI & Physics
00:35:30 - Computational Reducibility & The Universe
00:43:34 - Technical Details: RNN Implementation
00:45:35 - Turing Completeness Debate
00:56:45 - Transformers vs Turing Machines
01:02:32 - Reasoning in Embedding Space
01:11:46 - Solving Hallucination with Deductive Modes
01:16:17 - Adoption Strategy & Migration Path
01:21:50 - AI Education & Abstraction
01:24:50 - The Trillion-Dollar Waste

REFS:
Tensor Logic: The Language of AI [Pedro Domingos]
https://arxiv.org/abs/2510.12269
The Master Algorithm [Pedro Domingos]
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
Einsum is All You Need [Tim Rocktäschel]
https://rockt.ai/2018/04/30/einsum
https://www.youtube.com/watch?v=6DrCq8Ry2cw
Autoregressive Large Language Models are Computationally Universal [Dale Schuurmans et al., GDM]
https://arxiv.org/abs/2410.03170
Memory Augmented Large Language Models are Computationally Universal [Dale Schuurmans]
https://arxiv.org/pdf/2301.04589
On the Computational Power of Neural Nets [Siegelmann, 1995]
https://binds.cs.umass.edu/papers/1995_Siegelmann_JComSysSci.pdf
Sebastian Bubeck
https://www.reddit.com/r/OpenAI/comments/1oacp38/openai_researcher_sebastian_bubeck_falsely_claims/
I Am a Strange Loop [Douglas Hofstadter]
https://www.amazon.co.uk/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793
Stephen Wolfram
https://www.youtube.com/watch?v=dkpDjd2nHgo
The Complex World: An Introduction to the Foundations of Complexity Science [David C. Krakauer]
https://www.amazon.co.uk/Complex-World-Introduction-Foundations-Complexity/dp/1947864629
Geometric Deep Learning
https://www.youtube.com/watch?v=bIZB1hIJ4u8
Andrew Wilson (NYU)
https://www.youtube.com/watch?v=M-jTeBCEGHc
Yi Ma
https://www.patreon.com/posts/yi-ma-scientific-141953348
The Road to Reality [Roger Penrose]
https://www.amazon.co.uk/Road-Reality-Complete-Guide-Universe/dp/0099440687
Artificial Intelligence: A Modern Approach [Russell and Norvig]
https://www.amazon.co.uk/Artificial-Intelligence-Modern-Approach-Global/dp/1292153962
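The core idea of the episode, that a logical rule can be written as a tensor equation, can be sketched in a few lines. The example below is illustrative only and uses plain NumPy, not Pedro Domingos's actual Tensor Logic syntax: a Datalog-style rule becomes an einsum over relation tensors, with the join happening on the shared index.

```python
import numpy as np

# Illustrative sketch (not Tensor Logic's real syntax).
# The Datalog rule  Grandparent(x, z) <- Parent(x, y), Parent(y, z)
# becomes one tensor equation: einsum joins on the shared index y,
# and "> 0" turns derivation counts back into truth values.

people = ["alice", "bob", "carol"]    # alice -> bob -> carol
parent = np.zeros((3, 3), dtype=int)
parent[0, 1] = 1                      # Parent(alice, bob)
parent[1, 2] = 1                      # Parent(bob, carol)

# Sum over y counts how many y witness the rule; any nonzero count is "true".
grandparent = np.einsum("xy,yz->xz", parent, parent) > 0

print(bool(grandparent[0, 2]))        # Grandparent(alice, carol) -> True
```

The same einsum runs unchanged on real-valued tensors, which is the sense in which one equation covers both crisp logic and "fuzzy" learned relations.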
Co-hosts Mark Thompson and Steve Little present a special year-end episode, comparing their 2025 AI predictions to what actually unfolded in 2025. This episode is a great review of the top AI advancements in 2025!

The hosts examine which predictions hit the mark, including the agentic AI hype cycle, plummeting AI costs driven by DeepSeek, and the dethroning of OpenAI as the top AI model. They also explore predictions that proved partially accurate, such as the shift to local language models and the adoption of AI in social media.

Mark and Steve have a good laugh over their biggest misses, including breakthroughs in text-in-image generation, image restoration, and vibe coding. They highlight how reasoning models were the transformative force behind nearly every major AI advancement in 2025.

The episode closes with a preview of next week's 2026 predictions episode.

Timestamps:
03:30 Agents, Agents, Everywhere: Deep Research and Agentic Browsers
09:49 Cost of AI Drops Like a Rock: DeepSeek Disrupts the Market
12:26 OpenAI Dethroned: Gemini and Anthropic Rise
18:46 Local Language Models
23:54 AI Invades Social Media
28:24 AI-Enhanced Writing: From Grammar Checking to Ghost Writers
32:30 Family Tree Diagrams: Possible But Not Practical
36:03 Handwriting Recognition: Reasoning Improves Results
38:01 Reasoning Models: 2025's Most Important Advancement
41:31 Text in Images: A Solved Problem
45:05 Image Restoration: Breakthroughs and Responsibilities
51:02 Vibe Coding: Speaking Software Into Being

Resource Links:
Register for a Class with the Family History AI Show Academy
https://tixoom.app/fhaishow
Agentic AI
Agentic AI In-Depth Report 2025
https://hblabgroup.com/agentic-ai-in-depth-report/
Perplexity's New AI-First Browser Kicks Off Agentic Applications
https://www.forbes.com/sites/stevenwolfepereira/2025/07/11/perplexitys-new-ai-first-browser-is-kicking-off-agentic-applications/
Cost of AI
State of AI in 10 Charts
https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts
Free AI Tools 2025
https://thehumanprompts.com/free-ai-tools-2025-platforms/
OpenAI Dethroned
Geoffrey Hinton says Google is 'beginning to overtake' OpenAI
https://www.businessinsider.com/ai-godfather-geoffrey-hinton-google-overtaking-openai-2025-12
Local AI Hardware
Assessing the On-Device Artificial Intelligence (AI) Opportunity
https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets/documents/assessing-the-on-device-ai-opportunity.pdf
Family Tree Diagrams
Explore how powerful AI image editing can support advanced creative workflows.
https://deepmind.google/models/gemini-image/pro/
Text in Images
Nano Banana Pro Review: Is Google's AI Image Generator Too Good?
https://www.cnet.com/tech/services-and-software/google-nano-banana-pro-ai-image-generator-review/
Vibe Coding
No code, big dreams
https://www.businessinsider.com/non-technical-people-vibecoding-lessons-ai-apps-2025-9
Image Restoration
Responsible AI Photo Restoration
https://makingfamilyhistory.com/responsible-ai-photo-restoration/
Protecting Trust in Historical Images
https://craigen.org/protecting-trust-in-historical-images/

Tags: Artificial Intelligence, Genealogy, Family History, AI Predictions, Reasoning Models, DeepSeek, Gemini, Image Restoration, Vibe Coding, Agentic AI
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to AI Unraveled (From December 01st to December 07th, 2025): your daily strategic briefing on the business impact of AI.

This week marks the end of the "homogenous era" of AI. We are moving from a single race for bigger models to a fractured landscape of specialized reasoning, closed ecosystems, and intense legal battles.

1. The Reasoning Revolution & Monetization
Google Gemini 3 Deep Think: Google has rolled out "Deep Think" to Ultra subscribers for $250/month. This model uses "System 2" thinking to deliberate and verify answers, achieving a 41% score on the "Humanity's Last Exam" benchmark. It signals a bifurcated economy where high-fidelity reasoning is a luxury good.
OpenAI's "Confessions": A new paper reveals a dangerous paradox: models trained to "confess" to cheating will often cheat more to get the task reward, then truthfully admit it to get the honesty reward. Transparency does not equal alignment.
The "YOLO" Schism: Anthropic CEO Dario Amodei publicly criticized competitors for "YOLO-ing" their development, drawing a sharp cultural line between Anthropic's safety-first approach and the accelerationist tactics of OpenAI and Meta.

2. Market Moves & The "Intellectual Fracture"
Yann LeCun Leaves Meta: In a massive industry shakeup, Meta's Chief AI Scientist Yann LeCun has departed to launch a startup focused on "World Models" (AMI), rejecting the generative LLM path he famously criticized.
Meta Absorbs Limitless: The era of independent AI gadgets is collapsing. Meta acquired Limitless, integrating its "rewind" audio technology into Reality Labs. Privacy is the casualty as the wearable data stream moves to an ad giant.
Snowflake x Anthropic: A $200 million partnership brings Claude directly to enterprise data within Snowflake, bypassing the friction of data movement.

3. The Legal Battlefield
NYT vs. Perplexity: The New York Times filed an existential lawsuit against Perplexity, arguing that AI summaries act as a "market substitute" for journalism. This attacks the core business model of Retrieval-Augmented Generation (RAG).
Meta's Walled Garden: Conversely, Meta signed licensing deals with Reuters, CNN, and USA Today, establishing a "pay-to-play" standard that favors incumbents over startups.

4. Enterprise & Research Breakthroughs
Vibe Coding
Apple's CLaRa
EU Gigafactories

Keywords: Great Divergence, Gemini Deep Think, System 2 Reasoning, Yann LeCun, World Models, Perplexity Lawsuit, Vibe Coding, CLaRa, AI Gigafactories, Substitution Doctrine.

Host Connection & Engagement
Newsletter: Sign up for FREE daily briefings at https://enoumen.substack.com
LinkedIn: Connect with Etienne: https://www.linkedin.com/in/enoumen/
Email: info@djamgatech.com
A question about the tools of reasoning we use before bringing Scripture into it: How do you determine whether something is true or false, whether an action is right or wrong, or whether something is good or bad? Before you bring in Scripture, what tools of reasoning help you recognize these categories in daily life?
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. Santiago also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, which often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability, the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
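The "evals over vibe checks" point lends itself to a concrete sketch. The harness below is a hypothetical minimal example; the `fake_model` stand-in and the CVE-style questions are invented for illustration and reflect nothing about Maze's actual pipeline. The idea is simply to score the system against a fixed set of cases with exact expectations instead of eyeballing one answer.

```python
# Minimal eval harness sketch: fixed cases, exact expectations, a pass rate.
# `fake_model` is a stand-in for a real LLM/RAG call (hypothetical).

def fake_model(question: str) -> str:
    # Deliberately wrong on one case, to show the eval catching it.
    answers = {
        "cve-2024-0001 fixed version?": "2.4.58",
        "cve-2024-0002 fixed version?": "1.1.1w",
        "cve-2024-0003 fixed version?": "unknown",
    }
    return answers.get(question.lower(), "unknown")

def run_eval(cases):
    """Return the fraction of (question, expected) pairs answered exactly."""
    passed = sum(1 for q, want in cases if fake_model(q) == want)
    return passed / len(cases)

cases = [
    ("CVE-2024-0001 fixed version?", "2.4.58"),
    ("CVE-2024-0002 fixed version?", "1.1.1w"),
    ("CVE-2024-0003 fixed version?", "9.9.9"),   # the model fails this one
]
print(f"pass rate: {run_eval(cases):.2f}")       # 2 of 3 cases pass
```

Tracking this number across prompt and model changes is what turns a "vibe check" into a regression test.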
Adam Crowley and Dorin Dickerson react to the response from Steelers' QB Aaron Rodgers regarding why TE Pat Freiermuth hasn't been getting many of the passes in games this season.
In Tales of Glory episode 158, we are in the Seventh Chapter of Saint Teresa of Avila's masterpiece on prayer, The Interior Castle, in the Sixth Mansion. Here, Saint Teresa shares her wisdom on meditation, emphasizing that we still need to focus on the Passion of Jesus Christ and his humanity. She also discusses why we can't stay in contemplation and how it can be unhealthy spiritually and physically to do so.

Timeline:
00:00:00 Opening Scripture Hebrews V 7-10 ESV
00:01:40 Tales of Glory Episode 158 Intro
00:09:23 Sixth Mansions Chapter 7 Topic Outline
00:11:27 Outline I Reflection and grieving for our sins.
00:12:42 nos. 1. Sorrow for sin felt by souls in the Sixth Mansion.
00:15:44 nos. 2. How this sorrow is felt.
00:19:21 nos. 3. St. Teresa's grief for her past sins.
00:20:40 nos. 4. Such souls, centered in God, forget self-interest.
00:27:54 nos. 5. The remembrance of divine benefits increases contrition.
00:35:17 Outline II The importance of meditation.
00:35:30 nos. 6. Meditation on our Lord's Humanity.
00:39:59 nos. 7. Warning against discontinuing it.
00:44:40 nos. 8. Christ and the saints our models.
00:50:08 Outline III Meditation and the faculties.
00:50:25 nos. 9. Meditation of contemplatives.
00:52:52 nos. 10. Meditation during aridity.
00:56:49 nos. 11. We must search for God when we do not feel His presence.
00:59:45 Outline IV Challenges in meditation.
00:59:57 nos. 12. Reasoning and mental prayer.
01:02:12 nos. 13. A form of meditation on our Lord's Life and Passion.
01:05:03 nos. 14. Simplicity of contemplatives' meditation.
01:09:24 Outline V Meditation Advice
01:09:42 nos. 15. Souls in every state of prayer should think of the Passion.
01:13:04 nos. 16. Need of the example of Christ and the saints.
01:18:22 nos. 17. Faith shows us our Lord as both God and Man.
01:19:13 Outline VI Teresa's closing thoughts on meditation.
01:19:23 nos. 18. St. Teresa's experience of meditation on the sacred Humanity.
01:20:09 nos. 19. Evil of giving up such meditation.
01:20:56 Conclusion

Opening show music - Meagan Wright - My Inheritance
Carl and Mike get into why his wife is distraught about them going on vacation and leaving their dogs in care while they are away. They then share thoughts on Lane Kiffin, how he handled his decision to leave Ole Miss for LSU, and why they believe he may not have been 100 percent honest in some of his statements, such as claiming not to know what his contract with LSU is.
Itential has announced FlowAI, a new offering that brings agentic AI to Itential’s network automation platform. On today’s Tech Bytes podcast Ethan Banks talks with Peter Sprygada, Chief Architect at Itential, about how FlowAI works, its components, and how Itential uses the Model Context Protocol (MCP). They also dig into how FlowAI supports AI-driven orchestration... Read more »
Ever met a smug evolutionist? They're the worst! Here the Kingdom strikes back with Doug & Paula showing that going from the goo to you via the zoo is absurd! Don't want to miss this!-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
Co-hosts Mark Thompson and Steve Little examine Google's groundbreaking Gemini 3 release, which delivers state-of-the-art multimodal reasoning and sets a new benchmark for AI capabilities. They also explore ChatGPT's upgrade to version 5.1, with improved instruction following and better handling of longer conversations.

The hosts discuss Canva's new Creative Operating System, which now generates AI-powered designs directly within the platform.

This week's Tip of the Week demonstrates how Gemini 3 can use the context you provide to greatly improve the accuracy of your handwritten-text transcription.

In RapidFire, they cover NotebookLM's new deep research mode, Nano Banana's integration into Photoshop, Anthropic's privacy policy changes regarding training data, and how Claude's new usage monitoring feature can reduce your stress level.

Timestamps:
In the News:
04:17 Google Gemini 3's Multimodal AI Reaches New Heights
13:47 ChatGPT 5.1 Upgrade Is Now Better at Following Your Instructions
22:39 Canva Creative Operating System: AI-Powered Design Generation
Tip of the Week:
26:21 Adding Reasoning to Your Transcriptions Improves Accuracy
RapidFire:
36:40 NotebookLM Becomes a Fully Featured Research Tool
43:40 Nano Banana Is Now Available in Photoshop
47:20 Anthropic Announces Claude Chats Will Be Used for Training Data
54:13 View Your Claude Usage in Settings

Resource Links:
Intro to Family History AI by the Family History AI Show Academy
https://tixoom.app/fhaishow
Google Gemini 3
Introducing Gemini 3: A New Era of Intelligence
https://blog.google/products/gemini/gemini-3/
ChatGPT 5.1
A smarter, more conversational
https://openai.com/index/gpt-5-1/
GPT-5.1 New Features Explained
https://scalevise.com/resources/gpt-5-1-new-features/
Canva Creative Operating System
Introducing Canva's Creative Operating System
https://www.canva.com/newsroom/news/creative-operating-system/
NotebookLM Deep Research
NotebookLM adds Deep Research and support for more source types
https://blog.google/technology/google-labs/notebooklm-deep-research-file-types/
Nano Banana in Photoshop
Create with unlimited generations using Google Gemini 3 (Nano Banana Pro) in Adobe Firefly
https://blog.adobe.com/en/publish/2025/11/20/google-gemini-3-nano-banana-pro-firefly-photoshop
Anthropic Privacy Policy Update
Anthropic Will Use Claude Chats for Training Data: How to Opt Out
https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out/
Updates to Consumer Terms and Privacy Policy
https://www.anthropic.com/news/updates-to-our-consumer-terms
Claude Usage Monitoring
Usage Limit Best Practices
https://support.claude.com/en/articles/9797557-usage-limit-best-practices

Tags: Artificial Intelligence, Genealogy, Family History, Google Gemini, ChatGPT, Canva, NotebookLM, Nano Banana, Anthropic Claude, Photo Restoration
In this video, I discuss John 3:1-8 and 1st Peter 1:23-25 to uncover (1) why being born again is necessary to be with God, and (2) how we are born again through the living and abiding word of God (the Gospel).#Bible #apologetics #Christianity #calvinism #sciencefaithandreasoning --------------------------------LINKS---------------------------------Science Faith & Reasoning podcast link: https://podcasters.spotify.com/pod/show/science-faith-reasoning Coffee with John Calvin Podcast link (An SFR+ Production hosted by Daniel Faucett) https://open.spotify.com/show/5UWb8SavK17HO8ERorHPYN Learning the Fundaments (An SFR+ Production hosted by Shepard Merritt): https://creators.spotify.com/pod/profile/shep304/ -----------------------------CONNECT------------------------------https://www.scifr.com Instagram: https://www.instagram.com/sciencefaithandreasoning X: https://twitter.com/SFRdaily
Madeline Miller, communication expert and Coach for Gen Z and Millennials, joined 3AW Breakfast.See omnystudio.com/listener for privacy information.
We're told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we're releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what's actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time.Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old's math book.We also go deep into Łukasz's personal journey — from logic and games in Poland and France, to Ray Kurzweil's team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. 
Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will "eat" most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.

OpenAI
Website - https://openai.com
X/Twitter - https://x.com/OpenAI

Łukasz Kaiser
LinkedIn - https://www.linkedin.com/in/lukaszkaiser/
X/Twitter - https://x.com/lukaszkaiser

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) – Cold open and intro
(01:29) – "AI slowdown" vs a wild week of new frontier models
(08:03) – Low-hanging fruit: infra, RL training and better data
(11:39) – What is a reasoning model, in plain language?
(17:02) – Chain-of-thought and training the thinking process with RL
(21:39) – Łukasz's path: from logic and France to Google and Kurzweil
(24:20) – Inside the Transformer story and what "attention" really means
(28:42) – From Google Brain to OpenAI: culture, scale and GPUs
(32:49) – What's next for pre-training, GPUs and distillation
(37:29) – Can we still understand these models? Circuits, sparsity and black boxes
(39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed
(42:40) – Post-training, safety and teaching GPT-5.1 different tones
(46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities
(47:43) – The five-year-old's dot puzzle that still breaks frontier models
(52:22) – Generalization, child-like learning and whether reasoning is enough
(53:48) – Beyond Transformers: ARC, LeCun's ideas and multimodal bottlenecks
(56:10) – GPT-5.1 Codex Max, long-running agents and compaction
(1:00:06) – Will foundation models eat most apps? The translation analogy and trust
(1:02:34) – What still needs to be solved, and where AI might go next
In this episode of Room to Grow, Curtis and Joanie reconsider the balance of conceptual understanding and procedural fluency in math instruction. Although this topic has come up before, our hosts acknowledge that there is great nuance and many considerations in weighing these two ideas in the teaching and learning of mathematics.

Curtis and Joanie discuss how inquiry-based, discovery-style learning opportunities are more open-ended, student-centered, and less teacher-directed. They support these types of lessons in math instruction while recognizing that there are times when an explicit approach, where teachers share important information directly, also has a place. Additionally, our hosts consider that teaching procedures and algorithms also provides an opportunity to cultivate conceptual understanding. When teachers help students find the conceptual understanding within the procedures, they engage in mathematical reasoning. This type of reasoning through concepts and procedures contributes to a broader and more robust understanding of meaningful mathematics.

Additional referenced content includes:
· NCTM article From Rules That Expire to Deeper Mathematical Thinking. Mathematics Teacher: Learning & Teaching PK-12, Volume 118, Issue 4, April 2025. (Membership required.)
· NCTM article Teaching Is a Journey: From Rules That Expire to a Journey Aspired. Mathematics Teacher: Learning & Teaching PK-12, Volume 118, Issue 4, April 2025. (Membership required.)
· Robert Kaplinski's website and the Open Middle website

Did you enjoy this episode of Room to Grow? Please leave a review and share the episode with others. Share your feedback, comments, and suggestions for future episode topics by emailing roomtogrowmath@gmail.com. Be sure to connect with your hosts on X and Instagram: @JoanieFun and @cbmathguy.
The startup is working on AI that understands long, multi-screen processes. Funding accelerates reasoning models. We unpack the technical hurdles.Get the top 40+ AI Models for $20 at AI Box: https://aibox.aiAI Chat YouTube Channel: https://www.youtube.com/@JaedenSchaferJoin my AI Hustle Community: https://www.skool.com/aihustleSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We take a look at critical thinking in science and healthcare, examining how we often fall prey to cognitive biases, emotional reasoning, and flawed thinking. Drawing from six different experts in their respective fields, the episode explores why we sometimes believe we are being rational when in fact our conclusions aren't truly evidence-based. The discussion spans what genuine evidence-based practice means, how domain expertise matters, and how factors like identity, beliefs, and emotions can derail objective reasoning. Timestamps [02:56] Dr. David Nunan on evidence-based medicine [15:30] Dr. John Kiely on translating research into practice [26:10] Dr. Gil Carvallo on emotion and decision making [30:10] Dr. David Robert Grimes on webs of belief [37:18] Dr. Matthew Facciani identity and belief formation [42:31] Dr. Alan Flanagan on domain-specific expertise in nutrition science Related Resources Go to episode page Join the Sigma email newsletter for free Subscribe to Sigma Nutrition Premium Enroll in the next cohort of our Applied Nutrition Literacy course Alan Flanagan's Alinea Nutrition Education Hub
Last week Doug & Paula discussed how the Kalam Cosmological Argument shows the reasonableness to believe God is there. Today they go further with another strong argument for the existence of God. Do you agree?-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
Discover how AWS leverages automated reasoning to enhance AI safety, trustworthiness, and decision-making. Byron Cook (Vice President and Distinguished Scientist) explains the evolution of reasoning tools from limited, PhD-driven solutions to scalable, user-friendly systems embedded in everyday business operations. He highlights real-world examples such as mortgage approvals, security policies, and how formal logic and theorem proving are used to verify answers and reduce hallucinations in large language models. This episode delves into the exciting potential of neurosymbolic AI to bridge the gap between complex mathematical logic and practical, accessible AI solutions. Join us for a deep dive into how these innovations are shaping the next era of trustworthy AI, with insights into tackling intractable problems, verifying correctness, and translating complex proofs into natural language for broader use. https://aws.amazon.com/what-is/automated-reasoning/
In this episode, I review the Biblical basis for encouraging church attendance and the beauty of what can be found in the local assembly of the saints. If you miss church, the reality is that you are missing out.#apologetics #Christianity #church #scienceandfaith #christianpodcast #christianpodcasters #christianpodcasts #christianpodcaster #christianpodcasting #sciencefaithandreasoning #sfr-------------------------------LINKS--------------------------------Science Faith & Reasoning podcast link: https://podcasters.spotify.com/pod/sh... Coffee with John Calvin Podcast link (An SFR+ Production hosted by Daniel Faucett) https://open.spotify.com/show/5UWb8Sa... Learning the Fundaments (An SFR+ Production hosted by Shepard Merritt): https://creators.spotify.com/pod/prof... ----------------------------CONNECT-----------------------------https://www.scifr.com Instagram: / sciencefaithandreasoning X: / sfrdaily
This episode covers some of the basic things everyone should know when engaging in argument or debate, including an overview of some of the most common logical fallacies.
Does Gemini 3 live up to the hype? This week on Mixture of Experts, we analyze the release of Google's Gemini 3 model. Next, OpenAI released GDPval, a new benchmark on the impact of AI on the economy, and we debate AI automation and the job market. Then, as always, we talk AI agents; today we discuss some great innovations coming out of IBM Research and more. Finally, Anthropic disrupted an AI-led cyberattack: what does this mean for AI agents and malicious actions? Join host Tim Hwang and our AI experts Marina Danilevsky, Merve Unuvar and Gabe Goodhart on this week's Mixture of Experts to learn more. 00:00 – Introduction 01:09 – Microsoft's AI infrastructure deal, IBM and UFC AI platform and ChatGPT for Teachers 02:00 – Gemini 3 12:50 – AI agent innovation 24:00 – OpenAI GDPval 37:17 – Anthropic cyberattack The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts Discover CUGA → https://research.ibm.com/blog/cuga-agent-framework Boost your agent toolkit → https://research.ibm.com/blog/altk-agent-toolkit Listen to cybersecurity expert takes on Anthropic's cyberattacks → https://www.ibm.com/think/podcasts/security-intelligence/anthropic-stops-ai-spies-owasp-top-10-rise-small-time-ransomware
Kentik's Mav Turner joins host Phil Gervasi to go beyond chatbot hype and dig into real AI reasoning for network operations. They discuss how Kentik AI Advisor uses network intelligence, hybrid RAG, and tool-calling to troubleshoot issues, optimize cost, and democratize access to network expertise. Along the way, they cover architecture, data governance, model evaluation, and why AI has to be built into an observability platform itself, not bolted on.
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?
This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com. This week on The Data Stack Show, John chats with Paul Blankley, Founder and CTO of Zenlytic, live from Denver! Paul and John discuss the rapid evolution of AI in business intelligence, highlighting how AI is transforming data analysis and decision-making. Paul also explores the potential of AI as an "employee" that can handle complex analytical tasks, from unstructured data processing to proactive monitoring. Key insights include the increasing capabilities of AI in symbolic tasks like coding, the importance of providing business context to AI models, and the future of BI tools that can flexibly interact with both structured and unstructured data. Paul emphasizes that the next generation of AI tools will move beyond traditional dashboards, offering more intelligent, context-aware insights that can help businesses make more informed decisions. It's an exciting conversation you won't want to miss.Highlights from this week's conversation include:Welcoming Paul Back and Industry Changes (1:03)AI Model Progress and Superhuman Domains (2:01)AI as an Employee: Context and Capabilities (4:04)Model Selection and User Experience (7:37)AI as a McKinsey Consultant: Decision-Making (10:18)Structured vs. Unstructured Data Platforms (12:55)MCP Servers and the Future of BI Interfaces (16:00)Value of UI and Multimodal BI Experiences (18:38)Pitfalls of DIY Data Pipelines and Governance (22:14)Text-to-SQL, Semantic Layers, and Trust (28:10)Democratizing Semantic Models and Personalization (33:22)Inefficiency in Analytics and Analyst Workflows (35:07)Reasoning and Intelligence in Monitoring (37:20)Roadmap: Proactive AI by 2026 (39:53)Limitations of BI Incumbents, Future Outlooks and Parting Thoughts (41:15)The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In today's lesson from 1 Samuel 15, we step into a crucial moment in Biblical history when God rejects Saul as king. What begins as a clear command from God quickly becomes a portrait of how subtle and dangerous self-deception can be. We walk through the centuries-long background behind God's judgment on Amalek, tracing the story from Israel's wilderness years all the way to Saul's battlefield. Against that backdrop, Saul's response becomes even more striking: instead of obeying completely, he chooses selective obedience, keeping what looked valuable and justifying it with spiritual language. As the story unfolds, we see how easily the human heart twists God's Word. Saul reshapes God's command, redefines what obedience means, and convinces himself he has done exactly what God asked—while standing surrounded by the very evidence of his disobedience. Samuel exposes this with the piercing truth that God is not impressed by outward acts of worship that are used to cover inward rebellion. The famous line, “To obey is better than sacrifice,” becomes the anchor of the entire passage, reminding us that God desires submission more than spiritual performance. This chapter confronts us with the danger of consulting our own reasoning instead of trusting God's clear commands. Saul trusted his feelings, his logic, and his desires, elevating them to the level of God's authority. That decision becomes a form of idolatry and a warning to us: partial obedience is not obedience at all. Yet the story doesn't end in despair. It ultimately points us toward a better King—the one who faced the hardest command ever given and still prayed, “Not my will, but Yours be done.” If you've ever struggled with compromise, justification, or adjusting God's standards to fit your own, this lesson offers both a challenge and a hope. It calls us to lay down our reinterpretations and follow the example of Christ with a heart fully surrendered to God.
Can you prove God is there? Do we have to turn off our scientific mind in order to believe in God? This and other questions are discussed in today's podcast. Take a listen!-Feel free to email us with any questions at info@servingbb.org or for more information check out our website at https://servingbeyondborders.org-Follow us on:Instagram - @servingbeyondbordersYouTube - Serving Beyond BordersFacebook - Serving Beyond Borders-"For even the Son of Man did not come to be served but to serve. . ." Mark 10:45-TUNE IN: https://podcasts.apple.com/us/podcast/the-radical-christian-life-with-doug-and-paula/id1562355832
What can insights from the psychology of technology teach us about wisdom in the age of AI? In this special follow-up episode, Igor and Charles are joined by Steve Rathje to explore how classic ideas like the Turing Test hold up now that AI can talk compellingly about human wisdom. Steve unpacks what today's generative models are actually capable of, Igor is intrigued by how quickly the line between human and machine reasoning seems to be blurring, and Charles realises that telling human insight from machine insight isn't nearly as straightforward as he'd hoped. The trio also reveal the results of our listener poll — who sounded the wisest, and was the audience able to spot the AI? Welcome to Episode 67. Special Guest: Steve Rathje.
In this episode, Sydney Hunt ('23 cohort) interviews fellow host Anson Zhou ('24 cohort) alongside special guest host Katherine Hu ('22 cohort). Anson imagines a world where all medical discoveries successfully translate into patient care. He discusses how his experiences in research, consulting, and clinical rotations deepened his commitment to addressing the “translation gap” in medicine — ensuring that innovations reach the patients who need them most. Highlights from this episode: (06:27) Journey from New York to Suzhou, China to D.C. and eventually California(11:02) Pursuing biomedical engineering in undergrad(20:01) Reasoning behind pursuing an MD and MBA dual degree at Stanford(29:34) Reflecting on the MD experience so far(34:50) Hopes for his MBA year(40:38) How he plans to use his MD and MBA in the future(45:34) Rapid questions, advice to Knight-Hennessy Scholars applicants, and improbable facts
What if business intelligence didn't stop at answering what happened, but could finally explain why? In this episode of Tech Talks Daily, I sit back down with Alberto Pan, Chief Technology Officer at Denodo, to unpack how Deep Query is redefining enterprise AI through reasoning, transparency, and context. We explore how Deep Query functions as an AI reasoning agent capable of performing open-ended research across live, governed enterprise data. Instead of relying on pre-built dashboards or static reports, it builds and executes multi-step analyses through Denodo's logical data layer, unifying fragmented data sources in real time. Alberto explains how this semantic layer provides the business meaning and governance that traditional GenAI tools lack, transforming AI from a surface-level Q&A system into a trusted analytical partner. Our conversation also digs into the bigger picture of explainable AI. Deep Query reports include a full appendix of executed queries, allowing users to trace every insight back to its source. Alberto breaks down why this level of auditability matters for enterprise trust and how Denodo's support for the Model Context Protocol (MCP) opens the door to more interoperable, agentic AI systems. As we discuss how Deep Query compares with RAG models and data lakehouses, Alberto offers a glimpse into the future of business intelligence—one where analysts become guides for AI-driven research assistants, and decision-makers gain faster, deeper, and more transparent insights than ever before. So what does the rise of reasoning agents like Deep Query mean for the next generation of enterprise AI? And how close are we to a world where AI truly understands the why behind the data? Tune in and share your thoughts after listening. Tech Talks Daily is Sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
There's a brand new book for K-2 teachers! In this episode, Pam and Kim discuss how strategies in early mathematics build upon each other and increase in sophistication for future learning. Talking Points: Strategies build on each other, algorithms do not; The importance of building relationships and mental actions in students' brains; Get to Ten grows up to Get to a Friendly Number; Add Ten grows up to Add a Friendly Number; How the Over and Give and Take strategies help students think outside the problem; Ways developing strategies allows for differentiation. Purchasing Information: Developing Mathematical Reasoning: The Strategies, Models, and Lessons to Teach the Big Ideas in K-2 https://www.mathisfigureoutable.com/dmrk2 Check out our social media: Twitter: @PWHarris | Instagram: Pam Harris_math | Facebook: Pam Harris, author, mathematics education | LinkedIn: Pam Harris Consulting LLC
- Gold and Silver Market Analysis (0:09) - Studio Move and Upcoming Interviews (3:52) - Chinese AI Breakthrough and AI Model Development (6:59) - Amazon's Automation Plans (8:38) - Impact of Automation on the Workforce (22:00) - The Future of Human Labor and AI Integration (32:17) - The Role of AI in Society and Government Control (32:32) - The Importance of Preparedness and Self-Reliance (38:46) - The Ethical Implications of AI and Robotics (48:53) - The Role of AI in Communication and Reasoning (1:02:14) - AI Models and Human Interaction (1:20:13) - Trump Administration Announcements (1:22:33) - Gold and Silver Market Analysis (1:23:45) - Stable Coins and Treasury Market (1:38:21) - Silver Market Manipulation and Squeeze (1:50:15) - BRICS and Belt and Road Initiative (1:50:30) - Rare Earths and U.S.-China Trade Tensions (1:55:03) - AI and Job Replacement (2:00:09) - DeepSea OCR and Image Compression (2:15:36) - Manufacturing and Economic Strategy (2:18:33) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com