In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Markus Buehler, the McAfee Professor of Engineering at MIT, to explore how seemingly different systems—from proteins and music to knowledge structures and AI reasoning—share underlying patterns through hierarchy, self-organization, and scale-free networks. The conversation ranges from the limits of current AI interpolation versus true discovery (using the fire-to-fusion example), to the emergence of agent swarms and their non-linear effects, to practical questions about ontologies, knowledge graphs, and whether humans will remain necessary in the creative discovery process. Markus discusses his lab's work automating scientific discovery through AI agents that can generate hypotheses, run simulations, and even retrain themselves, while Stewart shares his own experiences building applications with AI coding agents and grapples with questions about intellectual property, material science constraints, and the future of human creativity in an AI-abundant world.

Timestamps
00:00 - Introduction to Markus Buehler's work on knowledge graphs, structural grammar across proteins, music, and AI reasoning
05:00 - Discussion of AI discovery versus interpolation, using fire and fusion as examples of fundamental versus incremental innovation
10:00 - Language models as connective glue between agents, enabling communication despite imperfect outputs and canonical averaging
15:00 - Embodiment and agency in AI systems, creating adversarial agents that challenge theories and expand world models
20:00 - Emergent properties in materials and AI, comparing dislocations in metals to behaviors in agent swarms
25:00 - Human role-playing and phase separation in society, parallels to composite materials and heterogeneity
30:00 - Physical world challenges, atom-by-atom manufacturing at MIT.nano, limitations of lithography machines
35:00 - Synthetic biology as alternative to nanotechnology, programming microorganisms for materials discovery
40:00 - Intellectual property debates, commodification of AI models, control layers more valuable than model architecture
45:00 - Automation of ontologies, agent self-testing, daughter's coding success at age 11
50:00 - Graph theory for knowledge compression, neurosymbolic approaches combining symbolic and neural methods
55:00 - Nonlinear acceleration in AI, emergence from accumulated innovations, restaurant owner embracing AI
01:00:00 - Future generations possibly rejecting AI, democratization of knowledge, social media as real-time scientific discourse

Key Insights
1. Universal Patterns Across Disciplines: Seemingly different systems in nature—proteins, music, social networks, and knowledge itself—share fundamental structural patterns including hierarchy, self-organization, and scale-free networks. This commonality allows creative thinkers to draw insights across disciplines, applying principles from one domain to solve problems in another. As an engineer and materials scientist, Buehler has leveraged these isomorphisms to advance scientific understanding by mapping the "plumbing" of different systems onto each other, revealing hidden relationships that enable extrapolation beyond what's observable in any single domain.
2. The Discovery Versus Interpolation Problem: Current AI systems, particularly large language models, excel at interpolation—recombining existing knowledge in new ways—but struggle with genuine discovery that requires fundamental rewiring of world models. Using the example of fire versus fusion, Buehler explains that an AI trained on combustion chemistry would propose bigger fires or new fuels, but couldn't conceive of fusion because that requires stepping back to more fundamental physics. True discovery demands the ability to recognize when existing theories have boundaries and to develop entirely new frameworks, something current AI architectures aren't designed to achieve due to their training objective of predicting the most likely outcome.
3. The Role of Ontologies and Knowledge Graphs: While some AI researchers argue that ontologies are unnecessary because models form internal representations, Buehler advocates for explicit knowledge graphs as essential discovery tools. External ontologies provide sharp, analytical, symbolic representations that complement the fuzzy internal representations of neural networks. They enable verification of rare connections—like obscure papers that might hold key insights—which would be averaged away in standard AI training. This neurosymbolic approach combines the generalization capabilities of neural networks with the precision of formal knowledge structures, creating more powerful discovery systems.
4. Emergent Properties and Agent Swarms: Just as materials science shows that collections of atoms exhibit properties impossible to predict from individual components, AI agent swarms demonstrate emergent behaviors beyond single models. When agents are incentivized not just to answer questions but to challenge each other adversarially, propose theories, and test hypotheses, they can spawn new copies of themselves and evolve understanding beyond their initial programming. This emergence isn't surprising from a materials science perspective—dislocations, grain boundaries, and other collective phenomena only appear at scale, fundamentally determining material behavior in ways unpredictable from studying just a few atoms.
5. The Commoditization of Intelligence: The fundamental AI models themselves are becoming commodities, as evidenced by events like the Moldbug phenomenon where people built agents using various providers interchangeably. The real value is shifting from who has the smartest model to how models are orchestrated, integrated, and deployed. This parallels historical technology adoption patterns—just as we moved past debating who makes the best electricity to focusing on applications, AI is transitioning from a horse race over model capabilities to questions of infrastructure, energy, access speed, and agent coordination at the systems level.
6. Human-AI Collaboration and Creative Control: Rather than wholesale replacement, AI enables humans to operate in an intensely creative space as orchestrators sampling from vast possibility spaces. Similar to how Buehler's 11-year-old daughter now builds sophisticated applications that would have required professional developers years ago, AI democratizes access to capabilities while humans retain the creative judgment about direction and meaning. The human role becomes curating emergence, finding rare connections, playing at the edges of knowledge, and exercising the kind of curiosity-driven exploration that AI systems lack without embodied stakes in their own survival and continuation.
7. Technology as Evolutionary Inevitability: The development of AI represents not an unnatural threat but the next stage of human evolution—an extension of our innate drive to build models of ourselves and our world. From cave paintings to partial differential equations to artificial intelligence, humans continuously create increasingly sophisticated representations and tools. Attempting to stop this technological evolution is futile; instead, the focus should be on steering it ...
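The cross-domain mapping described in the Key Insights is, mechanically, graph traversal: if concepts from different fields are linked by the structural patterns they share, a shortest path surfaces the hidden relationship. A toy illustration in Python (the concept graph and its edges are invented for the example):

```python
from collections import deque

# Hypothetical toy knowledge graph: edges link concepts that
# share structure (hierarchy, self-organization, and so on).
edges = {
    "protein": ["hierarchy", "self-organization"],
    "music": ["hierarchy", "scale-free network"],
    "hierarchy": ["protein", "music"],
    "self-organization": ["protein", "scale-free network"],
    "scale-free network": ["music", "self-organization"],
}

def path(graph, start, goal):
    """Breadth-first search for a shortest chain of shared concepts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        p = queue.popleft()
        if p[-1] == goal:
            return p
        for nxt in graph.get(p[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(p + [nxt])
    return None

print(path(edges, "protein", "music"))  # ['protein', 'hierarchy', 'music']
```

The point of the sketch: the "isomorphism" between proteins and music is not stored anywhere explicitly; it falls out of traversing the shared-structure edges.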
Ben Clemens of FanGraphs joins the show to break down where the 2026 St. Louis Cardinals are headed, from win expectations and outfield needs to Chaim Bloom's “down to the studs” rebuild and the strategy behind recent trades and draft-pick flexibility. We dive into the upside-heavy approach highlighted by Jurrangelo Cijntje and other recent additions, react to FanGraphs' updated Cardinals prospect rankings, see a sneak preview of a new feature on FanGraphs, and wrap with a quick spin around the latest league news.

Have a question or comment for the show? Text or leave us a voicemail at: (848) 48-BIRDS (848-482-4737)

Talking About Birds is listener supported on Patreon. Support the show and join our private discord server at: www.patreon.com/talkingaboutbirds.
Imagine an autonomous agent that dreams up a business, raises funds, ships code, and starts earning—all without a human in the loop. That's no longer sci‑fi. We sit down with Rodrigo Coelho to map the rails that make it plausible: reliable blockchain data, open payment standards, and human‑grade controls that keep machine spenders on track.

We start with a myth many still believe: blockchains are easy to read. Rodrigo explains why they were write‑first, and how The Graph became a quiet backbone of DeFi by turning messy ledgers into queryable data. Years of running high‑throughput infrastructure set the stage for AMP, a SQL‑first, local‑first approach that unifies access across chains, runs on‑prem for banks, and proves that internal datasets match on‑chain truth—fuel for compliance, audit, and real‑world finance moving on blockchain rails.

Then we connect the dots with AI. Leaders who once shrugged at crypto now see agents as the perfect fit: low fees, transparency, and observability. With X402 enabling open micropayments over HTTP, the next missing piece was control. Enter "ampersend", a dashboard and policy plane for agent wallets, spend limits, batching, and reputation‑aware routing. Think: “only transact with agents above a reputation threshold,” “cap this task at 50 cents,” or “enforce daily budgets,” all verifiable and auditable. We also unpack emerging standards like ERC‑8004 for reputation and the Advanced AI Society's proof of control, outlining the identity, trust, and policy stack enterprises need before they unleash agents at scale.

By 2026, expect major institutions to settle on blockchain rails, blending privacy with auditability, and tokenizing everything from bonds to real estate. The opportunity is clear: give agents the autonomy to create value while giving humans the levers to define, observe, and verify.
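The policy rules quoted above (“cap this task at 50 cents,” “enforce daily budgets,” reputation thresholds) reduce, at their core, to a conjunction of checks evaluated before a payment is released. A hypothetical sketch in Python; none of these names are from ampersend's actual API:

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    per_task_cap_cents: int   # "cap this task at 50 cents"
    daily_budget_cents: int   # "enforce daily budgets"
    min_reputation: float     # "only transact above a reputation threshold"

def allowed(policy, amount_cents, spent_today_cents, counterparty_reputation):
    """Release a payment only if every policy rule passes."""
    return (
        amount_cents <= policy.per_task_cap_cents
        and spent_today_cents + amount_cents <= policy.daily_budget_cents
        and counterparty_reputation >= policy.min_reputation
    )

policy = SpendPolicy(per_task_cap_cents=50, daily_budget_cents=500,
                     min_reputation=0.8)
print(allowed(policy, 40, 100, 0.9))  # True: all three rules pass
print(allowed(policy, 60, 100, 0.9))  # False: exceeds the per-task cap
```

Because each rule is an explicit, auditable predicate rather than a model judgment, every allow/deny decision can be logged and verified after the fact, which is the property the episode emphasizes.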
If you care about AI agents, Web3 data, enterprise compliance, and the future of payments, this conversation connects the technical dots to the business outcomes.

Enjoyed the episode? Follow the show, share it with a friend who loves AI or Web3, and leave a 5‑star review to help more people find us.

This episode was recorded through a Descript call on February 5, 2026. Read the blog article and show notes here: https://webdrie.net/how-ai-agents-will-spend-earn-and-prove-trust-on-blockchain-rails/
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
AI-Powered Knowledge Graph Generator & APTs https://isc.sans.edu/diary/AI-Powered%20Knowledge%20Graph%20Generator%20%26%20APTs/32712
nslookup and ClickFix https://x.com/MsftSecIntel/status/2022456612120629742
Google Chrome 0-Day Patch https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html
TURN Security Threats https://www.enablesecurity.com/blog/turn-server-security-threats/
Immerse yourself in captivating science fiction short stories, delivered daily! Explore futuristic worlds, time travel, alien encounters, and mind-bending adventures. Perfect for sci-fi lovers looking for a quick and engaging listen each day.
@GeneSohoForum coming through with the graphs and a walkthrough of the real story in American labor markets. Are Americans underpaid by greedy corporations? Find out on today's episode.
In this interview I'm joined by Dr. Ryan Burge (aka Graphs about Religion) to discuss the state of religion in America. We cover claims of a Gen Z revival, the decline of mainline Protestantism, and what the data tells us about polarization.

Read the Book: https://amzn.to/3ZrUpqn

Want to support the channel? Here's how!
Give monthly: https://patreon.com/gospelsimplicity
Make a one-time donation: https://paypal.me/gospelsimplicity
Book a meeting: https://calendly.com/gospelsimplicity/meet-with-austin
Read my writings: https://austinsuggs.substack.com/
Support the show
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a ".git for context." Their conversation spans from the philosophical nature of context and its crucial role in AI development, to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches.

For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights
1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods.
2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required.
3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.
4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.
5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.
6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.
7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
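The deterministic, no-LLM graph construction described in the Key Insights can be sketched in a few lines of Python. This is an illustrative toy, not NoodlBox's actual pipeline: the `import_graph` helper and the in-memory repo are invented for the example, and only Python's standard `ast` module is used to extract import relationships from source text.

```python
import ast

def import_graph(sources: dict[str, str]) -> dict[str, set[str]]:
    """Build a module -> imported-modules graph from Python source text.

    Deterministic: relies only on the syntax tree, no model in the loop.
    """
    graph: dict[str, set[str]] = {}
    for module, code in sources.items():
        deps: set[str] = set()
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[module] = deps
    return graph

# Toy two-module repository with one internal dependency edge.
repo = {
    "app": "import util\nimport json\n",
    "util": "from collections import Counter\n",
}
print(import_graph(repo)["app"])  # {'util', 'json'} (set order may vary)
```

The same idea extends to function calls and type references via the other node kinds in the syntax tree, which is why code needs no labeling step before graph construction.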
Blitzy founders Brian and Sid break down how their “infinite code context” system lets AI autonomously complete over 80% of major enterprise software projects in days. They dive into their dynamic agent architecture, how they choose and cross-check different models, and why they prioritize advances in AI memory over fine-tuning. The conversation also covers their 20¢/line pricing model, the path to 99%+ autonomous project completion, and what this all means for the future software engineering job market. Sponsors: Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. 
Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive CHAPTERS: (00:00) About the Episode (03:02) AGI effects without AGI (07:07) Domain-specific context engineering (16:54) Dynamic harness and evals (Part 1) (17:00) Sponsors: Blitzy | Tasklet (20:00) Dynamic harness and evals (Part 2) (30:42) Graphs, RAG, and memory (Part 1) (30:49) Sponsor: Serval (32:26) Graphs, RAG, and memory (Part 2) (41:17) Model zoo and memory (50:07) Planning, scaling, and parallelism (56:13) Pricing, onboarding, and autonomy (01:04:24) Closing the last 20% (01:12:34) Strange behaviors and judges (01:22:23) Reasoning budgets and autonomy (01:33:36) Fine-tuning, benchmarks, and training (01:42:31) Securing AI-generated code (01:49:52) Future of software work (01:57:05) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Prasad Calyam, Curators' Distinguished Professor and Center Director at the University of Missouri, joins the show to explore how knowledge graphs, modern data platforms, and AI are reshaping power grids and cybersecurity. He breaks down graph database fundamentals, real-world research projects, and how industry can tap into cutting-edge university work—all in language that engineers, data folks, and developers can put to use.

Timestamps
01:30 Meet Prasad Calyam
02:57 Why Higher Education?
05:22 Data Analytics
06:59 The Modern Power Grid
09:40 Graph DB Fundamentals
12:21 Cybersecurity via Graphs and RAG
13:45 Research Projects
14:38 Industry Leveraging University Research
16:07 Advice for Students
17:16 What's Fun for Professors

Links
LinkedIn: linkedin.com/in/prasadcalyam
Website: http://www.missouri.edu

#KnowledgeGraphs #GraphDatabase #RAG #Cybersecurity #PowerGrid #DataEngineering #AI #MLOps #TechPodcast #Developers #ResearchToProduction #UniversityResearch

Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
This is a link post. The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling. The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers. However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...]
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models
Linkpost URL: https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the impact of the first human-level models, change the business model for frontier AI, reduce the need for power-intense data centres, and derail the current paradigm of AI governance via training compute thresholds. Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.

The end of an era — for both training and governance

The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...]

Outline:
(01:06) The end of an era — for both training and governance
(05:24) Scaling inference-at-deployment
(06:42) Reducing the number of simultaneously served copies of each new model
(08:45) Reducing the value of securing model weights
(09:30) Reducing the benefits and risks of open-weight models
(10:05) Unequal performance for different tasks and for different users
(12:08) Changing the business model and industry structure
(12:50) Reducing the need for monolithic data centres
(17:16) Scaling inference-during-training
(28:07) Conclusions
(30:17) Appendix. Comparing the costs of scaling pre-training vs inference-at-deployment
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance
Linkpost URL: https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance
---
Narrated by TYPE III AUDIO.
This is a link post. In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we've seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve. This new era involves scaling up two kinds of compute:
1. the amount of compute used in RL post-training
2. the amount of compute used every time the model answers a question
Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model. But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...]
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference
Linkpost URL: https://www.tobyord.com/writing/mostly-inference-scaling
---
Narrated by TYPE III AUDIO.
This is a link post. There is an extremely important question about the near-future of AI that almost no-one is asking. We've all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours. As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year. But we are missing a key piece of information — the cost of performing this work. Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...]

Outline:
(13:02) Conclusions
(14:05) Appendix
(14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs.
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially
Linkpost URL: https://www.tobyord.com/writing/hourly-costs-for-ai-agents
---
Narrated by TYPE III AUDIO.
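The growth figures quoted in the post (roughly 4,000x in parameter count, about 100,000x in tokens generated per task) imply, to first order, that compute per task has grown by their product, before accounting for the "massive efficiencies" the post also notes. As back-of-envelope arithmetic:

```python
# Back-of-envelope from the figures quoted in the post: over ~7 years,
# model size grew ~4,000x and tokens generated per task ~100,000x.
param_growth = 4_000
token_growth = 100_000

# Per-token compute scales roughly with parameter count, so compute
# per task scales with their product (before efficiency gains).
compute_growth = param_growth * token_growth
print(f"{compute_growth:.0e}")  # 4e+08
```

A ~4x10^8 raw-compute growth factor is exactly why the post argues cost data is the missing piece in the METR extrapolations.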
This is a link post. Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work.

METR's results on the length of tasks agents can reliably complete

A recent paper by Kwa et al. (2025) from the research organisation METR has found an exponential trend in the duration of the tasks that frontier AI agents can [...]

Outline:
(05:33) Explaining these results via a constant hazard rate
(14:54) Upshots of the constant hazard rate model
(18:47) Further work
(19:25) References
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3
Linkpost URL: https://www.tobyord.com/writing/half-life
---
Narrated by TYPE III AUDIO.
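The constant-hazard model the post describes has a simple closed form: with a constant per-minute failure rate λ, the success probability on a task of human-length t minutes is exp(-λt), and the agent's half-life is ln 2 / λ. A minimal sketch (the function name and the numbers are illustrative, not taken from the paper):

```python
import math

def success_rate(task_minutes: float, half_life_minutes: float) -> float:
    """Constant-hazard model: probability an agent completes a task that
    would take a human `task_minutes`, given the agent's half-life."""
    lam = math.log(2) / half_life_minutes  # per-minute failure rate
    return math.exp(-lam * task_minutes)

# An agent with a 60-minute half-life succeeds half the time on 1-hour tasks...
print(success_rate(60, 60))   # ≈ 0.5
# ...but only ~6% of the time on 4-hour tasks (0.5 ** 4).
print(success_rate(240, 60))  # ≈ 0.0625
```

This is why success falls off exponentially with task length in the model: each additional half-life's worth of work halves the chance of getting through without a fatal subtask failure.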
This is a link post. Improving model performance by scaling up inference compute is the next big thing in frontier AI. But the charts being used to trumpet this new paradigm can be misleading. While they initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3. I explore how to interpret these new charts and what evidence for strong scaling and progress would look like.

From scaling training to scaling inference

The dominant trend in frontier AI over the last few years has been the rapid scale-up of training — using more and more compute to produce smarter and smarter models. Since GPT-4, this kind of scaling has run into challenges, so we haven't yet seen models much larger than GPT-4. But we have seen a recent shift towards scaling up the compute used during deployment (aka 'test-time compute' or 'inference compute'), with more inference compute producing smarter models. You could think of this as a change in strategy from improving the quality of your employees' work via giving them more years of training in which to acquire [...]
---
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart
Linkpost URL: https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart
---
Narrated by TYPE III AUDIO.
We're joined by droqen (The End of Gameplay, Starseed Pilgrim), Darius Kazemi (Tiny Subversions, Harvard Applied Social Media Lab), and Tara Macalister (mathematician, composer) to discuss Vertex Dispenser, the second game in our year-long exploration of the work of Michael Brough. Next Month: Kompendium Audio edited by Dylan Shumway. Discussed in this episode: Vertex Dispenser https://store.steampowered.com/app/102400/Vertex_Dispenser/ Michael Brough's Website https://www.smestorp.com/ Four color theorem https://en.wikipedia.org/wiki/Four_color_theorem Graph coloring https://en.wikipedia.org/wiki/Graph_coloring Starcraft II https://starcraft2.blizzard.com/en-us/ Splatoon https://splatoon.nintendo.com/ Dota 2 https://www.dota2.com/home Droqen's rare color graph/explanation https://discord.com/channels/690388280767807518/1442554518092120186/1465039921147412510 lots of michael brough games https://smestorp.itch.io/lots-of-michael-brough-games The Sense of Connectedness https://forums.tigsource.com/index.php?topic=16151.0 Kompendium https://mightyvision.blogspot.com/2012/06/kompendium.html The End of Gameplay https://droqen.itch.io/the-end-of-gameplay Utopia Clicker https://tinysubversions.com/game/utopia/ A Jackpot of Skulls https://brainfruit.studio/games/jackpot _update() Jam https://adamatomic.itch.io/update-jam https://secretlives.games/ https://discord.gg/tslog https://www.patreon.com/tslog https://www.youtube.com/eggplantshow
This is a link post. AI capabilities have improved remarkably quickly, fuelled by the explosive scale-up of resources being used to train the leading models. But if you examine the scaling laws that inspired this rush, they actually show extremely poor returns to scale. What's going on?

AI Scaling is Shockingly Impressive

The era of LLMs has seen remarkable improvements in AI capabilities over a very short time. This is often attributed to the AI scaling laws — statistical relationships which govern how AI capabilities improve with more parameters, compute, or data. Indeed, AI thought-leaders such as Ilya Sutskever and Dario Amodei have said that the discovery of these laws led them to the current paradigm of rapid AI progress via a dizzying increase in the size of frontier systems. Before the 2020s, most AI researchers were looking for architectural changes to push the frontiers of AI forwards. The idea that scale alone was sufficient to provide the entire range of faculties involved in intelligent thought was unfashionable and seen as simplistic. A key reason it worked was the tremendous versatility of text. As Turing had noted more than 60 years earlier, almost any challenge that one could pose to [...]

---

First published: January 30th, 2026
Source: https://forum.effectivealtruism.org/posts/742xJNTqer2Dt9Cxx/the-scaling-paradox
Linkpost URL: https://www.tobyord.com/writing/the-scaling-paradox

---

Narrated by TYPE III AUDIO.
UK Property Market Weekly Update - Week 3 of 2026 I look at the UK property market in the 'UK Property Market Stats Show' for the week ending Sunday 25th January 2026 (week 3) with the brilliant Steph Vass. YouTube: https://youtu.be/496XoAgOVIU ✅ New Listings * 35.2k new properties came to market this week in week 3, up as expected from 32.8k last week. * 2025 weekly average: 30.6k. * 10-year week 3 average: 31.8k. * Year-to-date (YTD): 96.5k new listings, 0.5% above 2025 YTD (96.1k), 17.5% above 2024 YTD (82.1k) and 34% above the 2017–19 average (72k). ✅ Price Reductions * 20k reductions this week. * 7.6% of resi homes for sale were reduced in December, compared to 12.8% in October, 14.1% in September, 11.1% in August, 14.1% in July and 14% in June. * 2025 average was 12.8%, versus the five-year long-term average of 10.74%. ✅ Sales Agreed * 24.6k homes sold stc this week 3, up as expected from 21.2k last week. * Week 3 average (for last 10 years): 23.4k. * 2026 weekly average: 19.1k. * YTD: 62.7k gross sales, which is 8.7% behind the week 3 YTD of 2025 (68.7k), yet 23.5% ahead of wk.3 2024 (50.8k) and 30.6% above the 2017–19 average (48k). * Thoughts - January 2025 was an exceptional month as we had the stamp duty deadline for April 2025; even so, this was a good sales month. To be ahead of 2024 and pre-Covid years by such an amount is good to see. ✅ Price Diff between Listings & Sales * Average asking price of listings last week: £413k. * Average asking price of sales agreed (SSTC) last week: £348k. * An 18.8% difference (the long-term 9-year average is 16% to 17%). ✅ Sell-Through Rate * 9.9% of homes on agents' books went SSTC in December '25. Down as expected from 13.5% in November, 15% in October, 14.1% in Sept, 14.5% in Aug, 15.4% in July, 15.3% in June, and 16.1% in May. * Pre-Covid average: 15.5%. ✅ Fall-Throughs * 4,783 fall-throughs last week (pipeline of 482k homes sold STC). * Weekly average for 2025: 6,100. * Fall-through rate: 25.8%, slightly up from 24.9% last week. 
* Long-term average: 24.2% (post-Truss chaos saw levels exceed 40%). ✅ Net Sales * Huge jump in net sales from last week: 19.3k, up from 15.8k last week. * Ten-year week 3 average: 18.2k. * Weekly average for 2026: 15.4k. * Weekly average for the whole of 2025: 19.2k. * YTD: 46.1k, which is 8.3% behind Wk.3 of 2025 (30.6k), 35% ahead of Wk.2 2024 (19.9k) and 40% ahead of Wk.2 2017–19 (19.1k). ✅ Probability of Selling (% that exchange vs withdraw) * December stats: 60.2% of homes that left agents' books exchanged & completed in December. (Note this figure will change throughout the month as more December stats come in.) * November 55.2% / October 53.3% / September 53.1% / August 55.8% / July 50.9% / June 51.3% / May 51.7% / April 53.2%. * Dec 24: 60.3% / Dec 23: 57.7% / Dec 22: 64.4% / Dec 21: 73.7%. ✅ Stock Levels * 613k homes on the market on the 1st of January '26, down from 678k on 1st of December '25. (605k on the market on 1st Jan '25 for comparison.) * 434k homes in agents' sales pipelines on the 1st Jan 2026, almost identical to 12 months ago on 1st Jan '25 (439k). ✅ House Prices (£/sq.ft) * December 2025 agreed sales averaged £337.09 per sq.ft, 0.6% higher than 12 months ago (£335.04) and 12.6% higher than 5 years ago (£299.30). The £/sq.ft at sale agreed matches the HM Land Registry Index with 98% accuracy, 5 months in advance. That is why it is so important. ✅ UK Rental Market Overview * Average rent in December 2025: £1,702 pcm, compared to £1,719 pcm in Dec 2024 and £1,301 pcm in Dec 2017. * Available rental properties in December '25: 285k, compared to 321k in November '25. (Dec '24 - 258k and Dec '23 - 235k.) ✅ Graphs https://youtu.be/496XoAgOVIU
Episode: 00302 Released on January 19, 2026 Description: Artificial intelligence is everywhere, but to what degree? Andreas Olligschlaeger returns to Analyst Talk for a deep dive into AI in law enforcement analysis. We break down what AI really is (and isn't), explore graph databases, anomaly detection, and Graph RAG, and discuss how analysts can use AI without replacing human judgment. The conversation also tackles ethics, explainability, and why validation and transparency matter more than ever. This episode is a must-listen for analysts trying to separate real capability from AI hype.
Submit your CPR Report here. Get a call from Dr. Zeeshan or Nurse Brittany by filling out the form here: https://docs.google.com/forms/d/e/1FAIpQLSeAO_cq5OE6ONYgDFSz0HHrUqKt2Nk1JfC-3D7eXUl8LlzGdg/viewform

Our February course is $149.99. JOIN ASAP! https://nclexhighyield.com/collections/february-courses

Our Self-Paced Online Videos are on sale for $44.99 and have updated notes, videos, and practice questions! You can join at https://nclexhighyieldcourse.com/p/full-nclex-course7
Juan and Tim rant about Context Graphs and categorizations of how companies work with data.See omnystudio.com/listener for privacy information.
My guests today are Animesh Koratana and Jamin Ball. Animesh is the founder and CEO of our portfolio company PlayerZero, which is building AI production engineers that operate complex enterprise software autonomously - resolving production incidents, catching defects before release, and building durable models of how systems actually behave. Jamin is a partner at Altimeter Capital and the writer behind Clouded Judgement, a Substack where he analyzes emerging trends in enterprise software. Jamin recently sparked a debate with an essay titled “Long Live Systems of Record.” His core argument is that while agents are changing how software is used and where value accrues, they still depend on ground truth. Systems of record won't disappear so much as get pushed down the stack as new agent-native interfaces emerge on top. My partner Jaya and I felt compelled to respond, with Animesh contributing insights based on what he's seeing on the ground as he builds PlayerZero. 
We call these decision traces, and we believe the context graph they form over time will become the most valuable asset for companies building and deploying AI systems. It's a genuine debate - and one that's only going to matter more as agents move from demos to production. Looking forward to keeping the conversation going!

Chapters
00:00 Why Jamin's essay sparked debate
00:35 Jamin's thesis: why agents need ground truth
02:00 Animesh on why context graphs become the new source of leverage
07:58 What current systems of record miss
08:28 PlayerZero's perspective: context graphs in practice
10:00 How context graphs could change org structures
11:10 How to capture decision traces without forcing humans to log them
14:35 Which systems of record are most at risk
17:04 Two workflows ripe for disruption: GTM and software development
22:31 Animesh on where context graphs can add most value
28:50 Why context graphs create durability vs short-lived point solutions
30:00 Will context graphs be verticalized or universal?
34:00 Bear case: do context graphs fail like semantic layers?
43:27 2026 predictions: big AI IPOs, world models, enterprise agent adoption
45:00 Hot takes: point solutions die; AI job-loss discourse hits a fever pitch
47:30 Jevons paradox: why agents create more work, not less
Tim and Juan chat with Tony Baer and Matt Housley, hosts of the “It’s About Data” podcast, about the trends they are seeing at the start of 2026. We talked about AI magical thinking, agentic architectures, graphs, careers and much more. See omnystudio.com/listener for privacy information.
This is the takeaway episode from the chat that Tim and Juan had with Tony Baer and Matt Housley, hosts of the “It’s About Data” podcast, about the trends they are seeing at the start of 2026. We talked about AI magical thinking, agentic architectures, graphs, careers and much more. See omnystudio.com/listener for privacy information.
Join Sean White and Aaron Nichols as they turn the world of solar energy upside down with humor, stories, and a fresh take on education. Discover why facts and graphs aren't enough, and how laughter and creativity can make even the most technical topics memorable. From fake ads to satirical brainstorms, this episode proves that learning about solar can be fun, engaging, and unforgettable. Tune in for a brighter perspective! Topics Covered Sitcom = Situation Comedy Making Boring Stuff Fun Graphs & Acronyms Exact Solar This Week in Solar Substack www.exactsolar.substack.com The Modern Mythmaker www.themodernmythmaker.substack.com Podbean Colbert Report Solyndra Fake Advertisement Big Oil Beverly Hillbillies YouTube Channel Repurposing Reach out to Aaron Nichols here: LinkedIn: www.linkedin.com/in/aaron-nichols Substack: www.exactsolar.substack.com Learn more at www.solarSEAN.com and be sure to get NABCEP certified by taking Sean's classes at www.heatspring.com/sean www.solarsean.com/pvip www.solarsean.com/esip
TL;DR: When surveyed, the EA community and leaders think ~18-24% of resources should go towards animal advocacy. The actual figure is about 7%. We as the EA ecosystem are putting fewer resources (money and time) into animal advocacy than the movement, when surveyed, thinks we should. This disparity could be due to loss of message fidelity, animal advocacy being a harder cause area to pitch to donors, or the role of large funders, but I'm honestly not too sure.

My job at Senterra Funders involves making the case to EA/EA-adjacent prospective donors that they can do a tonne of good by donating to animal advocacy charities. As part of this work I've noticed a certain level of inconsistency in the EA ecosystem: I encounter a lot more people who want the animal advocacy movement to 'win' than people working in or donating to the space.

The numbers

It turns out this intuition is backed up by survey data. Sources (see Appendix for extra details): the Meta Coordination Forum (MCF; 2024) / Talent Need Survey on ideal allocation of financial resources; EA Community survey data from 2023 on jobs by cause area, which I obtained in private correspondence with David Moss; historical EA [...]

---

Outline:
(01:07) The numbers
(02:37) Accounting for the disparity
(05:04) Appendix 1. Data Sources

---

First published: January 13th, 2026
Source: https://forum.effectivealtruism.org/posts/FxZdQJXs45fTFnMEe/is-ea-underfunding-animal-advocacy-according-to-our-own

---

Narrated by TYPE III AUDIO.
For episode 234, we're excited to welcome Yaniv Tal, a legendary builder who has helped shape the foundations of Web3 as we know it. Yaniv is the co-founder and former CEO of The Graph, one of the most critical pieces of decentralized infrastructure in the ecosystem, powering tens of thousands of applications across Web3. Today, he's building Geo, a project focused not on scaling transactions, but on rebuilding trust, knowledge, and coordination on the internet itself.In today's episode you'll learn:
Graphs and charts: One dealt with the average life span. I still have 17 years left according to statistical data. Another dealt with how much time we have left after arriving at age 65 to enjoy good health.
We're a week into 2026 and the optimism is still flowing! But that doesn't stop us hearing about all the binging and gains you had over the festive period. Plus, joining gyms, chocolate oranges, mixing Marmite with avocado and clothes not fitting. NOTE: there is more Easter chat in this episode than you'd expect from a January podcast.Send us a voicenote: 07468 286104 If you'd like to mark your weight loss with our exclusive certificates, get Extra Portions of this podcast and win CASH PRIZES go to patreon.com/noshameinagain or find us on the Patreon app. Hosted on Acast. See acast.com/privacy for more information.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Why “context graphs” have suddenly become one of the most important ideas in enterprise AI, and what they reveal about why agents fail or succeed at real work. This episode explains the core idea behind context graphs, how they differ from systems of record and knowledge graphs, and why capturing decision traces — the why, not just the what — may be the key to scalable autonomy inside organizations. In the headlines: AI wearables make another run at relevance, China reports early success using AI for cancer detection, X faces global backlash over Grok moderation failures, and Yann LeCun publicly breaks with Meta's AI strategy.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
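As a toy illustration of the "why, not just the what" distinction (everything below is invented for illustration and is not any vendor's API or data model), a context graph can be thought of as a system of record plus a rationale attached to every recorded action:

```python
from dataclasses import dataclass, field

@dataclass
class ContextGraph:
    """Hypothetical sketch: edges between actors and targets that keep
    the decision trace (the reasoning), not only the resulting action."""
    edges: list = field(default_factory=list)

    def record(self, actor: str, action: str, target: str, rationale: str):
        # A plain system of record would store only (actor, action, target);
        # the decision trace adds the judgment that produced the action.
        self.edges.append({"actor": actor, "action": action,
                           "target": target, "why": rationale})

    def traces_about(self, target: str):
        return [e for e in self.edges if e["target"] == target]

g = ContextGraph()
g.record("agent-1", "waived_fee", "invoice-42",
         "long-standing customer; prior billing error on our side")
print(g.traces_about("invoice-42")[0]["why"])
```

The point of the sketch is only that the "why" field is queryable later, which is what would let a future agent reuse the judgment rather than just observe the outcome.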
Tots TURNT: We have an update on the final push for Tots TURNT. Shout outs to everyone that has donated. Corey's Angels Live: We must continue our watch of the best show on the Internet, Corey's Angels Live. This is a complete disaster. Special Guests: Fred Durst is in the building and Gerard McMahon, all the celebs! THE BEAR!, FUCK YOU, WATCH THIS!, THE KINKS!, FATHER CHRISTMAS!, SENTIENT NECK PUSSY!, AI!, ROAST!, TOTS TURNT!, DONATIONS!, SUPPORT!, COREY'S ANGELS LIVE!, COMMENTS!, CHAT!, SCROLLING!, DAISY DE LA HOYA!, HATERS!, TROLLS!, MODELS!, MERCH!, MONOLOGUE!, HOWARD STERN!, ROAST!, MICHAEL JACKSON!, SURGERY!, PHILIP SEYMOUR HOFFMAN!, LOVE OR POOP!, VOTES!, GRAPH!, GUY ON THE BOARDS!, TRYOUTS!, HATERS!, FRED DURST!, CRY LITTLE SISTER!, GERARD MCMAHON!, MICHAEL SCOTT!, SCUMBAG JOSH!, SUPERCHATS!, JAMESONANDJACK!, JUSTIN BIEBER!, PHILIP SEYMOUR HOFFMAN!, OVERDOSE!, BILL SHYTE!, FASHION SHOW!, ROCK OF LOVE!, DAISY OF LOVE!, SCIENCE!, BEANIE!, FISHERMAN HAT!, COOCOO!, AMERICA'S GOT TALENT!, BOOGIE DOWN!, PERFECT ENDING! You can find the videos from this episode at our Discord RIGHT HERE!
What happens when the person preaching on Sunday morning believes something completely different than the folks sitting in the pews? Well friends, that's exactly what we're digging into today. My buddy Ryan Burge brought the graphs—including some brand new data that hasn't even dropped on his Substack yet—and let me tell you, it's a real deal predicament for Mainline Protestantism. Turns out about 60-70% of mainline clergy identify as liberal, but only about 25% of the people in the pews do. That's not a gap, that's a canyon. We're talking ELCA, UCC, PCUSA, Episcopalians—the whole crew. And look, Ryan and I are both mainline folks, so we're not throwing rocks across the river here. We're throwing rocks at our own faces. We get into why this disconnect exists, what the "silver tsunami" of aging Boomers means for these congregations, and why young progressive folks aren't joining our churches even though we thought we built them a home. It's honest, it's a little uncomfortable, and yeah, we also talk about Zion Williamson and Christmas movies because that's just how we roll. If you want to go deeper on where American religion is headed, join me and Ryan along with Tony Jones for our upcoming class The Rise of the Nones this January at www.AmericanNones.com. Come on. You can WATCH the conversation and see the graphs on YouTube Dr. Ryan Burge is a professor of practice at the Danforth Center on Religion and Politics at Washington University in St. Louis. He is the author or co-author of four books including The Nones, The American Religious Landscape, and The Great Dechurching. He has written for the New York Times, the Wall Street Journal and POLITICO. 
He has also appeared on 60 Minutes, where Anderson Cooper called him, “one of the leading data analysts of religion and politics in the United States.” Previous Visits from Ryan Burge Gen Z Revival?: The Next Chapter in American Religious Life The 2024 Election & Religion Post-Mortem Distrust & Denominations Trust, Religion, & a Functioning Democracy What it's like to close a church The Future of Christian Education & Ministry in Charts The Sky is Falling & the Charts are Popping! Graphs about Religion & Politics w/ Spicy Banter a Year in Religion (in Graphs) Evangelical Jews, Educated Church-Goers, & other bits of dizzying data 5 Religion Graphs w/ a side of Hot Takes Myths about Religion & Politics Join us at Theology Beer Camp, October 8-10, in Kansas City! UPCOMING ONLINE CLASS: The Rise of the Nones One-third of Americans now claim no religious affiliation. That's 100 million people. But here's what most church leaders get wrong: they're not all the same. Some still believe in God. Some are actively searching. Some are quietly indifferent. Some think religion is harmful. Ryan Burge & Tony Jones have conducted the first large-scale survey of American "Nones", which reveals 4 distinct categories—each requiring a different approach. Understanding the difference could transform everything from your ministry to your own spiritual quest. Get info & join the donation-based class (including $0) here. This podcast is a Homebrewed Christianity production. Follow the Homebrewed Christianity, Theology Nerd Throwdown, & The Rise of Bonhoeffer podcasts for more theological goodness for your earbuds. Join over 75,000 other people by joining our Substack - Process This! Get instant access to over 50 classes at www.TheologyClass.com Follow the podcast, drop a review, send feedback/questions or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices
The late Robert Solow was a giant among economists. When he was 98 years old he told Steve about cracking German codes in World War II, why it's so hard to reduce inequality, and how his field lost its way. SOURCES:Robert Solow, professor emeritus of economics at the Massachusetts Institute of Technology. RESOURCES:"Secrecy, Cigars, and a Venetian Wedding: How the P.G.A. Tour Made a Deal with Saudi Arabia," by Alan Blinder, Lauren Hirsch, Kevin Draper, and Kate Kelly (The New York Times, 2023)."Global Assessment of Environmental-Economic Accounting and Supporting Statistics: 2020," by United Nations Committee of Experts on Environmental-Economic Accounting (2021)."Where Modern Macroeconomics Went Wrong," by Joseph E. Stiglitz (Oxford Review of Economic Policy, 2015)."As Inequality Grows, So Does the Political Influence of the Rich," (The Economist, 2018)."Big Bang Financial Deregulation and Income Inequality: Evidence From U.K. and Japan," by Daniel Waldenstrom and Julia Tanndal (VoxEU, 2016)."The Fall And Rise Of U.S. Inequality, In 2 Graphs," by Quoctrung Bui (Planet Money, 2015).Nobel Prize Biographical, by Robert Solow (1987).Principles of Political Economy, by John Stuart Mill (1848). EXTRAS:"Is Economic Growth the Wrong Goal? (Update)," by Freakonomics Radio (2023). Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this sponsored Soap Box edition of the Risky Business podcast, Patrick Gray chats with Jared Atkinson, CTO of SpecterOps, about BloodHound OpenGraph. OpenGraph enumerates attack paths across platforms and services, not just your primary directories. A compromised GitHub account to on-prem AD compromise attack path? It's a thing, and OpenGraph will find it. Cross-platform attack path enumeration! So good! This episode is also available on Youtube. Show notes
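The GitHub-to-AD idea can be sketched as a toy cross-platform graph searched breadth-first (the identities and edges below are invented for illustration; this is not the BloodHound OpenGraph data model or API):

```python
from collections import deque

# Hypothetical cross-platform edges: "who can reach what" across
# services, not just within one directory.
edges = {
    "github:alice": ["ci:deploy-runner"],      # push access to CI config
    "ci:deploy-runner": ["ad:svc-deploy"],     # runner holds AD creds
    "ad:svc-deploy": ["ad:domain-admins"],     # service account over-privileged
}

def find_path(start: str, goal: str):
    """Breadth-first search from a compromised identity to a target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A compromised GitHub account reaching on-prem AD admin:
print(find_path("github:alice", "ad:domain-admins"))
```

The interesting part is not the search (plain BFS) but the edge set: once edges from different platforms live in one graph, paths that cross service boundaries fall out automatically.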
Join the Refrigeration Mentor Hub here Learn more about Refrigeration Mentor Customized Technical Training Programs at www.refrigerationmentor.com/courses This episode is another of our "Morning Coffee" sessions with longtime refrigeration professionals Andrew Freeburg and Erik Holland, diving deep into trend graphs and CO2 refrigeration system troubleshooting. We cover understanding and reading trend graphs, practical tips for locating and analyzing system inefficiencies, and the procedures for fine-tuning refrigeration systems. We also discuss gas coolers, the impact of ambient temperature on system performance, and effective strategies for maintaining system stability in cold climates. Interested in joining the next Morning Coffee live? Join our FREE Refrigeration Mentor Community today. In this episode, we cover: -Understanding trend graphs -Comparing graphs on E2 systems -Trend graph analysis -Graphing techniques on E2 systems -Analyzing CO2 system graphs -System fine-tuning -Creating work and building credibility -Advanced graph analysis and troubleshooting -High pressure valve and bypass valve correlation -Gas cooler and system stability -System oscillations and expansion valves -Gas cooler maintenance -High pressure valve control -Fan control strategies -Troubleshooting fan and gas cooler issues -Winter challenges in CO2 systems -Heat reclaim and system efficiency Helpful Links and Resources: Episode 287. CO2 Experts: Using Trend Graphs to Troubleshoot CO2 Systems with Andrew Freeburg Episode 144. Troubleshooting CO2 High Pressure Valves Using Trend Graphs with the CAREL Boss System Episode 350. Supermarket Refrigeration Tips and Tricks with Robert Ochs
In this special 200th episode of Reclaim Your Rise, I sit down with Risely coaching alum Layne—an ICU nurse practitioner who has lived with type 1 diabetes for over 30 years—to explore a struggle I know so many in our community quietly carry: the weight of comparison, perfectionism, and those triggering “perfect” flat-line graphs. Even with an A1C of 5.8, Layne shares how she felt mentally drained from micromanaging every detail of her diabetes and questioning whether true freedom and stable numbers could ever exist together. She opens up about the years when flat graphs came only from restriction, and how that left her wondering if peace was possible without losing herself again. In our conversation, Layne reflects on how she learned to redefine progress, shift her mindset, and rebuild trust with her body in a way that finally brought her steadier days and more ease. I'm so excited for you to hear this honest, raw, and deeply relatable story. And to celebrate episode 200, I'm hosting a special giveaway for the community—details are inside!
Day & Ben analyze 6 different heart rate graphs from the athletes we coach.» Watch on YouTube: https://youtu.be/ibpFvGpzIqs» View All Episodes: https://zoarfitness.com/podcast/» Hire a Coach: https://www.zoarfitness.com/coach/» Shop Programs: https://www.zoarfitness.com/product-category/downloads/» Follow ZOAR Fitness on Instagram: https://www.instagram.com/zoarfitness/Support the show
Sherif Mansour, Head of AI at Atlassian, discusses bridging AI agents with massive-scale enterprise software deployment, drawing insights from Atlassian's millions of non-technical users. He shares his framework for avoiding "AI Slop" using Taste, Knowledge, and Workflow, and explains Atlassian's "Teamwork Graph" for complex enterprise queries beyond RAG. The conversation also explores the evolving relationship between AI and UI, and the shift from humans as workers to architects of AI-driven processes. This episode offers practical wisdom for both AI engineers and business leaders navigating the future of AI-enabled organizations. Sponsors: Framer: Framer is the all-in-one tool to design, iterate, and publish stunning websites with powerful AI features. Start creating for free and use code COGNITIVE to get one free month of Framer Pro at https://framer.com/design Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (03:56) Atlassian's AI Vision (08:27) Trust, Authenticity, and Slop (14:10) Taste, Knowledge, and Workflow (Part 1) (17:33) Sponsors: Framer | Tasklet (20:14) Taste, Knowledge, and Workflow (Part 2) (Part 1) (29:51) Sponsor: Shopify (31:47) Taste, Knowledge, and Workflow (Part 2) (Part 2) (31:48) Technicals: RAG vs. 
Graphs (40:48) Forgetting, Cost, and Optimization (52:28) The Model Commoditization Debate (55:12) The Future of AI Interfaces (01:02:44) How AI Changes SaaS (01:09:43) Debating the One-Person Unicorn (01:16:17) Becoming a Workflow Architect (01:21:39) The Browser for Work (01:33:23) How Leaders Drive Adoption (01:39:26) Conclusion: Just Go Tinker (01:40:08) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series and Andy Ellis (@csoandy), principal of Duha. Joining them is our sponsored guest, Nathan Hunstad, director, security, Vanta. In this episode: Metrics that matter Testing for real AI as an assistant Intelligence without context Huge thanks to our sponsor, Vanta Vanta automates key areas of your GRC program—including compliance, risk, and customer trust—and streamlines the way you manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get back time to focus on strengthening security and scaling your business at vanta.com/ciso
Show Description: What do Balatro streamers do when the game is over, Random in CSS is so hot right now, Dave has a better idea for charts and graphs that would change the world, Quiet UI follow up, Dave tries vibe coding a tennis app and doesn't completely John McEnroe his laptop, Chris wonders about better cursor UI on the web, and debating affordances vs conventions. Listen on WebsiteWatch on YouTubeLinks Jynxzi - Twitch BALL x PIT on Steam Could Open Graph Just Be a CSS Media Type? | Scott Jehl, Web Designer/Developer https://webawesome.com Podcast Awesome Quiet UI A Beautiful Site Eleventy is a simpler static site generator Don't use custom CSS mouse cursors – Eric Bailey Home | Rach Smith's digital garden The Two Button Problem – Frontend Masters Blog

Sponsors
tldraw: Have you ever wanted to build an app that works kinda like Miro or Figma, that has a zoomable infinite canvas, that's multiplayer, and really good, but you also want to build it in React with normal React components on the canvas? Good news! tldraw is the world's first, best, and only SDK for building infinite canvas apps in React. tldraw takes care of all the canvas complexities — things like the camera, selection logic, and undo redo — so that you can focus on building the features that matter to your users. It's easy to use with plenty of examples and starter kits, including a kit where you can use AI to create things on the canvas. Get started for free at tldraw.dev/shoptalk, or run npm create tldraw to spin up a starter kit.
For episode 625 of the BlockHash Podcast, host Brandon Zemp is joined by Rodrigo Coelho, CEO of Edge & Node. Autonomous agents across blockchain ecosystems are beginning to transact, communicate, and collaborate independently, yet there is no standardized way to manage agent-to-agent interactions (payments). Edge & Node, the founding team behind The Graph, is solving this problem by providing the missing management layer, Ampersend. Ampersend extends Coinbase's x402 payment protocol and Google's A2A communication standard with observability, automation, and compliance-ready controls. The result is an operational system where developers, startups, and enterprises can see how agents interact, set policies, manage budgets, and ensure reliability. Ampersend also aligns with Ethereum's emerging ERC-8004 agent discovery standard.

⏳ Timestamps:
(0:00) Introduction
(1:15) Who is Rodrigo Coelho?
(8:57) What is Edge & Node?
(10:10) What is Ampersend?
(12:35) What will the Agentic economy look like?
(18:53) Scaling the Agentic economy
(25:42) Neo Robots
(26:43) Importance of Agentic verifiability
(28:00) Ampersend launch
(30:10) Edge & Node roadmap for 2026
(32:47) Edge & Node website, socials & community
Ben Criddle talks BYU sports every weekday from 2 to 6 pm.Today's Co-Hosts: Ben Criddle (@criddlebenjamin)Subscribe to the Cougar Sports with Ben Criddle podcast:Apple Podcasts: https://itunes.apple.com/us/podcast/cougar-sports-with-ben-criddle/id99676
Scott and Jenny name and define the blood sugar “shapes” seen on CGM graphs—bell curves, spikes, plateaus, roller coasters—to create a shared language for understanding glucose patterns. Free Juicebox Community (non Facebook) Type 1 Diabetes Pro Tips - THE PODCAST Eversense CGM Medtronic Diabetes Tandem Mobi ** twiist AID System Drink AG1.com/Juicebox Use code JUICEBOX to save 40% at Cozy Earth CONTOUR NextGen smart meter and CONTOUR DIABETES app Dexcom G7 Go tubeless with Omnipod 5 or Omnipod DASH * Get your supplies from US MED or call 888-721-1514 Touched By Type 1 Take the T1DExchange survey Apple Podcasts> Subscribe to the podcast today! The podcast is available on Spotify, Google Play, iHeartRadio, Radio Public, Amazon Music and all Android devices The Juicebox Podcast is a free show, but if you'd like to support the podcast directly, you can make a gift here or buy me a coffee. Thank you! *The Pod has an IP28 rating for up to 25 feet for 60 minutes. The Omnipod 5 Controller is not waterproof. ** t:slim X2 or Tandem Mobi w/ Control-IQ+ technology (7.9 or newer). RX ONLY. Indicated for patients with type 1 diabetes, 2 years and older. BOXED WARNING:Control-IQ+ technology should not be used by people under age 2, or who use less than 5 units of insulin/day, or who weigh less than 20 lbs. Safety info: tandemdiabetes.com/safetyinfo Disclaimer - Nothing you hear on the Juicebox Podcast or read on Arden's Day is intended as medical advice. You should always consult a physician before making changes to your health plan. If the podcast has helped you to live better with type 1 please tell someone else how to find it!