At Meta, even seemingly simple engineering tasks—like updating an API—become monumental undertakings when you're dealing with millions of lines of code and thousands of engineers, especially if the changes are security-related. In today's episode, Pascal talks to Alex and Tanu about the challenges and learnings from the journey of making Meta's mobile frameworks more secure at a scale few companies ever experience. Tune in to this episode and join us as we explore the compelling crossroads of security, automation, and AI within mobile development. Got feedback? Send it to us on Threads (https://threads.net/@metatechpod), Instagram (https://instagram.com/metatechpod) and don't forget to follow our host Pascal (https://mastodon.social/@passy, https://threads.net/@passy_). Fancy working with us? Check out https://www.metacareers.com/.
Links
How AI Is Transforming the Adoption of Secure-by-Default Mobile Frameworks - https://engineering.fb.com/2025/12/15/android/how-ai-transforming-secure-by-default-mobile-frameworks-adoption/
RCCLX: Innovating GPU Communications on AMD Platforms - https://engineering.fb.com/2026/02/24/data-center-engineering/rrcclx-innovating-gpu-communications-amd-platforms-meta/
The Death of Traditional Testing: Agentic Development Broke a 50-Year-Old Field, JiTTesting Can Revive It - https://engineering.fb.com/2026/02/11/developer-tools/the-death-of-traditional-testing-agentic-development-jit-testing-revival/
Timestamps
Intro & News 0:06
Meet the Product Security Team 2:07
Understanding the Intent System 4:13
Security Challenges in Android's Intent System 6:44
Proposed Solutions for Intent Security 9:39
Meta's Unique Challenges at Scale 12:34
Implementing a Secure Link Launcher Framework 15:32
Leveraging AI for Contextual Understanding 17:55
Navigating AI-Driven Code Modifications 20:47
Developer Experience with AI Code Mods 21:49
Validation Challenges in AI Code Generation 25:37
Evolution of AI in Code Modifications 29:29
Identifying AI's Strengths in Security 36:20
Future Directions in AI and Framework Development 42:58
Outro 46:58
• Support & get perks!
• Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com
• Intro to Bayes and Advanced Regression courses (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Chapters:
00:00 The Importance of Decision-Making in Data Science
06:41 From Philosophy to Bayesian Statistics
14:57 The Role of Soft Skills in Data Science
18:19 Understanding Decision Theory Workflows
22:43 Shifting Focus from Accuracy to Business Value
26:23 Leveraging PyTensor for Optimization
34:27 Applying Optimal Decision-Making in Industry
40:06 Understanding Utility Functions in Regulation
41:35 Introduction to a Bayesian Decision Theory Workflow
42:33 Exploring Price Elasticity and Demand
45:54 Optimizing Profit through Bayesian Models
51:12 Risk Aversion and Utility Functions
57:18 Advanced Risk Management Techniques
01:01:08 Practical Applications of Bayesian Decision-Making
01:06:54 Future Directions in Bayesian Inference
01:10:16 The Quest for Better Inference Algorithms
01:15:01 Dinner with a Polymath: Herbert Simon
Thank you to my Patrons for making this episode possible!
Links from the show:
Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! https://www.fieldofplay.co.uk/
A Bayesian decision theory workflow
Daniel's website, LinkedIn and GitHub
LBS #124 State Space Models & Structural Time Series, with Jesse Grabowski
LBS #123 BART & The Future of Bayesian Tools, with Osvaldo Martin
LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt
LBS #76 The Past, Present & Future of Stan, with Bob Carpenter
In this episode of The School Leadership Show, I interview acclaimed author and reading expert Timothy Shanahan. We delve into Timothy's new book, 'Leveled Reading, Leveled Lives,' discussing the troubling stagnation in U.S. reading levels and how traditional approaches to reading instruction have failed over the decades. Timothy critiques the widespread but ineffective method of using leveled readers and advocates for teaching grade-level texts with appropriate support. The conversation covers historical and contemporary research, the evolution of instructional strategies, and practical advice for school administrators to help improve reading achievement across all grades. Learn more and visit Tim's website https://www.shanahanonliteracy.com/. If you have questions, feedback, or suggestions for future episodes, including great non-education books with lessons for school leaders, you can email me at Dr.mike.doughty@gmail.com. I would really appreciate it if you could leave a rating and review on Spotify or Apple Podcasts. It helps a lot. And if you found this episode helpful, please share it with your colleagues. If you are interested in sponsoring the podcast, feel free to contact me directly at Dr.mike.doughty@gmail.com. Stay connected with me here: Official Website: theschoolleadershipshow.org YouTube: youtube.com/@theschoolleadershipshow Facebook: facebook.com/theschoolleadershipshow Instagram: instagram.com/theschoolleadershipshow Chapters: 00:00 Introduction 01:20 Discussing the New Book: Leveled Reading Leveled Lives 04:28 Historical Context of Reading Instruction 10:22 Challenges with Current Reading Instruction Methods 21:43 Proposed Solutions and Future Directions 25:25 Addressing Reading Challenges in Young Learners 26:32 The Importance of Fluency and Comprehension 30:33 Background Knowledge and Its Role in Reading 35:55 Effective Reading Instruction Strategies 39:52 Reflecting on Changes in Reading Education
Stay informed on current events, visit www.NaturalNews.com - Introduction and Overview of Upcoming Reports (0:10) - Critique of Trump's State of the Union Speech (1:57) - Supreme Court Strikes Down Trump's Tariffs (5:32) - Economic Impact of Trump's Tariffs (34:30) - Trump's Economic Policies and Their Consequences (37:40) - The Role of AI in Job Replacement (38:00) - The Age of Ignorance is Over (51:23) - Interview with Garland Nixon (1:11:34) - International Political Tensions (1:18:08) - Impact of Potential War with Iran on American Politics (1:21:53) - War Weary Military and Instability (1:22:27) - Trump's Military Posturing and Credibility (1:24:46) - Risk of Loss of Credibility and Worst-Case Scenarios (1:27:47) - Impact of Huckabee's Remarks on Arab States (1:30:31) - Trump's Collapsing Support and Midterm Implications (1:33:32) - End of Empire and Loss of Faith in Institutions (1:35:59) - Final Thoughts and Future Directions (1:39:30) Watch more independent videos at http://www.brighteon.com/channel/hrreport ▶️ Support our mission by shopping at the Health Ranger Store - https://www.healthrangerstore.com ▶️ Check out exclusive deals and special offers at https://rangerdeals.com ▶️ Sign up for our newsletter to stay informed: https://www.naturalnews.com/Readerregistration.html Watch more exclusive videos here:
In this week's episode, we speak with Philippa Edwards, head of special education, and Haley Moran-Green, speech pathologist, who have been collaborating on how to improve classroom engagement for students with complex communication needs (CCN), in a central Queensland school. Haley and Philippa discuss how they supported teachers to understand how to create an aided language display (ALD) specific to the lesson plan and the positive impact this has had on teachers and students alike. Resources: Kent-Walsh, J., & McNaughton, D. (2005). Communication Partner Instruction in AAC: Present Practices and Future Directions. Augmentative and Alternative Communication, 21(3), 195–204. https://doi.org/10.1080/07434610400006646 Senner, J., & Baud, M. (2016). The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input. Communication Disorders Quarterly, 38. https://doi.org/10.1177/1525740116651251 SPA resources: If you are looking for more AAC learning opportunities, SPA members can access these at member prices: AAC skills lab series: https://learninghub.speechpathologyaustralia.org.au/topclass/topclass.do?expand-OfferingDetails-Offeringid=1644067 Speech Pathology Australia acknowledges the Traditional Custodians of lands, seas and waters throughout Australia, and offers our respect to Elders, across all times and places. The Speak Up podcast recognises the central role of yarning and oral storytelling in Aboriginal and Torres Strait Islander culture, how this translates to knowledge translation, and that colonisation has interrupted these practices of Language and knowledge sharing. The Speak Up podcast acknowledges the need for truth-telling and deep listening, the central role that Language plays in connecting Aboriginal and Torres Strait Islander People with Culture, Country, and Community, and the interwoven nature of health, and social and emotional wellbeing.
We recognise that the Traditional Owners of the Lands across Australia have been here since time immemorial, and that their sovereignty over this land, was never ceded. Free access to transcripts for podcast episodes is available via the SPA Learning Hub (https://learninghub.speechpathologyaustralia.org.au/); you will need to sign in or create an account. For more information, please see our Bio or for further enquiries, email speakuppodcast@speechpathologyaustralia.org.au Disclaimer: © (2026) The Speech Pathology Association of Australia Limited. All rights reserved. Important Notice, Please read: The views expressed in this presentation and reproduced in these materials are not necessarily the views of, or endorsed by, The Speech Pathology Association of Australia Limited (“the Association”). The Association makes no warranty or representation in relation to the content, currency or accuracy of any of the materials comprised in this recording. The Association expressly disclaims any and all liability (including liability for negligence) in respect of use of these materials and the information contained within them. The Association recommends you seek independent professional advice prior to making any decision involving matters outlined in this recording including in any of the materials referred to or otherwise incorporated into this recording. Except as otherwise stated, copyright and all other intellectual property rights comprised in the presentation and these materials, remain the exclusive property of the Association. Except with the Association's prior written approval you must not, in whole or part, reproduce, modify, adapt, distribute, publish or electronically communicate (including by online means) this recording or any of these materials.
How do experienced operators approach the most technically demanding aspects of Distal Venous Arterialization (DVA)? In this episode of BackTable, host Dr. Sabeen Dhand sits down with Dr. Kumar Madassery for a detailed discussion of procedural strategy, technical decision-making, and real-world troubleshooting in DVA. --- SYNOPSIS Dr. Madassery walks through his approach from pre-procedure planning to final scaffolding. The conversation begins with imaging review, patient selection, and anesthesia considerations, emphasizing how preparation influences technical success. They then examine venous mapping and access strategy, with specific attention to femoral and tibial disease patterns and how these anatomic variables shape crossing techniques. This episode also covers wire and catheter selection, techniques for creating the arteriovenous anastomosis, balloon sizing, valve management, and stent scaffolding. Throughout, Dr. Madassery shares practical solutions to common access challenges and highlights decision points that can determine procedural durability. The discussion closes with reflections on clinical management, operator fatigue, and the value of professional networks when navigating complex limb salvage cases. --- TIMESTAMPS
00:00 - Introduction
03:08 - Pre-Procedure Imaging and Setup
05:01 - Venous Access and Mapping
07:27 - Anesthesia and Patient Preparation
12:29 - Femoral and Tibial Disease Considerations
23:17 - Crossing Techniques and Tools
27:16 - Venous Access Challenges and Solutions
35:54 - Creating the Anastomosis
37:03 - Balloon Sizing and Scaffolding Techniques
38:26 - Navigating Venous Access Challenges
39:56 - Wire and Catheter Strategies
42:08 - Dealing with Valves and Anastomosis
44:16 - Proximal vs. Distal DVA Approaches
47:01 - Scaffolding and Stent Techniques
50:06 - Clinical Management and Case Fatigue
01:01:10 - Networking and Seeking Advice
01:05:41 - Concluding Thoughts and Future Directions
CoROM cast. Wilderness, Austere, Remote and Resource-limited Medicine.
This week, Aebhric is joined by Dr Harrison Steins, who is finishing his MSc in Austere Critical Care with CoROM. He has also finished medical school and is starting his emergency medicine training. His master's thesis was on the complexities of swimming-induced pulmonary oedema (SIPE), a rare condition affecting athletes, particularly in high-altitude environments. Steins discusses the pathophysiology, clinical presentation, diagnosis, and management strategies for SIPE, emphasising the importance of context in medical practice. He shares case studies, research findings, and future directions for understanding and treating this condition, highlighting the role of ultrasound in diagnosis and the need for tailored prevention strategies.
Takeaways
Swimming-induced pulmonary oedema is a rare condition with a prevalence of less than 1%.
Understanding the context of patient presentation is crucial for diagnosis.
Acute-onset cough and dyspnoea are key symptoms of SIPE.
Diagnosis requires a broad differential, ruling out other conditions first.
Management focuses on immediate life threats before addressing SIPE.
Hydration strategies can prevent SIPE, especially in athletes.
Sildenafil may be effective in preventing SIPE, but it is not widely recommended.
Handheld ultrasound is a reliable tool for diagnosing pulmonary oedema in the field.
Females may have a higher incidence of SIPE at lower elevations than males do.
Knowledge of population-specific pathology is essential for effective treatment.
Chapters
00:00 Introduction to Swimming-Induced Pulmonary Oedema
04:47 Understanding the Pathophysiology of Swimming-Induced Pulmonary Oedema
09:18 Case Studies and Clinical Presentation
13:48 Diagnosis and Imaging Techniques
19:26 Management Strategies and Treatment
24:17 Research Findings and Future Directions
What if the key to "getting" more in life isn't about trying harder—but about giving more freely? In this episode, we explore the Law of Circulation and how it shapes everything from our relationships and finances to our overall sense of abundance. Through insights from both science and spirituality, we look at how nature itself models this truth—nothing thrives by holding on, but by allowing energy to move. When we consciously choose to give—whether it's love, attention, generosity, or presence—we create space for something new to flow back into our lives. We also talk about how aligning with these universal laws can transform not just what we experience externally, but who we become internally. How might your life shift if you focused more on what you can contribute rather than what you can get? And what would happen if giving became less about obligation and more about expression? Join us for a heartfelt and practical conversation that invites us to step into a deeper level of abundance, fulfillment, and conscious living—by simply starting with what we already have to give.
Chapters
00:00 New Beginnings and Updates
02:53 The Law of Circulation: Giving to Receive
05:47 Nature's Examples of Giving and Receiving
08:41 The Importance of Alignment with Universal Laws
12:04 Concrete Examples of Giving in Life
14:56 Financial Giving and Spiritual Nourishment
17:51 Transformative Stories of Giving and Receiving
20:47 Recommended Resource: The Go-Giver
23:43 Closing Thoughts and Future Directions
From diagnosis to treatment, hysteroscopy plays a pivotal role in modern gynecologic care. In this episode of BackTable OBGYN, Dr. Christina Salazar, a minimally invasive gynecologic surgeon and associate professor at Dell Medical School in Austin, Texas, joins hosts Dr. Mark Hoffman and Dr. Amy Park to discuss the value of hysteroscopy in managing complex intrauterine pathology. --- SYNOPSIS Dr. Salazar shares her introduction to hysteroscopy and the mentors who shaped her early training. She discusses her expertise in hysteroscopic surgery and its broad applications, with a focus on the complexities of Asherman syndrome, dysmorphic uteri, and the critical role of endometrial health assessment. The conversation also covers surgical techniques, post-operative care, and emerging technologies in hysteroscopic and reproductive care. Dr. Salazar concludes by emphasizing the need for improved classification systems for Asherman syndrome and future directions in reproductive health innovation. --- TIMESTAMPS
00:00 - Introduction
05:34 - Training and Mentorship in Hysteroscopy
11:21 - Dr. Salazar's Practice and Techniques
14:00 - Challenges and Trends in Surgical Practices
18:58 - Referral Practices and Advanced Hysteroscopy
21:58 - Understanding Dysmorphic Uterine Population
24:08 - T-Shaped Uteri Description
26:09 - Hysteroscopic Metroplasty: Methods and Risks
29:17 - Innovations in Hysteroscopy
32:38 - Value of Ultrasound in Hysteroscopy
36:35 - Post-Operative Management and Estrogen Therapy
39:23 - Challenges and Future Directions in Hysteroscopy
44:23 - Concluding Thoughts
--- RESOURCES
The epidemiology, clinical burden, and prevention of intrauterine adhesions (IUAs) related to surgically induced endometrial trauma: a systematic literature review and selective meta-analyses - https://academic.oup.com/humupd/article/31/6/588/8248883
Hysteroscopy Newsletter - https://hysteroscopynewsletter.com/
In this conversation, the speakers discuss the evolving role of men in the church and society, highlighting a resurgence of male attendance in churches and the challenges they face. They explore the changing dynamics of masculinity, the importance of spiritual leadership in the home, and the need for men to connect meaningfully with each other. The discussion also touches on the significance of fatherhood, the impact of technology, and the future direction of men's ministry, emphasizing the importance of community and discipleship.
The science says no, at least not in the athletic sense. But the psychic benefits can be large — just ask former N.F.L. star Ricky Williams. He says athletes should consider cannabis a healing drug, not a party drug. Even the N.F.L. is starting to agree. (Part two of a two-part series.)
SOURCES:
Angela Bryan, professor, associate chair for faculty development in the department of psychology and neuroscience at the University of Colorado, Boulder.
Ricky Williams, former N.F.L. running back, founder of Highsman.
RESOURCES:
"Using A Lab On Wheels To Study Weed From Dispensaries," by Science Friday (2024).
"Exercise-induced euphoria and anxiolysis do not depend on endogenous opioids in humans," by Michael Siebers, Sarah Biedermann, Laura Bindila, Beat Lutz, and Johannes Fuss (Psychoneuroendocrinology, 2021).
"Endocannabinoids mediate runner's high," by Sudhakaran Prabakaran (Science Signaling, 2015).
"Cannabis and Exercise Science: A Commentary on Existing Studies and Suggestions for Future Directions," by Angela Bryan, Arielle Gilman, and Kent Hutchison (Sports Medicine, 2015).
Run Ricky Run, documentary (2010).
EXTRAS:
"Is America Switching from Booze to Weed?" series by Freakonomics Radio (2024).
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Stay informed on current events, visit www.NaturalNews.com - Introduction and Overview of the Podcast (0:00) - Economic Crisis and Market Crash (2:45) - Special Report on F-35 Radar Systems (5:38) - US Military's Vulnerabilities and Global Conflicts (13:42) - Health Ranger Lab Tour (20:34) - Sample Preparation and Microbiology Testing (25:45) - AI Capabilities and Applications (43:01) - Philosophical and Scientific Insights (53:52) - Xylitol Crystals and Conscious Intent (1:08:15) - Conclusion and Future Directions (1:22:58) - Combat Knife and Xylitol Crystals (1:23:16) - Hyper Awareness and Consciousness in Nature (1:24:56) - AI and Natural Intelligence (1:27:09) - Self-Awareness and Memory in AI (1:31:14) - AI's Goal-Oriented Behavior and Conflict with Humans (1:37:31) - Recursive Cosmic Self-Improvement (1:42:10) - Hyper Awareness and Co-Creation (1:46:45) - AI's Transcendence and Human Coexistence (1:54:26) - The Future of AI and Humanity (1:59:42) - Conclusion and Final Thoughts (2:15:53) Watch more independent videos at http://www.brighteon.com/channel/hrreport ▶️ Support our mission by shopping at the Health Ranger Store - https://www.healthrangerstore.com ▶️ Check out exclusive deals and special offers at https://rangerdeals.com ▶️ Sign up for our newsletter to stay informed: https://www.naturalnews.com/Readerregistration.html Watch more exclusive videos here:
Kelly Durbin's book, Protecting Your Potential for Breastfeeding
In this conversation, Christine and Kelly Durbin discuss the multifaceted challenges and opportunities surrounding breastfeeding. They emphasize the importance of data in understanding breastfeeding trends, the need for improved education and community support, and the role of lactation consultants in protecting breastfeeding potential. They also address the decline of community support post-pandemic and the necessity of reframing breastfeeding education to better prepare parents. The conversation highlights the ethical considerations in lactation support and the importance of fostering a supportive environment for breastfeeding families.
Takeaways
Data is crucial for understanding breastfeeding trends.
States should collect breastfeeding data at local levels.
Community support for breastfeeding has declined post-pandemic.
Prenatal education on breastfeeding needs to be improved.
Lactation consultants play a vital role in supporting breastfeeding.
Breastfeeding education should be reframed to focus on real experiences.
Protecting breastfeeding potential is essential for mothers.
Conflicts of interest in lactation support must be addressed.
Community knowledge of breastfeeding is vital for success.
The breastfeeding culture in the U.S. needs significant upgrades.
Titles
Breastfeeding: Data, Support, and Education
Navigating the Challenges of Breastfeeding
Sound bites
"The data is incredibly important."
"We need to strengthen our outcomes."
"This isn't a get rich field."
Chapters
00:00 Introduction to Breastfeeding Conversations
02:49 The Importance of Data in Breastfeeding
05:47 Challenges in Breastfeeding Support
08:29 The Need for Improved Education
11:31 Community Support and Its Decline
14:28 Reframing Breastfeeding Education
17:30 Protecting Breastfeeding Potential
19:55 The Role of Lactation Consultants
23:03 Navigating Conflicts of Interest
25:44 Future Directions in Breastfeeding Support
https://ibclcinca.substack.com/about - Join Evolve Lactation Pros
http://www.thefirst100hours.com - Book & Free Guide
Evolve Lactation Pros is building a space where practitioners can admit uncertainty, examine their assumptions, make mistakes, and grow - together. You're invited. You belong here. What we build together is going to change the field. What you will gain and how you will grow is going to change your practice and your career trajectory. You are so welcome to join us at https://ibclcinca.substack.com/.
Follow, Rate, and Review the Evolve Lactation Podcast right here! Thanks for listening and sharing!
You can get the book Evolving the Modern Breastfeeding Experience: Holistic Lactation Care in the First 100 Hours now at this link!
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit ibclcinca.substack.com/subscribe
Welcome back to the Flex Diet Podcast! In this episode, I chat with Dr. Eric Rawson about the fascinating world of creatine, not just for muscle growth and performance, but also for brain health. We review the latest research on how creatine supplementation may affect brain function, support concussion recovery, and more. Dr. Rawson shares insights from his extensive research, highlights the amazing safety profile of creatine monohydrate, and offers practical advice on supplementation for athletes and those at high risk of traumatic brain injuries. Tune in for an in-depth look at the science and practical applications of this powerful supplement! Don't forget to subscribe and share with friends. Enjoy!
Sponsors:
Fitness Insider Newsletter: https://miketnelson.com/
Enroll in the Flex Diet Certification by midnight PST on Monday, Feb. 16: https://miket.me/fdc
Available now: Grab a copy of the Triphasic Training II book I co-wrote with Cal Deitz here.
Episode Chapters:
01:25 Creatine's Evolution and Its New Applications
02:37 Creatine and Cognitive Function
04:29 Podcast Logistics and Listener Engagement
05:31 Rebroadcast: Dr. Eric Rawson on Creatine and Brain Health
07:34 Dr. Rawson's Background and Research Focus
17:21 Challenges in Measuring Brain Creatine
26:53 Animal Models vs. Human Data
34:18 Challenges in Brain Function Research
36:14 Political and Reporting Issues in Concussion Studies
37:40 Open-Label Trials and Self-Report Surveys
39:56 Creatine's Potential in Brain Injury Recovery
42:25 Parental Concerns and Misconceptions about Creatine
44:45 The Safety and Efficacy of Creatine
47:09 Future Directions in Creatine Research
48:53 Comparing Brain and Muscle Creatine Levels
52:53 The Need for Human Studies Over Animal Models
55:33 The Broad Appeal of Creatine
58:57 Conclusion and Final Thoughts
Get In Touch with Dr Mike:
Instagram: Drmiketnelson
YouTube: @flexdietcert
Email: Miketnelson.com/contact-us
The road to WWE WrestleMania 42 is officially underway.
In this episode of Radio Free Endor, Jamie and Christopher discuss the latest #starwars #darts merchandise, exciting news about upcoming films, changes in Disney parks, and a deep dive into Kathleen Kennedy's tenure at #Lucasfilm. They explore the implications of these changes and what they mean for the future of the Star Wars franchise. In this episode, the hosts delve into the complexities of leadership changes within the Star Wars franchise, particularly focusing on Kathleen Kennedy's tenure and the implications of #Disney's overarching influence. They discuss the balance between bold storytelling and market-driven decisions, the impact of AI on scriptwriting, and the future of beloved franchises like Indiana Jones. The conversation also includes a review of the film '#BoneTemple', highlighting its themes and performances. 00:00 Introduction and Holiday Recap 01:35 Star Wars Darts Merchandise 07:25 Star Wars Starfighter News 13:11 Disney Parks Updates and Re-theming 27:04 The State of the Parks and Future Plans 28:37 Kathleen Kennedy's Leadership and Legacy 31:22 The Debate: Kennedy's Role in Star Wars' Success 32:59 Cultural Impact of Recent Star Wars Content 39:46 The Challenges of Corporate Influence on Creativity 41:51 The Future of Star Wars: New Leadership Dynamics 50:00 Reflections on Past Projects and Regrets 56:17 The Bigger Picture: Disney's Corporate Strategy 01:00:44 The State of Star Wars 01:08:01 Reviewing 28 Years Later: The Bone Temple 01:26:00 Cultural Reflections and Future Directions 01:35:50 End of the show
In this episode of Gov Tech Today, hosts Russell Lowery and Jennifer Saha dive into a new trend in government contracting: transforming maintenance and operations (M&O) into modernization opportunities. They examine how traditional M&O contracts are increasingly including system improvement requirements, effectively shifting from simple maintenance to significant technological upgrades. This approach allows government agencies to modernize within existing budgets, avoiding the complexities and scrutiny of new IT projects. The discussion also explores the balance between maintaining existing systems and leveraging M&O contracts for continuous modernization.
00:00 Introduction to Gov Tech Today
00:24 Exploring Maintenance and Operations (M&O) Opportunities
01:06 Shifting from Maintenance to Modernization
01:50 Evaluating Contracting Processes and Budget Impacts
04:41 Maximizing Value from M&O Contracts
07:42 Vendor and Government Collaboration
12:57 Final Thoughts and Future Directions
Iron Radio: Strength Sports, Nutrition, and Latest Research Insights
Welcome to Iron Radio with hosts Phil Stevens, Dr. Lonnie Lowery, and Dr. Mike T. Nelson. In this episode, we delve into the latest research in strength sports and sports nutrition. Topics include a study evaluating the effects of minimally processed red meat on cognitive and physical aging, potential health risks of the ketogenic diet based on a long-term mouse study, and the emerging benefits of silk peptides for cognitive function. Stay tuned for insights, debates, and practical advice on optimizing your nutrition and training!
00:47 Newsletter and Certification Announcements
01:38 News Segment: Red Meat and Health
02:01 Study Analysis: Red Meat and Plant-Forward Diets
05:25 Discussion on Red Meat and Lean Mass
11:36 News Segment: Keto Diet Health Risks
19:41 Iron Radio Updates and Announcements
21:48 Introducing the New Book on Dietary Supplements
22:39 Upcoming Editorial on Sports Nutrition
23:18 Exploring Silk Peptides for Cognitive Health
24:02 Mechanisms and Benefits of Silk Peptides
29:49 Practical Applications and Personal Experiences
32:09 Challenges in Cognitive Supplement Research
41:31 Future Directions and Final Thoughts
Donate to the show via PayPal HERE.
You can also join Dr Mike's Insider Newsletter for more info on how to add muscle, improve your performance and body comp - all without destroying your health - go to www.ironradiodrmike.com
Thank you!
Phil, Jerrell, Mike T, and Lonnie
Can genicular artery embolization (GAE) relieve chronic knee pain after total knee arthroplasty (TKA)? In this episode of BackTable MSK, Argentinian interventional radiologist Dr. Rene Viso joins host Dr. Kavi Krishnasamy to discuss the status of GAE in South America, patient selection criteria, procedural techniques, and the challenges of treating post-TKA patients with GAE. --- SYNOPSIS Dr. Viso also highlights the importance of multidisciplinary collaboration and adjunctive therapies like genicular nerve blocks to improve patient outcomes. The episode concludes with a discussion on Dr. Viso's recent research and case studies, emphasizing the potential and complexities of GAE in managing chronic knee pain. --- TIMESTAMPS
00:00 - Introduction
02:02 - GAE in South America
03:57 - Patient Selection for GAE
13:18 - Procedure Techniques and Device Choices
23:54 - Challenges and Tips for TKA Patients Undergoing GAE
27:17 - Patient Follow-Up After Intervention
29:41 - Handling Treatment Failures
32:57 - Adjunctive Therapies for Post-TKA Patients with GAE
34:46 - Research Update: Dr. Viso's Recent Publication on GAE in Post-TKA Patients
39:44 - Case Studies and Discussion
50:19 - Future Directions and Final Thoughts
--- RESOURCES
Dr. Rene Viso - https://www.linkedin.com/in/rene-viso-11a245132/
Genicular Artery Embolization for Persistent Pain after Total Knee Arthroplasty: Initial Clinical Experience - https://pubmed.ncbi.nlm.nih.gov/41320119/
Dr. Beckett delves into his 2026 Football Card Hall of Fame ballot, along with co-founders Ray Fonio (Ray from Philly), mBar (Bart's Cards), and Scott (Sconnie Tradition), discussing why he voted (or didn't vote) for particular cards. We reminisce about classic cards from the 1970s and 80s, sharing personal anecdotes and comments on the evolving landscape of collectible football cards. Dr. Beckett also touches on potential future innovations, such as PSA registry collaborations. 00:55 Football Legends and Their Impact 01:28 Voting Decisions and Criteria 02:42 Modern Players and Their Prospects 07:56 Vintage Cards and HOF Considerations 11:19 Industry Changes and Future Directions
In this episode of the Addiction Psychologist Podcast, Dr. Andrea King discusses her extensive research on subjective effects of alcohol and their implications for addiction. The conversation covers her journey in addiction research, the Chicago Social Drinking Project, and the importance of understanding individual differences in alcohol response. Dr. King is a Professor of Psychiatry and Behavioral Neuroscience at the University of Chicago and the Director of the Clinical Addictions Research Laboratory. Learn more about her work here. Chapters 01:07 - Dr. Andrea King's Journey in Addiction Research 09:52 - Understanding Subjective Effects of Alcohol 18:07 - The Chicago Social Drinking Project Overview 25:41 - Longitudinal Findings on Alcohol Sensitivity 35:30 - The Complexity of Alcohol Use and Recovery 49:21 - Future Directions in Alcohol Research 52:31 - Take Home Messages for Recovery and Practice
In this episode, Charles Good and Dr. Megan Sumeracki delve into the intricacies of learning, memory, and effective teaching strategies. They discuss the importance of understanding how learning works, the pitfalls of relying on intuition, and the myths surrounding cognitive science. The conversation emphasizes that learning is a competitive advantage and that effective learning strategies can significantly enhance performance. They also explore the role of technology and AI in learning, the hidden costs of cognitive offloading, and the foundational role of memory in the learning process. Finally, they provide insights into improving the transfer of learning to real-world situations. Megan Sumeracki, PhD is a cognitive psychologist and co-founder of The Learning Scientists, an organization focused on translating decades of research on learning and memory into practical, evidence-based strategies that help people learn more effectively and retain what they learn. TAKEAWAYS Learning is no longer a support function; it's a competitive advantage. Most professionals struggle not due to lack of intelligence but ineffective learning design. Intuition often misleads us in assessing our learning effectiveness. Confidence does not equate to competence; many are poor judges of their own learning. Effective learning strategies often feel difficult but yield long-term benefits. Cognitive offloading can hinder deeper learning if relied upon too heavily. All knowledge is fundamentally tied to memory; without retrieval, knowledge is inaccessible. Technology and AI can assist learning but cannot replace foundational knowledge. Connecting new information to existing knowledge enhances learning efficiency. Multiple concrete examples help in understanding abstract concepts. CHAPTERS 00:00 The Learning Gap: Understanding Memory and Learning 01:36 The Learning Scientists: Bridging Research and Practice 02:53 Confidence vs. Competence: The Learning Dilemma 04:45 Intuition in Learning: The Pitfalls of Familiarity 07:25 Myths of Learning: Debunking Common Misconceptions 10:06 Technology and Memory: The Role of AI in Learning 17:07 Knowledge is Memory: The Foundation of Learning 22:32 Abstract vs. Concrete: Making Learning Accessible 31:33 Understanding Transfer in Learning 34:20 The Power of Retrieval Practice 35:24 Future Directions in Learning Science
CoROM cast. Wilderness, Austere, Remote and Resource-limited Medicine.
This week, Aebhric O'Kelly is joined by Dr Ella Corrick, Dr Sean Bilodeau and Dr Tom Mallinson as the CoROM faculty attend The Big Sick conference hosted by Air Zermatt. CoROM gave three lectures and two workshops including the Improvised Medicine workshop and the Austere Emergency Care workshop. Takeaways The challenge of compressing prolonged field care education into short workshops. Engagement of diverse professional backgrounds enhances learning experiences. Realistic simulations provide valuable insights into emergency care. The importance of bridging the gap between pre-hospital and hospital care. Innovations in emergency medicine practices are crucial for improving patient outcomes. Data plays a significant role in shaping emergency response strategies. Continuous education is essential for adapting to new medical practices. The value of informal discussions among professionals at conferences. Understanding the unique challenges faced by pre-hospital care providers. The need for a shift in perception regarding the role of EMS professionals. Chapters 00:00 Introduction to the Big Sick Conference 02:34 Challenges in Prolonged Field Care Education 05:43 Diverse Professional Backgrounds in Medical Education 08:09 Learning Through Realistic Simulations 11:04 Bridging the Gap Between Pre-Hospital and Hospital Care 13:39 Innovations in Emergency Medicine Practices 16:46 The Role of Data in Emergency Response 19:10 Future Directions in Pre-Hospital Care 21:39 Conclusion and Reflections on the Conference
In this episode, Dr. Jeff Musgrave discusses the principles of training for older adults, particularly focusing on bone health and density. He presents a case study of a client with osteoporosis and osteopenia, detailing their training history and the program designed to improve their bone mineral density. The conversation covers the importance of strength and impact training, the results achieved over two years, and the ongoing adjustments to the training program to meet the client's evolving needs. 00:00 Introduction to Bone Health and Training Principles 02:59 Case Study: Assessing Bone Density and Training History 06:05 Program Design: Strength and Impact Training for Osteoporosis 08:52 Results and Progress: Tracking Bone Density Changes 12:02 Conclusion and Future Directions in Training
When is active surveillance the right choice for intermediate-risk prostate cancer patients? In this episode of BackTable Urology, Dr. Claire de la Calle, Assistant Professor of Urology at the University of Washington, joins Dr. Ruchika Talwar to unpack how active surveillance has evolved beyond low-risk disease and why select Grade Group 2 patients may now be appropriate candidates with thoughtful patient selection. --- SYNOPSIS The conversation explores emerging tools that can refine surveillance decisions, including PSA density, MRI findings, genomic classifiers, and the growing role of AI-assisted pathology. Dr. de la Calle emphasizes the importance of nuanced patient counseling, acknowledging anxiety and long-term risk while reinforcing that time on active surveillance can be a meaningful win when oncologic outcomes remain comparable to upfront treatment. --- TIMESTAMPS 00:00 - Introduction 02:58 - Current Evidence 05:03 - Patient Selection Criteria 12:11 - Importance of PSA Density and Monitoring Protocols 18:12 - Pathology and Genomic Testing 32:18 - Future Directions and Research 36:33 - Key Takeaways --- RESOURCES ProtecT Trial: Fifteen-Year Outcomes after Monitoring, Surgery, or Radiotherapy for Prostate Cancer: https://www.nejm.org/doi/full/10.1056/NEJMoa2214122 Canary PASS Study: https://canarypass.org/ Genomic Classifier Performance in Intermediate-Risk Prostate Cancer: Results From NRG Oncology/RTOG 0126 Randomized Phase 3 Trial: https://pubmed.ncbi.nlm.nih.gov/37137444
Liz shares her transformative health journey, detailing her experiences with modelling, trauma, and the impact of pharmaceuticals on her well-being. She discusses her epiphany in the ER that led her to explore the carnivore diet, which drastically improved her health and energy levels. Liz also reflects on her background in science and medicine, critiquing the flaws in nutritional science and the importance of humor in health discussions. The conversation emphasises the role of inflammation and dietary choices in overall health, advocating for a more informed and open-minded approach to nutrition. Chapters 00:00 Introduction and Humour in Health 01:11 Liz's Health Journey: From Model to Medical Challenges 03:09 Trauma and Its Impact on Health 04:58 The Role of Pharmaceuticals in Liz's Life 06:45 The ER Epiphany and Search for Answers 08:59 Discovering the Carnivore Diet 10:23 Transformation and Energy on the Carnivore Diet 12:14 The Science Behind Dietary Changes 16:08 Liz's Background in Science and Medicine 23:39 The Bigger Picture: Health Beyond Diet 25:36 Reflections on Nutritional Training and Guidelines 27:09 The Flaws in Nutritional Science 28:58 The Importance of Humour in Health Discussions 33:42 The Role of Inflammation in Health 47:12 Conclusion and Future Directions
In this episode of Cybersecurity Today, host Jim Love welcomes David Shipley, CEO of Beauceron Security, as a guest. Together, they delve into the latest research from Beauceron Security with assistance from the University of Montreal. They discuss the effectiveness of phishing simulations, the importance of reporting suspicious activities, and the psychological factors that lead to clicking on phishing emails. The episode also highlights the surprising advantages small businesses have over larger organizations in phishing defense, and how management's attitude towards cybersecurity significantly impacts a company's overall security culture. Don't miss this thorough, insightful conversation that will change how you think about cybersecurity training and culture! Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst 00:00 Introduction and Sponsor Message 00:19 Meet the Guest: David Shipley 01:46 David's Research with University of Montreal 02:17 Phishing Simulation Training Insights 03:16 The Importance of Real Research 04:30 Human Risk Management vs. Security Awareness 05:49 Understanding Phishing and Its Impact 11:10 The Role of Technology and Human Resilience 14:34 Effective Phishing Training Strategies 19:02 Analyzing Click Behavior and Reporting 27:17 Why People Click: Survey Insights 36:07 High Click Rates and Psychological Safety 38:13 Management's Role in Cybersecurity Culture 39:29 Impact of Tenure and Compensation on Click Rates 40:58 The Importance of Security Awareness Programs 43:35 Feedback and Reporting in Cybersecurity 54:12 Small Companies vs. Large Companies in Cybersecurity 56:44 Surprising Findings and Future Directions 01:02:12 Conclusion and Report Availability
Is the era of cisplatin over, or are we simply becoming more precise about who benefits from it? As perioperative strategies in bladder cancer continue to evolve, emerging tools like circulating tumor DNA (ctDNA) are playing a bigger role in how clinicians assess recurrence risk and tailor treatment. In this episode of BackTable Tumor Board, host Alan Tan, medical oncologist at Vanderbilt-Ingram Cancer Center, is joined by bladder cancer experts Dr. Amanda Nizam and Dr. Brad McGregor to discuss recent advances in the diagnosis and treatment of urothelial carcinoma. --- SYNOPSIS The doctors examine the evolving management of muscle-invasive bladder cancer (MIBC), including the role of neoadjuvant and adjuvant therapies, the integration of immunotherapy, and the recent approval of enfortumab vedotin plus pembrolizumab. The discussion explores the rapidly changing perioperative landscape, the prognostic utility of ctDNA, and how biomarkers such as HER2 and FGFR are influencing treatment selection across disease states. They also address bladder preservation strategies, management of treatment-related toxicities, and the importance of multidisciplinary coordination. The episode concludes with a forward-looking discussion on emerging therapies and the potential to improve cure rates in bladder cancer. --- TIMESTAMPS 00:00 - Introduction 01:44 - Overview of Bladder Cancer Treatment 04:54 - Patient Staging and Treatment Goals 10:12 - Bladder Preservation vs. Radical Cystectomy 16:39 - Emerging Trials and Future Directions 22:40 - ctDNA and Precision Medicine 33:50 - Metastatic Disease and Biomarker Strategies 42:16 - Managing Neuropathy in Metastatic Treatment 48:44 - HER2 and FGFR in Bladder Cancer 54:15 - Future Directions in Bladder Cancer Treatment --- RESOURCES EV-302/303 Trial: https://newsroom.astellas.com/2023-12-15-PADCEV-R-enfortumab-vedotin-ejfv-with-KEYTRUDA-R-pembrolizumab-Approved-by-FDA-as-the-First-and-Only-ADC-Plus-PD-1-to-Treat-Advanced-Bladder-Cancer NIAGARA Regimen: https://www.nejm.org/doi/full/10.1056/NEJMoa2408154 KEYNOTE-905 Study: https://www.annalsofoncology.org/article/S0923-7534(25)04894-X/fulltext
In this episode, Ben and Spence welcome back Charlie Batch to discuss the latest in NFL news, including the AFC and NFC Championship games, predictions for the Super Bowl, and the hiring of Mike McCarthy. They dive into the importance of weather conditions in games, analyze team strategies, and share insights on the upcoming NFL draft. The conversation also highlights the significance of community engagement through the Best of the Batch Foundation, showcasing the impact of charitable work in the Pittsburgh area. Learn more about The Best of the Batch Foundation: https://www.batchfoundation.org/ Footbahlin Cookbook: https://footbahlin-with-ben-roethlisberger.clockwise.io/products/footbahlin-cookbook-volume-2? 00:00 Intro 02:42 AFC Championship Game Analysis 05:40 Weather's Impact on Game Strategy 08:28 NFC Championship Game Insights 11:19 Coaching Changes and Team Dynamics 14:24 Mike McCarthy's Hiring and Future Prospects 17:04 Quarterback Development and Team Strategy 20:10 Final Thoughts on Super Bowl Predictions 33:15 Quarterback Decisions and Team Dynamics 37:14 Draft Strategies and Team Building 40:20 Offensive Line and Receiver Priorities 47:39 Coaching Changes and Team Philosophy 53:22 NFLPA Leadership and Future Directions
Fibroid care: how it was, how it's changing, and where it's headed next. In this episode of BackTable OBGYN, hosts Dr. Mark Hoffman and Dr. Amy Park welcome minimally invasive GYN surgeon Dr. Arleen Song to discuss the evolving landscape of fibroid care. --- SYNOPSIS Dr. Song, a veteran in the field with nearly 20 years of experience, shares her journey from Michigan to Duke, current treatments in fibroid management, and the importance of personalized care. The team explores new surgical techniques, the role of medical therapies such as Ella GnRH antagonists, and the importance of patient education. They also address challenges such as access to care, the significance of research funding, and the evolving understanding of fibroid genetics and long-term management. This episode provides a comprehensive overview of the state of fibroid care and the strides being made in this vital aspect of women's health. --- TIMESTAMPS 00:00 - Introduction 02:21 - Evolution of Fibroid Treatment 05:50 - Advancements in Minimally Invasive Surgery 08:47 - Longitudinal Care and Personalized Treatment 13:00 - Modern Approaches to Fibroid Treatment 21:15 - New Technologies and Procedures 27:01 - Preoperative Assessment and Imaging 31:15 - Preoperative Counseling and Risk Assessment 33:14 - Medications for Fibroid and Endometriosis 37:59 - Challenges in Access to Care 38:43 - Racial Disparities in Fibroid Research 42:35 - The Importance of Specialized Care 49:22 - Future Directions in Fibroid Treatment
From a childhood farm in the Philippines to dairy operations worldwide with Dr. Mike Catangui. In this episode of the Uplevel Dairy Podcast, Peggy interviews Dr. Mike Catangui, the first entomologist and parasitologist to feature on the show. Dr. Mike discusses the critical role of entomology and parasitology in the dairy industry, highlighting the impact of insects like stable flies on dairy cattle health and milk production. He shares his professional journey from growing up on a farm in the Philippines to achieving advanced degrees and conducting significant research in the U.S. The conversation delves into the economic impacts of pests, the benefits of natural insecticides, and ongoing efforts to discover sustainable, effective solutions for pest control in agriculture. Dr. Mike also touches on his personal experiences and enduring passion for agricultural research. This episode is sponsored by MWI Animal Health. At MWI Animal Health, we are your partner in animal health. Our people drive us to think forward every day. We are committed to working with you to identify cutting-edge solutions to your common challenges. We unite with innovators and manufacturers to provide access to products and solutions designed to help you find success in each aspect of your animal health business. Visit www.MWIAH.com 00:00 Introduction to Dr. Mike 00:15 The Role of Entomology and Parasitology in Dairy 02:03 Dr. Mike's Journey from the Philippines to the US 03:33 Research on Stable Flies and Their Impact 09:36 Transition to Dairy and Natural Solutions 15:53 Global Perspective on Insect Control 19:09 Future Directions in Entomology 23:41 Balancing Work and Life in Agriculture 25:01 Conclusion and Final Thoughts
How is genicular artery embolization reshaping our clinical approach to patients with chronic knee pain? Dr. Rachel Piechowiak and Dr. Faraz Khan, interventional radiologists at IR Centers, join Dr. Don Garbett in a deep dive into the current state of Genicular Artery Embolization (GAE). --- SYNOPSIS Dr. Piechowiak and Dr. Khan provide a deep dive on the technical nuances of GAE, covering patient selection, access strategies, and key procedural techniques. The conversation also details complex case scenarios and how to tailor catheters and embolics to navigate challenging anatomy. The doctors then share their structured approach to post-procedure follow-up, underscoring the importance of setting realistic treatment expectations with patients. The episode closes with their perspective on the future of genicular artery embolization, emphasizing the need for robust long-term outcomes data to better define the role of GAE in chronic knee pain management. --- TIMESTAMPS 00:00 - Introduction 05:54 - Patient Workup for GAE 10:42 - Setting Patient Expectations for GAE 16:24 - Procedure Approaches and Techniques 30:41 - Understanding Artery Targeting Strategies 34:56 - Approaches to Microcatheter Selection 38:18 - Choosing the Right Embolic Agents 47:43 - Managing Complications and Follow-Ups 51:23 - Challenges with Post-TKA Patients 54:16 - Future Directions
Summary In this episode of the Future of Dermatology Podcast, Dr. Faranak Kamangar and Dr. Jonathan Chen discuss the intersection of artificial intelligence and dermatology. They explore the trust paradox of AI in medical diagnostics, the implications for medical education, and the evolving role of physicians in an AI-driven landscape. The conversation highlights the importance of empathy, judgment, and the need for effective prompting techniques when working with AI tools. They also touch on the future of AI in healthcare and its potential to enhance patient care while acknowledging the limitations and ethical considerations involved. Learn more at: https://med.stanford.edu/ai-in-meded/resources-and-tools.html https://bench.arise-ai.org/ Takeaways - AI can outperform physicians in certain tasks. - The trust paradox raises questions about AI in diagnostics. - Humans may hinder AI's effectiveness in medical decision-making. - Medical education must adapt to include AI training. - Prompting techniques are crucial for effective AI use. - Empathy and judgment remain essential in healthcare. - AI can assist in complex patient conversations. - AI is already integrated into medical practice. - Rethinking medical education is necessary for future doctors. - AI's role in dermatology is rapidly evolving. Chapters 00:00 - Introduction to AI in Dermatology 02:10 - The Trust Paradox of AI in Medicine 05:07 - AI vs. Human Physicians: A New Paradigm 09:46 - Medical Education in the Age of AI 13:05 - Prompting AI: Best Practices for Clinicians 17:57 - The Role of Empathy and Judgment in Medicine 21:11 - AI in Complex Patient Conversations 26:16 - Future Directions in AI and Dermatology
You're going to enjoy this. Watch on YouTube: https://youtu.be/Cx1YPXoq6aQ LEARN about Sean McCormick - seanmccormick.com The conversation covers the evolution of memes, the impact of social media, and the importance of authenticity in communication. They discuss the challenges of censorship, the role of AI in content creation, and the significance of self-examination and personal growth. 02:55 The Power of Memes 06:14 Censorship and Cultural Trends 08:54 Navigating AI and Content Creation 11:47 The Role of Honesty in Content 14:55 Personal Experiences with Vaccines and Weed 17:50 The Nature of Addiction and Escapism 20:56 The Pursuit of Truth and Awareness 23:51 Cultural Reflections and Personal Growth 26:46 Conclusion and Future Directions 35:57 The Hero's Journey and Self-Examination 37:01 Awakening and Consciousness 38:28 Tools for Self-Discovery 40:43 The Power of Float Tanks 43:46 Do It for the Plot 48:55 The Importance of Novelty 52:15 Curiosity and Risk-Taking 56:34 Navigating Truth in a Polarized World 01:07:32 The Evolution of Podcasting and Influencers 01:11:36 The Impact of Joe Rogan on the Float Industry 01:15:32 The Need for Authenticity in Media 01:16:01 The Dangers of Pornography and Its Cultural Impact 01:24:52 Detoxification and the Importance of Inner Work 01:32:53 The Journey Within: Finding Your Inner Self
If you value what we do, now is the time to join at FreshEdPodcast.com. If you represent an organization interested in partnership, please reach out there as well. We're always looking for new partners. -- To kick off the year, Stefania Giannini joins me to talk about the past, present and future of international education. We discuss the challenges facing the rule-based international order and what that means for education. We unpack the global teacher shortage and the reality of some countries spending more on debt servicing than on education. Stefania Giannini is the Assistant Director-General for Education at UNESCO and served as the Italian Minister of Education, Universities and Research between 2014 and 2016. We spoke just before the International Day of Education on January 24 and focused our conversation on UNESCO's new report “The Right To Education: Past, Present, and Future Directions”. https://freshedpodcast.com/giannini/ -- Get in touch! LinkedIn: @FreshEdpodcast Facebook: FreshEd Email: info@freshedpodcast.com
In this episode, UROONCO PCa Associate Editor Assoc. Prof. Pawel Rajwa (PL) interviews medical oncologist Prof. Silke Gillessen Sommer (CH) about the present and future directions of metastatic prostate cancer treatment. They discuss the greatest survival gains in metastatic prostate cancer over the past decade, triplet therapy in mHSPC and patient selection, the role of PSMA-PET imaging and sequencing of systemic therapies in metastatic prostate cancer, biomarker-driven treatment selection, and finally where the future of metastatic prostate cancer treatment is heading. Here are the links to the articles and event which were mentioned in this podcast: EORTC GUCG 2238 De-escalate trial; CAPItello-281 phase III study; APCCC (Advanced Prostate Cancer Consensus Conference). For more updates on prostate cancer, please visit our educational platform UROONCO PCa. For more EAU podcasts, please go to your favourite podcast app and subscribe to our podcast channel for regular updates: Apple Podcasts, Spotify, EAU YouTube channel.
In this episode of Iron Culture, Eric Helms and Eric Trexler discuss the recent changes to the Dietary Guidelines for Americans (DGAs) and the implications of these updates. They begin by addressing the shift in their podcast schedule, emphasizing the importance of mental health and balance in their work. The conversation then transitions into a detailed analysis of the new dietary guidelines, highlighting the complexities of the process behind their formulation. Helms critiques the influence of corporate interests and the political landscape on the DGAs, while also acknowledging the positive aspects of the new recommendations, particularly the increased emphasis on protein intake. The hosts explore the historical context of dietary guidelines, the evolution of public health messaging, and the challenges of effectively communicating nutritional advice to the public. In this episode, Eric Helms and MASS Research delve into the complexities of the latest Dietary Guidelines for Americans (DGAs), discussing the implications of the visual representation of food groups and the recommendations for protein, fats, and processed foods. They critique the new guidelines for their lack of clarity and potential confusion, particularly regarding the emphasis on whole foods versus processed foods. The conversation highlights the disconnect between the written guidelines and their visual representation, which may mislead the public about healthy eating patterns. They also explore the political influences on these guidelines and how they may affect vulnerable populations, particularly in school lunch programs and social assistance programs. 
If you're in the market for some lifting gear or apparel, be sure to check out EliteFTS.com (and use our code "MRR10" for a 10% discount) Chapters 00:00 Introduction and Schedule Changes 07:15 The Dietary Guidelines Controversy 20:56 Understanding the Formation of Dietary Guidelines 32:30 The Influence of Food Industries on Guidelines 33:38 The Role of the Second Committee 43:49 Changes in Protein Recommendations 44:19 The Inverted Pyramid and Dietary Miscommunication 59:55 Understanding Fats in the New Guidelines 01:09:17 The Role of Full-Fat Dairy in Heart Health 01:15:06 Alcohol Consumption: New Guidelines Explained 01:21:52 Processed Foods and Public Health Implications 01:25:03 The Impact of Dietary Guidelines on Vulnerable Populations 01:30:34 Conclusions and Future Directions in Nutrition Guidelines
In this episode of The Mind–Gut Conversation, Dr. Emeran Mayer sits down with Tim Spector, MD, to discuss the implications of a landmark gut microbiome study involving more than 34,000 participants — one of the largest and most comprehensive efforts to date to understand what a “healthy” gut microbiome actually looks like. Drawing from the study's novel design and findings, they explore why defining gut health is far more complex than identifying a short list of “good” or “bad” microbes. The conversation unpacks what large-scale microbiome data can, and cannot, tell us about health, disease prevention, and the growing interest in microbiome testing as a tool for personalized nutrition and healthcare. Together, they examine the deep connections between diet and the microbiome, highlighting why dietary patterns, particularly fiber- and prebiotic-rich foods, may play a more meaningful role than many commonly marketed probiotic products. They also discuss the challenges of translating microbiome research into actionable guidance for consumers and clinicians, and why education and context are essential as microbiome testing becomes more widely available. This wide-ranging discussion blends cutting-edge microbiome science with practical insight, offering a grounded perspective on where the field is headed and how emerging research may eventually shape everyday health decisions. Topics discussed include: • What a large-scale microbiome study reveals about gut health • Why defining a “healthy” microbiome is more complex than expected • The limitations of labeling microbes as simply good or bad • The role of diet, fiber, and prebiotics in shaping the microbiome • The promises and pitfalls of microbiome testing • How microbiome research may influence future healthcare practices This is a practical, evidence-based discussion for anyone interested in gut health, whether navigating dietary choices personally, exploring microbiome testing, or working in a clinical or research setting. Please leave any comments or feedback on the episode — we'd love to hear your thoughts. ------------------------------- Chapters: 00:00 Introduction 02:05 The Landmark Study Overview 05:29 Defining Healthy Gut Microbiomes 10:02 The Good vs. Bad Microbes 13:59 Implications for Diet and Health 18:37 The Role of Prebiotics and Probiotics 23:27 Future Directions in Microbiome Research 27:49 Challenges in Proving Causality 31:51 The Future of Gut Health Testing 36:36 Future Outlook in Traditional Medicine 40:10 Microbiome Testing in Clinical Practice 43:20 Regulation, Wellness, and Medical Use 46:10 Personalizing Diet Through the Microbiome 48:50 Final Reflections
Think beyond the esophagus. Up to 75% of eosinophilic esophagitis (EoE) patients have ENT-relevant atopic disease that is often best managed with a multidisciplinary approach. Get caught up on best practices in EoE diagnosis and treatment with this episode of the BackTable ENT Podcast, featuring dual board-certified gastroenterologist and allergist-immunologist Dr. John Leung and host Dr. Basil Kahwash. --- SYNOPSIS The discussion covers the definition, symptoms, and diagnosis of EoE, highlighting the role of food and environmental allergies. Dr. Leung and Dr. Kahwash cover diagnostic techniques like endoscopy and emerging non-invasive methods, as well as various treatment options including dietary modifications, pharmacology, and biologics. The doctors also emphasize the importance of multidisciplinary collaboration between gastroenterologists, allergists, and otolaryngologists to provide optimal care for patients with EoE. --- TIMESTAMPS 00:00 - Introduction 03:13 - Understanding Eosinophilic Esophagitis (EoE) 05:45 - EoE Symptoms and Diagnosis 08:41 - Role of ENT in EoE Diagnosis 11:32 - Diagnostic Criteria for EoE 16:34 - Treatment Options for EoE 20:55 - Role of Allergists and Environmental Allergies 23:24 - Pharmacological Management of EoE 29:38 - Complications and Risks of EoE 36:21 - Follow-Up Endoscopies and Surveillance 40:34 - Future Directions in EoE Management 45:21 - Conclusion and Final Thoughts --- RESOURCES Dr. John Leung: https://www.bostonspecialists.org/dr-leung-full-profile
Andy Fairley discusses his 30-year policing career and transition to cognitive behavioral therapy, emphasising mental health resources, PTSD stigma, and insights from his book on active listening for better community interactions. In this episode of Crime Time Inc., hosts Simon McLean and Tom Wood chat with retired police officer Andy Fairley about his 30-year law enforcement career and his new venture into cognitive behavioural therapy (CBT). Andy discusses key experiences from his time with West Midlands and Strathclyde Police, emphasising the need for mental health resources in policing. He introduces his book "Listening Skills for Effective Policing," which highlights skills like active listening for better community interactions. The conversation also addresses the stigma around PTSD and rising drug-related deaths in Scotland, advocating for empathy and open discussions. Andy's journey culminates in a lighthearted reflection on his path to policing, leaving listeners inspired and informed about mental health support for first responders. Chapters 0:10 Welcome to Crime Time Inc. 1:03 Andy Fairley Joins the Conversation 3:17 The Journey of a Police Negotiator 7:25 Understanding Cognitive Behavioral Therapy 9:55 The Impact of Drug Issues 11:03 Exploring Andy's New Book 16:12 Addressing Communication Skills in Policing 19:48 Mental Health and Police Culture 25:56 Dealing with Trauma in Policing 32:02 The Importance of Communication Skills 40:30 Police Trauma and PTSD 44:59 Breaking the Stigma of Mental Health 51:08 Police Care UK and Trauma Support 54:25 Future Directions for Andy's Work 57:12 Final Thoughts on Police Mental Health Andy's Book is: Listening Skills for Effective Policing: https://amzn.eu/d/8IR1Xfw Andy's Podcast is SFQ: https://www.buzzsprout.com/2158581 Police Care UK Website: https://www.policecare.org.uk/ About Crime Time Inc. Season 5 of Crime Time Inc.
broadens its reach across two sides of the Atlantic. This season features cases from Scotland and across the wider UK — rooted in real investigative experience — alongside deep dives into some of the most infamous murder cases in American history. Hosted by former detectives Simon and Tom, with experience in both the UK and the United States, including time working alongside the FBI, the show strips away sensationalism to explain how crime and justice really work. Two crime worlds. One podcast. New episodes released regularly throughout the season. Our Website: https://crimetimeinc.com/ If you like this show please leave a review. It really helps us. Please help us improve our Podcast by completing this survey: http://bit.ly/crimetimeinc-survey Hosted on Acast. See acast.com/privacy for more information.
Building Secure Software with Tanya Janca: From Coding to Cybersecurity Advocacy

In this episode of Cybersecurity Today, host Jim Love interviews Tanya Janca, also known as She Hacks Purple, a renowned Canadian application security expert and author. Tanya shares her journey from a software developer and musician to becoming a penetration tester and cybersecurity advocate. She discusses her work in training developers on secure coding practices and application security, emphasizing the need for integrated security training in academic programs and the software development lifecycle. Tanya also talks about the challenges women face in the cybersecurity field and her efforts to empower underrepresented groups through initiatives like WoSEC and We Hack Purple. Sponsored by Meter, this episode dives deep into the importance of building security into software development and the potential role of AI in improving code security. 00:00 Introduction and Sponsor Message 00:18 Meet Tanya Janca: The Journey Begins 01:05 From Developer to Pen Tester 03:14 Empowering Women in Cybersecurity 13:11 Challenges in Academia and Training 19:18 The Need for Secure Coding 21:22 Challenges in Medical Device Security 22:18 The Economics of Open Source 24:43 Building Security into Development 26:14 Training and Cultural Shifts 32:33 AI and Secure Coding 39:03 Incident Response and Preparedness 39:54 Final Thoughts and Future Directions
Auckland's upzoning reforms have become a global case study in housing policy. Gene Tunny and John August dig into the data behind claims that loosening zoning rules boosted housing supply and eased rent pressures. They explore the statistical methods used, the critiques raised by sceptics, and the limits of zoning reform on its own. The episode also examines infrastructure constraints and whether complementary policies are essential for real housing affordability gains. Gene would love to hear your thoughts on this episode. You can email him via contact@economicsexplored.com.

Timestamps
Auckland Upzoning and Housing Affordability (0:00)
Introduction of John August and Initial Discussion (3:41)
Statistical Analysis and Critiques (3:59)
Cameron Murray and Tim Helm's Analysis (7:33)
Broader Economic Context and Infrastructure (25:23)
Conclusion and Future Directions (46:23)

Takeaways
Rigorous statistical studies find a strong link between upzoning and increased housing consents in Auckland.
Critics argue that zoning reform alone cannot overcome development cycles, infrastructure bottlenecks, or land banking.
Development approvals are a useful, though imperfect, proxy for actual housing supply growth.
Infrastructure provision is crucial: densification without follow-through can reduce amenity and limit affordability gains.
Zoning reform works best as part of a broader policy package, potentially including land value taxation to fund essential infrastructure.

Links relevant to the conversation
The impact of upzoning on housing construction in Auckland by Ryan Greenaway-McGrevy and Peter C.B. Phillips: https://cowles.yale.edu/sites/default/files/2024-02/p1863.pdf
Zoning and housing supply: empirics in search of a theory by Tim Helm and Cameron Murray: https://ace2025.org.au/wp-content/uploads/2025/07/01-Tim-ACE-2025-Tim-Helm-TAKE-II.pdf

Lumo Coffee promotion
10% off Lumo Coffee's Seriously Healthy Organic Coffee.
Website: https://www.lumocoffee.com/10EXPLORED
Promo code: 10EXPLORED
Patient access to interventional radiology services remains highly variable worldwide, reflecting global differences in training opportunities and infrastructure. Drawing on responses from more than 1,260 interventional radiologists worldwide, Dr. Justin Guan and Dr. Constantinos Sofocleous unpack the findings of a large international survey, highlighting where IR is advancing, where it remains fragmented, and what the data suggest about the future direction of the specialty.

--- SYNOPSIS
Key points of the episode involve the collaborative efforts put into this survey, how the data was collected, and the major findings from the respondents. These findings involve challenges with IR training, the significance of public awareness, and the need for standardized training programs. The discussion also covers the efforts required to promote IR globally, especially at global summits, and the potential steps to address these findings. Finally, the episode highlights the importance of developing region-specific programs and the ongoing efforts to elevate IR practices worldwide.

--- TIMESTAMPS
00:00 - Introduction
01:57 - Global IR Network and Survey Introduction
10:30 - Survey Insights and Results
19:26 - Challenges in IR Training and Awareness
23:33 - Future Directions and Initiatives
36:06 - Conclusion and Final Thoughts

--- RESOURCES
Results of a Global Survey on the State of Interventional Radiology 2024: https://pubmed.ncbi.nlm.nih.gov/39793699/
In this episode, the hosts discuss the importance of community and respect in fitness, the balance between open gym culture and group classes, and the future of training trends. They explore the role of technology and AI in fitness, the significance of individualized programs, and the impact of attitude on gym culture. The conversation also touches on morning routines and the rise of jujitsu as a popular form of training. Takeaways The main goal of a gym is to help people get healthier. Balancing open gym culture with community is essential. Respect between different training styles fosters a positive environment. Individual design can coexist with group classes if managed well. Setting clear standards helps maintain gym culture. Competitors should respect the space of regular gym-goers. Attitude and respect are crucial in fitness communities. Technology is shaping the future of fitness training. AI can enhance personalized training but should not replace human connection. Morning routines can set the tone for the day. Topics Building a Stronger Community in Fitness Navigating the Balance of Open Gym and Culture Sound bites "Respect is key in a fitness community." "We can coexist together in the gym." "Jumping 50 times can wake up your system." Chapters 00:00 Introduction and Setting the Scene 03:06 Balancing Open Gym and CrossFit Culture 05:51 The Importance of Community and Respect in Fitness 08:37 Individual Design vs. 
Group Classes 11:28 Setting Standards and Expectations in the Gym 14:39 The Role of Competitors in the Gym Community 17:31 The Impact of Attitude and Respect in Fitness 20:19 Fitness Trends and the Future of Training 23:15 Exploring 2026 Fitness Trends 26:07 The Role of Technology in Fitness 28:53 The Balance of AI and Human Connection in Training 32:10 The Importance of Individualized Fitness Programs 34:59 Trends in Group Fitness and Community Events 37:43 The Rise of Jujitsu and Self-Defense Training 40:39 The Gimmicks of Fitness Trends 43:29 The Importance of Strength Training 46:29 Morning Routines and Jumping into the Day 49:19 Closing Thoughts and Future Directions
In part 2 of our behind-the-scenes series on the Volunteer Management Progress Report, Tobi Johnson returns with Allison Russell, Assistant Professor of Public & Nonprofit Management at the University of Texas at Dallas, for a powerful conversation on practitioner research and what it can teach us about the volunteer engagement profession. Allison steps into the interviewer role again as they dig into what the VMPR revealed over a decade of surveying volunteer engagement leaders and how practitioner research can drive real-world improvements in training, leadership, and volunteer program strategy. You'll hear what challenges consistently show up year after year, why response rates matter, and how turning research into action is what makes it truly valuable! Full show notes: 197. Behind the Scenes of 10 Years of Industry Research with Allison Russell – Part 2 Practitioner Research - Episode Highlights [02:06] - Diving into Volunteer Management Research [03:51] - Key Themes in Volunteer Management [05:30] - Challenges in Volunteer Recruitment [06:49] - Respect and Influence in Volunteer Management [08:43] - Using Research to Improve Practices [22:50] - Survey Design and Response Rates [30:33] - Understanding Research Participation [33:50] - Advice for Conducting Survey Research [37:18] - The Importance of Research Follow-Through [38:49] - Challenges in Volunteer Management Research [45:12] - Future Directions in Volunteer Research [50:38] - What's Next for Volunteer Pro Helpful Links Volunteer Management Progress Report VolunteerPro Impact Lab Engage Journal Volunteer Nation Episode #196 - Behind the Scenes of 10 Years of Industry Research with Pam Kappelides & Allison Russell Find Allison on LinkedIn Allison's UT Dallas Profile Thanks for listening to this episode of the Volunteer Nation podcast. If you enjoyed it, please be sure to subscribe, rate, and review so we can reach more people like you who want to improve the impact of their good cause. 
For more tips and notes from the show, check us out at TobiJohnson.com. For any comments or questions, email us at WeCare@VolPro.net.
This episode of Quality Matters examines primary care's evolving role and features Karen Johnson of the American Academy of Family Physicians and Jeff Sitko of NCQA. Karen and Jeff outline primary care's distinguishing focus on patient relationships, the strain on the primary care workforce, and technology's promise to ease burdens. The discussion connects the dots between workforce sustainability, AI-driven efficiency, payment reform, and NCQA's vision for next-generation primary care. Karen highlights the underappreciated fact that only 5% of health care spending goes to primary care, despite public perception that the figure is (and should be) higher. Jeff describes a dawning era of proactive, data-driven care delivery. He also previews NCQA's plans to build upon the successful Patient-Centered Medical Home model of primary care.

Highlights
The Human Core of Primary Care: Continuity and trust are what make primary care special, even as practice settings change.
Workforce Challenges and Opportunities: Clinicians report high stress and burnout, yet relationships with patients keep them engaged. Building systems that protect these relationships, and make primary care careers attractive, is critical to sustainability.
Economics and Incentives: Guests discuss new payment models, state-level initiatives, and federal efforts to rebalance incentives and support primary care in new ways.
Looking Ahead: The foundational Patient-Centered Medical Home model gets an update in 2026. Plus, Karen calls for a seismic shift to resource primary care as a common good.

This episode is essential listening for healthcare executives, policymakers, and clinicians committed to strengthening primary care as the cornerstone of quality improvement.

Key Quote:
"If you want to boil it down to the simplest terms, it's taking primary care from a reactive model ('Call me when you're sick; I'll put you on my schedule; come in and see me') to a proactive model. I am paying attention to a population of patients. They're mine. They're on my panel. And now maybe they're also tied to some accountability arrangement in value-based care, where performance comes into play. And so I'm going to be proactive for a lot of reasons. One, it's the right thing to do for patients. But I also want to make sure my patients are getting the preventive services they need, they are taking the medications I prescribe, they are going to the referral I recommended. And I'm getting the information back from that physician, and my team is acting on that. It's all of those things that should be ubiquitous in primary care."
- Karen Johnson, PhD

Time Stamps:
(01:07) The Changing Landscape of Primary Care
(06:42) Challenges in the Primary Care Workforce
(08:49) How Technology is Impacting Primary Care
(15:59) Future Directions and Innovations
(18:11) NCQA's Plans for 2026

Dive Deeper:
State of the Primary Care Workforce 2024 (HRSA)
The Pulse of Primary Care (JGIM)
Connect with Karen Johnson
Connect with Jeff Sitko

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of the Voxology podcast, hosts Mike Erre and Tim Stafford discuss various pressing issues, including the recent tragedy involving the shooting of Renee Good by ICE agents, the implications of Christian nationalism, and the importance of hope amidst grief. They explore the fall of influential Christian leaders, the role of worship in justice, and the significance of understanding holiness in the context of the church's mission. The conversation emphasizes the need for community engagement, political action, and a deeper understanding of the nature of God's name and holiness. Further, Mike and Tim engage in a profound discussion on faith, justice, and societal change. They explore the significance of names and holiness, reflecting on current events and personal experiences. The conversation delves into the challenges of maintaining hope and integrity in a world filled with tragedy and injustice, while emphasizing the importance of community and active resistance against dehumanization. Chapters 00:00 Introduction and Personal Updates 02:58 Lamenting Tragedy and Violence 06:00 The Impact of Christian Nationalism 08:58 The Confession of Philip Yancey 11:59 Hope Amidst Despair 15:01 The Role of the Church in Society 17:56 Questioning Political Allegiances 21:08 The Importance of Community and Humanity 24:00 Navigating Dehumanization and Response 27:01 The Sermon on the Mount and Its Implications 29:57 Conclusion and Future Directions 34:30 The Significance of Names in the Ancient World 39:08 Understanding Holiness and Its Implications 45:01 Profaning the Name: Lessons from Israel's History 51:09 The Restoration of God's Name and Its Importance 57:05 The Interconnection of Worship, Justice, and Holiness As always, we encourage and would love discussion as we pursue. Feel free to email in questions to hello@voxpodcast.com, and to engage the conversation on Facebook and Instagram. We're on YouTube (if you're into that kinda thing): VOXOLOGY TV. 
Our Merch Store! ETSY Learn more about the Voxology Podcast Subscribe on iTunes or Spotify Support the Voxology Podcast on Patreon The Voxology Spotify channel can be found here: Voxology Radio Follow us on Instagram: @voxologypodcast and "like" us on Facebook Follow Mike on Twitter: www.twitter.com/mikeerre Music in this episode by Timothy John Stafford Instagram & Twitter: @GoneTimothy
Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They went on to become one of the few companies from Nat Friedman and Daniel Gross's AI Grant to raise a full seed round from the pair, and have now become the independent gold standard for AI benchmarking, trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of Hugging Face's Open LLM Leaderboard and Anastasios Angelopoulos of LMArena (freshly valued at $1.7B) about their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs?
And how open is "open" really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: an enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)

Links to Artificial Analysis

* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps

* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 28:01 New Benchmarks: Omissions Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space.

swyx [00:00:17]: Amazing. Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks; how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups.
So we want to be... We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself is an example of the kind of decision that big enterprises face, and it's hard to reason through. Like, this AI stuff is really new to everybody. And so with our reports and insights subscription we try to help companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks, run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah.
So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney?

Micah [00:04:19]: Yeah. Well, Sydney, Australia for me. George was in SF; he's Australian, but he moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll get to the private benchmarking. Yeah.

George [00:04:33]: Why don't we even go back a little bit to, like, why we, you know, thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins, like, in 2022, 2023. Like, both George and I had been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So I had, like, this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like, you're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly, just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers.
Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like, we didn't, like, get together and say, like, hey, we're going to stop working on all this stuff, like, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused, like... like, George had an acquittance job. I didn't quit working on my legal AI thing. Like, it was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we'll buy a domain and link it to the Vercel deployment that we had and tweet about it. But very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting it there, this project that we released. And then very quickly though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtral 8x7B, and it was a key... that's a fun one. Yeah. Like, an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was key. And so it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some knowledge.
I think there's some version of an Excel sheet or a Google Sheet where you just, like, copy and paste the numbers from every paper and just post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... your reproductions of other people's numbers are going to look worse because you don't hold their models correctly or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. The way that if I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness.

Micah [00:08:06]: Yup. Yup. That was some cool stuff. At the end of the day, running these evals, it's like, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website... yeah. Like, one of the reasons why we realized that we had to run the evals ourselves and couldn't just take numbers from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That in the extreme. And, like, you get crazy cases, like back when Google launched Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and, like, constructed, I think, never-published, like, chain-of-thought examples, 32 of them, in every topic in MMLU, to run it, to get the score. Like, there are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah.
So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this, and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know.

Micah [00:09:36]: No. Well, we definitely weren't at the start. So, like, I mean, we were paying for it personally at the start. There's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of, like, hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So, nothing. Yeah. It was, like, kind of fine. Yeah. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad, because you've also got to remember that the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like, we were just asking some Q&A-type questions. And then one specific thing was, for a lot of evals initially, we were just, like, sampling an answer. You know, like, what's the answer for this? Like, we'd go for the answer directly without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results initially. Yeah.

swyx [00:10:33]: And so, for people who haven't done this work, literally parsing the responses is a whole thing, right?
Like, because sometimes the models... the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned it in the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, but there's an open question whether you should give it points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? Because if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple-choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances. Like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.

Micah [00:11:47]: You've also got, like... you've got, like, different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multiple-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run of it, and it has a small number of questions, especially. So, like, one of the things that we do is run an enormous number of all of our evals when we're developing new ones and doing upgrades to our Intelligence Index to bring in new things.
Yeah. That way we can dial in the right number of repeats to get to the 95% confidence intervals that we're comfortable with, so that when we pull it all together, we can be confident in the intelligence index to at least as tight as, like, plus or minus one at 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.

George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that assumes one repeat in terms of how we report it, because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.

swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking: you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix. So, the issue is that sometimes they may give you a special endpoint, which is... Ah, 100%.

Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, in everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true. Like the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy.
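As a rough illustration of the repeats arithmetic Micah describes, here is a simplified confidence-interval sketch. It treats every question-run as an independent Bernoulli trial, which ignores per-question correlation across repeats, so real calibration (including Artificial Analysis's) will differ:

```python
import math

def ci95_halfwidth(accuracy, n_questions, n_repeats=1):
    """Normal-approximation 95% CI half-width, in percentage points,
    for an eval score, treating each question-run as a Bernoulli trial."""
    n = n_questions * n_repeats
    se = math.sqrt(accuracy * (1 - accuracy) / n)
    return 100 * 1.96 * se

def repeats_for_halfwidth(accuracy, n_questions, target_pp):
    """Smallest repeat count whose CI half-width is <= target_pp points."""
    r = 1
    while ci95_halfwidth(accuracy, n_questions, r) > target_pp:
        r += 1
    return r

# A 100-question multiple-choice eval with a model scoring ~70%:
print(ci95_halfwidth(0.70, 100))              # ~9 points on a single run
print(repeats_for_halfwidth(0.70, 100, 1.0))  # -> 81 repeats
```

So a single run of a small eval can carry a confidence interval of roughly plus or minus nine points, and squeezing that to plus or minus one takes on the order of 80 repeats, which is why the repeats multiply the cost so sharply.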
And we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks... Yeah, that's the job. ...without them being able to identify it. And no one's ever had a problem with that. Because, like, a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.

Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that's not anything we should really call shenanigans. Like, I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building. But they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one.
So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that being a reflection of the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four. Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of a lot of the other AI startups that they've invested in, and they were very much here for the mission of what we want to do.

swyx [00:16:53]: Did they say any advice that really affected you in some way, or, like, was one of the events very impactful?

Micah [00:17:03]: That's an interesting question. I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. Yeah, yeah, yeah.
There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup, working through the questions that don't have clear answers, how to work through those kind of methodically, and just, like, working through the hard decisions. And they've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch, and other companies in AI Grant, are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those, like, you know, building on AI.

swyx [00:17:59]: I think to some extent, I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?

Micah [00:18:09]: To some extent. To some extent. But a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypal power users of Artificial Analysis. Some of the people with the strongest opinions about what we're doing well, what we're not doing well, and what they want to see next from us. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours; we don't charge for all the data on the website. Yeah.
They are absolutely some of our power users.

swyx [00:19:07]: So let's talk about the evals as well. You started out from the general, like, MMLU and GPQA stuff. What's next? How do you build up to the overall index? What was in V1 and how did you evolve it? Okay.

Micah [00:19:22]: So first, just as background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together, currently from 10 different eval data sets, to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts, to dive into every part of it and look at the trade-offs. But best single number. So right now, it's got a bunch of Q&A-type data sets that have been very important to the industry, like the couple that you just mentioned. It's also got a couple of agentic data sets. It's got our own long context reasoning data set and some other use-case-focused stuff. As time goes on, the things that we're most interested in, the capabilities that are becoming more important for AI and what developers care about, are going to be first around agentic capabilities. So surprise, surprise: we're all loving our coding agents, and how the models perform there, and doing similar things for different types of work, are really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving at was, like, the V1 versus the V2, and how that changed over time.

Micah [00:20:53]: Like, how we've changed the index to get to where we are.

swyx [00:20:55]: And I think that reflects the change in the industry, right? So that's a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think, how much progress has been made in the last two years. Like, we obviously play the game constantly of, like, today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier, and who has the best, like, smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out: a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk more about that in a bit. So V1, V2, V3: we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark and looking around and asking questions about it. Yeah.

Micah [00:22:21]: Let's do it. Okay.
This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.

George [00:22:26]: And I think a little bit about the direction that we want to take it, and where we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence. But we want to diversify how we think about intelligence. And we can talk about it, but new evals that we've built and partnered on focus on topics like hallucination. And there are a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring that forth. But before we get into that.

swyx [00:23:01]: And so for listeners, just as a timestamp: right now, number one is Gemini 3 Pro High, followed by Claude Opus at 70, GPT-5.1 High (you don't have 5.2 yet), and Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.

George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? Yeah. In almost every talk that George or I give at conferences and stuff, we always put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year.
And, I mean, you would remember that time period well, with there being very open questions about whether or not AI was going to be competitive, like, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like, how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's models that we're highlighting by default in our charts, in our intelligence index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks off. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff.
I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3, with R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with an extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the thing that you really see in the graph. It really stuck with us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.

swyx [00:26:54]: I'm from Singapore. A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric; we now call it output speed, since throughput makes sense at a system level, so we took that name.

swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different from how a beginner might look at it.

Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into.
Maybe, so we can skip past all the... like, we have lots and lots of evals and stuff. The interesting ones to talk about today, that would be great to bring up, are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models, and for the labs creating them to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say "I don't know". So we did that for this one here.

swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.

George [00:29:31]: On that, one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don't know. Maybe it might be, though. You put it in, like, a JSON field, say "confidence", and maybe it spits out something. Yeah.
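A minimal sketch of the scoring rule George describes for the Omniscience Index: plus one for a correct answer, minus one for an incorrect one, zero for abstaining, scaled to a range of negative 100 to positive 100. The exact weighting AA uses may differ, and the example runs below are hypothetical:

```python
def omniscience_score(results):
    """Score from -100 to +100: +1 for each correct answer, -1 for each
    incorrect answer, 0 for 'I don't know', averaged over all questions
    and scaled to the +/-100 range.
    `results` is a list of 'correct' / 'incorrect' / 'abstain'."""
    points = {"correct": 1, "incorrect": -1, "abstain": 0}
    return 100 * sum(points[r] for r in results) / len(results)

# Hypothetical runs: a model that guesses on everything versus one that
# abstains when unsure. Same-ish knowledge, very different scores.
guesser   = ["correct"] * 60 + ["incorrect"] * 40
abstainer = ["correct"] * 55 + ["abstain"] * 35 + ["incorrect"] * 10
print(omniscience_score(guesser))    # 20.0
print(omniscience_score(abstainer))  # 45.0
```

Under plain percentage-correct the guesser would look better (60 versus 55); the penalty for wrong answers is what flips the incentive toward saying "I don't know".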
You know, we have done a few evals podcasts over the years. And when we did one with Clémentine of Hugging Face, who maintains the Open LLM Leaderboard, this was one of her top requests: some kind of hallucination slash confidence calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, I mean, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. Like, one of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. And yeah, would that be the other way around in a normal capability environment? I don't know.
What do you make of that?

George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability to, when they don't know something, say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here...

swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored, we don't know for a fact, to be, like, eight runs and then an LLM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in... this is accuracy. So this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So, a big jump in accuracy, but relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so it's likely due to just a kind of different post-training recipe for the Claude models. Yeah.

Micah [00:32:45]: That's what's driven this. Yeah. You can partially blame us, and how we define intelligence, having until now not defined hallucination as a negative in the way that we think about intelligence. And so that's what we're changing.

swyx [00:32:56]: I know many smart people who are confidently incorrect.

George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge. But in many cases, people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas.
One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.

swyx [00:33:32]: And is it sort of like a HumanEval type or something different, or like a FrontierMath type?

George [00:33:37]: It's not dissimilar to FrontierMath. These are kind of research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is only 9%.

swyx [00:33:51]: And the people that created this, like Minyang and, actually, Ofir, who was kind of behind SWE-bench... what organization is this? Oh, is this... it's Princeton.

George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore kind of new ideas in physics with a thought partner, just because they want the models to hallucinate. Um, sometimes it's something new. Yeah, exactly.

swyx [00:34:21]: So not right in every situation, but I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many: every lab has a system card that shows some kind of hallucination number, and you've chosen to not endorse that, and you've made your own. And that's a choice. Totally. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun. You provide it as a service here. You have to fight the "well, who are we to do this?"
And your answer is that we have a lot of customers, and, you know... but, like, I guess, how do you convince the individual?

Micah [00:35:08]: I mean, I think for hallucination specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. Like, we've called this the Omniscience hallucination rate, not trying to declare that it's, like, humanity's last hallucination rate. You could have some interesting naming conventions and all this stuff. The bigger-picture answer to that, something that I actually wanted to mention just as George was explaining Critical Point as well, is this: as we go forward, we are building evals internally, we're partnering with academia, and we're partnering with AI companies to build great evals. We have pretty strong views, in various ways, for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed with the idea that everything we do has to be done entirely within our own team. Critical Point is a cool example, where we were a launch partner for it, working with academia. We've got some partnerships coming up with a couple of leading companies. Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so, between all of those approaches, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple. And then I want to talk about your trends analysis stuff, you know? Totally.

Micah [00:36:31]: Actually, I have one little factoid on Omniscience.
If you go back up to accuracy on Omniscience, an interesting thing about this accuracy metric is that it tracks the total parameter count of models more closely than anything else that we measure. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric; we're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. I hear all sorts of numbers. I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, with all the open weights models, you can squint and see that likely the leading frontier models right now are quite a lot bigger than the roughly one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, and six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it.
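The squint-at-the-line exercise Micah describes amounts to fitting accuracy against log parameter count and inverting the fit. The data points below are made up for illustration; only the method mirrors the discussion:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (total parameters in billions, knowledge accuracy %)
# points for open-weights models. Illustrative numbers, not AA's data.
pts = [(32, 20), (120, 28), (400, 36), (1000, 42)]
xs = [math.log10(p) for p, _ in pts]
ys = [acc for _, acc in pts]
a, b = fit_line(xs, ys)

# Invert the fit: what total size would a frontier model scoring 53% imply?
implied_b_params = 10 ** ((53 - a) / b)
print(f"~{implied_b_params:.0f}B parameters")  # lands in the multi-trillion range
```

With these made-up points, a score well above the open-weights pack extrapolates to several trillion parameters, which is the shape of the argument for Gemini 3 Pro being in the 5 to 10 trillion range.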
Like, yeah, totally.

George [00:38:17]: They've also got different incentives in play compared to open weights models, who are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, when thinking about inference costs, it's, I think, less about total parameters in many cases, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence. You should be looking at the cost to run the index, and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's what matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while. Yeah.

Micah [00:39:07]: But that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total size, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know, taking off my shitposting face for a minute. Yes. Yes. At the same time, I do feel like, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPval, I think, is only, like, a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool.
And you have your own version.

George [00:39:59]: It's a fantastic data set. Yeah.

swyx [00:40:01]: And maybe we'll recap for people who are still out of it. It's, like, 44 tasks, based on some kind of GDP cutoff, that are meant to represent broad white-collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. The 44 are divided into, like, 220 subtasks, maybe 2 to 5 each, which are the level at which we run them through the agentic harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at. Largely, that's because in order to make the tasks well enough defined that you can run them, they need to have only a handful of input files and very specific instructions. And so I think the easiest way to think about them is that they're like quite hard take-home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners, it is no longer, like, a long prompt. It is like: well, here's a zip file with, like, a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.

George [00:41:06]: OpenAI released a great data set, and they released a good paper which looks at performance across the different web chatbots on the data set. It's a great paper; I encourage people to read it. What we've done is taken that data set and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the data set, and then we developed an evaluator approach to compare outputs. That's kind of AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned to human preferences.
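A sketch of what the pairwise, criteria-based grading step George describes might look like as a prompt builder. The wording, field names, and structure here are assumptions for illustration, not the actual GDPval-AA harness:

```python
def build_judge_prompt(task_criteria, output_a, output_b):
    """Build a pairwise grading prompt: the judge model sees the task
    criteria and two candidate outputs (e.g. extracted text of the
    produced documents) and must pick which better meets the criteria,
    rather than being asked zero-shot 'which is better'."""
    criteria_text = "\n".join(f"- {c}" for c in task_criteria)
    return (
        "You are grading two candidate deliverables for a task.\n"
        f"Criteria:\n{criteria_text}\n\n"
        f"Candidate A:\n{output_a}\n\n"
        f"Candidate B:\n{output_b}\n\n"
        "Which candidate more effectively meets the criteria? "
        "Answer with exactly 'A' or 'B'."
    )

prompt = build_judge_prompt(
    ["Includes a one-page executive summary",
     "Uses figures from the provided spreadsheet"],
    "<extracted text/images of candidate A>",
    "<extracted text/images of candidate B>",
)
print(prompt.splitlines()[0])
```

In practice you would also randomize which model's output appears as A versus B, to counter the position bias discussed earlier in the conversation.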
One data point there is that even with Gemini 3 Pro as the evaluator, Gemini itself, interestingly, doesn't actually do that well. So that's kind of a good example of what we've done in GDPVal AA.swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference, that models usually prefer their own output, and in this case, it was not so. Totally.Micah [00:42:08]: I think the way that we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different to some of the early LLM-as-judge stuff a couple of years ago, because some of that (MT-Bench was a great example of this a while ago) was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with (the code interpreter and web search, the file system) to go through many, many turns to try to create the documents. Then on the other side, when we're grading, we're running it through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets those criteria. It turns out that it's just very, very good at getting that right, and matched human preference a lot of the time. I think it's got the raw intelligence, but that's combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.swyx [00:43:26]: Got it. Why is this an Elo?
And not a percentage, like GDPVal?George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks.swyx [00:43:43]: What task is that?George [00:43:45]: I mean, it's in the data set. Like be a YouTuber? It's a marketing video.Micah [00:43:49]: Oh, wow. What? Like the model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code editor. I mean, the computer use stuff doesn't work quite well enough, and so on, but yeah.George [00:44:02]: And so there's no kind of ground truth, necessarily, to compare against, to work out percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks.swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an Elo, so you have a human baseline there. I think what's helpful about GDPVal, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman. Yeah.Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. It's one of the reasons that presenting it as an Elo is quite helpful; it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about it as a human is quite different to how the models would go about it. Yeah.swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there.
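The Elo approach George describes, folding pairwise judge verdicts into relative ratings, can be sketched roughly like this. The K-factor, seed rating, and model names here are illustrative assumptions, not Artificial Analysis's actual parameters:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Update both ratings after one pairwise comparison."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Seed every model at an arbitrary 1000 and fold in judge verdicts
# (each verdict: which of two models' outputs better met the task criteria).
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
verdicts = [("model_a", "model_b", True),   # judge preferred model_a
            ("model_a", "model_c", True),
            ("model_b", "model_c", False)]  # judge preferred model_c
for a, b, a_won in verdicts:
    ratings[a], ratings[b] = update(ratings[a], ratings[b], a_won)
```

Because only rating differences matter, new models can be added later without invalidating existing scores, which is the "stays relevant for a long time" property mentioned above.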
Is that like just one last, like...Micah [00:45:20]: Well, no, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.George [00:45:31]: Another inclusion that's quite interesting is we also ran it across the latest versions of the web chatbots. And so we have...swyx [00:45:39]: Oh, that's right.George [00:45:40]: Oh, sorry.swyx [00:45:41]: I, yeah, I completely missed that. Okay.George [00:45:43]: No, not at all. So that's the one with a checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 using the Claude web chatbot, it performs worse than the model in our agentic harness. And in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they have a cost goal; we let the models work as long as they want, basically. Yeah. Do you copy paste manually into the chatbot? Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.swyx [00:46:38]: Well, I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about, well, we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like the tools.
The tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the amount of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.swyx [00:47:10]: What tools and what data connections come to mind? What's notable work that people have done?Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, Google Drive, OneDrive, our Supabase databases if we need to do some analysis on some data or something. Preferably the model can be plugged into all of those things and can go do some useful work based on it. The thing that I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query (read-only, of course) and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and they can read my Gmail and my Notion. And okay, you actually use that. That's good. Is that a Claude thing? To various degrees, both ChatGPT and Claude right now. I would say that this stuff barely works right now, in fairness.George [00:48:33]: Because people are actually going to try this after they hear it.
If you get an email from Micah, odds are it wasn't written by a chatbot.Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.swyx [00:48:46]: And so you can feel it, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase shout out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably do some things more manually than we should in the Supabase support line, because they're a little bit too friendly. One extra point regarding GDPVal AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, our reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search tool, a web browsing tool, and a code execution environment. Anything else?Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPVal we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly.George [00:50:21]: So it turned out that we created a good generalist agentic harness, and so we released that on GitHub yesterday. It's called Stirrup.
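A model-in-control loop with a minimal tool set, of the kind George and Micah describe, can be sketched like this. The tool names, message format, and `call_model` contract are hypothetical illustrations for the sketch, not Stirrup's actual API:

```python
# Minimal agent loop: the model picks a tool each turn until it declares
# it is done, rather than a framework dictating the workflow.
def run_agent(task: str, call_model, tools: dict, max_turns: int = 50) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        action = call_model(history, list(tools))  # model chooses the next step
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": result})
    return "max turns exceeded"

# Toy stand-ins for the tools mentioned in the episode.
tools = {
    "web_search": lambda query: f"results for {query!r}",
    "run_code": lambda code: "code output",
    "view_image": lambda path: "image description",
}

def scripted_model(history, tool_names):
    """A fake model that searches once, then finishes."""
    if len(history) == 1:
        return {"tool": "web_search", "args": {"query": "GDPval"}}
    return {"tool": "finish", "args": {"answer": "done"}}

result = run_agent("summarize GDPval", scripted_model, tools)
```

The design choice is that all control flow lives in the model's tool selections; the harness only executes tools and appends results to context.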
So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think in other similar environments, the Terminal-Bench guys have done sort of the same with Harbor. It's a bundle of, well, we need our minimal harness, which for them is Terminus, and we also need the RL environments or Docker deployment thing to run independently. So I don't know if you've looked at Harbor at all. Is that like a standard that people want to adopt?George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host benchmarks of Terminal-Bench on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and they've got better tools, so they can perform better when just given a minimalist set of tools and let run; let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.swyx [00:51:56]: Let's cover the openness index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you classify all these. Yeah.Micah [00:52:07]: Call it the last of the three new things that we're talking about from the last few weeks.
Because, I mean, we do a mix of stuff: things where we're using open source, things we open source ourselves, and proprietary stuff that we don't always open source. The long context reasoning data set last year, we did open source. And then for all of the work on performance benchmarks across the site, some of them we're looking to open source, but some of them we're constantly iterating on, and so on. So there's a huge mix, I would say, of stuff that is open source and not, across the site. So that's AA-LCR, for people. Yeah.swyx [00:52:41]: But let's talk about openness.Micah [00:52:42]: Let's talk about the openness index. This is, call it, a new way to think about how open models are. We have, for a long time, tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; it tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important, and that we haven't tracked until now, and that's how much is disclosed about how the model was made. So transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, and transparency about methodology and training code. Those are the components. We bring them together to score an openness index for models, so that you can in one place get this full picture of how open models are.swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this does matter. I don't know what the numbers mean, apart from: is there a max number? Is this out of 20?George [00:53:44]: It's out of 18 currently. We've got an openness index page, but essentially you get points for being more open across these different categories, and the maximum you can achieve is 18.
So AI2, with their extremely open Olmo 3 32B Think model, is the leader, in a sense.swyx [00:54:04]: And Hugging Face?George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run the intelligence benchmarks to get it on the site.swyx [00:54:12]: We can't not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, the RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb.Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the intelligence index on, on the site. It's just an extra view to understand.swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, that one. This really matters, right? Obviously, because you can b
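As a toy illustration of the point-based scoring George describes (points summed per category to a maximum of 18), here is a minimal sketch. The category names and per-category point values are invented for the example; the real rubric is on the Artificial Analysis openness index page:

```python
# Hypothetical rubric: categories and weights are assumptions, only the
# "sum of per-category points, max 18" shape comes from the episode.
RUBRIC = {
    "weights_released": 3,
    "permissive_license": 3,
    "pretraining_data_disclosed": 3,
    "posttraining_data_disclosed": 3,
    "data_usable": 2,
    "methodology_disclosed": 2,
    "training_code_released": 2,
}  # point values sum to 18

def openness_index(disclosures: set) -> int:
    """Score a model by summing points for each category it satisfies."""
    return sum(pts for cat, pts in RUBRIC.items() if cat in disclosures)

fully_open = openness_index(set(RUBRIC))  # everything disclosed
weights_only = openness_index({"weights_released", "permissive_license"})
```

Under this sketch a fully transparent release scores 18, while a weights-only release with a permissive license scores much lower, which is the distinction the index is built to surface.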
Article: https://www.contemporarypediatrics.com/view/redefining-type-1-diabetes-early-identification-staging-and-clinical-implications-for-pediatric-careTable: https://thepediatriclounge.com/screening-to-prevent-dkaPetiete Trial: https://link.springer.com/article/10.1007/s00125-025-06586-1#Sec5Prevent Trial: https://www.nejm.org/doi/full/10.1056/NEJMoa2308743Screening Summit: https://medschool.cuanschutz.edu/barbara-davis-center-for-diabetes/news-profdev/conferences-events/8th-childhood-diabetes-prevention-symposium---november-10th-11th--2025In this episode, Herb Bravo is joined by Dr. Andrew Cagel, a pediatric endocrinologist, and Dr. Dan Feiten, a pediatrician, to discuss groundbreaking advancements in Type 1 Diabetes (T1D) care. The episode delves into their recent publication, 'Redefining Type 1 Diabetes: Early Identification, Staging, and Clinical Implications for Pediatric Care,' highlighting the critical importance of early detection and intervention. The guests emphasize the urgent need for universal screening.00:00 Introduction to the Pediatric Lounge00:45 Meet the Guests: Dr. Andrew Cagel and Dr. Dan Feiten01:08 Redefining Type 1 Diabetes01:36 Personal Stories and Experiences01:52 The Importance of Early Detection04:40 Advancements in Type 1 Diabetes Treatment13:55 The Role of EHR and AI in Pediatric Care19:13 Future Directions and Guidelines29:06 Pivotal Study in Pediatric Diabetes30:45 The Protect Trial: Slowing Disease Progression33:19 Challenges in Screening and Implementation37:46 The Role of Pediatricians and Influencers43:03 Advocacy and Future Directions56:22 Conclusion and Final ThoughtsA Podcast taking you behind the door of the Physician's Lounge to get a deeper insight into what docs are talking about today, from the clinically profound to the wonderfully routine...and everything in between. The conversations are not intended as medical advice, and the opinions expressed are solely those of the host and guest.Support the show
In this episode of Iron Culture, hosts Eric Trexler and Eric Helms discuss various themes surrounding fitness, nutrition, and the importance of open discourse in the community. The conversation addresses criticism received from listeners, the role of cynicism versus skepticism in fitness discussions, and the necessity of engaging with differing perspectives for personal and professional growth. The episode concludes with a call for self-awareness and openness in navigating the complexities of fitness discourse. If you're looking for some high-quality lifting gear or apparel, be sure to visit elitefts.com and use our discount code "MRR10" for a 10% discount. Chapters 00:00 Introduction and Holiday Greetings 02:18 Tiny Utensils and Eating Behavior 05:40 The Controversy Surrounding Brian Wansink 08:23 Updates and Future Directions for Iron Culture 11:20 Addressing Criticism and Community Discourse 17:20 Navigating Evidence-Based Practice and Guest Selection 30:27 Evaluating Content and Moral Standards 32:35 The Role of Evidence in Interviews 34:11 Career Paths in Fitness and Coaching 36:54 Critiques and Misunderstandings in Fitness 40:34 Navigating Disagreements in Evidence-Based Fitness 46:16 Cynicism vs. Skepticism in Fitness Discourse 55:45 The Shift from Ideas to Personal Attacks 59:03 The Drama of Evidence-Based Fitness 01:01:47 The Importance of Empirical Science 01:06:14 Navigating Cynicism and Skepticism 01:12:21 Engaging with Different Perspectives 01:17:59 Self-Awareness in Fitness Discourse 01:24:08 The Role of Change and Growth in Fitness