Another knock against the antiplatelet/anticoagulant combo, polypills in HF, the physical exam of the future, and the problem of underpowered trials that even Bayesian analyses cannot rescue are the topics John Mandrola, MD, discusses in this week's podcast. This podcast is intended for healthcare professionals only. To read a partial transcript or to comment, visit: https://www.medscape.com/twic

I. Listener Feedback
Trends Study: https://www.heartrhythmjournal.com/article/S1547-5271(11)00496-6/fulltext

II. Another Knock Against the Antiplatelet/Anticoagulant Combination
"Antiplatelet Plus Oral Anticoagulant Lowers Stroke, Raises Bleeding Risk": https://www.medscape.com/viewarticle/antiplatelet-plus-oral-anticoagulant-lowers-stroke-raises-2025a1000re0
ATIS-NVAF Trial: https://jamanetwork.com/journals/jamaneurology/fullarticle/2839511
AQUATIC Trial: https://www.nejm.org/doi/abs/10.1056/NEJMoa2507532

III. Polypill for HFrEF
A Multilevel Polypill for Patients With HFrEF: https://www.jacc.org/doi/10.1016/j.jacadv.2025.102195

IV. The Physical Exam of the Future
Point-of-Care Ultrasound: https://doi.org/10.1016/j.jchf.2025.102707

V. More on Underpowered Trials: GA vs Moderate Sedation in IV Stroke
SEGA Trial: https://jamanetwork.com/journals/jamaneurology/fullarticle/2839838
Bayesian Analyses of CV Trials: https://doi.org/10.1016/j.cjca.2021.03.014

You may also like: The Bob Harrington Show with the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine, Robert A. Harrington, MD. https://www.medscape.com/author/bob-harrington

Questions or feedback? Please contact news@medscape.net
When American comedian and actor Betty White died, fans lamented the fact that she had just missed making it to her 100th birthday. They felt she'd been robbed of achieving a significant life moment. Some researchers think that this century could see more people making it to that moment and beyond. That's the focus of this episode of Stats and Stories with guest Michael Pearce. Michael Pearce is a PhD candidate in Statistics at the University of Washington, working under the supervision of Elena A. Erosheva. His primary research interests include preference learning and developing Bayesian statistical models for social science problems. In his spare time, Michael enjoys running, biking, and paddling around the Puget Sound.
Sign up for Alex's first live cohort, about hierarchical model building!
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Bayesian mindset in psychology: Why priors, model checking, and full uncertainty reporting make findings more honest and useful.
Intermittent fasting & cognition: A Bayesian meta-analysis suggests effects are context- and age-dependent, and often small but meaningful.
Framing matters: The way we frame dietary advice (focus, flexibility, timing) can shape adherence and perceived cognitive benefits.
From cravings to choices: Appetite, craving, stress, and mood interact to influence eating and cognitive performance throughout the day.
Define before you measure: Clear definitions (and DAGs to encode assumptions) reduce ambiguity and guide better study design.
DAGs for causal thinking: Directed acyclic graphs help separate hypotheses from data pipelines and make causal claims auditable.
Small effects, big implications: Well-estimated "small" effects can scale to public-health relevance when decisions repeat daily.
Teaching by modeling: Helping students write models (not just run them) builds statistical thinking and scientific literacy.
Bridging lab and life: Balancing careful experiments with real-world measurement is key to actionable health-psychology insights.
Trust through transparency: Openly communicating assumptions, uncertainty, and limitations strengthens scientific credibility.

Chapters:
10:35 The Struggles of Bayesian Statistics in Psychology
22:30 Exploring Appetite and Cognitive Performance
29:45 Research Methodology and Causal Inference
36:36 Understanding Cravings and Definitions
39:02 Intermittent Fasting and Cognitive Performance
42:57 Practical Recommendations for Intermittent Fasting
49:40 Balancing Experimental Psychology and Statistical Modeling
55:00 Pressing Questions in Health Psychology
01:04:50 Future Directions in Research

Thank you to my Patrons for...
This is the extended "director's cut" of a talk delivered for "RatFest 2025" (next year to be "Conjecture Con"). It also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb

It is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom's and Yudkowsky's work on this, and that of every other AI "Doomer", and even, at the other extreme, the so-called "AI Accelerationists". All of this indicates a deep misconception about how new explanations are generated, which in turn comes from a deep misconception about how science works, because almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong:

00:00 Introduction
09:14 The Big Questions and the new Priesthoods
18:40 Nick Bostrom and Superintelligence
25:10 If Anyone Builds It, Everyone Dies and Yudkowsky
33:32 Prophecy, Inevitability, Induction and Bayesianism
41:42 Popper, Kuhn, Feyerabend and Lakatos
49:40 AI researchers ignore the Philosophy of Science
58:46 A new test for AGI from Sam Altman and David Deutsch?
1:03:35 Accelerationists, Doomers and "Everyone dies"
1:10:21 Conclusions
1:15:35 Audience Questions
In this episode, Dr. Kevin Esterling, Professor of Political Science and Public Policy at UC Riverside, talks with the UC Riverside School of Public Policy about using technology to make public meetings more inclusive and effective. This is the seventh episode in our 11-part series, Technology vs. Government, featuring former California State Assemblymember Lloyd Levine.

About Dr. Kevin Esterling:
Kevin Esterling is Professor of Public Policy and Political Science, chair of political science, and the Director of the Laboratory for Technology, Communication and Democracy (TeCD-Lab) at the University of California, Riverside, and an affiliate of the UC Institute on Global Conflict and Cooperation (IGCC). He is the past interim dean and associate dean of the UCR Graduate Division. His research focuses on technology for communication in democratic politics, in particular the use of artificial intelligence and large language models for understanding and improving the quality of democratic communication in online spaces. His methodological interests are in artificial intelligence, large language models, Bayesian statistics, machine learning, experimental design, and science ethics and validity. His books have been published by Cambridge University Press and the University of Michigan Press, and his journal articles have appeared in Science, Nature, the Proceedings of the National Academy of Sciences, Nature Human Behavior, the American Political Science Review, Political Analysis, the Journal of Educational and Behavioral Statistics, and the Journal of Politics. His work has been funded by the National Science Foundation, The Democracy Fund, the MacArthur Foundation, and the Institute of Education Sciences. Esterling was previously a Robert Wood Johnson Scholar in Health Policy Research at the University of California, Berkeley, and a postdoctoral research fellow at the A. Alfred Taubman Center for Public Policy and American Institutions at Brown University. He received his Ph.D. in Political Science from the University of Chicago in 1999.

Interviewer: Lloyd Levine (Former California State Assemblymember, UCR School of Public Policy Senior Policy Fellow)
Music by: Vir Sinha
Commercial Links:
https://spp.ucr.edu/ba-mpp
https://spp.ucr.edu/mpp
This is a production of the UCR School of Public Policy: https://spp.ucr.edu
Subscribe to this podcast so you do not miss an episode. Learn more about the series and other episodes at https://spp.ucr.edu/podcast.
Bradley Keefer is the Chief Revenue Officer and Justin Jefferson is the VP of Strategy & Insights at Keen Decision Systems, where Bayesian-powered marketing mix modeling meets scenario planning and outcome forecasting, helping brands move from rearview analytics to predictive decisioning.

With decades of combined experience across SaaS, analytics, and brand strategy, Bradley and Justin are redefining how marketers plan, forecast, and invest. Instead of treating marketing as a cost center, they help brands model "what if" scenarios, forecasting how every incremental dollar drives revenue across channels.

Whether you're scaling a fast-growing brand or managing a multimillion-dollar marketing budget, Bradley and Justin offer a masterclass in using data to make confident, forward-looking decisions that compound over time.

In This Conversation We Discuss:
[00:38] Intro
[01:12] Measuring how marketing spend drives growth
[02:29] Building models that adapt to brand maturity
[04:35] Balancing brand building with performance spend
[07:24] Shifting focus from capturing to creating demand
[08:41] Driving demand to boost bottom-funnel returns
[09:34] Breaking growth limits with data-driven planning
[12:49] Connecting viral moments to sustain momentum
[14:50] Building brands that go beyond ad optimization
[15:30] Stay updated with new episodes
[15:43] Simplifying setup for data-heavy marketing tools
[18:44] Designing analytics tools for marketing teams
[20:23] Updating models fast to learn and adapt quicker
[22:42] Using data to balance old and new media spend

Resources:
Subscribe to Honest Ecommerce on Youtube
Marketing mix modeling powered by AI: keends.com/
Follow Bradley Keefer: linkedin.com/in/bradley-keefer
Follow Justin Jefferson: linkedin.com/in/justin-a-jefferson

If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!
Soccer Factor Model Dashboard
Unveiling True Talent: The Soccer Factor Model for Skill Evaluation
LBS #91, Exploring European Football Analytics, with Max Göbel
Get early access to Alex's next live-cohort courses!

Today's clip is from episode 142 of the podcast, with Gabriel Stechschulte.

Alex and Gabriel explore the re-implementation of BART (Bayesian Additive Regression Trees) in Rust, detailing the technical challenges and performance improvements achieved. They also share insights into the benefits of BART, such as uncertainty quantification, and its application in various data-intensive fields.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Get early access to Alex's next live-cohort courses!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
BART as a core tool: Gabriel explains how Bayesian Additive Regression Trees provide robust uncertainty quantification and serve as a reliable baseline model in many domains.
Rust for performance: His Rust re-implementation of BART dramatically improves speed and scalability, making it feasible for larger datasets and real-world IoT applications.
Strengths and trade-offs: BART avoids overfitting and handles missing data gracefully, though it is slower than other tree-based approaches.
Big data meets Bayes: Gabriel shares strategies for applying Bayesian methods with big data, including when variational inference helps balance scale with rigor.
Optimization and decision-making: He highlights how BART models can be embedded into optimization frameworks, opening doors for sequential decision-making.
Open source matters: Gabriel emphasizes the importance of communities like PyMC and Bambi, encouraging newcomers to start with small contributions.

Chapters:
05:10 From economics to IoT and Bayesian statistics
18:55 Introduction to BART (Bayesian Additive Regression Trees)
24:40 Re-implementing BART in Rust for speed and scalability
32:05 Comparing BART with Gaussian Processes and other tree methods
39:50 Strengths and limitations of BART
47:15 Handling missing data and different likelihoods
54:30 Variational inference and big data challenges
01:01:10 Embedding BART into optimization and decision-making frameworks
01:08:45 Open source, PyMC, and community support
01:15:20 Advice for newcomers
01:20:55 Future of BART, Rust, and probabilistic programming

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian...
Welcome to JAT Chat, presented by the Journal of Athletic Training, the official journal of the National Athletic Trainers' Association. In this episode, co-host Dr. Kara Radzak speaks with Dr. Travis Anderson and Dr. Eric Post about their recently published article, "Multiplying Alpha: When Statistical Tests Compound in Sports Medicine Research". Drs. Anderson and Post discuss how multiple statistical tests can inflate false-positive rates in sports medicine research, explain family-wise and experiment-wise error, and illustrate the risks with a large-scale Paris Olympic Games analysis. They recommend transparency, pre-registration, and correction for multiplicity, and suggest Bayesian approaches as a way to improve rigor and clinical decision-making.

Article: https://doi.org/10.4085/1062-6050-0700.24

Guest Bios:
Travis Anderson, PhD: Travis recently joined US Soccer as the Manager of Research and Innovation, following his work as a Research Scientist at the USOPC, where he worked closely with Eric. His academic background is in exercise physiology, although he dabbled in statistics throughout graduate school and enjoys continuing his education in applied statistics in sports medicine and exercise science.
Eric Post, PhD, ATC: Eric is the Manager of the Sports Medicine Research Laboratory for the United States Olympic and Paralympic Committee. Eric previously served as Program Director for the Master's in Athletic Training Program at Indiana State University and as a faculty member at San Diego State University.
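The alpha inflation the guests describe is easy to see with a little arithmetic: with k independent tests each run at significance level α, the chance of at least one false positive is 1 − (1 − α)^k. A minimal sketch of that calculation and the classic Bonferroni fix (generic Python, not code from the article):

```python
def family_wise_error(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive across n independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Bonferroni correction: run each test at alpha / n_tests so the
    family-wise error rate stays at or below alpha."""
    return alpha / n_tests

# One test at 0.05 keeps the 5% error rate; twenty tests inflate it
# to roughly 64%, which is why correcting for multiplicity matters.
single = family_wise_error(0.05, 1)    # ~0.05
twenty = family_wise_error(0.05, 20)   # ~0.64
adjusted = bonferroni_alpha(0.05, 20)  # 0.0025 per test
```

The same inflation logic is what makes a large-scale analysis like the Paris Olympic Games example risky when many outcomes are tested without correction.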
Episode 2.42: Philosophical Case for the Supernatural

Can miracles be intellectually defended, or are they just wishful thinking in a scientific age?

In this follow-up to the theological case for miracles, Zach and Michael explore the philosophical foundations for believing in the miraculous. Drawing from the work of C.S. Lewis, Alvin Plantinga, Richard Swinburne, William Lane Craig, and others, they address classical objections from Spinoza and Hume, explain Bayesian probability, and unpack why Christianity stands or falls on historical miracle claims, especially the resurrection.

Covered in this episode:
– What qualifies as a miracle (and what doesn't)
– Why miracles are necessary for Christian faith
– Whether natural laws rule out divine intervention
– The failure of Hume's argument against miracles
– How probability theory supports miracle testimony
– Why Christianity's claims are evidential, not blind

If theism is true, miracles aren't just possible; they're expected. This episode shows why the miraculous still makes philosophical sense.

WLC discussing the Bayesian equation: https://www.reasonablefaith.org/question-answer/P90/do-extraordinary-events-require-extraordinary-evidence
The book Michael referenced: Miracles: The Credibility of the New Testament Accounts, https://a.co/d/hjzHvWL
Find our videocast here: https://youtu.be/dshfk_jyXj0
Merch here: https://take-2-podcast.printify.me/
Music from #Uppbeat (free for Creators!): https://uppbeat.io/t/reakt-music/deep-stone
License code: 2QZOZ2YHZ5UTE7C8
Find more Take 2 Theology content at http://www.take2theology.com
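The Bayesian equation behind the "extraordinary evidence" discussion linked above can be sketched in a few lines. The numbers below are purely illustrative placeholders, not figures from the episode or from Craig's article:

```python
def posterior_probability(prior: float,
                          p_evidence_if_true: float,
                          p_evidence_if_false: float) -> float:
    """Bayes' theorem for a hypothesis H given evidence E:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hume-style worry: the event has a tiny prior, say 1 in a million.
# But if the testimony would be very improbable unless the event
# occurred (here, 1 in a billion), the posterior can still be high.
posterior = posterior_probability(prior=1e-6,
                                  p_evidence_if_true=0.99,
                                  p_evidence_if_false=1e-9)
```

The point of the calculation is that what matters is not the prior alone but the ratio of how well the evidence is explained with and without the hypothesis.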
Get early access to Alex's next live-cohort courses!

Today's clip is from episode 141 of the podcast, with Sam Witty.

Alex and Sam discuss the ChiRho project, delving into the intricacies of causal inference, focusing in particular on Do-Calculus, regression discontinuity designs, and Bayesian structural causal inference. They explain ChiRho's design philosophy, emphasizing its modular and extensible nature, and highlight the importance of efficient estimation in causal inference, making complex statistical methods accessible to users without extensive expertise.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Professor Andrew Wilson from NYU explains why many common-sense ideas in artificial intelligence might be wrong. For decades, the rule of thumb in machine learning has been to fear complexity. The thinking goes: if your model has too many parameters (is "too complex") for the amount of data you have, it will "overfit" by essentially memorizing the data instead of learning the underlying patterns. This leads to poor performance on new, unseen data. This is the classic "bias-variance trade-off": a balancing act between a model that's too simple and one that's too complex.

**SPONSOR MESSAGES**
Tufa AI Labs is an AI research lab based in Zurich. **They are hiring ML research engineers!** This is a once-in-a-lifetime opportunity to work with one of the best labs in Europe. Contact Benjamin Crouzier - https://tufalabs.ai/
Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst - and be the first to see the results and benchmark your practices against the wider community!
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA, ++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst

Description Continued:
Professor Wilson challenges this fundamental belief (fearing complexity). He makes a few surprising points:

**Bigger Can Be Better**: Massive models don't just get more flexible; they also develop a stronger "simplicity bias". So, if your model is overfitting, the solution might paradoxically be to make it even bigger.

**The "Bias-Variance Trade-off" Is a Misnomer**: Wilson claims you don't actually have to trade one for the other. You can have a model that is incredibly expressive and flexible while also being strongly biased toward simple solutions. He points to the "double descent" phenomenon, where performance first gets worse as models get more complex, but then surprisingly starts getting better again.

**Honest Beliefs and Bayesian Thinking**: His core philosophy is that we should build models that honestly represent our beliefs about the world. We believe the world is complex, so our models should be expressive. But we also believe in Occam's razor: that the simplest explanation is often the best. He champions Bayesian methods, which naturally balance these two ideas through a process called marginalization, which he describes as an automatic Occam's razor.

TOC:
[00:00:00] Introduction and Thesis
[00:04:19] Challenging Conventional Wisdom
[00:11:17] The Philosophy of a Scientist-Engineer
[00:16:47] Expressiveness, Overfitting, and Bias
[00:28:15] Understanding, Compression, and Kolmogorov Complexity
[01:05:06] The Surprising Power of Generalization
[01:13:21] The Elegance of Bayesian Inference
[01:33:02] The Geometry of Learning
[01:46:28] Practical Advice and The Future of AI

Prof. Andrew Gordon Wilson:
https://x.com/andrewgwils
https://cims.nyu.edu/~andrewgw/
https://scholar.google.com/citations?user=twWX2LIAAAAJ&hl=en
https://www.youtube.com/watch?v=Aja0kZeWRy4
https://www.youtube.com/watch?v=HEp4TOrkwV4

TRANSCRIPT:
https://app.rescript.info/public/share/H4Io1Y7Rr54MM05FuZgAv4yphoukCfkqokyzSYJwCK8

Hosts: Dr. Tim Scarfe / Dr. Keith Duggar (MIT Ph.D.)

REFS:
Deep Learning is Not So Mysterious or Different [Andrew Gordon Wilson]
https://arxiv.org/abs/2503.02113
Bayesian Deep Learning and a Probabilistic Perspective of Generalization [Andrew Gordon Wilson, Pavel Izmailov]
https://arxiv.org/abs/2002.08791
Compute-Optimal LLMs Provably Generalize Better With Scale [Marc Finzi, Sanyam Kapoor, Diego Granziol, Anming Gu, Christopher De Sa, J. Zico Kolter, Andrew Gordon Wilson]
https://arxiv.org/abs/2504.15208
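The "automatic Occam's razor" of marginalization can be made concrete with the simplest possible toy example (my own illustration, not from the episode): compare the marginal likelihood of a coin-flip sequence under a rigid zero-parameter model (the coin is fair) and a flexible one (unknown bias with a uniform prior). Marginalization penalizes the flexible model exactly when its extra capacity is not needed:

```python
from math import factorial

def evidence_fixed(heads: int, n: int, p: float = 0.5) -> float:
    # Marginal likelihood of one specific flip sequence under a
    # zero-parameter model: the coin is simply fair.
    return p ** heads * (1 - p) ** (n - heads)

def evidence_flexible(heads: int, n: int) -> float:
    # Marginal likelihood under a uniform prior on the bias p:
    # integral over p of p^h (1-p)^(n-h) dp = h! (n-h)! / (n+1)!
    return factorial(heads) * factorial(n - heads) / factorial(n + 1)

# Balanced data: the simple model wins, even though the flexible
# model could also fit it; marginalization charges for unused capacity.
assert evidence_fixed(5, 10) > evidence_flexible(5, 10)
# Skewed data: the flexible model wins, because the data demand it.
assert evidence_flexible(9, 10) > evidence_fixed(9, 10)
```

This is the sense in which an expressive model with honest priors can still prefer simple explanations: averaging over all parameter settings, rather than picking the best one, does the penalizing automatically.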
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Get early access to Alex's next live-cohort courses!
Enroll in the Causal AI workshop, to learn live with Alex (15% off if you're a Patron of the show)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Causal inference is crucial for understanding the impact of interventions in various fields.
ChiRho is a causal probabilistic programming language that bridges mechanistic and data-driven models.
ChiRho allows for easy manipulation of causal models and counterfactual reasoning.
The design of ChiRho emphasizes modularity and extensibility for diverse applications.
Causal inference requires careful consideration of assumptions and model structures.
Real-world applications of causal inference can lead to significant insights in science and engineering.
Collaboration and communication are key in translating causal questions into actionable models.
The future of causal inference lies in integrating probabilistic programming with scientific discovery.

Chapters:
05:53 Bridging Mechanistic and Data-Driven Models
09:13 Understanding Causal Probabilistic Programming
12:10 ChiRho and Its Design Principles
15:03 ChiRho's Functionality and Use Cases
17:55 Counterfactual Worlds and Mediation Analysis
20:47 Efficient Estimation in ChiRho
24:08 Future Directions for Causal AI
50:21 Understanding the Do-Operator in Causal Inference
56:45 ChiRho's Role in Causal Inference and Bayesian Modeling
01:01:36 Roadmap and Future Developments for ChiRho
01:05:29 Real-World Applications of Causal Probabilistic Programming
01:10:51 Challenges in Causal Inference Adoption
01:11:50 The Importance of Causal Claims in Research
01:18:11 Bayesian Approaches to Causal Inference
01:22:08 Combining Gaussian Processes with Causal Inference
01:28:27 Future Directions in Probabilistic Programming and Causal Inference

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad...
Get early access to Alex's next live-cohort courses!

Today's clip is from episode 140 of the podcast, with Ron Yurko.

Alex and Ron discuss the challenges of model deployment and the complexities of modeling player contributions in team sports like soccer and football. They emphasize the importance of understanding replacement levels, the Going Deep framework in football analytics, and the need for proper modeling of expected points. Additionally, they share insights on teaching Bayesian modeling to students and the difficulties students face in grasping model writing and application.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Brian Burke, Sports Data Scientist at ESPN, joins Cade Massey, Eric Bradlow, and Shane Jensen to share insights on building advanced football power ranking systems, the role of Bayesian models in balancing priors and new data, and how analytics informs game-day decisions like fourth-down calls and playoff predictions. Cade, Eric, and Shane also analyze standout performances and key narratives from NFL Week One, preview pivotal college football games, examine the growing dominance of Carlos Alcaraz over Jannik Sinner in men's tennis, and highlight major offensive trends across Major League Baseball. Hosted on Acast. See acast.com/privacy for more information.
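The "balancing priors and new data" idea in power rankings has a textbook sketch: treat a team's strength as a normal prior and each game's point margin as a noisy observation, then take the precision-weighted average. This is the generic normal-normal conjugate update, not ESPN's actual model, and the variances below are made-up illustrative values:

```python
def update_rating(prior_mean: float, prior_var: float,
                  observed_margin: float, obs_var: float) -> tuple:
    """Normal-normal conjugate update: the posterior rating is a
    precision-weighted average of the preseason prior and the
    observed point margin from one game."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var
                            + observed_margin / obs_var)
    return post_mean, post_var

# A single noisy game (variance 100) moves a fairly confident
# preseason rating (mean 0, variance 25) only part of the way toward
# the observed +10 margin; early in the season the prior dominates.
mean, var = update_rating(0.0, 25.0, 10.0, 100.0)
```

As games accumulate, the data term gets repeated and the prior's influence fades, which is exactly the prior-versus-new-data balance described in the episode.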
While most conversations about generative AI focus on chatbots, Thomas Wiecki (PyMC Labs, PyMC) has been building systems that help companies make actual business decisions. In this episode, he shares how Bayesian modeling and synthetic consumers can be combined with LLMs to simulate customer reactions, guide marketing spend, and support strategy. Drawing from his work with Colgate and others, Thomas explains how to scale survey methods with AI, where agents fit into analytics workflows, and what it takes to make these systems reliable.

We talk through:
Using LLMs as "synthetic consumers" to simulate surveys and test product ideas
How Bayesian modeling and causal graphs enable transparent, trustworthy decision-making
Building closed-loop systems where AI generates and critiques ideas
Guardrails for multi-agent workflows in marketing mix modeling
Where generative AI breaks (and how to detect failure modes)
The balance between useful models and "correct" models

If you've ever wondered how to move from flashy prototypes to AI systems that actually inform business strategy, this episode shows what it takes.

LINKS:
The AI MMM Agent, An AI-Powered Shortcut to Bayesian Marketing Mix Insights (https://www.pymc-labs.com/blog-posts/the-ai-mmm-agent)
AI-Powered Decision Making Under Uncertainty Workshop w/ Allen Downey & Chris Fonnesbeck (PyMC Labs) (https://youtube.com/live/2Auc57lxgeU)
The Podcast livestream on YouTube (https://youtube.com/live/so4AzEbgSjw?feature=share)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Teaching students to write out their own models is crucial.
Developing a sports analytics portfolio is essential for aspiring analysts.
Modeling expectations in sports analytics can be misleading.
Tracking data can significantly improve player performance models.
Ron encourages students to engage in active learning through projects.
Understanding the dependency structure in data is vital.
Ron aims to integrate more diverse sports analytics topics into his teaching.

Chapters:
03:51 The Journey into Sports Analytics
15:20 The Evolution of Bayesian Statistics in Sports
26:01 Innovations in NFL WAR Modeling
39:23 Causal Modeling in Sports Analytics
46:29 Defining Replacement Levels in Sports
48:26 The Going Deep Framework and Big Data in Football
52:47 Modeling Expectations in Football Data
55:40 Teaching Statistical Concepts in Sports Analytics
01:01:54 The Importance of Model Building in Education
01:04:46 Statistical Thinking in Sports Analytics
01:10:55 Innovative Research in Player Movement
01:15:47 Exploring Data Needs in American Football
01:18:43 Building a Sports Analytics Portfolio

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M,...
Grant, Matt, and Randy gear up to discuss precision, accuracy, and variability in wildlife studies. They dive into how variability measures data spread, using range, quartiles, standard deviation, variance, and coefficient of variation (CV). Accuracy reflects closeness to the true value, while precision shows the clustering of estimates. Standard errors, confidence intervals, and Bayesian credibility intervals quantify estimate precision. The required precision depends on study goals, sample size, and application, ensuring reliable, interpretable, and actionable results for wildlife management and conservation. They wrap up with some examples, but the take-home message is that deciding how precise results need to be hinges on the question and the resources.

Cite this episode: https://doi.org/10.7944/usfws.wbtn.s01ep011
DOI Citation Formatter: https://citation.doi.org/
Episode music: Shapeshifter by Mr Smith is licensed under an Attribution 4.0 International License.
https://creativecommons.org/licenses/by/4.0/
https://freemusicarchive.org/music/mr-smith/studio-city/shapeshifter/
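The spread measures the hosts list are all one-liners with Python's standard library; a quick sketch with made-up survey counts (not the episode's own examples):

```python
import statistics

def variability_summary(data: list) -> dict:
    """Summarize data spread with the measures discussed: range,
    interquartile range, variance, standard deviation, and the
    coefficient of variation (CV)."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    sd = statistics.stdev(data)      # sample standard deviation
    mean = statistics.mean(data)
    return {
        "range": max(data) - min(data),
        "iqr": q3 - q1,
        "variance": statistics.variance(data),  # sd squared
        "sd": sd,
        "cv": sd / mean,  # unitless, so comparable across scales
    }

counts = [12, 15, 11, 14, 18, 13, 16, 12]  # e.g. nest counts per plot
summary = variability_summary(counts)
```

The CV is often the most useful of these for wildlife work, since it lets you compare the spread of quantities measured on very different scales.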
Today's clip is from episode 139 of the podcast, with Max Balandat.

Alex and Max discuss the integration of BoTorch with PyTorch, exploring its applications in Bayesian optimization and Gaussian processes. They highlight the advantages of using GPyTorch for structured matrices and the flexibility it offers for research. The discussion also covers the motivations behind building BoTorch, the importance of open-source culture at Meta, and the role of PyTorch in modern machine learning.

Get the full discussion here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
At a time when startups are primarily funded by private market investors, who you know has become a critical factor in gaining access to that venture capital. But how does the reliance on alumni and professional networks create barriers for startups from historically disadvantaged groups?

Emmanuel Yimfor '20 is a finance professor at Columbia Business School and holds a Ph.D. from Rice University. His research focuses on entrepreneurial finance, diversity and private capital markets, with insights into gender and racial disparities in venture capital funding, board representation and how resources could be more equitably allocated.

Emmanuel joins co-host Maya Pomroy '22 to discuss his career journey from working at a Cameroonian telecommunications company to teaching at some of the top U.S. business schools, as well as his research on the influence of alumni networks in venture capital funding, how AI tools can address biases in lending, and finally how he's teaming up with his son to bring AI tools to young innovators and entrepreneurs in Cameroon.

Episode Guide:
01:00 Exploring Entrepreneurial Finance
03:36 The Role of Networks in VC Funding
08:10 Emmanuel's Journey From Cameroon to the U.S.
12:34 The Rice University Experience
15:43 Research on Alumni Networks and Funding
21:49 Algorithmic Bias in Lending
33:17 Empowering Future Innovators in Cameroon
38:42 Final Thoughts and Future Outlook

Owl Have You Know is a production of Rice Business and is produced by University FM.

Episode Quotes:
Rethinking who gets funded in venture capital
31:07: What does good networks mean exactly? If you look at venture capital partners, for example, right? They have worked at McKinsey before they became venture capital partners. So they have worked at certain companies, they have done certain jobs that then led them to become VCs.
And so to the extent that we have a lack of representation in this pipeline of jobs that is leading to VC, then the founders that do not come from these same backgrounds do not have as equal access to the partners. And so what that suggests is something very basic, which is like, just rethink the set of deals that you are considering. That might expand the pool of deals that you consider, because, you know, there might be a smart person out there that is maybe not the same race as you, but that has an idea that you really, really want to fund. And that is something that I think, like, everybody would agree with. You know, we want to allocate capital to its most productive uses.

From hard data to meaningful change
29:13: So I have a belief in America, at least based on my life journey, which is: if you work hard for long enough, somebody is going to recognize you and you will be rewarded for it. And so I really believe that America takes in data, thinks about that data for a while to think about whether the research is credible enough, and then, using that data, they are a good Bayesian, so they get a new posterior. They act in a new way that is consistent with the new prior and the new data. And so I think about my role as a researcher as just like, you know, providing that data. Here is the data, and here is what is consistent with what we are doing right now. Now, you know, what you do with that information now is like, you know, update what you are doing in a way that is most consistent with efficient capital allocation—is my hope.

Why Emmanuel finds empirical work so exciting
21:34: Empirical work is so exciting to me because then you are like, "I am a little bit of a police detective." So you take a little bit of this thing that feels hard to measure, and then you can create hypotheses to link it to the eventual outcomes, to the extent that that thing that is hard to measure is something that is leading to efficient capital allocation.
Then, on average, you know, this feeling that you get about founders that are from the same alma mater should lead to good things as opposed to leading to bad things. And so, you know, that is exactly the right spirit of how to think about the work.

Show Links: Transcript
Guest Profiles:
Emmanuel Yimfor | Columbia Business School
Emmanuel Yimfor | LinkedIn
Emmanuel's Website
I get it. It seems crazy at first. However, when all the arguments are on the table you might be surprised how your mind may change.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
BoTorch is designed for researchers who want flexibility in Bayesian optimization.
The integration of BoTorch with PyTorch allows for differentiable programming.
Scalability at Meta involves careful software engineering practices and testing.
Open-source contributions enhance the development and community engagement of BoTorch.
LLMs can help incorporate human knowledge into optimization processes.
Max emphasizes the importance of clear communication of uncertainty to stakeholders.
The role of a researcher in industry is often more application-focused than in academia.
Max's team at Meta works on adaptive experimentation and Bayesian optimization.

Chapters:
08:51 Understanding BoTorch
12:12 Use Cases and Flexibility of BoTorch
15:02 Integration with PyTorch and GPyTorch
17:57 Practical Applications of BoTorch
20:50 Open Source Culture at Meta and BoTorch's Development
43:10 The Power of Open Source Collaboration
47:49 Scalability Challenges at Meta
51:02 Balancing Depth and Breadth in Problem Solving
55:08 Communicating Uncertainty to Stakeholders
01:00:53 Learning from Missteps in Research
01:05:06 Integrating External Contributions into BoTorch
01:08:00 The Future of Optimization with LLMs

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode,...
QBs and ‘cuffs take center stage in this week's Strat Sesh! The Ol' SFD goes solo in this mailbag Strat Sesh, heavy on names and in-season strategy! Hogue answers listener questions on the impact of the 2024 QB rookie class and how the landscape will change as a result; RB handcuffs to target (and why/to what degree); and in-season Quarterback X-Streaming. Plus, the backslides for Jayden Daniels and Bo Nix; the four categories of handcuffs; Bayesian theory in-season, and when start rates are the answer. All that and more, on this week's AMA episode!

************* JOIN THE SFSS DISCORD SERVER HERE FOR THE SUPERSIZED, ONGOING CONVERSATION ON SUPERFLEX!! *************

The SuperFlex SuperShow – one of many great podcasts from the Dynasty League Football (@DLFootball) Family of Podcasts – is hosted by John Hogue (@SuperFlexDude) and Tommy Blair (@FFTommyB), and always dedicated in loving memory to James “The Brain” Koutoulas. Featuring weekly dynasty football content focused on superflex, 2QB and other alternate scoring settings. Special thanks to Heart and Soul Radio for their song, “The Addiction,” and special thanks to the Dynasty League Football Family of Podcasts and the entire DLF staff for the ongoing support! Stay Sexy… and SuperFlex-y!
Today we talk about Gaza, conditions in Italian prisons, the anniversary of the sinking of the Bayesian, with its mysteries, and rampant telemarketing. ... Here is the link to subscribe to the Notizie a colazione WhatsApp channel: https://whatsapp.com/channel/0029Va7X7C4DjiOmdBGtOL3z To subscribe to the Telegram channel: https://t.me/notizieacolazione ... Here are the other Class Editori podcasts: https://milanofinanza.it/podcast Music: https://www.bensound.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Today's clip is from episode 138 of the podcast, with Mélodie Monod, François-Xavier Briol and Yingzhen Li. During this live show at Imperial College London, Alex and his guests delve into the complexities and advancements in Bayesian deep learning, focusing on uncertainty quantification, the integration of machine learning tools, and the challenges faced in simulation-based inference. The speakers discuss their current projects, the evolution of Bayesian models, and the need for better computational tools in the field. Get the full discussion here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
In this episode of the Barbell Rehab Podcast, we sit down with Dr. Susie Spirlock to discuss rehab and training. We chat about training women and unique considerations around hormones, menstrual cycles, and bone mineral density. We also discuss hypermobility, common misconceptions, and programming considerations. We wrap up by considering trends in the field around evidence-based practice. Susie can be found on Instagram at @dr.susie.squats. We hope you enjoy this episode!

Here are some follow-up resources for you to check out, including research articles and additional readings related to the topics discussed in this episode:
Move Your Bones Free 4-Week Beginner Strength Training Program
Free Using Intensity Based Training For The Phases of The Menstrual Cycle
20% Off Your First 2 Months In Supple Strength with Code BRM20
Beighton Score
Hospital Del Mar Criteria
Diagnostic Criteria for Hypermobile Ehlers-Danlos Syndrome (hEDS)
Defining the Clinical Complexity of hEDS and HSD: A Global Survey of Diagnostic Challenge, Comorbidities, and Unmet Needs: https://www.medrxiv.org/content/10.1101/2025.06.05.25329074v1.full.pdf
Current evidence shows no influence of women's menstrual cycle phase on acute strength performance or adaptations to resistance exercise training: https://pmc.ncbi.nlm.nih.gov/articles/PMC10076834/
Menstrual Cycle Phase Has No Influence on Performance-Determining Variables in Endurance-Trained Athletes: The FENDURA Project: https://pubmed.ncbi.nlm.nih.gov/38600646/
Sex differences in absolute and relative changes in muscle size following resistance training in healthy adults: a systematic review with Bayesian meta-analysis: https://pubmed.ncbi.nlm.nih.gov/40028215/
FREE Research Roundup Email Series | Get research reviews sent to your inbox, once a month, and stay up-to-date on the latest trends in rehab and fitness
The Barbell Rehab Method Certification Course Schedule | 2-days, 15 hours, and CEU approved
The Barbell Rehab Weightlifting Certification Course Schedule | 2-days, 15 hours, and CEU approved
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Bayesian deep learning is a growing field with many challenges.
Current research focuses on applying Bayesian methods to neural networks.
Diffusion methods are emerging as a new approach for uncertainty quantification.
The integration of machine learning tools into Bayesian models is a key area of research.
The complexity of Bayesian neural networks poses significant computational challenges.
Future research will focus on improving methods for uncertainty quantification.
Generalized Bayesian inference offers a more robust approach to uncertainty.
Uncertainty quantification is crucial in fields like medicine and epidemiology.
Detecting out-of-distribution examples is essential for model reliability.
Exploration-exploitation trade-off is vital in reinforcement learning.
Marginal likelihood can be misleading for model selection.
The integration of Bayesian methods in LLMs presents unique challenges.

Chapters:
00:00 Introduction to Bayesian Deep Learning
03:12 Panelist Introductions and Backgrounds
10:37 Current Research and Challenges in Bayesian Deep Learning
18:04 Contrasting Approaches: Bayesian vs. Machine Learning
26:09 Tools and Techniques for Bayesian Deep Learning
31:18 Innovative Methods in Uncertainty Quantification
36:23 Generalized Bayesian Inference and Its Implications
41:38 Robust Bayesian Inference and Gaussian Processes
44:24 Software Development in Bayesian Statistics
46:51 Understanding Uncertainty in Language Models
50:03 Hallucinations in Language Models
53:48 Bayesian Neural Networks vs Traditional Neural Networks
58:00 Challenges with Likelihood Assumptions
01:01:22 Practical Applications of Uncertainty Quantification
01:04:33 Meta Decision-Making with Uncertainty
01:06:50 Exploring Bayesian Priors in Neural Networks
01:09:17 Model Complexity and Data Signal
01:12:10 Marginal Likelihood and Model Selection
01:15:03 Implementing Bayesian Methods in LLMs
01:19:21 Out-of-Distribution Detection in LLMs

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer,...
Substack glory and livestream anxiety ... Reading eugenics into a dopey jeans ad ... Does Sydney Sweeney spell the end of “inclusive marketing”? ... Bob vs. Paul on IQ and “general intelligence” ... Paul reviews the new Billy Joel documentary ... The Epstein prison video snafu: a Bayesian take ... Ghislaine gets a free upgrade ...
What if biotech's biggest scaling challenge isn't technical—but philosophical? In this episode, Massimo Portincaso, founder and CEO of Arsenale Bioyards, explains why industrial biotech must be reimagined from the ground up. He challenges legacy “scale-up” thinking, highlighting biology's context dependency and the economic dead ends of retrofitted pharma models. From modular, AI-informed bioreactors to a scale-out strategy and data-first infrastructure, Massimo shares how his team is rewriting the rules of economic viability, manufacturing innovation, and organizational design. Discover why scaling out — not up — is the future of biomanufacturing.

---

Hey Climate Tech enthusiasts! Searching for new podcasts on sustainability? Check out the Leaders on a Mission podcast, where I interview climate tech leaders who are shaking up the industry and bringing us the next big thing in sustainable solutions. Join me for a deep dive into the future of green innovation, exploring the highs, lows, and everything in between of pioneering new technologies. Get an exclusive insight into how these leaders started their journeys, and how their cutting-edge products will make a real impact.
Tune in on…
YouTube: https://www.youtube.com/@leadersonamissionNet0
Spotify: https://open.spotify.com/show/7o41ubdkzChAzD9C53xH82
Apple Podcasts: https://podcasts.apple.com/us/podcast/leaders-on-a-mission/id1532211726
…to listen to the latest episodes!

Timestamps:
00:46 - Biology resists code scaling
03:18 - From BCG to biotech founder
07:33 - Why biotech remains niche
10:24 - Redesigning from first principles
13:40 - Biology's context dependency
16:55 - Intelligent design via Bayesian models
18:46 - Building the bioproduction stack
21:00 - Scale-out vs scale-up
25:56 - Rethinking the CDMO model
29:20 - Reinventing the capital stack
33:30 - Future sectors & applications
37:10 - Killing the org chart
40:12 - Complexity as a strategic asset

Useful links:
Arsenale Bioyards website: https://arsenale.bio/
Arsenale Bioyards LinkedIn: https://www.linkedin.com/company/arsenale-bioyards/
Massimo Portincaso LinkedIn: https://www.linkedin.com/in/massimo-portincaso-36a8795/
Leaders on a Mission website: https://cs-partners.net/podcasts/
Simon Leich's LinkedIn: https://www.linkedin.com/in/executive-talent-headhunter-agtech-foodtech-agrifoodtech-agritech/
JOIN THE CHANNEL: https://www.youtube.com/channel/UChjRIs14reAo-on9z5iHJFA/join
Find Merch: https://mattek.store/
Sign up to draft with us on UNDERDOG and use code DAVIS: https://play.underdogfantasy.com/p-davis-mattek
Code DAVIS is live on Fast Draft to play in the fastest tournaments in fantasy football. Download the app here: https://apps.apple.com/us/app/fastdraft-fantasy/id6478789910
Join Drafters Fantasy and get a 100% Deposit Match Bonus up to $100 with Code DAVIS. $2.5M in Prizes, Best Ball Total Points Format, Potential Overlay… https://drafters.com/refer/davis
GET 10% OFF RUN THE SIMS W/ CODE "ENDGAME": www.runthesims.com
Try Out UNABATED'S Premium Sports Betting + DFS Pick 'Em Tools: https://unabated.com/?ref=davis
Sign up for premium fantasy football content and get exclusive Discord access: www.patreon.com/davismattek
Subscribe to the AutoMattek Absolutes Newsletter: https://automattekabsolutes.beehiiv.com/
Download THE DRAFT CADDY: https://endgamesyndicate.com/membership-levels/?pa=DavisMattek

Timestamps:
00:00 Best Ball Fantasy Football Introduction
2:00 Best Ball Mania draft begins
14:00 Home League Team Review
19:30 Keaton Mitchell
33:00 Best Ball Mania Draft #2 Begins
45:20 Kyle Pitts
1:03:00 Shaidy Advice Joins The Show To Draft A Best Ball Mania
1:07:30 The Sims Explain Themselves
1:15:00 How sims change when you put your own rankings in it
1:37:30 Best Ball Mania Draft Begins with a Bayesian process

Audio-Only Podcast Feed For All Davis Mattek Streams: https://podcasts.apple.com/us/podcast/grinding-the-variance-a-davis-mattek-fantasy-football-pod/id1756145256
Apply to join us as a co-host! https://astrosoundbites.com/recruiting-2025 This week, Shashank, Cole and Cormac discuss a concept that has come up on many an ASB episode past: Bayesian statistics. They start by trying to wrap our heads around what a probability really means. Cole introduces us to a recent and attention-grabbing paper on a potential biosignature in the atmosphere of an exoplanet, with lots of statistics along the way. Then, Cormac brings up some counterpoints to this detection. They debate what it would take—statistically and scientifically—for a detection of biosignatures to cross the line from intriguing to compelling. New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI https://iopscience.iop.org/article/10.3847/2041-8213/adc1c8 Are there Spectral Features in the MIRI/LRS Transmission Spectrum of K2-18b? https://arxiv.org/abs/2504.15916 Insufficient evidence for DMS and DMDS in the atmosphere of K2-18 b. From a joint analysis of JWST NIRISS, NIRSpec, and MIRI observations https://arxiv.org/abs/2505.13407 Space Sound: https://www.youtube.com/watch?v=hGdk49LRB14
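The tension the hosts debate between a detection's statistical significance and its scientific plausibility can be made concrete with Bayes' theorem. A minimal sketch, with purely hypothetical numbers that are not taken from the K2-18b papers:

```python
# Illustrative only: how a prior belief interacts with detection evidence.
prior = 0.01              # P(biosignature present) before seeing the data
p_data_given_h = 0.95     # P(seeing this spectral feature | biosignature real)
p_data_given_not = 0.003  # P(seeing it anyway | no biosignature), a ~3-sigma false-alarm rate

# Bayes' theorem: P(H | data) = P(data | H) P(H) / P(data)
posterior = (p_data_given_h * prior) / (
    p_data_given_h * prior + p_data_given_not * (1 - prior)
)
print(f"posterior ≈ {posterior:.2f}")
```

Even a roughly 3-sigma feature lifts a skeptical 1% prior only to around three-in-four belief here, which is one way to frame why "intriguing" and "compelling" are different thresholds.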
CHEST August 2025, Volume 168, Issue 2 CHEST® journal's Editor in Chief Peter Mazzone, MD, MPH, FCCP, highlights key research published in the journal CHEST August 2025 issue, including an exploration of the impacts of abortion bans on pulmonary and critical care physicians, a Bayesian meta-analysis of machine listening for obstructive sleep apnea diagnosis, and more. Moderator: Peter Mazzone, MD, MPH, FCCP
Today's clip is from episode 137 of the podcast, with Robert Ness. Alex and Robert discuss the intersection of causal inference and deep learning, emphasizing the importance of understanding causal concepts in statistical modeling. The discussion also covers the evolution of probabilistic machine learning, the role of inductive biases, and the potential of large language models in causal analysis, highlighting their ability to translate natural language into formal causal queries. Get the full conversation here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Causal assumptions are crucial for statistical modeling.
Deep learning can be integrated with causal models.
Statistical rigor is essential in evaluating LLMs.
Causal representation learning is a growing field.
Inductive biases in AI should match key mechanisms.
Causal AI can improve decision-making processes.
The future of AI lies in understanding causal relationships.

Chapters:
00:00 Introduction to Causal AI and Its Importance
16:34 The Journey to Writing Causal AI
28:05 Integrating Graphical Causality with Deep Learning
40:10 The Evolution of Probabilistic Machine Learning
44:34 Practical Applications of Causal AI with LLMs
49:48 Exploring Multimodal Models and Causality
56:15 Tools and Frameworks for Causal AI
01:03:19 Statistical Rigor in Evaluating LLMs
01:12:22 Causal Thinking in Real-World Deployments
01:19:52 Trade-offs in Generative Causal Models
01:25:14 Future of Causal Generative Modeling

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant...
Olivia is a member of the Guild of the Rose and a total badass. Enjoy the intuitive and fun lesson in Bayesian reasoning she shared with me at VibeCamp.
The Blueprint Live Build - N8N: Post Prime Day Performance Automation
I had Brad Schoenfeld back on the Swole Radio Podcast to discuss optimal training for hypertrophy. We cover some of his recent research and perspectives:

0:23 Without fail: muscular adaptations in single set resistance training performed to failure or with repetitions in reserve
4:18 Is there an incremental benefit of training closer to failure?
7:39 Give it a rest: a systematic review with Bayesian meta-analysis on the effect of inter-set rest interval duration on muscle hypertrophy
16:38 How to program rep ranges for hypertrophy
24:34 Do cheaters prosper? Effect of externally supplied momentum during resistance training on measures of upper body muscle hypertrophy
28:10 Optimizing resistance training technique to maximize muscle hypertrophy: a narrative review
32:08 How do you know if an exercise is good for you?
37:24 How should people find their optimal training volume?
43:20 How to run deloads

Brad's IG: @bradschoenfeldphd
-------------------------------
My e-books: https://askdrswole.com/
MASS Research Review: https://www.massmember.com/a/2147986175/LwzhWs82 (This is an affiliate link - I'll receive a small commission when you use it)
Dream Physique Nutrition Course: https://dr-swole.thinkific.com/courses/dream-physique-nutrition
-------------------------------
Find me on social media:
YOUTUBE: https://www.youtube.com/@DrSwole
INSTAGRAM: http://instagram.com/dr_swole
FACEBOOK GROUP: https://www.facebook.com/groups/drswole
TIKTOK: https://www.tiktok.com/@dr_swole/
-------------------------------
About me: I'm a medical doctor and pro natural physique athlete based in Vancouver, Canada. I share evidence-based perspectives on natural bodybuilding, and seek to help people achieve health, wealth, and happiness.
-------------------------------
Disclaimers: Consider seeing a physician to assess your readiness before beginning any fitness program. Information presented here is to be applied intelligently in the individual context.
I do not assume liability for any loss incurred by using information presented here. -------------------------------
Today's clip is from episode 136 of the podcast, with Haavard Rue & Janet van Niekerk. Alex, Haavard and Janet explore the world of Bayesian inference with INLA, a fast and deterministic method that revolutionizes how we handle large datasets and complex models. Discover the power of INLA, and why it can make your models go much faster! Get the full conversation here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Unlock the secret to high-impact ad creative and stop fighting the algorithms! In this episode, Jordan West sits down with performance marketing expert Andrew Faris to revolutionize the way you think about ad testing and creative strategy. Forget rigid A/B experiments—here's what really works on platforms like Meta and TikTok in 2025:

Explore & Expand vs. Outdated A/B Testing
• Discover why Meta's Bayesian engine fights your split-tests—and how switching to an “Explore & Expand” mindset turbocharges learning.
• Learn to surface winning angles quickly, then scale them with minimal extra effort.

Harnessing AI for Wild, CGI-Level Concepts
• See real examples of “unhinged” AI-generated spots—think dinosaurs parachuting into lava—and learn how to keep your core message crystal clear.
• Find out which AI tools are best for scripting, storyboarding, and turning crazy ideas into thumb-stopping ads.

Prioritizing Human-to-Human Authenticity
• Why polished avatars and lip-sync bots may backfire—and what your audience really craves instead.
• Strategies for injecting genuine emotion, social proof, and community hooks into every campaign.

Creative Volume & Variation: How Much Is Too Much?
• The surprising truth about test volume: when “more creative” stops adding value—and how to find the sweet spot for your budget.
• A framework for balancing length, hook styles, and visual formats so that every variation still drives toward a single, powerful message.

Actionable Tips to Maximize Engagement
• Messaging hierarchies: the one element you must nail before tweaking format or length.
• Quick wins for rotating hooks, swapping B-roll, and layering in new calls-to-action—without blowing your production timeline.

Whether you're a seasoned performance marketer or simply curious about the future of advertising, you'll walk away with:
A clear roadmap for ditching endless A/B loops
Creative prompts to push your AI experiments beyond the ordinary
Practical checklists for testing and scaling your best-performing ads

Guest info:
LinkedIn: https://www.linkedin.com/in/andrew-faris-980b84108/
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
INLA is a fast, deterministic method for Bayesian inference.
INLA is particularly useful for large datasets and complex models.
The R INLA package is widely used for implementing INLA methodology.
INLA has been applied in various fields, including epidemiology and air quality control.
Computational challenges in INLA are minimal compared to MCMC methods.
The Smart Gradient method enhances the efficiency of INLA.
INLA can handle various likelihoods, not just Gaussian.
SPDEs allow for more efficient computations in spatial modeling.
The new INLA methodology scales better for large datasets, especially in medical imaging.
Priors in Bayesian models can significantly impact the results and should be chosen carefully.
Penalized complexity priors (PC priors) help prevent overfitting in models.
Understanding the underlying mathematics of priors is crucial for effective modeling.
The integration of GPUs in computational methods is a key future direction for INLA.
The development of new sparse solvers is essential for handling larger models efficiently.

Chapters:
06:06 Understanding INLA: A Comparison with MCMC
08:46 Applications of INLA in Real-World Scenarios
11:58 Latent Gaussian Models and Their Importance
15:12 Impactful Applications of INLA in Health and Environment
18:09 Computational Challenges and Solutions in INLA
21:06 Stochastic Partial Differential Equations in Spatial Modeling
23:55 Future Directions and Innovations in INLA
39:51 Exploring Stochastic Differential Equations
43:02 Advancements in INLA Methodology
50:40 Getting Started with INLA
56:25 Understanding Priors in Bayesian Models

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad
We just got a new paper that compares initial treatment with adenosine versus diltiazem for adults with SVT in the ED. Wouldn't it be great if it turned out that diltiazem was just as effective as adenosine, if not more so, without the crappy feeling? Yeah, that'd be great, but what do we do with statistically insignificant results? Is there, perhaps, a way to save this “insignificant” paper? Fear not, Bayes is here! Yes, that's right, Dr. Jarvis is grabbing this new paper and diving straight back into that deep dark rabbit hole of Bayesian analysis.

Citation:
1. Lee CA, Morrissey B, Chao K, Healy J, Ku K, Khan M, Kinteh E, Shedd A, Garrett J, Chou EH: Adenosine Versus Fixed-Dose Intravenous Bolus Diltiazem on Reversing Supraventricular Tachycardia in The Emergency Department: A Multi-Center Cohort Study. The Journal of Emergency Medicine. 2025;August 1;75:55–64.

FAST25 | May 19-21, 2025 | Lexington, KY
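To give a flavor of how Bayes can "save" a non-significant comparison, here is a minimal Monte Carlo sketch. The counts are invented for illustration (they are not the Lee et al. data), and the flat Beta(1, 1) priors are one simple choice, not necessarily the analysis Dr. Jarvis walks through:

```python
import random

random.seed(1)

# Hypothetical conversion counts per arm -- illustrative only, NOT the paper's data.
adeno_success, adeno_n = 78, 100
dilt_success, dilt_n = 84, 100

# A Beta(1, 1) flat prior updated with binomial data gives a Beta posterior.
def posterior_draw(successes, n):
    return random.betavariate(1 + successes, 1 + n - successes)

# Posterior probability that diltiazem's conversion rate is within 5 points
# of adenosine's or better -- a direct probability statement, instead of a
# binary "significant / not significant" verdict.
draws = 100_000
p_dilt_noninferior = sum(
    posterior_draw(dilt_success, dilt_n) >= posterior_draw(adeno_success, adeno_n) - 0.05
    for _ in range(draws)
) / draws

print(f"P(diltiazem within 5 points of adenosine, or better) ≈ {p_dilt_noninferior:.2f}")
```

The payoff is the shape of the answer: a p-value above 0.05 says only "not proven," while the posterior quantifies how plausible non-inferiority actually is given the data and prior.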
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked]

“Just as we don't accept students using AI to write their essays, we will not accept districts using AI to supplant the critical role of teachers.” — Arthur Steinberg, American Federation of Teachers‑PA, reacting to Alpha's cyber‑charter bid, January 2025

In January 2025, the charter school application of “Unbound Academy”, a subsidiary of “2 Hour Learning, Inc.”, lit up the education press: two hours of “AI‑powered” academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered “another rich‑kid scam.” More sophisticated critics dismissed the pitch as “selective data from expensive private schools”. But nowhere on the internet is there a detailed, non-partisan description of what the “2 Hour Learning” program actually is, let alone an objective third-party analysis to back up its claims.

2 Hour Learning's flagship school is the “Alpha School” in Austin, Texas. The Alpha homepage makes three claims:

1. Love School
2. Learn 2x in two hours per day
3. Learn Life Skills

Only the second claim seems to be controversial, which may be exactly why it is the claim the Alpha PR team focuses on. That PR campaign makes three more sub-claims about what the two-hour, 2x learning really means:

1. “Learn 2.6x faster.” (on average)
2. “Only two hours of academics per day.”
3. “Powered by AI (not teachers).”

If all of this makes your inner Bayesian flinch, you're in good company. After twenty‑odd years of watching shiny education fixes wobble and crash—KIPP, AltSchool, Summit Learning, One Laptop per Child, No Child Left Behind, MOOCs, Khan‑for‑Everything—you should be skeptical.
Either Alpha is (a) another program for the affluent propped up by selection effects, or (b) a clever way to turn children into joyless speed‑reading calculators. Those were, more or less, the two critical camps that emerged when Alpha's parent company was approved to launch the tuition‑free Arizona charter school this past January. Unfortunately, the public evidence base on whether this is “real” is thin in both directions. Alpha's own material is glossy and elliptical; mainstream coverage either repeats Alpha's talking points or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale where all the other education initiatives failed.

I first heard about Alpha in May 2024, and in the absence of randomized‑controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself. (Unfortunately, despite trying my best, we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was.) Since last autumn I've collected the sort of on‑the‑ground detail that doesn't surface in press releases and isn't available anywhere online: long chats with founders, curriculum leads, “guides” (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard, including my own.
I hope this seven-part review shares what the program actually is, with more open-mindedness than the critics but in a way that would never get past an Alpha public relations gatekeeper: https://www.astralcodexten.com/p/your-review-alpha-school
Get 10% off Hugo's "Building LLM Applications for Data Scientists and Software Engineers" online course!

Today's clip is from episode 135 of the podcast, with Teemu Säilynoja.

Alex and Teemu discuss the importance of simulation-based calibration (SBC). They explore the practical implementation of SBC in probabilistic programming languages, the challenges faced in developing SBC methods, and the significance of both prior and posterior SBC in ensuring model reliability. The discussion emphasizes the need for careful model implementation and inference algorithms to achieve accurate calibration.

Get the full conversation here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
The JournalFeed podcast for the week of 23-27, 2025.

These are summaries from just 2 of the 5 articles we cover every week! For access to more, please visit JournalFeed.org for details about becoming a member.

Wednesday Spoon Feed: Bayesian analysis of the use of EpiDex in bronchiolitis demonstrates a reduced probability of hospitalization for bronchiolitis, although highly skeptical clinicians may require additional evidence.

Friday Spoon Feed: Over half of transferred patients with facial fractures don't need treatment or admission. This study proposes smart, evidence-based guidelines – Facial Injury Guidelines, or FIG – to help healthcare systems save money, time, and beds (and maybe a few ambulance rides), pending future validation.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
- Simulation-based calibration (SBC) checks model implementation.
- SBC involves drawing realizations from the prior and generating prior predictive data.
- Visual predictive checking is crucial for assessing model predictions.
- Prior predictive checks should be done before looking at data.
- Posterior SBC focuses on the area of parameter space most relevant to the data.
- Challenges in SBC include inference time.
- Visualizations complement numerical metrics in Bayesian modeling.
- Amortized Bayesian inference benefits from SBC for quick posterior checks.
- The calibration of Bayesian models is more intuitive than that of frequentist models.
- Choosing the right visualization depends on data characteristics.
- Using multiple visualization methods can reveal different insights.
- Visualizations should be viewed as models of the data.
- Goodness-of-fit tests can enhance visualization accuracy.
- Uncertainty visualization is crucial but often overlooked.

Chapters:
09:53 Understanding Simulation-Based Calibration (SBC)
15:03 Practical Applications of SBC in Bayesian Modeling
22:19 Challenges in Developing Posterior SBC
29:41 The Role of SBC in Amortized Bayesian Inference
33:47 The Importance of Visual Predictive Checking
36:50 Predictive Checking and Model Fitting
38:08 The Importance of Visual Checks
40:54 Choosing Visualization Types
49:06 Visualizations as Models
55:02 Uncertainty Visualization in Bayesian Modeling
01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand...
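The SBC loop described in the takeaways (draw from the prior, simulate data, fit, check ranks) can be sketched end-to-end with a toy conjugate model. This is my own illustration, not code from the episode: a Normal model with known noise is used so the posterior is exact, which is what makes the rank statistics come out uniform.

```python
# Simulation-based calibration (SBC) sketch on a conjugate Normal model.
# If the inference is implemented correctly, the rank of the true parameter
# among posterior draws is uniform across replications.
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_draws, n_obs = 500, 99, 20
prior_mu, prior_sd, obs_sd = 0.0, 1.0, 1.0
ranks = []

for _ in range(n_sims):
    theta = rng.normal(prior_mu, prior_sd)        # 1. draw from the prior
    y = rng.normal(theta, obs_sd, size=n_obs)     # 2. simulate prior predictive data
    # 3. "fit": exact conjugate posterior for the mean
    post_var = 1 / (1 / prior_sd**2 + n_obs / obs_sd**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + y.sum() / obs_sd**2)
    draws = rng.normal(post_mu, np.sqrt(post_var), size=n_draws)
    ranks.append(np.sum(draws < theta))           # 4. rank of truth among draws

# A flat histogram of ranks (0..n_draws) indicates a calibrated implementation
hist, _ = np.histogram(ranks, bins=10, range=(0, n_draws + 1))
print(hist)
```

Replace the exact conjugate step with your actual sampler and any systematic bias or over/under-dispersion shows up as a sloped or U-shaped rank histogram, which is exactly the check the episode discusses.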
ICYMI, I'll be in London next week for a live episode of the Learning Bayesian Statistics podcast.
Today's clip is from episode 134 of the podcast, with David Kohns.

Alex and David discuss the future of probabilistic programming, focusing on advancements in time series modeling, model selection, and the integration of AI in prior elicitation. The discussion highlights the importance of setting appropriate priors, the challenges of computational workflows, and the potential of normalizing flows to enhance Bayesian inference.

Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Train the Best. Change EMS.

Howdy, y'all, I'm Dr. Jeff Jarvis, and I'm the host of the EMS Lighthouse Project podcast, but I'm also the medical director for the new EMS system we're building in Fort Worth, Texas. We are looking for an experienced critical care paramedic who is an effective and inspiring educator to lead the initial and continuing training and credentialing of a new team of critical care paramedics who will be responding to our highest-acuity calls. The salary is negotiable but starts between $65,000 and $80,000 a year for this office position. Whether y'all wear cowboy boots or Birkenstocks, Fort Worth can be a great place to live and work. So if you're ready to create a world-class EMS system and change the EMS world with us, give us a call at 817-953-3083. Take care, y'all.

The next time you go to intubate a patient, should you give the sedative before the paralytic or the paralytic before the sedative? Does it matter? And what the hell does Bayes have to do with any of this? Dr. Jarvis reviews a paper that uses Bayesian statistics to calculate the association between drug sequence and first-attempt failure. Then he returns to Nerd Valley to talk about how to interpret the 95% confidence intervals derived from frequentist statistics compared with the 95% credible intervals that come from Bayesian statistics.

Citations:
1. Catoire P, Driver B, Prekker ME, Freund Y: Effect of administration sequence of induction agents on first‐attempt failure during emergency intubation: A Bayesian analysis of a prospective cohort. Academic Emergency Medicine. 2025;February;32(2):123–9.
2. Casey JD, Janz DR, Russell DW, Vonderhaar DJ, Joffe AM, Dischert KM, Brown RM, Zouk AN, Gulati S, Heideman BE, et al.: Bag-Mask Ventilation during Tracheal Intubation of Critically Ill Adults. N Engl J Med. 2019;February 28;380(9):811–21.
3. Greer A, Hewitt M, Khazaneh PT, Ergan B, Burry L, Semler MW, Rochwerg B, Sharif S: Ketamine Versus Etomidate for Rapid Sequence Intubation: A Systematic Review and Meta-Analysis of Randomized Trials. Critical Care Medicine. 2025;February;53(2):e374–83.
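To make the confidence-versus-credible-interval contrast concrete, here is a small sketch with made-up first-attempt-success counts (not data from any of the papers above): a Wald confidence interval next to a Beta-posterior credible interval for the same proportion.

```python
# Frequentist 95% confidence interval vs. Bayesian 95% credible interval
# for a success proportion, on hypothetical counts.
import numpy as np
from scipy.stats import beta

successes, n = 68, 80  # hypothetical: 68 first-attempt successes in 80 patients

# Frequentist: Wald 95% CI around the point estimate
p_hat = successes / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: uniform Beta(1, 1) prior -> Beta posterior, central 95% interval
post = beta(1 + successes, 1 + n - successes)
cred = post.ppf([0.025, 0.975])

print(f"95% confidence interval: {wald[0]:.3f} to {wald[1]:.3f}")
print(f"95% credible interval:   {cred[0]:.3f} to {cred[1]:.3f}")
```

The numbers land close together here, but the interpretations differ in the way the episode describes: the credible interval licenses the direct statement "there is a 95% probability the success rate lies in this range," while the confidence interval only describes the long-run behavior of the procedure.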