Podcasts about Bayesian methods

  • 793 podcasts
  • 2,070 episodes
  • 44m avg duration
  • 5 weekly new episodes
  • Latest episode: Nov 5, 2025

[Popularity chart: 2017–2024]

Best podcasts about Bayesian methods

Show all podcasts related to Bayesian methods

Latest podcast episodes about Bayesian methods

Learning Bayesian Statistics
BITESIZE | Why is Bayesian Deep Learning so Powerful?

Learning Bayesian Statistics

Nov 5, 2025 · 19:00 · Transcription available


Today's clip is from episode 144 of the podcast, with Maurizio Filippone. In this conversation, Alex and Maurizio delve into the intricacies of Gaussian processes and their deep learning counterparts. They explain the foundational concepts of Gaussian processes, the transition to deep Gaussian processes, and the advantages they offer in modeling complex data. The discussion also touches on practical applications, model selection, and the evolving landscape of machine learning, particularly in relation to transfer learning and the integration of deep learning techniques with Gaussian processes.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Progress, Potential, and Possibilities
Dr. Adnan Masood, Ph.D. - Chief AI Architect, UST - AI, Quantum Computing And Future Digital Innovations

Progress, Potential, and Possibilities

Nov 5, 2025 · 66:37


Send us a textDr. Adnan Masood, Ph.D. ( https://medium.com/@adnanmasood ) is Chief AI Architect at UST ( https://www.ust.com/ ), the giant global digital transformation solutions company.  Dr. Masood brings over two decades of leadership in artificial intelligence, machine learning, and large-scale system architecture, bridging cutting-edge academic research with real-world business outcomes.At UST, Dr. Masood leads the firm's global strategy for cognitive computing, AI, ML, and generative-AI initiatives. He oversees the development of scalable data-integration platforms, high-performance recommendation systems, and enterprise-grade AI solutions — all while fostering deep collaborations with elite research institutions, including Stanford Artificial Intelligence Laboratory (SAIL) and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Before his current role, Dr. Masood's experience spanned financial-services technology, start-ups, systems architecture, and cloud-native engineering — always grounded in a strong research mindset focused on machine learning, Bayesian belief networks, and enterprise engineering. Recognized as a Microsoft Regional Director and a Microsoft MVP for Artificial Intelligence, Dr. Masood is not only a practitioner but a thought leader and educator. He has taught data science, spoken at international conferences, authored best-selling books on functional programming and AI governance, and mentors the next generation — for example volunteering as a STEM robotics coach for middle-school students. In his role at UST, Dr. Masood has also helped shape the company's AI-center-of-excellence model — guiding clients to build governance, cross-functional AI teams, and measurable value from AI investments.#AdnanMasood #UST #AI #ML #GenerativeAI #ArtificialIntelligence #MachineLearning #CognitiveServices #CognitiveRobotics #QuantumComputation #QuantumComputing #Cryptography #MicrosoftMVP #MIT #Stanford#ProgressPotentialAndPossibilities #IraPastor #Podcast #Podcaster #Podcasting #ViralPodcast #STEM #Innovation #Science #Technology #ResearchSupport the show

Mobile Dev Memo Podcast
Season 6, Episode 16: Can an LLM evaluate ad creative? (with Luca Fiaschi)

Mobile Dev Memo Podcast

Nov 4, 2025 · 47:07


My guest on this episode of the podcast is Luca Fiaschi, a machine learning expert who previously held executive data science roles at MistPlay, StitchFix, and HelloFresh. Luca is now a Partner for the Generative AI vertical at PyMC Labs, a consultancy that specializes in the application of Bayesian methods to business problems and which maintains the open source PyMC library for Bayesian statistical modeling as well as the open source PyMC Marketing media mix modeling library.

The subject of my discussion with Luca is PyMC Labs' recent paper, LLMs Reproduce Human Purchase Intent via Semantic Similarity Elicitation of Likert Ratings. I found the paper fascinating and wrote an overview on LinkedIn; the paper's authors (from PyMC Labs as well as Colgate) use an LLM to score product concepts, finding that the distribution of LLM-produced scores can be comparable to that produced by human panels.

Among other things, the podcast episode covers:
  • Background on the paper, and how the partnership with Colgate-Palmolive came about.
  • An overview of the LLM querying methodologies used in the paper and how they produced score distributions.
  • The approaches used in the paper for calculating similarities between the LLM-produced and human-produced scores.
  • The conclusions and findings of the paper.
  • The implications of the paper for marketing creative ideation and the use of LLMs for evaluating product and advertising concepts.

Thanks to the sponsors of this week's episode of the Mobile Dev Memo podcast:
  • Xsolla. With the Xsolla Web Shop, you can create a direct storefront, cut fees down to as low as 5%, and keep players engaged with bundles, rewards, and analytics.
  • INCRMNTAL. True attribution measures incrementality, always on.
  • Universal Ads is Comcast's self-serve TV ads platform that lets you launch campaigns in minutes across premium inventory from NBC, Paramount, Warner Bros. Discovery, Roku, and more.

Interested in sponsoring the Mobile Dev Memo podcast? Contact Marketecture.

Learning Bayesian Statistics
#144 Why is Bayesian Deep Learning so Powerful, with Maurizio Filippone

Learning Bayesian Statistics

Oct 30, 2025 · 88:22 · Transcription available


Sign up for Alex's first live cohort, about Hierarchical Model building!
Get 25% off "Building AI Applications for Data Scientists and Software Engineers"

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
  • Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
  • Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without throwing away principled Bayes.
  • MCMC in practice: Clever parameterizations and gradient-based samplers tighten mixing and efficiency; use MCMC when you need gold-standard posteriors.
  • Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
  • Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty – even if they're approximations (see the sketch below).
  • Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
  • Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute.
  • Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
  • Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
  • Where we're headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.

Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
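The "Uncertainty that ships" takeaway mentions Monte Carlo dropout. As a rough illustration of the idea (not code from the episode; the tiny network, its sizes, and the toy inputs are all hypothetical), the trick is simply to keep dropout active at prediction time and summarize many stochastic forward passes:

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Hypothetical toy regressor with dropout after each hidden layer."""
    def __init__(self, in_dim=1, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Run repeated stochastic forward passes with dropout left on."""
    model.train()  # keeps dropout active; no gradient steps are taken here
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Mean as the prediction, standard deviation as a cheap uncertainty band
    return preds.mean(dim=0), preds.std(dim=0)

model = MCDropoutNet()  # in practice you would train it first
x_new = torch.linspace(-2.0, 2.0, 50).unsqueeze(-1)
mean, std = mc_dropout_predict(model, x_new)
```

The appeal, as the takeaway notes, is that this costs only extra forward passes, at the price of being a fairly crude approximation to a true posterior.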

Mind Over Stress
Gain Big Mood Boost With 30 Minutes Less Sitting

Mind Over Stress

Oct 28, 2025 · 6:25


In this episode, you discover new research showing how 30 minutes less of sitting combined with light activity for those 30 minutes boosts mood and wellbeing the following day.Resources:"Swap 30 Minutes of Sitting for Light Activity to Boost Next-Day Mood". "NeuroScienceNews.com". October 24, 2025. Link: https://neurosciencenews.com/movement-mood-neuroscience-29852/"Daily, prospective associations of sleep, physical activity, and sedentary behaviour with affect: A Bayesian multilevel compositional data analysis". "ScienceDirect.com" Link: https://www.sciencedirect.com/science/article/pii/S1469029225001967?via%3DihubNote: Original research article published in "Psychology of Sport and Exercise".---Host: Stephen Carter - Website: https://StressReliefRadio.com - Email: CarterMethod@gmail.com---Technical information:Recording and initial edits with Twisted Wave. Additional edits with Amadeus Pro, Levelator, Audyllic, and Soften. Final edits and rendering with Hindenburg Pro. Microphone: Rode Procaster.---Keywords:emotional_wellbeing, stress_relief,---

Less Stress More Joy!
Gain Big Mood Boost With 30 Minutes Less Sitting

Less Stress More Joy!

Oct 28, 2025 · 6:25


In this episode, you discover new research showing how 30 minutes less of sitting combined with light activity for those 30 minutes boosts mood and wellbeing the following day.Resources:"Swap 30 Minutes of Sitting for Light Activity to Boost Next-Day Mood". "NeuroScienceNews.com". October 24, 2025. Link: https://neurosciencenews.com/movement-mood-neuroscience-29852/"Daily, prospective associations of sleep, physical activity, and sedentary behaviour with affect: A Bayesian multilevel compositional data analysis". "ScienceDirect.com" Link: https://www.sciencedirect.com/science/article/pii/S1469029225001967?via%3DihubNote: Original research article published in "Psychology of Sport and Exercise".---Host: Stephen Carter - Website: https://StressReliefRadio.com - Email: CarterMethod@gmail.com---Technical information:Recording and initial edits with Twisted Wave. Additional edits with Amadeus Pro, Levelator, Audyllic, and Soften. Final edits and rendering with Hindenburg Pro. Microphone: Rode Procaster.---Keywords:emotional_wellbeing, stress_relief,---

Science Salon
Charles Murray: Why I'm Taking Religion Seriously

Science Salon

Oct 25, 2025 · 103:55


Michael Shermer sits down with Charles Murray (author of The Bell Curve, Coming Apart, and now Taking Religion Seriously) for a riveting 100-minute conversation about Murray's late-life turn from Harvard-bred agnosticism (“Smart people don't believe that stuff anymore”) to Bayesian theism (“I put the afterlife at just over 50%”). This wide-ranging discussion explores the evidence for the existence of God and the afterlife, the problem of evil, and the historical growth of Christianity. They also delve into topics such as the nature of consciousness, terminal lucidity, and even evolutionary vs. religious perspectives on love. A thought-provoking exploration for skeptics, seekers, and anyone wondering whether the universe has a purpose. Charles Murray is a policy analyst educated at Harvard and MIT and currently serves as the Hayek Emeritus Scholar at the American Enterprise Institute. He is the author of several influential books, including the controversial The Bell Curve, Coming Apart, and Facing Reality. His most recent book is Taking Religion Seriously.

The Archaeology Podcast Network Feed
Horses (Part 3) The Pawnee, the Plains, and the Spanish Caribbean with Dr. Carlton Shield Chief Gover - Ethno 26

The Archaeology Podcast Network Feed

Oct 25, 2025 · 47:13


In this third installment of the “Horse Series,” David sits down with Dr. Carlton Shield Chief Gover to explore the intersections of Indigenous oral traditions, radiocarbon dating, and the archaeology of horses across the Great Plains and the Caribbean.Carlton shares how Pawnee oral traditions align with archaeological evidence, revealing new insights into the transitions from hunter-gatherer to agricultural societies. The conversation expands into how the reintroduction of horses revolutionized Plains warfare, movement, and culture — transforming not just how people traveled, but how they defined bravery, honor, and trade.The episode then dives underwater — literally — as Carlton recounts his work with the Indiana University Underwater Science Program in the Dominican Republic. From Spanish shipwrecks to 400-year-old hazelnuts used to fight scurvy, the discussion highlights how horses, colonization, and trade converged across continents and oceans.Topics CoveredIntroduction to Carlton Shield Chief Gover's background and Pawnee heritageMerging radiocarbon dating with Indigenous oral historiesThe importance of corn, maize agriculture, and Plains village lifeHow the horse transformed Indigenous cultures and warfareThe practice of “counting coup” and individual honor in combatThe spread of horses before European contactCarlton's archaeological work in Ukraine and comparisons to the Great PlainsUnderwater archaeology in the Dominican RepublicSpanish shipwrecks, horseshoes, and gold-gilded stirrupsHazelnuts as a 16th-century Spanish cure for scurvyDangers and logistics of underwater fieldworkHow early Caribbean horses may connect genetically to modern mustangsThe future of Plains and underwater archaeologyAbout the GuestDr. Carlton Shield Chief Gover is a citizen of the Pawnee Nation and a leading voice in Indigenous and Plains archaeology. His research integrates oral histories, Bayesian radiocarbon analysis, and archaeological evidence to create a fuller understanding of the Great Plains' deep past. He currently serves as Assistant Professor and Curator of Archaeology at the University of Kansas and hosts The Great Plains Archaeology Podcast.Follow Carlton on InstagramListen to The Great Plains Archaeology PodcastMentioned in This EpisodeHoof Beats: The Horse in Human History — Dr. William TaylorCassidy Thornhill's work on the Blacks Fork HorseYvette and Paulette Steeves' research on pre-contact horsesIndiana University Underwater Science Program (Dr. Charles Beeker)University of Kansas Natural History MuseumKey Quote“When you reanalyze radiocarbon data with Indigenous oral traditions, you actually illustrate a much more holistic picture of human history.” — Dr. Carlton Shield Chief GoverTranscriptsFor a rough transcript head over to: https://www.archaeologypodcastnetwork.com/ethnocynology/26Links:davidianhowe.comDavidianhowe.com/storeArchPodNetAPN Website: https://www.archpodnet.comAPN on Facebook: https://www.facebook.com/archpodnetAPN on Twitter: https://www.twitter.com/archpodnetAPN on Instagram: https://www.instagram.com/archpodnetAPN ShopAffiliatesMotion Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

CEO Blindspots
"The Value Blindspot" - with Dr. Carlos Carvalho, President of UATX

CEO Blindspots

Oct 24, 2025 · 22:29


What happens when leaders try to serve everyone?They end up serving no one. UATX President Dr. Carlos Carvalho reveals “The Value Blindspot” reshaping higher ed — and how CEOs can avoid the same trap.===========CEO Blindspots® Podcast Guest: Dr. Carlos CarvalhoDr. Carlos Carvalho is the President of the University of Austin. Prior to taking on this role, he spent 15 years as a professor at the University of Texas at Austin's McCombs School of Business, where he held the La Quinta Centennial Professorship and founded the Salem Center for Policy. A native of Brazil, Dr. Carvalho earned his doctorate in statistics from Duke University and has also taught at the University of Chicago Booth School of Business. His research focuses on Bayesian statistics in complex, high-dimensional problems with applications ranging from economics to genetics to public policy. At UATX, he is leading a bold effort to build a new university that stands for American principles and academic excellence.

Learning Bayesian Statistics
BITESIZE | Are Bayesian Models the Missing Ingredient in Nutrition Research?

Learning Bayesian Statistics

Oct 23, 2025 · 23:14 · Transcription available


Sign up for Alex's first live cohort, about Hierarchical Model building
Soccer Factor Model Dashboard

Today's clip is from episode 143 of the podcast, with Christoph Bamberg. Christoph shares his journey into Bayesian statistics and computational modeling, the challenges faced in academia, and the technical tools used in research. Alex and Christoph delve into a specific study on appetite regulation and cognitive performance, exploring the implications of framing in psychological research and the importance of careful communication in health-related contexts.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

This Week in Cardiology
Oct 17 2025 This Week in Cardiology

This Week in Cardiology

Oct 17, 2025 · 32:01


Another knock against the antiplatelet/anticoagulant combo, polypills in HF, the physical exam of the future, and the problem of underpowered trials that even Bayesian analyses cannot rescue are the topics John Mandrola, MD, discusses in this week's podcast. This podcast is intended for healthcare professionals only. To read a partial transcript or to comment, visit: https://www.medscape.com/twic I Listener Feedback Trends Study https://www.heartrhythmjournal.com/article/S1547-5271(11)00496-6/fulltext II Another knock against the Antiplatelet/Anticoagulation combination “Antiplatelet Plus Oral Anticoagulant Lowers Stroke, Raises Bleeding Risk” https://www.medscape.com/viewarticle/antiplatelet-plus-oral-anticoagulant-lowers-stroke-raises-2025a1000re0 ATIS-NVAF Trial https://jamanetwork.com/journals/jamaneurology/fullarticle/2839511 AQUATIC trial https://www.nejm.org/doi/abs/10.1056/NEJMoa2507532 III Polypill for HFrEF A Multilevel Polypill for Patients With HFrEF https://www.jacc.org/doi/10.1016/j.jacadv.2025.102195 IV The Physical Exam of the Future Point-of-Care Ultrasound https://doi.org/10.1016/j.jchf.2025.102707 V More on Underpowered Trials – GA vs Moderate Sedation in IV stroke SEGA Trial https://jamanetwork.com/journals/jamaneurology/fullarticle/2839838 Bayesian Analyses of CV Trials https://doi.org/10.1016/j.cjca.2021.03.014 You may also like: The Bob Harrington Show with the Stephen and Suzanne Weiss Dean of Weill Cornell Medicine, Robert A. Harrington, MD. https://www.medscape.com/author/bob-harrington Questions or feedback, please contact news@medscape.net

Stats + Stories
The Age of the Supercentenarian | Stats + Stories Episode 229 (REPOST)

Stats + Stories

Oct 16, 2025 · 26:34


When American comedian and actor Betty White died, fans lamented the fact that she had just missed making it to her 100th birthday. They felt she'd been robbed of achieving a significant life moment. Some researchers think that this century could see more people making it to that moment and beyond. That's the focus of this episode of Stats and Stories with guest Michael Pearce. Michael Pearce is a PhD candidate in Statistics at the University of Washington, working under the supervision of Elena A. Erosheva. His primary research interests include preference learning and developing Bayesian statistical models for social science problems. In his spare time, Michael enjoys running, biking, and paddling around the Puget Sound.

Learning Bayesian Statistics
#143 Transforming Nutrition Science with Bayesian Methods, with Christoph Bamberg

Learning Bayesian Statistics

Oct 15, 2025 · 72:56 · Transcription available


Sign up for Alex's first live cohort, about Hierarchical Model building!

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
  • Bayesian mindset in psychology: Why priors, model checking, and full uncertainty reporting make findings more honest and useful.
  • Intermittent fasting & cognition: A Bayesian meta-analysis suggests effects are context- and age-dependent – and often small but meaningful.
  • Framing matters: The way we frame dietary advice (focus, flexibility, timing) can shape adherence and perceived cognitive benefits.
  • From cravings to choices: Appetite, craving, stress, and mood interact to influence eating and cognitive performance throughout the day.
  • Define before you measure: Clear definitions (and DAGs to encode assumptions) reduce ambiguity and guide better study design.
  • DAGs for causal thinking: Directed acyclic graphs help separate hypotheses from data pipelines and make causal claims auditable (see the sketch below).
  • Small effects, big implications: Well-estimated "small" effects can scale to public-health relevance when decisions repeat daily.
  • Teaching by modeling: Helping students write models (not just run them) builds statistical thinking and scientific literacy.
  • Bridging lab and life: Balancing careful experiments with real-world measurement is key to actionable health-psychology insights.
  • Trust through transparency: Openly communicating assumptions, uncertainty, and limitations strengthens scientific credibility.

Chapters:
10:35 The Struggles of Bayesian Statistics in Psychology
22:30 Exploring Appetite and Cognitive Performance
29:45 Research Methodology and Causal Inference
36:36 Understanding Cravings and Definitions
39:02 Intermittent Fasting and Cognitive Performance
42:57 Practical Recommendations for Intermittent Fasting
49:40 Balancing Experimental Psychology and Statistical Modeling
55:00 Pressing Questions in Health Psychology
01:04:50 Future Directions in Research

Thank you to my Patrons for...
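For readers unfamiliar with the DAG takeaway above, here is a minimal sketch of what "encoding assumptions as a DAG" can look like in code. The variables and arrows are invented for illustration and are not taken from Christoph's study; the point is only that the assumed causal structure becomes an explicit, checkable object.

```python
import networkx as nx

# Invented toy DAG for a fasting-and-cognition question; not the study's actual model.
dag = nx.DiGraph([
    ("age", "fasting"),        # who fasts may depend on age
    ("age", "cognition"),      # age also affects cognition directly (a confounder)
    ("fasting", "appetite"),
    ("appetite", "cognition"),
    ("fasting", "cognition"),
])

# The graph is a concrete artifact: we can verify it is acyclic and audit the assumptions.
assert nx.is_directed_acyclic_graph(dag)
print("Direct causes of cognition:", sorted(dag.predecessors("cognition")))
confounders = set(dag.predecessors("fasting")) & set(dag.predecessors("cognition"))
print("Candidate confounders to adjust for:", sorted(confounders))
```

Writing the graph down this way makes the causal claim auditable: anyone can dispute an arrow, add one, and rerun the same analysis.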

JACC Speciality Journals
Brief Introduction - Forecasting Atherosclerotic Cardiovascular Disease in South Asia Until 2040: A Bayesian Modeling Approach | JACC: Asia

JACC Speciality Journals

Oct 14, 2025 · 1:43


ToKCast
Ep 248: AI and Philosophy of Science

ToKCast

Oct 13, 2025 · 85:39


This is the extended "director's cut" of a talk delivered for "RatFest 2025" (next year to be "Conjecture Con"). This also serves as a supplement to my "Doom Debates" interview which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb It is simply assumed some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, as well as Bostrom and Yudkowsky's work on this, as well as that of every other AI "Doomer" and even on the other extreme the so-called "AI-Accelerationists". All of that indicates a deep misconception about how new explanations are generated which comes from a deep misconception about how science works because almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong: 00:00 Introduction 09:14 The Big Questions and the new Priesthoods 18:40 Nick Bostrom and Superintelligence 25:10 If anyone builds it, everyone dies and Yudkowsky. 33:32 Prophecy, Inevitability, Induction and Bayesianism. 41:42 Popper, Kuhn, Feyerabend and Lakatos. 49:40 AI researchers ignore The Philosophy of Science. 58:46 A new test for AGI from Sam Altman and David Deutsch? 1:03:35 Accelerationists, Doomers and “Everyone dies”. 1:10:21 Conclusions 1:15:35 Audience Questions

Policy Chats
21st Century Democracy: Using Collaboration Tech to Increase Civic Participation

Policy Chats

Oct 12, 2025 · 56:09


In this episode, Dr. Kevin Esterling, Professor of Political Science and Public Policy at UC Riverside, talks with the UC Riverside School of Public Policy about using technology to make public meetings more inclusive and effective. This is the seventh episode in our 11-part series, Technology vs. Government, featuring former California State Assemblymember Lloyd Levine.About Dr. Kevin Esterling:Kevin Esterling is Professor of Public Policy and Political Science, chair of political science, and the Director of the Laboratory for Technology, Communication and Democracy (TeCD-Lab) at the University of California, Riverside, and affiliate of the UC Institute on Global Conflict and Cooperation (IGCC). He is the past interim dean and associate dean of the UCR Graduate Division. His research focuses on technology for communication in democratic politics, and in particular the use of artificial intelligence and large language models for understanding and improving the quality of democratic communication in online spaces. His methodological interests are in artificial intelligence, large language models, Bayesian statistics, machine learning, experimental design, and science ethics and validity. His books have been published on Cambridge University Press and the University of Michigan Press, and his journal articles have appeared in such journals as Science, Nature, the Proceedings of the National Academy of Sciences, Nature Human Behavior, the American Political Science Review, Political Analysis, the Journal of Educational and Behavioral Statistics, and the Journal of Politics. His work has been funded by the National Science Foundation, The Democracy Fund, the MacArthur Foundation, and the Institute of Education Sciences. Esterling was previously a Robert Wood Johnson Scholar in Health Policy Research at the University of California, Berkeley and a postdoctoral research fellow at the A. Alfred Taubman Center for Public Policy and American Institutions at Brown University. He received his Ph.D. in Political Science from the University of Chicago in 1999.Interviewer:Lloyd Levine (Former California State Assemblymember, UCR School of Public Policy Senior Policy Fellow)Music by: Vir SinhaCommercial Links:https://spp.ucr.edu/ba-mpphttps://spp.ucr.edu/mppThis is a production of the UCR School of Public Policy: https://spp.ucr.eduSubscribe to this podcast so you do not miss an episode. Learn more about the series and other episodes at https://spp.ucr.edu/podcast.

Honest eCommerce
Bonus Episode: Flexibility Into Every Marketing Decision with Bradley Keefer & Justin Jefferson

Honest eCommerce

Oct 9, 2025 · 27:57


Bradley Keefer is the Chief Revenue Officer and Justin Jefferson is the VP of Strategy & Insights at Keen Decision Systems, where Bayesian-powered marketing mix modeling meets scenario planning and outcome forecasting, helping brands move from rearview analytics to predictive decisioning.With decades of combined experience across SaaS, analytics, and brand strategy, Bradley and Justin are redefining how marketers plan, forecast, and invest. Instead of treating marketing as a cost center, they help brands model “what if” scenarios, forecasting how every incremental dollar drives revenue across channels.Whether you're scaling a fast-growing brand or managing a multimillion-dollar marketing budget, Bradley and Justin offer a masterclass in using data to make confident, forward-looking decisions that compound over time.In This Conversation We Discuss: [00:38] Intro[01:12] Measuring how marketing spend drives growth[02:29] Building models that adapt to brand maturity[04:35] Balancing brand building with performance spend[07:24] Shifting focus from capturing to creating demand[08:41] Driving demand to boost bottom-funnel returns[09:34] Breaking growth limits with data-driven planning[12:49] Connecting viral moments to sustain momentum[14:50] Building brands that go beyond ad optimization[15:30] Stay updated with new episodes[15:43] Simplifying setup for data-heavy marketing tools[18:44] Designing analytics tools for marketing teams[20:23] Updating models fast to learn and adapt quicker[22:42] Using data to balance old and new media spendResources:Subscribe to Honest Ecommerce on YoutubeMarketing mix modeling powered by AI keends.com/Follow Bradley Keefer linkedin.com/in/bradley-keeferFollow Justin Jefferson linkedin.com/in/justin-a-jeffersonIf you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!

Learning Bayesian Statistics
BITESIZE | How Bayesian Additive Regression Trees Work in Practice

Learning Bayesian Statistics

Oct 9, 2025 · 22:49 · Transcription available


Soccer Factor Model Dashboard
Unveiling True Talent: The Soccer Factor Model for Skill Evaluation
LBS #91, Exploring European Football Analytics, with Max Göbel
Get early access to Alex's next live-cohort courses!

Today's clip is from episode 142 of the podcast, with Gabriel Stechschulte. Alex and Gabriel explore the re-implementation of BART (Bayesian Additive Regression Trees) in Rust, detailing the technical challenges and performance improvements achieved. They also share insights into the benefits of BART, such as uncertainty quantification, and its application in various data-intensive fields.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Learning Bayesian Statistics
#142 Bayesian Trees & Deep Learning for Optimization & Big Data, with Gabriel Stechschulte

Learning Bayesian Statistics

Oct 2, 2025 · 70:28 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Get early access to Alex's next live-cohort courses!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
  • BART as a core tool: Gabriel explains how Bayesian Additive Regression Trees provide robust uncertainty quantification and serve as a reliable baseline model in many domains (see the sketch below).
  • Rust for performance: His Rust re-implementation of BART dramatically improves speed and scalability, making it feasible for larger datasets and real-world IoT applications.
  • Strengths and trade-offs: BART avoids overfitting and handles missing data gracefully, though it is slower than other tree-based approaches.
  • Big data meets Bayes: Gabriel shares strategies for applying Bayesian methods with big data, including when variational inference helps balance scale with rigor.
  • Optimization and decision-making: He highlights how BART models can be embedded into optimization frameworks, opening doors for sequential decision-making.
  • Open source matters: Gabriel emphasizes the importance of communities like PyMC and Bambi, encouraging newcomers to start with small contributions.

Chapters:
05:10 – From economics to IoT and Bayesian statistics
18:55 – Introduction to BART (Bayesian Additive Regression Trees)
24:40 – Re-implementing BART in Rust for speed and scalability
32:05 – Comparing BART with Gaussian Processes and other tree methods
39:50 – Strengths and limitations of BART
47:15 – Handling missing data and different likelihoods
54:30 – Variational inference and big data challenges
01:01:10 – Embedding BART into optimization and decision-making frameworks
01:08:45 – Open source, PyMC, and community support
01:15:20 – Advice for newcomers
01:20:55 – Future of BART, Rust, and probabilistic programming

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian...
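The episode's BART work is a Rust re-implementation, but for orientation, a minimal BART regression in the Python PyMC ecosystem looks roughly like this (a sketch assuming the pymc-bart package; the data are synthetic and the settings arbitrary):

```python
import numpy as np
import pymc as pm
import pymc_bart as pmb  # Python BART implementation in the PyMC ecosystem

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(0, 0.3, size=200)  # toy nonlinear data

with pm.Model() as model:
    mu = pmb.BART("mu", X, y, m=50)       # sum of 50 regression trees as the mean function
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample()                    # posterior draws, sampled with BART's tree sampler
```

The uncertainty quantification mentioned in the takeaways comes along for free here: every prediction is a posterior distribution rather than a single point estimate.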

JAT Podcasts
JAT Chat | When Statistical Tests Compound in Sports Medicine Research

JAT Podcasts

Sep 30, 2025 · 34:17 · Transcription available


Welcome to JAT Chat, presented by the Journal of Athletic Training, the official journal of the National Athletic Trainers' Association. In this episode, co-host Dr. Kara Radzak speaks with Dr. Travis Anderson and Dr. Eric Post about their recently published article, "Multiplying Alpha: When Statistical Tests Compound in Sports Medicine Research". Drs. Anderson and Post discuss how multiple statistical tests can inflate false-positive rates in sports medicine research, explain family-wise and experiment-wise error, and illustrate the risks with a large-scale Paris Olympic Games analysis. They recommend transparency, pre-registration, correction for multiplicity, and considering Bayesian approaches to improve rigor and clinical decision-making.

Article: https://doi.org/10.4085/1062-6050-0700.24

Guest Bios:
Travis Anderson, PhD: Travis recently joined US Soccer as the Manager of Research and Innovation, following his work as a Research Scientist at the USOPC, where he worked closely with Eric. His academic background is in exercise physiology, although he dabbled in statistics throughout graduate school and enjoys continuing his education in applied statistics in sports medicine and exercise science.
Eric Post, PhD, ATC: Eric is the Manager of the Sports Medicine Research Laboratory for the United States Olympic and Paralympic Committee. Eric previously served as Program Director for the Master's in Athletic Training Program at Indiana State University and as a faculty member at San Diego State University.
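The multiplicity problem the guests describe can be made concrete with a few lines of arithmetic (illustrative numbers only, not figures from the article): with independent tests at alpha = 0.05, the family-wise error rate grows quickly with the number of tests.

```python
# Probability of at least one false positive across m independent tests at alpha = 0.05,
# plus the simple Bonferroni correction that caps the family-wise error rate at alpha.
alpha = 0.05
for m in (1, 5, 10, 20, 50):
    fwer = 1 - (1 - alpha) ** m      # family-wise error rate
    bonferroni = alpha / m           # per-test threshold after correction
    print(f"{m:>2} tests: FWER = {fwer:.2f}, Bonferroni per-test alpha = {bonferroni:.4f}")
```

At 20 tests the chance of at least one spurious "significant" finding is already about 64%, which is exactly the inflation the authors warn about.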

Take 2 Theology
Are Miracles Rational?

Take 2 Theology

Sep 25, 2025 · 35:45


Episode 2.42: Philosophical Case for the Supernatural

Can miracles be intellectually defended—or are they just wishful thinking in a scientific age? In this follow-up to the theological case for miracles, Zach and Michael explore the philosophical foundations for believing in the miraculous. Drawing from the work of C.S. Lewis, Alvin Plantinga, Richard Swinburne, William Lane Craig, and others, they address classical objections from Spinoza and Hume, explain Bayesian probability, and unpack why Christianity stands or falls on historical miracle claims—especially the resurrection.

Covered in this episode:
– What qualifies as a miracle (and what doesn't)
– Why miracles are necessary for Christian faith
– Whether natural laws rule out divine intervention
– The failure of Hume's argument against miracles
– How probability theory supports miracle testimony
– Why Christianity's claims are evidential, not blind

If theism is true, miracles aren't just possible—they're expected. This episode shows why the miraculous still makes philosophical sense.

WLC discussing the Bayesian equation: https://www.reasonablefaith.org/question-answer/P90/do-extraordinary-events-require-extraordinary-evidence
The book Michael referenced: Miracles: The Credibility of the New Testament Accounts, https://a.co/d/hjzHvWL
Find our videocast here: https://youtu.be/dshfk_jyXj0
Merch here: https://take-2-podcast.printify.me/
Music from #Uppbeat (free for Creators!): https://uppbeat.io/t/reakt-music/deep-stone
License code: 2QZOZ2YHZ5UTE7C8
Find more Take 2 Theology content at http://www.take2theology.com
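The "Bayesian equation" referenced in the show notes is usually presented in its odds form; stated generically (not a formula quoted from the episode), with M the miracle claim and E the testimonial evidence:

```latex
\underbrace{\frac{P(M \mid E)}{P(\lnot M \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(M)}{P(\lnot M)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid M)}{P(E \mid \lnot M)}}_{\text{Bayes factor}}
```

On this framing, a very low prior probability for M can in principle be outweighed by evidence that is much more probable under M than under its negation, which is the point at issue in the "extraordinary evidence" debate linked above.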

Learning Bayesian Statistics
BITESIZE | How Probability Becomes Causality?

Learning Bayesian Statistics

Sep 24, 2025 · 22:03 · Transcription available


Get early access to Alex's next live-cohort courses!

Today's clip is from episode 141 of the podcast, with Sam Witty. Alex and Sam discuss the ChiRho project, delving into the intricacies of causal inference, particularly focusing on Do-Calculus, regression discontinuity designs, and Bayesian structural causal inference. They explain ChiRho's design philosophy, emphasizing its modular and extensible nature, and highlight the importance of efficient estimation in causal inference, making complex statistical methods accessible to users without extensive expertise.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Machine Learning Street Talk
Deep Learning is Not So Mysterious or Different - Prof. Andrew Gordon Wilson (NYU)

Machine Learning Street Talk

Sep 19, 2025 · 123:48


Professor Andrew Wilson from NYU explains why many common-sense ideas in artificial intelligence might be wrong. For decades, the rule of thumb in machine learning has been to fear complexity. The thinking goes: if your model has too many parameters (is "too complex") for the amount of data you have, it will "overfit" by essentially memorizing the data instead of learning the underlying patterns. This leads to poor performance on new, unseen data. This is known as the classic "bias-variance trade-off" i.e. a balancing act between a model that's too simple and one that's too complex.**SPONSOR MESSAGES**—Tufa AI Labs is an AI research lab based in Zurich. **They are hiring ML research engineers!** This is a once in a lifetime opportunity to work with one of the best labs in EuropeContact Benjamin Crouzier - https://tufalabs.ai/ —Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— Description Continued:Professor Wilson challenges this fundamental belief (fearing complexity). He makes a few surprising points:**Bigger Can Be Better**: massive models don't just get more flexible; they also develop a stronger "simplicity bias". So, if your model is overfitting, the solution might paradoxically be to make it even bigger.**The "Bias-Variance Trade-off" is a Misnomer**: Wilson claims you don't actually have to trade one for the other. You can have a model that is incredibly expressive and flexible while also being strongly biased toward simple solutions. He points to the "double descent" phenomenon, where performance first gets worse as models get more complex, but then surprisingly starts getting better again.**Honest Beliefs and Bayesian Thinking**: His core philosophy is that we should build models that honestly represent our beliefs about the world. We believe the world is complex, so our models should be expressive. But we also believe in Occam's razor—that the simplest explanation is often the best. He champions Bayesian methods, which naturally balance these two ideas through a process called marginalization, which he describes as an automatic Occam's razor.TOC:[00:00:00] Introduction and Thesis[00:04:19] Challenging Conventional Wisdom[00:11:17] The Philosophy of a Scientist-Engineer[00:16:47] Expressiveness, Overfitting, and Bias[00:28:15] Understanding, Compression, and Kolmogorov Complexity[01:05:06] The Surprising Power of Generalization[01:13:21] The Elegance of Bayesian Inference[01:33:02] The Geometry of Learning[01:46:28] Practical Advice and The Future of AIProf. Andrew Gordon Wilson:https://x.com/andrewgwilshttps://cims.nyu.edu/~andrewgw/https://scholar.google.com/citations?user=twWX2LIAAAAJ&hl=en https://www.youtube.com/watch?v=Aja0kZeWRy4 https://www.youtube.com/watch?v=HEp4TOrkwV4 TRANSCRIPT:https://app.rescript.info/public/share/H4Io1Y7Rr54MM05FuZgAv4yphoukCfkqokyzSYJwCK8Hosts:Dr. Tim Scarfe / Dr. 
Keith Duggar (MIT Ph.D)

REFS:
Deep Learning is Not So Mysterious or Different [Andrew Gordon Wilson] https://arxiv.org/abs/2503.02113
Bayesian Deep Learning and a Probabilistic Perspective of Generalization [Andrew Gordon Wilson, Pavel Izmailov] https://arxiv.org/abs/2002.08791
Compute-Optimal LLMs Provably Generalize Better With Scale [Marc Finzi, Sanyam Kapoor, Diego Granziol, Anming Gu, Christopher De Sa, J. Zico Kolter, Andrew Gordon Wilson] https://arxiv.org/abs/2504.15208
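The "automatic Occam's razor" Wilson attributes to marginalization refers to the marginal likelihood (model evidence), stated here generically rather than as a formula from the episode:

```latex
p(\mathcal{D} \mid M) \;=\; \int p(\mathcal{D} \mid \theta, M)\, p(\theta \mid M)\, d\theta
```

Because a highly flexible model M spreads its prior predictive mass over many possible datasets, its evidence for any one observed dataset D is diluted; comparing models by p(D | M) therefore penalizes unneeded complexity without a hand-tuned trade-off.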

Learning Bayesian Statistics
#141 AI Assisted Causal Inference, with Sam Witty

Learning Bayesian Statistics

Sep 18, 2025 · 96:38 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Get early access to Alex's next live-cohort courses!
Enroll in the Causal AI workshop, to learn live with Alex (15% off if you're a Patron of the show)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
  • Causal inference is crucial for understanding the impact of interventions in various fields.
  • ChiRho is a causal probabilistic programming language that bridges mechanistic and data-driven models.
  • ChiRho allows for easy manipulation of causal models and counterfactual reasoning.
  • The design of ChiRho emphasizes modularity and extensibility for diverse applications.
  • Causal inference requires careful consideration of assumptions and model structures.
  • Real-world applications of causal inference can lead to significant insights in science and engineering.
  • Collaboration and communication are key in translating causal questions into actionable models.
  • The future of causal inference lies in integrating probabilistic programming with scientific discovery.

Chapters:
05:53 Bridging Mechanistic and Data-Driven Models
09:13 Understanding Causal Probabilistic Programming
12:10 ChiRho and Its Design Principles
15:03 ChiRho's Functionality and Use Cases
17:55 Counterfactual Worlds and Mediation Analysis
20:47 Efficient Estimation in ChiRho
24:08 Future Directions for Causal AI
50:21 Understanding the Do-Operator in Causal Inference
56:45 ChiRho's Role in Causal Inference and Bayesian Modeling
01:01:36 Roadmap and Future Developments for ChiRho
01:05:29 Real-World Applications of Causal Probabilistic Programming
01:10:51 Challenges in Causal Inference Adoption
01:11:50 The Importance of Causal Claims in Research
01:18:11 Bayesian Approaches to Causal Inference
01:22:08 Combining Gaussian Processes with Causal Inference
01:28:27 Future Directions in Probabilistic Programming and Causal Inference

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad...

Learning Bayesian Statistics
BITESIZE | How to Think Causally About Your Models?

Learning Bayesian Statistics

Sep 10, 2025 · 24:01 · Transcription available


Get early access to Alex's next live-cohort courses!

Today's clip is from episode 140 of the podcast, with Ron Yurko. Alex and Ron discuss the challenges of model deployment, and the complexities of modeling player contributions in team sports like soccer and football. They emphasize the importance of understanding replacement levels, the Going Deep framework in football analytics, and the need for proper modeling of expected points. Additionally, they share insights on teaching Bayesian modeling to students and the difficulties they face in grasping the concepts of model writing and application.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

The Wharton Moneyball Post Game Podcast
Football Analytics at Work: Probabilities, Priors, and Fourth-Down Decisions

The Wharton Moneyball Post Game Podcast

Sep 10, 2025 · 63:37


Brian Burke, Sports Data Scientist at ESPN, joins Cade Massey, Eric Bradlow, and Shane Jensen to share insights on building advanced football power ranking systems, the role of Bayesian models in balancing priors and new data, and how analytics informs game-day decisions like fourth-down calls and playoff predictions. Cade, Eric, and Shane also analyze standout performances and key narratives from NFL Week One, preview pivotal college football games, examine the growing dominance of Carlos Alcaraz over Jannik Sinner in men's tennis, and highlight major offensive trends across Major League Baseball. Hosted on Acast. See acast.com/privacy for more information.

Knowledge@Wharton
Football Analytics at Work: Probabilities, Priors, and Fourth-Down Decisions

Knowledge@Wharton

Sep 10, 2025 · 63:43


Brian Burke, Sports Data Scientist at ESPN, joins Cade Massey, Eric Bradlow, and Shane Jensen to share insights on building advanced football power ranking systems, the role of Bayesian models in balancing priors and new data, and how analytics informs game-day decisions like fourth-down calls and playoff predictions. Cade, Eric, and Shane also analyze standout performances and key narratives from NFL Week One, preview pivotal college football games, examine the growing dominance of Carlos Alcaraz over Jannik Sinner in men's tennis, and highlight major offensive trends across Major League Baseball. Hosted on Acast. See acast.com/privacy for more information.

Vanishing Gradients
Episode 58: Building GenAI Systems That Make Business Decisions with Thomas Wiecki (PyMC Labs)

Vanishing Gradients

Sep 9, 2025 · 60:45


While most conversations about generative AI focus on chatbots, Thomas Wiecki (PyMC Labs, PyMC) has been building systems that help companies make actual business decisions. In this episode, he shares how Bayesian modeling and synthetic consumers can be combined with LLMs to simulate customer reactions, guide marketing spend, and support strategy. Drawing from his work with Colgate and others, Thomas explains how to scale survey methods with AI, where agents fit into analytics workflows, and what it takes to make these systems reliable. We talk through: Using LLMs as “synthetic consumers” to simulate surveys and test product ideas How Bayesian modeling and causal graphs enable transparent, trustworthy decision-making Building closed-loop systems where AI generates and critiques ideas Guardrails for multi-agent workflows in marketing mix modeling Where generative AI breaks (and how to detect failure modes) The balance between useful models and “correct” models If you've ever wondered how to move from flashy prototypes to AI systems that actually inform business strategy, this episode shows what it takes. LINKS: The AI MMM Agent, An AI-Powered Shortcut to Bayesian Marketing Mix Insights (https://www.pymc-labs.com/blog-posts/the-ai-mmm-agent) AI-Powered Decision Making Under Uncertainty Workshop w/ Allen Downey & Chris Fonnesbeck (PyMC Labs) (https://youtube.com/live/2Auc57lxgeU) The Podcast livestream on YouTube (https://youtube.com/live/so4AzEbgSjw?feature=share) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)

Learning Bayesian Statistics
#140 NFL Analytics & Teaching Bayesian Stats, with Ron Yurko

Learning Bayesian Statistics

Sep 3, 2025 · 93:01 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
  • Teaching students to write out their own models is crucial.
  • Developing a sports analytics portfolio is essential for aspiring analysts.
  • Modeling expectations in sports analytics can be misleading.
  • Tracking data can significantly improve player performance models.
  • Ron encourages students to engage in active learning through projects.
  • Understanding the dependency structure in data is vital.
  • Ron aims to integrate more diverse sports analytics topics into his teaching.

Chapters:
03:51 The Journey into Sports Analytics
15:20 The Evolution of Bayesian Statistics in Sports
26:01 Innovations in NFL WAR Modeling
39:23 Causal Modeling in Sports Analytics
46:29 Defining Replacement Levels in Sports
48:26 The Going Deep Framework and Big Data in Football
52:47 Modeling Expectations in Football Data
55:40 Teaching Statistical Concepts in Sports Analytics
01:01:54 The Importance of Model Building in Education
01:04:46 Statistical Thinking in Sports Analytics
01:10:55 Innovative Research in Player Movement
01:15:47 Exploring Data Needs in American Football
01:18:43 Building a Sports Analytics Portfolio

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M,...

Wildlife By The Numbers
How Precise is Precise Enough?

Wildlife By The Numbers

Aug 29, 2025 · 26:56


Grant, Matt, and Randy gear up to discuss precision, accuracy, and variability in wildlife studies. They dive into how variability measures data spread, using range, quartiles, standard deviation, variance, and coefficient of variation (CV). Accuracy reflects closeness to the true value, while precision shows how tightly estimates cluster. Standard errors, confidence intervals, and Bayesian credibility intervals quantify estimate precision. Required precision depends on study goals, sample size, and application, ensuring reliable, interpretable, and actionable results for wildlife management and conservation.

They wrap up with some examples, but the take-home message is that deciding how precise results need to be hinges on the question and the resources available.

Cite this episode: https://doi.org/10.7944/usfws.wbtn.s01ep011
DOI Citation Formatter: https://citation.doi.org/

Episode music: Shapeshifter by Mr Smith is licensed under an Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/
https://freemusicarchive.org/music/mr-smith/studio-city/shapeshifter/
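The quantities named in this episode fit together in a few lines of arithmetic; here is a minimal sketch with invented numbers (e.g., counts per survey transect), not data from the show:

```python
import numpy as np

counts = np.array([12, 7, 15, 9, 11, 14, 8, 10, 13, 9], dtype=float)  # hypothetical data

mean = counts.mean()
sd = counts.std(ddof=1)                      # sample standard deviation: spread of the data
cv = sd / mean                               # coefficient of variation: spread relative to the mean
se = sd / np.sqrt(len(counts))               # standard error: precision of the estimated mean
ci = (mean - 1.96 * se, mean + 1.96 * se)    # approximate 95% confidence interval

print(f"mean={mean:.1f}, SD={sd:.2f}, CV={cv:.2f}, SE={se:.2f}, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```

Whether that interval is "precise enough" is, as the hosts stress, a judgment about the management question and the resources available, not a property of the numbers alone.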

Learning Bayesian Statistics
BITESIZE | Is Bayesian Optimization the Answer?

Learning Bayesian Statistics

Aug 27, 2025 · 25:13 · Transcription available


Today's clip is from episode 139 of the podcast, with Max Balandat. Alex and Max discuss the integration of BoTorch with PyTorch, exploring its applications in Bayesian optimization and Gaussian processes. They highlight the advantages of using GPyTorch for structured matrices and the flexibility it offers for research. The discussion also covers the motivations behind building BoTorch, the importance of open-source culture at Meta, and the role of PyTorch in modern machine learning.

Get the full discussion here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Owl Have You Know
Making Venture Capital More Accessible feat. Emmanuel Yimfor '20

Owl Have You Know

Aug 27, 2025 · 41:28


At a time when startups are primarily funded by private market investors, who you know has become a critical factor in gaining access to that venture capital. But how does the reliance on alumni and professional networks create barriers for startups from historically disadvantaged groups?Emmanuel Yimfor '20 is a finance professor at Columbia Business School and holds a Ph.D. from Rice University. His research focuses on entrepreneurial finance, diversity and private capital markets, with insights into gender and racial disparities in venture capital funding, board representation and how resources could be more equitably allocated.Emmanuel joins co-host Maya Pomroy '22 to discuss his career journey from working at a Cameroonian telecommunications company to teaching at some of the top U.S. business schools, as well as his research on the influence of alumni networks in venture capital funding, how AI tools can address biases in lending, and finally how he's teaming up with his son to bring AI tools to young innovators and entrepreneurs in Cameroon. Episode Guide:01:00 Exploring Entrepreneurial Finance03:36 The Role of Networks in VC Funding08:10 Emmanuel's Journey From Cameroon to the U.S.12:34 The Rice University Experience15:43 Research on Alumni Networks and Funding21:49 Algorithmic Bias in Lending33:17 Empowering Future Innovators in Cameroon38:42 Final Thoughts and Future OutlookOwl Have You Know is a production of Rice Business and is produced by University FM.Episode Quotes:Rethinking who gets funded in venture capital31:07: What does good networks mean exactly? If you look at venture capital partners, for example, right? They have worked at McKinsey before they became venture capital partners. So they have worked at certain companies, they have done certain jobs that then led them to become VCs. And so to the extent that we have a lack of representation in this pipeline of jobs that is leading to VC, then the founders that do not come from these same backgrounds do not have as equal access to the partners. And so what that suggests is something very basic, which is like, just rethink the set of deals that you are considering. That might expand the pool of deals that you consider, because, you know, there might be a smart person out there that is maybe not the same race as you, but that has an idea that you really, really want to fund. And that is something that I think, like, everybody would agree with. You know, we want to allocate capital to its most productive uses.From hard data to meaningful change29:13: So I have a belief in America, at least based on my life journey, which is: if you work hard for long enough, somebody is going to recognize you and you will be rewarded for it. And so I really believe that America takes in data, thinks about that data for a while to think about whether the research is credible enough, and then, using that data, they are a good Bayesian, so they get a new posterior. They act in a new way that is consistent with what the new before and the new data. And so I think about my role as a researcher as just like, you know, providing that data. Here is the data, and here is what is consistent with what we are doing right now. 
Now, you know, what you do with that information now is like, you know, update what you are doing in a way that is most consistent with efficient capital allocation—is my hope.Why Emmanuel finds empirical work so exciting 21:34: Empirical work is so exciting to me because then you are like, "I am a little bit of a police detective." So you take a little bit of this thing that feels hard to measure, and then you can create hypotheses to link it to the eventual outcomes, to the extent that that thing that is hard to measure is something that is leading to efficient capital allocation. Then, on average, you know, this feeling that you get about founders that are from the same alma mater should lead to good things as opposed to leading to bad things. And so, you know, that is exactly the right spirit of how to think about the work.Show Links: TranscriptGuest Profiles:Emmanuel Yimfor | Columbia Business SchoolEmmanuel Yimfor | LinkedInEmmanuel's Website 

Cutting The Gordian Knot
The Bayesian Case for Mary's Assumption

Cutting The Gordian Knot

Play Episode Listen Later Aug 25, 2025 71:18


I get it. It seems crazy at first. However, when all the arguments are on the table, you might be surprised by how your mind may change.

Learning Bayesian Statistics
#139 Efficient Bayesian Optimization in PyTorch, with Max Balandat

Learning Bayesian Statistics

Play Episode Listen Later Aug 20, 2025 85:23 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)Takeaways:BoTorch is designed for researchers who want flexibility in Bayesian optimization.The integration of BoTorch with PyTorch allows for differentiable programming.Scalability at Meta involves careful software engineering practices and testing.Open-source contributions enhance the development and community engagement of BoTorch.LLMs can help incorporate human knowledge into optimization processes.Max emphasizes the importance of clear communication of uncertainty to stakeholders.The role of a researcher in industry is often more application-focused than in academia.Max's team at Meta works on adaptive experimentation and Bayesian optimization.Chapters:08:51 Understanding BoTorch12:12 Use Cases and Flexibility of BoTorch15:02 Integration with PyTorch and GPyTorch17:57 Practical Applications of BoTorch20:50 Open Source Culture at Meta and BoTorch's Development43:10 The Power of Open Source Collaboration47:49 Scalability Challenges at Meta51:02 Balancing Depth and Breadth in Problem Solving55:08 Communicating Uncertainty to Stakeholders01:00:53 Learning from Missteps in Research01:05:06 Integrating External Contributions into BoTorch01:08:00 The Future of Optimization with LLMsThank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode,...
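For readers curious what the "ask-tell" loop discussed in this episode looks like in practice, here is a minimal sketch of one BoTorch iteration: fit a GP surrogate, then maximize Expected Improvement to propose the next evaluation point. The toy objective and all settings are illustrative assumptions, not anything taken from the episode.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll  # called fit_gpytorch_model in older BoTorch releases
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy objective to maximize (an assumption for illustration only).
def objective(x: torch.Tensor) -> torch.Tensor:
    return -((x - 0.3) ** 2)

train_x = torch.rand(5, 1, dtype=torch.double)
train_y = objective(train_x)

# Fit a GP surrogate to the observations gathered so far.
gp = SingleTaskGP(train_x, train_y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)

# Propose the next point by maximizing Expected Improvement over the unit interval.
ei = ExpectedImprovement(gp, best_f=train_y.max())
bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ei, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
print(candidate)  # where to evaluate the objective next
```

Because every step above is ordinary PyTorch, the whole loop stays differentiable, which is the design choice Max emphasizes in the episode.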

DLF Family of Podcasts
SuperFlex SuperShow 347 - Thanks, Bill

DLF Family of Podcasts

Play Episode Listen Later Aug 20, 2025


QBs and ‘cuffs take center stage, in this week's Strat Sesh!The Ol' SFD goes solo in this mailbag Strat Sesh, heavy on names and in-season strategy! Hogue answers listener questions on the impact of the 2024 QB rookie class and how the landscape will change as a result; RB handcuffs to target (and why/to what degree); and in-season Quarterback X-Streaming. Plus, the backslides for Jayden Daniels and Bo Nix; the four categories of handcuffs; Bayesian theory in-season, and when start rates are the answer. All that and more, on this week's AMA episode!************* JOIN THE SFSS DISCORD SERVER HERE FOR THE SUPERSIZED, ONGOING CONVERSATION ON SUPERFLEX!! ************* The SuperFlex SuperShow – one of many great podcasts from the Dynasty League Football (@DLFootball) Family of Podcasts – is hosted by John Hogue (@SuperFlexDude) and Tommy Blair (@FFTommyB), and always dedicated in loving memory to James “The Brain” Koutoulas. Featuring weekly dynasty football content focused on superflex, 2QB and other alternate scoring settings. Special thanks to Heart and Soul Radio for their song, “The Addiction,” and special thanks to the Dynasty League Football Family of Podcasts and the entire DLF staff for the ongoing support! Stay Sexy… and SuperFlex-y!

Notizie a colazione
Tue, Aug 19 | Gaza, prisons, the Bayesian, rampant telemarketing

Notizie a colazione

Play Episode Listen Later Aug 19, 2025 16:13


Today we talk about Gaza, the conditions of Italian prisons, the anniversary of the sinking of the Bayesian, with its lingering mysteries, and rampant telemarketing. ... Here is the link to subscribe to the Notizie a colazione WhatsApp channel: https://whatsapp.com/channel/0029Va7X7C4DjiOmdBGtOL3z To subscribe to the Telegram channel: https://t.me/notizieacolazione ... Here are the other Class Editori podcasts: https://milanofinanza.it/podcast Music https://www.bensound.com Learn more about your ad choices. Visit megaphone.fm/adchoices

Learning Bayesian Statistics
BITESIZE | What's Missing in Bayesian Deep Learning?

Learning Bayesian Statistics

Play Episode Listen Later Aug 13, 2025 20:34 Transcription Available


Today's clip is from episode 138 of the podcast, with Mélodie Monod, François-Xavier Briol and Yingzhen Li.During this live show at Imperial College London, Alex and his guests delve into the complexities and advancements in Bayesian deep learning, focusing on uncertainty quantification, the integration of machine learning tools, and the challenges faced in simulation-based inference.The speakers discuss their current projects, the evolution of Bayesian models, and the need for better computational tools in the field.Get the full discussion here.Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)TranscriptThis is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

The Barbell Rehab Podcast
Hypermobility, Strength Training Women, and Hormones with Dr. Susie Spirlock | Ep 44

The Barbell Rehab Podcast

Play Episode Listen Later Aug 8, 2025 57:02


In this episode of the Barbell Rehab Podcast, we sit down with Dr. Susie Spirlock to discuss rehab and training. We chat about training women and unique considerations around hormones, menstrual cycles, and bone mineral density. We also discuss hypermobility, common misconceptions, and programming considerations. We wrap up by considering trends in the field around evidence-based practice. Susie can be found on Instagram at @dr.susie.squats.  We hope you enjoy this episode!   Here are some follow-up resources for you to check out, including research articles and additional readings related to the topics discussed in this episode: Move Your Bones Free 4-Week Beginner Strength Training Program Free Using Intensity Based Training For The Phases of The Menstrual Cycle 20% Off Your First 2 Months In Supple Strength with Code BRM20 Beighton Score Hospital Del Mar Criteria Diagnostic Criteria for Hypermobile Ehlers-Danlos Syndrome (hEDS)   Defining the Clinical Complexity of hEDS and HSD: A Global Survey of Diagnostic Challenge, Comorbidities, and Unmet Needs: https://www.medrxiv.org/content/10.1101/2025.06.05.25329074v1.full.pdf Current evidence shows no influence of women's menstrual cycle phase on acute strength performance or adaptations to resistance exercise training: https://pmc.ncbi.nlm.nih.gov/articles/PMC10076834/ Menstrual Cycle Phase Has No Influence on Performance-Determining Variables in Endurance-Trained Athletes: The FENDURA Project: https://pubmed.ncbi.nlm.nih.gov/38600646/ Sex differences in absolute and relative changes in muscle size following resistance training in healthy adults: a systematic review with Bayesian meta-analysis: https://pubmed.ncbi.nlm.nih.gov/40028215/   FREE Research Roundup Email Series | Get research reviews sent to your inbox, once a month, and stay up-to-date on the latest trends in rehab and fitness The Barbell Rehab Method Certification Course Schedule | 2-days, 15 hours, and CEU approved The Barbell Rehab Weightlifting Certification Course Schedule | 2-days, 15 hours, and CEU approved

Learning Bayesian Statistics
#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London

Learning Bayesian Statistics

Play Episode Listen Later Aug 6, 2025 83:10 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)Takeaways:Bayesian deep learning is a growing field with many challenges.Current research focuses on applying Bayesian methods to neural networks.Diffusion methods are emerging as a new approach for uncertainty quantification.The integration of machine learning tools into Bayesian models is a key area of research.The complexity of Bayesian neural networks poses significant computational challenges.Future research will focus on improving methods for uncertainty quantification. Generalized Bayesian inference offers a more robust approach to uncertainty.Uncertainty quantification is crucial in fields like medicine and epidemiology.Detecting out-of-distribution examples is essential for model reliability.Exploration-exploitation trade-off is vital in reinforcement learning.Marginal likelihood can be misleading for model selection.The integration of Bayesian methods in LLMs presents unique challenges.Chapters:00:00 Introduction to Bayesian Deep Learning03:12 Panelist Introductions and Backgrounds10:37 Current Research and Challenges in Bayesian Deep Learning18:04 Contrasting Approaches: Bayesian vs. Machine Learning26:09 Tools and Techniques for Bayesian Deep Learning31:18 Innovative Methods in Uncertainty Quantification36:23 Generalized Bayesian Inference and Its Implications41:38 Robust Bayesian Inference and Gaussian Processes44:24 Software Development in Bayesian Statistics46:51 Understanding Uncertainty in Language Models50:03 Hallucinations in Language Models53:48 Bayesian Neural Networks vs Traditional Neural Networks58:00 Challenges with Likelihood Assumptions01:01:22 Practical Applications of Uncertainty Quantification01:04:33 Meta Decision-Making with Uncertainty01:06:50 Exploring Bayesian Priors in Neural Networks01:09:17 Model Complexity and Data Signal01:12:10 Marginal Likelihood and Model Selection01:15:03 Implementing Bayesian Methods in LLMs01:19:21 Out-of-Distribution Detection in LLMsThank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer,...
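As a rough illustration of the kind of uncertainty quantification debated in this episode, the sketch below uses MC dropout: dropout is left active at prediction time and the spread of repeated forward passes is read as approximate predictive uncertainty. This is just one common shortcut, not the panelists' method, and the network here is untrained and purely illustrative.

```python
import torch
import torch.nn as nn

# A small regression net with dropout; MC dropout keeps it stochastic at test time.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.linspace(-2.0, 2.0, 50).unsqueeze(-1)

model.train()  # leave dropout on so each forward pass samples a different subnetwork
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

mean = samples.mean(dim=0)  # predictive mean
std = samples.std(dim=0)    # crude proxy for predictive uncertainty
print(mean.shape, std.shape)
```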

The Wright Show
Must We Discuss Sydney Sweeney? (Robert Wright & Paul Bloom)

The Wright Show

Play Episode Listen Later Aug 5, 2025 60:00


Substack glory and livestream anxiety ... Reading eugenics into a dopey jeans ad ... Does Sydney Sweeney spell the end of “inclusive marketing”? ... Bob vs. Paul on IQ and “general intelligence” ... Paul reviews the new Billy Joel documentary ... The Epstein prison video snafu: a Bayesian take ... Ghislaine gets a free upgrade ...

Bloggingheads.tv
Must We Discuss Sydney Sweeney? (Robert Wright & Paul Bloom)

Bloggingheads.tv

Play Episode Listen Later Aug 5, 2025 60:00


Substack glory and livestream anxiety ... Reading eugenics into a dopey jeans ad ... Does Sydney Sweeney spell the end of “inclusive marketing”? ... Bob vs. Paul on IQ and “general intelligence” ... Paul reviews the new Billy Joel documentary ... The Epstein prison video snafu: a Bayesian take ... Ghislaine gets a free upgrade ...

Grinding The Variance (A Davis Mattek Fantasy Football Pod)
Best Ball Drafts With An Actual Computer Genius

Grinding The Variance (A Davis Mattek Fantasy Football Pod)

Play Episode Listen Later Aug 4, 2025 133:39


JOIN THE CHANNEL: https://www.youtube.com/channel/UChjRIs14reAo-on9z5iHJFA/join Find Merch: https://mattek.store/ Sign up to draft with us on UNDERDOG and use code DAVIS:  https://play.underdogfantasy.com/p-davis-mattek Code DAVIS is live on Fast Draft to play in the fastest tournaments in fantasy football Download the app here: https://apps.apple.com/us/app/fastdraft-fantasy/id6478789910 Join Drafters Fantasy and get a 100% Deposit Match Bonus up to $100 with Code DAVIS.  $2.5M in Prizes, Best Ball Total Points Format, Potential Overlay… https://drafters.com/refer/davis  GET 10% OFF RUN THE SIMS W/ CODE "ENDGAME": www.runthesims.com Try Out UNABATED'S Premium Sports Betting + DFS Pick 'Em Tools: https://unabated.com/?ref=davis Sign up for premium fantasy football content and get exclusive Discord access: www.patreon.com/davismattek Subscribe to the AutoMattek Absolutes Newsletter: https://automattekabsolutes.beehiiv.com/ Download THE DRAFT CADDY: https://endgamesyndicate.com/membership-levels/?pa=DavisMattek Timestamps: 00:00 Best Ball Fantasy Football Introduction 2:00 Best Ball mania draft begins  14:00 Home League Team Review  19:30 Keaton Mitchell  33:00 Best Ball Mania Draft #2 Begins  45:20 Kyle Pitts  1:03:00 Shaidy Advice Joins The Show To Draft A Best Ball Mania  1:07:30 The Sims Explain Themselves  1:15:00 How sims change when you put your own rankings in it  1:37:30 Best Ball Mania Draft Begins with a Bayesian process  Audio-Only Podcast Feed For All Davis Mattek Streams: https://podcasts.apple.com/us/podcast/grinding-the-variance-a-davis-mattek-fantasy-football-pod/id1756145256

astro[sound]bites
Episode 110: Bayesian Biosignatures

astro[sound]bites

Play Episode Listen Later Aug 3, 2025 77:18


Apply to join us as a co-host! https://astrosoundbites.com/recruiting-2025   This week, Shashank, Cole and Cormac discuss a concept that has come up on many an ASB episode past: Bayesian statistics. They start by trying to wrap our heads around what a probability really means. Cole introduces us to a recent and attention-grabbing paper on a potential biosignature in the atmosphere of an exoplanet, with lots of statistics along the way. Then, Cormac brings up some counterpoints to this detection. They debate what it would take—statistically and scientifically—for a detection of biosignatures to cross the line from intriguing to compelling.   New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI https://iopscience.iop.org/article/10.3847/2041-8213/adc1c8   Are there Spectral Features in the MIRI/LRS Transmission Spectrum of K2-18b? https://arxiv.org/abs/2504.15916   Insufficient evidence for DMS and DMDS in the atmosphere of K2-18 b. From a joint analysis of JWST NIRISS, NIRSpec, and MIRI observations https://arxiv.org/abs/2505.13407   Space Sound: https://www.youtube.com/watch?v=hGdk49LRB14

Learning Bayesian Statistics
BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness

Learning Bayesian Statistics

Play Episode Listen Later Jul 30, 2025 25:28


Today's clip is from episode 137 of the podcast, with Robert Ness.Alex and Robert discuss the intersection of causal inference and deep learning, emphasizing the importance of understanding causal concepts in statistical modeling. The discussion also covers the evolution of probabilistic machine learning, the role of inductive biases, and the potential of large language models in causal analysis, highlighting their ability to translate natural language into formal causal queries.Get the full conversation here.Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)TranscriptThis is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Learning Bayesian Statistics
#137 Causal AI & Generative Models, with Robert Ness

Learning Bayesian Statistics

Play Episode Listen Later Jul 23, 2025 98:19 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)Takeaways:Causal assumptions are crucial for statistical modeling.Deep learning can be integrated with causal models.Statistical rigor is essential in evaluating LLMs.Causal representation learning is a growing field.Inductive biases in AI should match key mechanisms.Causal AI can improve decision-making processes.The future of AI lies in understanding causal relationships.Chapters:00:00 Introduction to Causal AI and Its Importance16:34 The Journey to Writing Causal AI28:05 Integrating Graphical Causality with Deep Learning40:10 The Evolution of Probabilistic Machine Learning44:34 Practical Applications of Causal AI with LLMs49:48 Exploring Multimodal Models and Causality56:15 Tools and Frameworks for Causal AI01:03:19 Statistical Rigor in Evaluating LLMs01:12:22 Causal Thinking in Real-World Deployments01:19:52 Trade-offs in Generative Causal Models01:25:14 Future of Causal Generative ModelingThank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant...
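To make the "causal assumptions are crucial" takeaway concrete, here is a tiny simulation (entirely hypothetical numbers, not from the episode): with a confounder Z, the observational regression of Y on X overstates the causal effect, while simulating the intervention do(X) recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A toy structural causal model (illustrative only): Z -> X, Z -> Y, X -> Y with effect 1.5.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.5 * x + 2.0 * z + rng.normal(size=n)

# Observational slope is biased upward by the confounder Z.
print(np.polyfit(x, y, 1)[0])        # ≈ 2.5, not the causal effect

# Intervening on X (do(X = x)) severs the Z -> X arrow; the slope recovers ≈ 1.5.
x_do = rng.normal(size=n)
y_do = 1.5 * x_do + 2.0 * z + rng.normal(size=n)
print(np.polyfit(x_do, y_do, 1)[0])  # ≈ 1.5
```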

The Bayesian Conspiracy
Bayes Blast 43 – Die-ing to Intuit Bayes' Theorem

The Bayesian Conspiracy

Play Episode Listen Later Jul 22, 2025 13:19


Olivia is a member of the Guild of the Rose and a total badass. Enjoy the intuitive and fun lesson in Bayesian reasoning she shared with me at VibeCamp.
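The episode's exact exercise isn't reproduced here, but a standard dice version of the same idea runs like this: you're handed either a six-sided or a twenty-sided die with equal prior probability, you roll a 3, and Bayes' theorem tells you how strongly to favor the d6. A quick sketch of that hypothetical setup, with Python used only as a calculator:

```python
# Hypothetical dice example (not necessarily the one from the episode):
# the die is either a d6 or a d20, each with prior probability 0.5, and we observe a roll of 3.
prior = {"d6": 0.5, "d20": 0.5}
likelihood = {"d6": 1 / 6, "d20": 1 / 20}   # P(roll = 3 | die)

unnormalized = {die: prior[die] * likelihood[die] for die in prior}
evidence = sum(unnormalized.values())
posterior = {die: p / evidence for die, p in unnormalized.items()}
print(posterior)  # {'d6': 0.769..., 'd20': 0.230...} — the roll of 3 favors the d6
```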

Seller Sessions
The Blueprint Live Build - N8N: Post Prime Day Performance Automation

Seller Sessions

Play Episode Listen Later Jul 18, 2025 53:07


Learning Bayesian Statistics
BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk

Learning Bayesian Statistics

Play Episode Listen Later Jul 16, 2025 17:53 Transcription Available


Today's clip is from episode 136 of the podcast, with Haavard Rue & Janet van Niekerk.Alex, Haavard and Janet explore the world of Bayesian inference with INLA, a fast and deterministic method that revolutionizes how we handle large datasets and complex models. Discover the power of INLA, and why it can make your models go much faster! Get the full conversation here.Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)TranscriptThis is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

The EMS Lighthouse Project
Ep 99 - Adenosine or Diltiazem for SVT?

The EMS Lighthouse Project

Play Episode Listen Later Jul 8, 2025 35:15


We just got a new paper that compares adenosine with diltiazem as initial treatment for adults with SVT in the ED. Wouldn't it be great if it turned out that diltiazem was just as effective as adenosine, if not more so, without the crappy feeling? Yeah, that'd be great, but what do we do with statistically insignificant results? Is there, perhaps, a way to save this “insignificant” paper? Fear not, Bayes is here! Yes, that's right, Dr. Jarvis is grabbing this new paper and diving straight back into that deep dark rabbit hole of Bayesian analysis. Citation: 1. Lee CA, Morrissey B, Chao K, Healy J, Ku K, Khan M, Kinteh E, Shedd A, Garrett J, Chou EH: Adenosine Versus Fixed-Dose Intravenous Bolus Diltiazem on Reversing Supraventricular Tachycardia in The Emergency Department: A Multi-Center Cohort Study. The Journal of Emergency Medicine. 2025 Aug 1;75:55–64. FAST25 | May 19-21, 2025 | Lexington, KY
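For readers curious what "saving" a statistically non-significant comparison with Bayes can look like, here is a minimal beta-binomial sketch. The counts are made up for illustration — they are not the paper's data — but the pattern is the one discussed in the episode: instead of a p-value, you get a posterior probability that one drug converts SVT at least as often as the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical counts (NOT the paper's data): conversions / patients per arm.
adenosine_success, adenosine_n = 80, 100
diltiazem_success, diltiazem_n = 85, 100

# Beta(1, 1) priors; each arm's conversion rate has posterior Beta(s + 1, n - s + 1).
adenosine_post = rng.beta(adenosine_success + 1, adenosine_n - adenosine_success + 1, 100_000)
diltiazem_post = rng.beta(diltiazem_success + 1, diltiazem_n - diltiazem_success + 1, 100_000)

# Posterior probability that diltiazem converts SVT at least as often as adenosine.
print((diltiazem_post >= adenosine_post).mean())
```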