Podcasts about Bayesian

  • 775 podcasts
  • 2,031 episodes
  • 43m average episode duration
  • 5 new episodes per week
  • Latest episode: Aug 13, 2025

Popularity trend, 2017–2024 (chart)


Latest podcast episodes about Bayesian

Learning Bayesian Statistics
BITESIZE | What's Missing in Bayesian Deep Learning?

Aug 13, 2025 · 20:34 · Transcription available

Today's clip is from episode 138 of the podcast, with Mélodie Monod, François-Xavier Briol and Yingzhen Li. During this live show at Imperial College London, Alex and his guests delve into the complexities and advancements in Bayesian deep learning, focusing on uncertainty quantification, the integration of machine learning tools, and the challenges faced in simulation-based inference. The speakers discuss their current projects, the evolution of Bayesian models, and the need for better computational tools in the field. Get the full discussion here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Hemispherics
#82: The brain's surnames and the myth of the Bayesian brain

Aug 9, 2025 · 27:25

In this episode we explore the different ways in which the workings of the human brain have been conceptualized, guided by the fascinating metaphors popularized by the neuropsychologist Javier Tirapu Ustárroz: the Darwinian brain (instinct and evolution), the Skinnerian brain (reinforcement learning), the Popperian brain (mental simulation and prediction), and the Gregorian brain (empathy and social simulation). We then dig into the controversial "Bayesian brain" hypothesis, according to which our brain would act like a statistical scientist that constantly updates its beliefs about the world by combining prior experience with new sensory information. Drawing on Madhur Mangalam's provocative article "The Myth of the Bayesian Brain" (2025), we examine in depth the strengths and weaknesses of this highly influential paradigm.

References for the episode:
1. Mangalam M. (2025). The myth of the Bayesian brain. European Journal of Applied Physiology, 10.1007/s00421-025-05855-6. Advance online publication. https://doi.org/10.1007/s00421-025-05855-6 (https://pubmed.ncbi.nlm.nih.gov/40569419/).
2. Tirapu Ustárroz, J. (2008). ¿Para qué sirve el cerebro? Manual para principiantes. Editorial Alianza (https://www.calameo.com/books/00638959573c26c5b3fbc).

The Barbell Rehab Podcast
Hypermobility, Strength Training Women, and Hormones with Dr. Susie Spirlock | Ep 44

Aug 8, 2025 · 57:02

In this episode of the Barbell Rehab Podcast, we sit down with Dr. Susie Spirlock to discuss rehab and training. We chat about training women and unique considerations around hormones, menstrual cycles, and bone mineral density. We also discuss hypermobility, common misconceptions, and programming considerations. We wrap up by considering trends in the field around evidence-based practice. Susie can be found on Instagram at @dr.susie.squats. We hope you enjoy this episode!

Here are some follow-up resources for you to check out, including research articles and additional readings related to the topics discussed in this episode: Move Your Bones Free 4-Week Beginner Strength Training Program Free Using Intensity Based Training For The Phases of The Menstrual Cycle 20% Off Your First 2 Months In Supple Strength with Code BRM20 Beighton Score Hospital Del Mar Criteria Diagnostic Criteria for Hypermobile Ehlers-Danlos Syndrome (hEDS)

Defining the Clinical Complexity of hEDS and HSD: A Global Survey of Diagnostic Challenge, Comorbidities, and Unmet Needs: https://www.medrxiv.org/content/10.1101/2025.06.05.25329074v1.full.pdf
Current evidence shows no influence of women's menstrual cycle phase on acute strength performance or adaptations to resistance exercise training: https://pmc.ncbi.nlm.nih.gov/articles/PMC10076834/
Menstrual Cycle Phase Has No Influence on Performance-Determining Variables in Endurance-Trained Athletes: The FENDURA Project: https://pubmed.ncbi.nlm.nih.gov/38600646/
Sex differences in absolute and relative changes in muscle size following resistance training in healthy adults: a systematic review with Bayesian meta-analysis: https://pubmed.ncbi.nlm.nih.gov/40028215/

FREE Research Roundup Email Series | Get research reviews sent to your inbox, once a month, and stay up-to-date on the latest trends in rehab and fitness
The Barbell Rehab Method Certification Course Schedule | 2-days, 15 hours, and CEU approved
The Barbell Rehab Weightlifting Certification Course Schedule | 2-days, 15 hours, and CEU approved

Learning Bayesian Statistics
#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London

Aug 6, 2025 · 83:10 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• Bayesian deep learning is a growing field with many challenges.
• Current research focuses on applying Bayesian methods to neural networks.
• Diffusion methods are emerging as a new approach for uncertainty quantification.
• The integration of machine learning tools into Bayesian models is a key area of research.
• The complexity of Bayesian neural networks poses significant computational challenges.
• Future research will focus on improving methods for uncertainty quantification.
• Generalized Bayesian inference offers a more robust approach to uncertainty.
• Uncertainty quantification is crucial in fields like medicine and epidemiology.
• Detecting out-of-distribution examples is essential for model reliability.
• Exploration-exploitation trade-off is vital in reinforcement learning.
• Marginal likelihood can be misleading for model selection.
• The integration of Bayesian methods in LLMs presents unique challenges.

Chapters:
00:00 Introduction to Bayesian Deep Learning
03:12 Panelist Introductions and Backgrounds
10:37 Current Research and Challenges in Bayesian Deep Learning
18:04 Contrasting Approaches: Bayesian vs. Machine Learning
26:09 Tools and Techniques for Bayesian Deep Learning
31:18 Innovative Methods in Uncertainty Quantification
36:23 Generalized Bayesian Inference and Its Implications
41:38 Robust Bayesian Inference and Gaussian Processes
44:24 Software Development in Bayesian Statistics
46:51 Understanding Uncertainty in Language Models
50:03 Hallucinations in Language Models
53:48 Bayesian Neural Networks vs Traditional Neural Networks
58:00 Challenges with Likelihood Assumptions
01:01:22 Practical Applications of Uncertainty Quantification
01:04:33 Meta Decision-Making with Uncertainty
01:06:50 Exploring Bayesian Priors in Neural Networks
01:09:17 Model Complexity and Data Signal
01:12:10 Marginal Likelihood and Model Selection
01:15:03 Implementing Bayesian Methods in LLMs
01:19:21 Out-of-Distribution Detection in LLMs

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer,...

The Wright Show
Must We Discuss Sydney Sweeney? (Robert Wright & Paul Bloom)

Aug 5, 2025 · 60:00


Substack glory and livestream anxiety ... Reading eugenics into a dopey jeans ad ... Does Sydney Sweeney spell the end of “inclusive marketing”? ... Bob vs. Paul on IQ and “general intelligence” ... Paul reviews the new Billy Joel documentary ... The Epstein prison video snafu: a Bayesian take ... Ghislaine gets a free upgrade ...

Bloggingheads.tv
Must We Discuss Sydney Sweeney? (Robert Wright & Paul Bloom)

Aug 5, 2025 · 60:00


Substack glory and livestream anxiety ... Reading eugenics into a dopey jeans ad ... Does Sydney Sweeney spell the end of “inclusive marketing”? ... Bob vs. Paul on IQ and “general intelligence” ... Paul reviews the new Billy Joel documentary ... The Epstein prison video snafu: a Bayesian take ... Ghislaine gets a free upgrade ...

Leaders on a Mission
Why Biotech Needs a New Playbook

Aug 5, 2025 · 44:08

What if biotech's biggest scaling challenge isn't technical, but philosophical? In this episode, Massimo Portincaso, founder and CEO of Arsenale Bioyards, explains why industrial biotech must be reimagined from the ground up. He challenges legacy "scale-up" thinking, highlighting biology's context dependency and the economic dead ends of retrofitted pharma models. From modular, AI-informed bioreactors to a scale-out strategy and data-first infrastructure, Massimo shares how his team is rewriting the rules of economic viability, manufacturing innovation, and organizational design. Discover why scaling out, not up, is the future of biomanufacturing.

Hey Climate Tech enthusiasts! Searching for new podcasts on sustainability? Check out the Leaders on a Mission podcast, where I interview climate tech leaders who are shaking up the industry and bringing us the next big thing in sustainable solutions. Join me for a deep dive into the future of green innovation, exploring the highs, lows, and everything in between of pioneering new technologies. Get an exclusive insight into how these leaders started their journey, and how their cutting-edge products will make a real impact. Tune in to the latest episodes on:
YouTube: https://www.youtube.com/@leadersonamissionNet0
Spotify: https://open.spotify.com/show/7o41ubdkzChAzD9C53xH82
Apple Podcasts: https://podcasts.apple.com/us/podcast/leaders-on-a-mission/id1532211726

Timestamps:
00:46 - Biology resists code scaling
03:18 - From BCG to biotech founder
07:33 - Why biotech remains niche
10:24 - Redesigning from first principles
13:40 - Biology's context dependency
16:55 - Intelligent design via Bayesian models
18:46 - Building the bioproduction stack
21:00 - Scale-out vs scale-up
25:56 - Rethinking the CDMO model
29:20 - Reinventing the capital stack
33:30 - Future sectors & applications
37:10 - Killing the org chart
40:12 - Complexity as a strategic asset

Useful links:
Arsenale Bioyards website: https://arsenale.bio/
Arsenale Bioyards LinkedIn: https://www.linkedin.com/company/arsenale-bioyards/
Massimo Portincaso LinkedIn: https://www.linkedin.com/in/massimo-portincaso-36a8795/
Leaders on a Mission website: https://cs-partners.net/podcasts/
Simon Leich's LinkedIn: https://www.linkedin.com/in/executive-talent-headhunter-agtech-foodtech-agrifoodtech-agritech/

Grinding The Variance (A Davis Mattek Fantasy Football Pod)
Best Ball Drafts With An Actual Computer Genius

Aug 4, 2025 · 133:39

JOIN THE CHANNEL: https://www.youtube.com/channel/UChjRIs14reAo-on9z5iHJFA/join
Find Merch: https://mattek.store/
Sign up to draft with us on UNDERDOG and use code DAVIS: https://play.underdogfantasy.com/p-davis-mattek
Code DAVIS is live on Fast Draft to play in the fastest tournaments in fantasy football. Download the app here: https://apps.apple.com/us/app/fastdraft-fantasy/id6478789910
Join Drafters Fantasy and get a 100% Deposit Match Bonus up to $100 with Code DAVIS. $2.5M in Prizes, Best Ball Total Points Format, Potential Overlay… https://drafters.com/refer/davis
GET 10% OFF RUN THE SIMS W/ CODE "ENDGAME": www.runthesims.com
Try Out UNABATED'S Premium Sports Betting + DFS Pick 'Em Tools: https://unabated.com/?ref=davis
Sign up for premium fantasy football content and get exclusive Discord access: www.patreon.com/davismattek
Subscribe to the AutoMattek Absolutes Newsletter: https://automattekabsolutes.beehiiv.com/
Download THE DRAFT CADDY: https://endgamesyndicate.com/membership-levels/?pa=DavisMattek

Timestamps:
00:00 Best Ball Fantasy Football Introduction
2:00 Best Ball Mania draft begins
14:00 Home League Team Review
19:30 Keaton Mitchell
33:00 Best Ball Mania Draft #2 Begins
45:20 Kyle Pitts
1:03:00 Shaidy Advice Joins The Show To Draft A Best Ball Mania
1:07:30 The Sims Explain Themselves
1:15:00 How sims change when you put your own rankings in it
1:37:30 Best Ball Mania Draft Begins with a Bayesian process

Audio-Only Podcast Feed For All Davis Mattek Streams: https://podcasts.apple.com/us/podcast/grinding-the-variance-a-davis-mattek-fantasy-football-pod/id1756145256

astro[sound]bites
Episode 110: Bayesian Biosignatures

Aug 3, 2025 · 77:18

Apply to join us as a co-host! https://astrosoundbites.com/recruiting-2025

This week, Shashank, Cole and Cormac discuss a concept that has come up on many an ASB episode past: Bayesian statistics. They start by trying to wrap their heads around what a probability really means. Cole introduces us to a recent and attention-grabbing paper on a potential biosignature in the atmosphere of an exoplanet, with lots of statistics along the way. Then, Cormac brings up some counterpoints to this detection. They debate what it would take, statistically and scientifically, for a detection of biosignatures to cross the line from intriguing to compelling.

New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI: https://iopscience.iop.org/article/10.3847/2041-8213/adc1c8
Are there Spectral Features in the MIRI/LRS Transmission Spectrum of K2-18b?: https://arxiv.org/abs/2504.15916
Insufficient evidence for DMS and DMDS in the atmosphere of K2-18 b. From a joint analysis of JWST NIRISS, NIRSpec, and MIRI observations: https://arxiv.org/abs/2505.13407

Space Sound: https://www.youtube.com/watch?v=hGdk49LRB14

CHEST Journal Podcasts
August 2025 CHEST Journal Editor Highlights

Aug 1, 2025 · 17:43

CHEST August 2025, Volume 168, Issue 2

CHEST® journal's Editor in Chief Peter Mazzone, MD, MPH, FCCP, highlights key research published in the journal CHEST August 2025 issue, including an exploration of the impacts of abortion bans on pulmonary and critical care physicians, a Bayesian meta-analysis of machine listening for obstructive sleep apnea diagnosis, and more.

Moderator: Peter Mazzone, MD, MPH, FCCP

Software Lifecycle Stories
Building the Future: From COBOL to AI with Spart Parthasarathy

Aug 1, 2025 · 56:57

My guest today is S Parthasarathy, better known as Spart, who is the founder at CuedIn Technologies. In this episode, Spart shares his extensive journey in the software engineering field. Spart's story begins with his initial projects in COBOL programming at Tata Burroughs and the World Bank, working on critical systems in retail, logistics, and financial sectors. He recounts his impactful stint at SWIFT, contributing to the foundation of what has become modern-day financial messaging systems. Spart reflects on his decision to pivot from electrical engineering to computer science, driven by his interest in the engineering of large systems. He details his tenure at Ramco Systems, implementing document-based transactions, model-based code generation, and achieving several tech milestones, including internet integration and 32-bit upgrades. After Ramco, Spart's continued passion for software engineering led him to co-found a SaaS-based ERP solution company, targeting SMEs. Despite early challenges and market readiness issues, he gained crucial insights into cloud-native architectures. Spart's career path took another turn towards consulting and exploring AI, specifically focusing on probabilistic graph learning and the challenges of natural language processing in software engineering. He emphasizes the importance of non-functional requirements, application architecture, and the potential of tools like Generative AI (GenAI) to enhance software development processes. Spart concludes by reflecting on the ongoing evolution of software engineering and his current projects aimed at making software engineering more accessible and efficient with modern tools.

Spart has 40+ years of experience in various aspects of the software services area, covering consultancy, business systems management, product development and engineering management. He has worked with large North American organizations, handling complex projects to implement transaction processing business solutions and data communication networks.

Key interests:
• Gen AI based solutioning for key business activities
• Gen AI enabled SW engineering
• Contextual social network driven approach for building business systems
• Predictive analytics over operational databases using Bayesian causal networks
• Implementing innovative platform based techno-commercial models for software solutions/services delivery
• Cloud computing and SOA based multi-tenant solution architecture

https://www.linkedin.com/in/spartp/

Learning Bayesian Statistics
BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness

Jul 30, 2025 · 25:28

Today's clip is from episode 137 of the podcast, with Robert Ness. Alex and Robert discuss the intersection of causal inference and deep learning, emphasizing the importance of understanding causal concepts in statistical modeling. The discussion also covers the evolution of probabilistic machine learning, the role of inductive biases, and the potential of large language models in causal analysis, highlighting their ability to translate natural language into formal causal queries. Get the full conversation here.

Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Learning Bayesian Statistics
#137 Causal AI & Generative Models, with Robert Ness

Jul 23, 2025 · 98:19 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• Causal assumptions are crucial for statistical modeling.
• Deep learning can be integrated with causal models.
• Statistical rigor is essential in evaluating LLMs.
• Causal representation learning is a growing field.
• Inductive biases in AI should match key mechanisms.
• Causal AI can improve decision-making processes.
• The future of AI lies in understanding causal relationships.

Chapters:
00:00 Introduction to Causal AI and Its Importance
16:34 The Journey to Writing Causal AI
28:05 Integrating Graphical Causality with Deep Learning
40:10 The Evolution of Probabilistic Machine Learning
44:34 Practical Applications of Causal AI with LLMs
49:48 Exploring Multimodal Models and Causality
56:15 Tools and Frameworks for Causal AI
01:03:19 Statistical Rigor in Evaluating LLMs
01:12:22 Causal Thinking in Real-World Deployments
01:19:52 Trade-offs in Generative Causal Models
01:25:14 Future of Causal Generative Modeling

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant...

The Bayesian Conspiracy
Bayes Blast 43 – Die-ing to Intuit Bayes' Theorem

Jul 22, 2025 · 13:19


Olivia is a member of the Guild of the Rose and a total badass. Enjoy the intuitive and fun lesson in Bayesian reasoning she shared with me at VibeCamp.
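
For readers who want a taste of the kind of dice-based intuition pump the episode describes, here is a minimal Python sketch. The specific setup (a bag containing a d4, a d6 and a d20, and observing a roll of 3) is an assumed, illustrative example, not taken from Olivia's actual lesson.

```python
# Hypothetical dice-bag illustration of Bayes' theorem (not from the episode):
# a bag holds a d4, a d6 and a d20; one die is drawn uniformly at random,
# rolled once, and we observe a 3. Which die was it probably?

dice = {"d4": 4, "d6": 6, "d20": 20}
prior = {name: 1 / 3 for name in dice}          # uniform prior over the three dice
observed = 3

# Likelihood of rolling the observed value with each die
likelihood = {name: (1 / sides if observed <= sides else 0.0)
              for name, sides in dice.items()}

# Bayes' theorem: posterior is proportional to prior * likelihood
unnormalized = {name: prior[name] * likelihood[name] for name in dice}
evidence = sum(unnormalized.values())
posterior = {name: p / evidence for name, p in unnormalized.items()}

for name, p in posterior.items():
    print(f"P({name} | rolled {observed}) = {p:.3f}")
# The d4 becomes the most probable die (about 0.54), because a 3 is
# relatively more likely on a die with fewer faces.
```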

Seller Sessions
The Blueprint Live Build - N8N: Post Prime Day Performance Automation

Jul 18, 2025 · 53:07


Swole Radio
74. Brad Schoenfeld: Optimal Hypertrophy Training

Jul 18, 2025 · 48:29

I had Brad Schoenfeld back on the Swole Radio Podcast to discuss optimal training for hypertrophy. We cover some of his recent research and perspectives:
0:23 Without fail: muscular adaptations in single set resistance training performed to failure or with repetitions in reserve
4:18 Is there an incremental benefit of training closer to failure?
7:39 Give it a rest: a systematic review with Bayesian meta-analysis on the effect of inter-set rest interval duration on muscle hypertrophy
16:38 How to program rep ranges for hypertrophy
24:34 Do cheaters prosper? Effect of externally supplied momentum during resistance training on measures of upper body muscle hypertrophy
28:10 Optimizing resistance training technique to maximize muscle hypertrophy: a narrative review
32:08 How do you know if an exercise is good for you?
37:24 How should people find their optimal training volume?
43:20 How to run deloads
Brad's IG: @bradschoenfeldphd

My e-books: https://askdrswole.com/
MASS Research Review: https://www.massmember.com/a/2147986175/LwzhWs82 (This is an affiliate link - I'll receive a small commission when you use it)
Dream Physique Nutrition Course: https://dr-swole.thinkific.com/courses/dream-physique-nutrition

Find me on social media:
YOUTUBE: https://www.youtube.com/@DrSwole
INSTAGRAM: http://instagram.com/dr_swole
FACEBOOK GROUP: https://www.facebook.com/groups/drswole
TIKTOK: https://www.tiktok.com/@dr_swole/

About me: I'm a medical doctor and pro natural physique athlete based in Vancouver, Canada. I share evidence-based perspectives on natural bodybuilding, and seek to help people achieve health, wealth, and happiness.

Disclaimers: Consider seeing a physician to assess your readiness before beginning any fitness program. Information presented here is to be applied intelligently in the individual context. I do not assume liability for any loss incurred by using information presented here.

Learning Bayesian Statistics
BITESIZE | How to Make Your Models Faster, with Haavard Rue & Janet van Niekerk

Jul 16, 2025 · 17:53 · Transcription available

Today's clip is from episode 136 of the podcast, with Haavard Rue & Janet van Niekerk. Alex, Haavard and Janet explore the world of Bayesian inference with INLA, a fast and deterministic method that revolutionizes how we handle large datasets and complex models. Discover the power of INLA, and why it can make your models go much faster! Get the full conversation here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
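
As a rough flavour of what "fast and deterministic" means here: INLA proper targets latent Gaussian models and is implemented in the R-INLA package, so the snippet below is not INLA itself. It only sketches the basic ingredient, a Laplace approximation, in a deliberately simple one-parameter Beta-Binomial setting with made-up data, and compares it against the exact conjugate posterior.

```python
# One-parameter Laplace approximation (illustrative only; real INLA handles
# latent Gaussian models and is available via the R-INLA package).
import numpy as np
from scipy import optimize, stats

# Made-up data: 7 successes out of 10 trials, Beta(2, 2) prior on theta.
k, n, a, b = 7, 10, 2.0, 2.0

def neg_log_post(theta):
    # Unnormalized negative log posterior of the Beta-Binomial model
    return -(stats.binom.logpmf(k, n, theta) + stats.beta.logpdf(theta, a, b))

# 1) Find the posterior mode by deterministic optimization (no sampling).
res = optimize.minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6), method="bounded")
mode = res.x

# 2) Curvature at the mode gives the variance of the Gaussian approximation.
h = 1e-4
second_deriv = (neg_log_post(mode + h) - 2 * neg_log_post(mode) + neg_log_post(mode - h)) / h**2
laplace_sd = np.sqrt(1.0 / second_deriv)

# Exact conjugate posterior for comparison: Beta(a + k, b + n - k).
exact = stats.beta(a + k, b + n - k)
print(f"Laplace approximation: mean ~ {mode:.3f}, sd ~ {laplace_sd:.3f}")
print(f"Exact posterior:       mean = {exact.mean():.3f}, sd = {exact.std():.3f}")
```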

Secrets To Scaling Online
How To Use A.I. Ad Creative in 2025 (And What You NEED To Avoid!)

Jul 16, 2025 · 43:19

Unlock the secret to high-impact ad creative and stop fighting the algorithms! In this episode, Jordan West sits down with performance marketing expert Andrew Faris to revolutionize the way you think about ad testing and creative strategy. Forget rigid A/B experiments - here's what really works on platforms like Meta and TikTok in 2025:

Explore & Expand vs. Outdated A/B Testing
• Discover why Meta's Bayesian engine fights your split-tests, and how switching to an "Explore & Expand" mindset turbocharges learning.
• Learn to surface winning angles quickly, then scale them with minimal extra effort.

Harnessing AI for Wild, CGI-Level Concepts
• See real examples of "unhinged" AI-generated spots - think dinosaurs parachuting into lava - and learn how to keep your core message crystal clear.
• Find out which AI tools are best for scripting, storyboarding, and turning crazy ideas into thumb-stopping ads.

Prioritizing Human-to-Human Authenticity
• Why polished avatars and lip-sync bots may backfire, and what your audience really craves instead.
• Strategies for injecting genuine emotion, social proof, and community hooks into every campaign.

Creative Volume & Variation: How Much Is Too Much?
• The surprising truth about test volume: when "more creative" stops adding value, and how to find the sweet spot for your budget.
• A framework for balancing length, hook styles, and visual formats so that every variation still drives toward a single, powerful message.

Actionable Tips to Maximize Engagement
• Messaging hierarchies: the one element you must nail before tweaking format or length.
• Quick wins for rotating hooks, swapping B-roll, and layering in new calls-to-action, without blowing your production timeline.

Whether you're a seasoned performance marketer or simply curious about the future of advertising, you'll walk away with:
• A clear roadmap for ditching endless A/B loops
• Creative prompts to push your AI experiments beyond the ordinary
• Practical checklists for testing and scaling your best-performing ads

Guest info:
LinkedIn: https://www.linkedin.com/in/andrew-faris-980b84108/
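
Meta's internal delivery engine is not public, so the sketch below is only a generic illustration of the Bayesian idea behind moving past fixed A/B splits, with invented numbers: model each creative's conversion rate with a Beta-Binomial posterior and compute the probability that one beats the other, the kind of quantity an "explore and expand" workflow can act on continuously instead of waiting for a significance threshold.

```python
# Generic Bayesian comparison of two ad creatives (invented numbers;
# not Meta's actual delivery algorithm).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed results: conversions out of impressions for two creatives.
conv_a, n_a = 48, 1_000
conv_b, n_b = 63, 1_000

# Beta(1, 1) priors updated with the data give Beta posteriors; sample from them.
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

prob_b_better = np.mean(post_b > post_a)
expected_lift = np.mean((post_b - post_a) / post_a)

print(f"P(creative B converts better than A) ~ {prob_b_better:.2f}")
print(f"Expected relative lift of B over A ~ {expected_lift:.1%}")
# A Bayesian allocator can shift budget toward B as soon as this probability
# is high enough, rather than waiting for a fixed-horizon significance test.
```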

Podcasts 4 Brainport, featured by Radio 4 Brainport
Bayesian Inference and the Brain with Hadi Vafaii

Jul 14, 2025 · 20:04


Can probability theory help explain how the mind works? In this episode of Deep Dives with Iman, host Iman Mossavat talks with Dr. Hadi Vafaii, a postdoctoral scholar at UC Berkeley's Redwood Center for Theoretical Neuroscience, working in Jacob Yates's lab. The conversation focuses on “perception as inference”—a century-old idea that continues to influence modern neuroscience, from predictive coding to the free energy principle. While rooted in probability theory and Bayesian statistics, Hadi builds intuition step by step, using simple, relatable examples. He explains how the brain interprets the world by guessing the hidden causes behind what we see and hear, combining prior knowledge with new evidence. The episode also explores how personal beliefs and life experiences shape interpretation and why uncertainty is a vital part of life rather than a weakness.
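
To make "combining prior knowledge with new evidence" concrete, here is a small numerical sketch of the kind of Gaussian cue combination often used to illustrate perception as inference. The numbers are invented for illustration and are not taken from the episode.

```python
# Toy "perception as inference" example: Gaussian prior x Gaussian likelihood.
# All numbers are invented for illustration.
import numpy as np

prior_mean, prior_sd = 0.0, 10.0   # prior belief about a sound source's location (degrees)
obs, obs_sd = 8.0, 4.0             # noisy sensory measurement and its reliability

# Conjugate Gaussian update: precisions (1/variance) add, and the
# posterior mean is a precision-weighted average of prior and observation.
prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_sd**2
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"Posterior: mean = {post_mean:.2f}, sd = {post_sd:.2f}")
# The estimate is pulled toward the measurement (which is more reliable than
# the prior here), and the posterior is narrower than either source alone.
```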

SAFT Podcast
Is Cosmic Fine-Tuning Just Christian Apologetics HYPE? (ft. Luke Barnes) | EP 99

Jul 13, 2025 · 61:59

Is the universe genuinely fine-tuned for life, or is it all hype? In this segment finale, join renowned astrophysicist Dr. Luke Barnes of Western Sydney University as he tackles key objections to the fine-tuning argument, including whether fine-tuning truly exists, whether there is a true probability to speak of, and critiques from physicists like Sabine Hossenfelder. Explore how Bayesian probability, observational evidence, and life's unique existence shape our understanding of the universe's design. We also get to dip our feet into quantum mechanics (psst, the Copenhagen Interpretation isn't ultimate!)

Links and citations:
The Cosmic Revolutionary's Handbook (Or: How to Beat the Big Bang) | https://www.amazon.com/Cosmic-Revolutionarys-Handbook-Beat-Bang/dp/1108486703
A Fortunate Universe: Life in a Finely Tuned Cosmos | https://www.amazon.com/Fortunate-Universe-Finely-Tuned-Cosmos/dp/1107156610
Morality Remains the MOST Persuasive Argument for GOD! (ft. Dave Baggett) | EP 85 | https://www.youtube.com/watch?v=gqMj6lCwwzU
Sabine Hossenfelder & Luke Barnes • The fine tuning of the Universe: Was the cosmos made for us? | https://www.youtube.com/watch?v=5OoYzcxzvvM
SAFT Ebook: https://ebook.saftapologetics.com/
Comics that teach apologetics: Apolotoons | https://www.instagram.com/apolotoons/
Natural Theology Playlist: • Natural Theology
Check out William Lane Craig's book 'Reasonable Faith' for a thorough defense of all the major arguments for God's existence.

Record a question and stand a chance to be featured on SAFT Podcast: https://www.speakpipe.com/saftpodcast
Watch the entire episode at https://youtu.be/9XHDYfh4Zgo

Equipping the believer to defend their faith anytime, anywhere. Our vision is to do so beyond all language barriers in India and beyond! SAFT Apologetics stands for Seeking Answers Finding Truth and was formed off inspiration from the late Nabeel Qureshi's autobiography that captured his life journey where he followed truth where it led him. We too aim to be a beacon emulating his life's commitment towards following truth wherever it leads us.

Connect with us:
WhatsApp Channel: https://whatsapp.com/channel/0029Va6l4ADEwEk07iZXzV1v
Website: https://www.saftapologetics.com
Newsletter: https://www.sendfox.com/saftapologetics
Instagram: https://www.instagram.com/saftapologetics/
Facebook: https://www.facebook.com/saftapologetics/
X: https://www.twitter.com/saftapologetics
SAFT Blog: https://blog.saftapologetics.com/
YouVersion: https://www.bible.com/organizations/dcfc6f87-6f06-4205-82c1-bdc1d2415398

Is there a question that you would like to share with us? Send us your questions, suggestions and queries at: info@saftapologetics.com

Learning Bayesian Statistics
#136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk

Jul 9, 2025 · 77:37 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• INLA is a fast, deterministic method for Bayesian inference.
• INLA is particularly useful for large datasets and complex models.
• The R-INLA package is widely used for implementing the INLA methodology.
• INLA has been applied in various fields, including epidemiology and air quality control.
• Computational challenges in INLA are minimal compared to MCMC methods.
• The Smart Gradient method enhances the efficiency of INLA.
• INLA can handle various likelihoods, not just Gaussian.
• SPDEs (stochastic partial differential equations) allow for more efficient computations in spatial modeling.
• The new INLA methodology scales better for large datasets, especially in medical imaging.
• Priors in Bayesian models can significantly impact the results and should be chosen carefully.
• Penalized complexity priors (PC priors) help prevent overfitting in models.
• Understanding the underlying mathematics of priors is crucial for effective modeling.
• The integration of GPUs in computational methods is a key future direction for INLA.
• The development of new sparse solvers is essential for handling larger models efficiently.

Chapters:
06:06 Understanding INLA: A Comparison with MCMC
08:46 Applications of INLA in Real-World Scenarios
11:58 Latent Gaussian Models and Their Importance
15:12 Impactful Applications of INLA in Health and Environment
18:09 Computational Challenges and Solutions in INLA
21:06 Stochastic Partial Differential Equations in Spatial Modeling
23:55 Future Directions and Innovations in INLA
39:51 Exploring Stochastic Differential Equations
43:02 Advancements in INLA Methodology
50:40 Getting Started with INLA
56:25 Understanding Priors in Bayesian Models

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad

The EMS Lighthouse Project
Ep 99 - Adenosine or Diltiazem for SVT?

Jul 8, 2025 · 35:15

We just got a new paper that compares initial treatment with adenosine versus diltiazem for the treatment of adults with SVT in the ED. Wouldn't it be great if it turned out that diltiazem was just as effective as adenosine, if not more so, without the crappy feeling? Yeah, that'd be great, but what do we do with statistically insignificant results? Is there, perhaps, a way to save this "insignificant" paper? Fear not, Bayes is here! Yes, that's right, Dr. Jarvis is grabbing this new paper and diving straight back into that deep dark rabbit hole of Bayesian analysis.

Citation:
1. Lee CA, Morrissey B, Chao K, Healy J, Ku K, Khan M, Kinteh E, Shedd A, Garrett J, Chou EH: Adenosine Versus Fixed-Dose Intravenous Bolus Diltiazem on Reversing Supraventricular Tachycardia in The Emergency Department: A Multi-Center Cohort Study. The Journal of Emergency Medicine. 2025;August 1;75:55–64.

FAST25 | May 19-21, 2025 | Lexington, KY
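
For readers wondering what "saving an insignificant paper" with Bayes can look like in principle, here is a generic sketch with made-up numbers; it is not Dr. Jarvis's actual analysis and does not use the study's data. The common move is to put a prior on the effect of interest, update it with the study's estimate, and report the posterior probability that the effect points one way, rather than a yes/no significance verdict.

```python
# Generic Bayesian reanalysis sketch with invented numbers (not the SVT study's data).
import numpy as np
from scipy import stats

# Suppose a trial reports a risk difference of -4% favouring drug B,
# with a 95% CI of (-10%, +2%)  ->  approximate standard error of 3.06%.
rd_hat, se = -0.04, 0.0306

# Weakly informative, skeptical prior centred on "no difference".
prior_mean, prior_sd = 0.0, 0.05

# Normal-normal conjugate update.
post_prec = 1 / prior_sd**2 + 1 / se**2
post_mean = (prior_mean / prior_sd**2 + rd_hat / se**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

posterior = stats.norm(post_mean, post_sd)
print(f"Posterior mean risk difference: {post_mean:+.3f} (sd {post_sd:.3f})")
print(f"P(effect favours drug B, RD <= 0) = {posterior.cdf(0):.2f}")
# A "non-significant" CI crossing zero can still translate into a fairly high
# posterior probability that the effect goes in one direction.
```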

Slate Star Codex Podcast
Your Review: Alpha School

Jul 4, 2025 · 108:28


[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked]

"Just as we don't accept students using AI to write their essays, we will not accept districts using AI to supplant the critical role of teachers." - Arthur Steinberg, American Federation of Teachers-PA, reacting to Alpha's cyber-charter bid, January 2025

In January 2025, the charter school application of "Unbound Academy", a subsidiary of "2 Hour Learning, Inc", lit up the education press: two hours of "AI-powered" academics, 2.6x learning velocity, and zero teachers. Sympathetic reporters repeated the slogans; union leaders reached for pitchforks; Reddit muttered "another rich-kid scam." More sophisticated critics dismissed the pitch as "selective data from expensive private schools". But there is nowhere on the internet that provides a detailed, non-partisan description of what the "2 hour learning" program actually is, let alone an objective third-party analysis to back up its claims.

2-Hour Learning's flagship school is the "Alpha School" in Austin, Texas. The Alpha homepage makes three claims:
• Love School
• Learn 2X in two hours per day
• Learn Life Skills

Only the second claim seems to be controversial, which may be exactly why that is the claim the Alpha PR team focuses on. That PR campaign makes three more sub-claims on what the two-hour, 2x learning really means:
• "Learn 2.6X faster." (on average)
• "Only two hours of academics per day."
• "Powered by AI (not teachers)."

If all of this makes your inner Bayesian flinch, you're in good company. After twenty-odd years of watching shiny education fixes wobble and crash (KIPP, AltSchool, Summit Learning, One-laptop-per-child, No Child Left Behind, MOOCs, Khan-for-Everything), you should be skeptical. Either Alpha is (a) another program for the affluent propped up by selection effects, or (b) a clever way to turn children into joyless speed-reading calculators. Those were, more or less, the two critical camps that emerged when Alpha's parent company was approved to launch the tuition-free Arizona charter school this past January.

Unfortunately, the public evidence base on whether this is "real" is thin in both directions. Alpha's own material is glossy and elliptical; mainstream coverage either repeats Alpha's talking points, or attacks the premise that kids should even be allowed to learn faster than their peers. Until Raj Chetty installs himself in the hallway with a clipboard counting MAP percentiles, it is hard to get real information on what exactly Alpha is doing, whether it is actually working beyond selection effects, and whether there is any way it could scale in a way that all the other education initiatives failed to do.

I first heard about Alpha in May 2024, and in the absence of randomized-controlled clarity, I did what any moderately obsessive parent with three elementary-aged kids and an itch for data would do: I moved the family across the country to Austin for a year and ran the experiment myself (unfortunately, despite trying my best we never managed to have identical twins, so I stopped short of running a proper control group. My wife was less disappointed than I was).

Since last autumn I've collected the sort of on-the-ground detail that doesn't surface in press releases and isn't available anywhere online: long chats with founders, curriculum leads, "guides" (not teachers), Brazilian Zoom coaches, sceptical parents, ecstatic parents, and the kids who live inside the Alpha dashboard, including my own. I hope this seven-part review can help share what the program actually is, and that it is more open-minded than the critics but is something that would never get past an Alpha public relations gatekeeper: https://www.astralcodexten.com/p/your-review-alpha-school

Learning Bayesian Statistics
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja

Jul 4, 2025 · 21:14 · Transcription available

Get 10% off Hugo's "Building LLM Applications for Data Scientists and Software Engineers" online course!

Today's clip is from episode 135 of the podcast, with Teemu Säilynoja. Alex and Teemu discuss the importance of simulation-based calibration (SBC). They explore the practical implementation of SBC in probabilistic programming languages, the challenges faced in developing SBC methods, and the significance of both prior and posterior SBC in ensuring model reliability. The discussion emphasizes the need for careful model implementation and inference algorithms to achieve accurate calibration. Get the full conversation here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

JournalFeed Podcast
Bayesian's EpiDex | Facial Injury Guidelines

Jun 28, 2025 · 11:33

The JournalFeed podcast for the week of June 23-27, 2025. These are summaries from just 2 of the 5 articles we cover every week! For access to more, please visit JournalFeed.org for details about becoming a member.

Wednesday Spoon Feed: Bayesian analysis of the use of EpiDex in bronchiolitis demonstrates a reduced probability of hospitalization for bronchiolitis, although highly skeptical clinicians may require additional evidence.

Friday Spoon Feed: Over half of transferred patients with facial fractures don't need treatment or admission. This study proposes smart, evidence-based guidelines – Facial Injury Guidelines, or FIG – to help healthcare systems save money, time, and beds (and maybe a few ambulance rides), pending future validation.

Learning Bayesian Statistics
#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja

Jun 25, 2025 · 72:13 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
• Simulation-based calibration (SBC) checks model implementation.
• SBC involves drawing realizations from the prior and generating prior predictive data.
• Visual predictive checking is crucial for assessing model predictions.
• Prior predictive checks should be done before looking at data.
• Posterior SBC focuses on the area of parameter space most relevant to the data.
• Challenges in SBC include inference time.
• Visualizations complement numerical metrics in Bayesian modeling.
• Amortized Bayesian inference benefits from SBC for quick posterior checks.
• The calibration of Bayesian models is more intuitive than that of frequentist models.
• Choosing the right visualization depends on data characteristics.
• Using multiple visualization methods can reveal different insights.
• Visualizations should be viewed as models of the data.
• Goodness-of-fit tests can enhance visualization accuracy.
• Uncertainty visualization is crucial but often overlooked.

Chapters:
09:53 Understanding Simulation-Based Calibration (SBC)
15:03 Practical Applications of SBC in Bayesian Modeling
22:19 Challenges in Developing Posterior SBC
29:41 The Role of SBC in Amortized Bayesian Inference
33:47 The Importance of Visual Predictive Checking
36:50 Predictive Checking and Model Fitting
38:08 The Importance of Visual Checks
40:54 Choosing Visualization Types
49:06 Visualizations as Models
55:02 Uncertainty Visualization in Bayesian Modeling
01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand...
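
To make the SBC recipe mentioned in the takeaways concrete, here is a minimal, self-contained sketch for a toy conjugate normal-mean model. It is an assumed example, not Teemu's implementation: because the posterior is available in closed form, the rank of the true parameter among posterior draws should be uniform whenever the model and inference are implemented correctly.

```python
# Minimal simulation-based calibration (SBC) sketch for a toy conjugate model.
# Not the implementation discussed in the episode; just the core recipe.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_obs, n_draws = 1_000, 20, 100
sigma = 1.0                      # known observation noise
mu0, tau0 = 0.0, 1.0             # prior: mu ~ Normal(mu0, tau0)

ranks = np.empty(n_sims, dtype=int)
for s in range(n_sims):
    mu_true = rng.normal(mu0, tau0)                 # 1) draw a parameter from the prior
    y = rng.normal(mu_true, sigma, size=n_obs)      # 2) simulate prior predictive data
    # 3) "fit" the model: exact conjugate posterior for the normal mean
    post_prec = 1 / tau0**2 + n_obs / sigma**2
    post_mean = (mu0 / tau0**2 + y.sum() / sigma**2) / post_prec
    post_draws = rng.normal(post_mean, np.sqrt(1 / post_prec), size=n_draws)
    # 4) rank of the true value among the posterior draws
    ranks[s] = np.sum(post_draws < mu_true)

# If everything is correctly calibrated, these ranks are uniform on 0..n_draws.
hist, _ = np.histogram(ranks, bins=10, range=(0, n_draws + 1))
print("Rank histogram (should be roughly flat):", hist)
```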

Effetto notte le notizie in 60 minuti
A summit in Geneva to relaunch diplomacy

Jun 20, 2025

Today a summit was held in Geneva between the foreign ministers of France, Germany and Great Britain and their Iranian counterpart. But is there room for diplomacy? We ask Alessandro Marrone, head of the "Defence, Security and Space" programme at the Istituto Affari Internazionali. The Bayesian has been brought back to the surface: Admiral Cristiano Bettini, former Deputy Chief of the Defence Staff and author, among other books, of "Nave scuola Amerigo Vespucci. Orgoglio italiano" (published by Scripta Maneant), explains how this kind of operation works. Mare Mostrum: what Legambiente's new report says about environmental crime at sea and along the coasts, with Giorgio Zampetti, Director General of Legambiente. And, as every Friday, the Reportage: "Summer: the Mediterranean garden of the future", by Roberta Pellegatta.

FAZ Frühdenker
Search for a diplomatic solution in the Israel-Iran war • Tesla launches first robotaxi service • "Bayesian" to be salvaged

Jun 20, 2025 · 9:18

The news this morning: the foreign ministers of Germany, France and Great Britain meet their Iranian counterpart. Tesla launches its first robotaxi service. And the sunken superyacht "Bayesian" is to be salvaged.

Learning Bayesian Statistics
Live Show Announcement | Come Meet Me in London!

Jun 19, 2025 · 3:04 · Transcription available


Learning Bayesian Statistics
BITESIZE | Exploring Dynamic Regression Models, with David Kohns

Jun 18, 2025 · 14:34 · Transcription available

Today's clip is from episode 134 of the podcast, with David Kohns. Alex and David discuss the future of probabilistic programming, focusing on advancements in time series modeling, model selection, and the integration of AI in prior elicitation. The discussion highlights the importance of setting appropriate priors, the challenges of computational workflows, and the potential of normalizing flows to enhance Bayesian inference. Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

The Risk Takers Podcast
Bet Like a Bayesian & SP's DraftKings Beef | Ep 108

Jun 18, 2025 · 127:43 · Transcription available

This week we learn how to make more money gambling from the lessons of "The Reverend" Thomas Bayes. His theorem is the backbone of every successful sports bettor (even if they don't know it). This week we walk through examples of priors, posteriors and general Bayesian betting hygiene. It's more electric than it sounds!

Andrew Mack's Book: Amazon

0:00 Bayesian Thinking Intro
10:05 Bayes in Sports Betting
51:23 News
1:07:30 SP v. DK Pick6
1:18:45 Q&A

Welcome to The Risk Takers Podcast, hosted by professional sports bettor John Shilling (GoldenPants13) and SportsProjections. This podcast is the best betting education available - PERIOD. And it's free - please share and subscribe if you like it.

My website: https://www.goldenpants.com/
Follow SportsProjections on Twitter: https://x.com/Sports__Proj
Want to work with my betting group?: john@goldenpants.com
Want 100s of +EV picks a day?: https://www.goldenpants.com/gp-picks
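
As a flavour of the prior-and-posterior "betting hygiene" the episode walks through (the record, the prior, and the breakeven figure below are invented illustrations, not numbers from the show), a bettor can treat their true hit rate as unknown, start from a skeptical prior, and update it with observed results; the posterior, not the raw win percentage, is what should drive bet sizing.

```python
# Toy Bayesian update of a bettor's true win rate (invented numbers, not from the episode).
from scipy import stats

# Skeptical prior: most bettors hit somewhere around 50% against the spread.
a0, b0 = 50, 50

# Hypothetical observed record: 60 wins in 100 bets.
wins, bets = 60, 100
posterior = stats.beta(a0 + wins, b0 + bets - wins)

breakeven = 0.524  # approximate breakeven win rate at standard -110 pricing
p_profitable = 1 - posterior.cdf(breakeven)

print(f"Posterior mean win rate: {posterior.mean():.3f}")
print(f"P(true win rate exceeds the -110 breakeven of {breakeven:.1%}): {p_profitable:.2f}")
# The raw 60% record looks great, but the skeptical prior pulls the estimate
# back toward 55%; that shrinkage is the "hygiene" in Bayesian betting.
```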

The EMS Lighthouse Project
Ep 98 - Does the Sequence of RSI Medications Matter

Jun 15, 2025 · 33:42

Train the Best. Change EMS. Howdy, y'all, I'm Dr Jeff Jarvis, and I'm the host of the EMS Lighthouse Project podcast, but I'm also the medical director for the new EMS system we're building in Fort Worth, Texas. We are looking for an experienced critical care paramedic who is an effective and inspiring educator to lead the initial and continuing training and credentialing of a new team of Critical Care Paramedics who will be responding to our highest acuity calls. The salary is negotiable but starts between $65,000 and $80,000 a year for this office position. Whether y'all wear cowboy boots or Birkenstocks, Fort Worth can be a great place to live and work. So if you're ready to create a world-class EMS system and change the EMS world with us, give us a call at 817-953-3083. Take care, y'all.

The next time you go to intubate a patient, should you give the sedation before the paralytic or the paralytic before the sedative? Does it matter? And what the hell does Bayes have to do with any of this? Dr Jarvis reviews a paper that uses Bayesian statistics to calculate the association between drug sequence and first-attempt failure. Then he returns to Nerd Valley to talk about how to interpret 95% confidence intervals derived from frequentist statistics compared to 95% credible intervals that come from Bayesian statistics.

Citations:
1. Catoire P, Driver B, Prekker ME, Freund Y: Effect of administration sequence of induction agents on first-attempt failure during emergency intubation: A Bayesian analysis of a prospective cohort. Academic Emergency Medicine. 2025;February;32(2):123–9.
2. Casey JD, Janz DR, Russell DW, Vonderhaar DJ, Joffe AM, Dischert KM, Brown RM, Zouk AN, Gulati S, Heideman BE, et al.: Bag-Mask Ventilation during Tracheal Intubation of Critically Ill Adults. N Engl J Med. 2019;February 28;380(9):811–21.
3. Greer A, Hewitt M, Khazaneh PT, Ergan B, Burry L, Semler MW, Rochwerg B, Sharif S: Ketamine Versus Etomidate for Rapid Sequence Intubation: A Systematic Review and Meta-Analysis of Randomized Trials. Critical Care Medicine. 2025;February;53(2):e374–83.
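
Since the episode contrasts frequentist 95% confidence intervals with Bayesian 95% credible intervals, here is a small illustrative computation with made-up counts (unrelated to the intubation study): the two intervals can be numerically similar, but only the credible interval supports the direct reading "there is a 95% probability the parameter lies in this range."

```python
# Confidence interval vs. credible interval for a proportion (made-up data).
import numpy as np
from scipy import stats

successes, n = 35, 50
p_hat = successes / n

# Frequentist 95% Wald confidence interval.
se = np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval from a Beta(1, 1) prior (posterior Beta(36, 16)).
posterior = stats.beta(1 + successes, 1 + n - successes)
credible = posterior.ppf([0.025, 0.975])

print(f"95% Wald CI:           ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"95% credible interval: ({credible[0]:.3f}, {credible[1]:.3f})")
# Similar numbers, different interpretations: the credible interval is a
# probability statement about the parameter, while the confidence interval
# describes the long-run behaviour of the interval-building procedure.
```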

Learning Bayesian Statistics
#134 Bayesian Econometrics, State Space Models & Dynamic Regression, with David Kohns

Jun 10, 2025 · 100:55 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• Setting appropriate priors is crucial to avoid overfitting in models.
• R-squared can be used effectively in Bayesian frameworks for model evaluation.
• Dynamic regression can incorporate time-varying coefficients to capture changing relationships.
• Predictively consistent priors enhance model interpretability and performance.
• Identifiability is a challenge in time series models.
• State space models provide structure compared to Gaussian processes.
• Priors influence the model's ability to explain variance.
• Starting with simple models can reveal interesting dynamics.
• Understanding the relationship between states and variance is key.
• State-space models allow for dynamic analysis of time series data.
• AI can enhance the process of prior elicitation in statistical models.

Chapters:
10:09 Understanding State Space Models
14:53 Predictively Consistent Priors
20:02 Dynamic Regression and AR Models
25:08 Inflation Forecasting
50:49 Understanding Time Series Data and Economic Analysis
57:04 Exploring Dynamic Regression Models
01:05:52 The Role of Priors
01:15:36 Future Trends in Probabilistic Programming
01:20:05 Innovations in Bayesian Model Selection

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki...
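
To make the "time-varying coefficients" takeaway concrete, the sketch below simulates the simplest dynamic regression of that kind, a regression coefficient that follows a random walk, and tracks it with a scalar Kalman filter. It is a toy illustration with invented data, not David Kohns's actual models or any Stan/PyMC machinery.

```python
# Toy dynamic regression: y_t = beta_t * x_t + noise, with beta_t a random walk.
# Invented data and a minimal scalar Kalman filter, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
T, obs_sd, state_sd = 200, 0.5, 0.05

x = rng.normal(size=T)
beta_true = np.cumsum(rng.normal(0, state_sd, size=T))   # slowly drifting coefficient
y = beta_true * x + rng.normal(0, obs_sd, size=T)

# Scalar Kalman filter tracking beta_t over time.
beta_hat, P = 0.0, 1.0          # initial mean and variance of the state
estimates = np.empty(T)
for t in range(T):
    P += state_sd**2                                   # predict: random-walk state
    K = P * x[t] / (x[t]**2 * P + obs_sd**2)           # Kalman gain
    beta_hat += K * (y[t] - x[t] * beta_hat)           # update with the observation
    P *= (1 - K * x[t])
    estimates[t] = beta_hat

rmse = np.sqrt(np.mean((estimates - beta_true) ** 2))
print(f"RMSE of filtered coefficient vs. truth: {rmse:.3f}")
```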

Learning Bayesian Statistics
BITESIZE | Why Your Models Might Be Wrong & How to Fix it, with Sean Pinkney & Adrian Seyboldt

Jun 4, 2025 · 17:04 · Transcription available

Today's clip is from episode 133 of the podcast, with Sean Pinkney & Adrian Seyboldt. The conversation delves into the concept of Zero-Sum Normal and its application in statistical modeling, particularly in hierarchical models. Alex, Sean and Adrian discuss the implications of using zero-sum constraints, the challenges of incorporating new data points, and the importance of distinguishing between sample and population effects. They also explore practical solutions for making predictions based on population parameters and the potential for developing tools to facilitate these processes. Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript: this is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
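
For readers curious what a zero-sum constraint looks like in code, here is a minimal sketch of a hypothetical group-effects model. It assumes the ZeroSumNormal distribution available in recent PyMC releases and uses simulated data; it is not one of the models discussed in the episode. The constraint forces the group effects to sum to zero, so the intercept cleanly carries the population mean and the effects capture deviations from it.

```python
# Minimal sketch of zero-sum-constrained group effects (assumes a recent PyMC
# with ZeroSumNormal; simulated data, not the models from the episode).
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_groups, n_per_group = 6, 30
group_idx = np.repeat(np.arange(n_groups), n_per_group)
raw = rng.normal(0, 1, n_groups)
true_effects = raw - raw.mean()                      # centered "sample" effects
y = 2.0 + true_effects[group_idx] + rng.normal(0, 0.5, size=group_idx.size)

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0, 5)          # carries the population mean
    effects = pm.ZeroSumNormal("effects", sigma=1.0, shape=n_groups)  # sum to zero
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", mu=intercept + effects[group_idx], sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=7)

# The sampled effects sum to ~0 by construction, removing the usual additive
# non-identifiability between the intercept and the group effects.
```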

The EMS Lighthouse Project
E97 - Bayes and Calcium Before Diltiazem in Atrial Fibrillation

May 30, 2025 · 39:27

We covered a paper in episode 81 that suggested treating atrial fibrillation with rapid ventricular response in the field could lower mortality. But it also drops BP a bit. Could pretreating these patients with calcium lower the risk of hypotension? Dr Jarvis puts on his nerd hat and uses Bayesian analysis to assess a new randomized, placebo-controlled study that looked at just this thing. Why is he going off on this Bayes thing? Because he's been reading a couple of books on it and wanted to take it for a spin.

Citations:
1. Az A, Sogut O, Dogan Y, Akdemir T, Ergenc H, Umit TB, Celik AF, Armagan BN, Bilici E, Cakmak S: Reducing diltiazem-related hypotension in atrial fibrillation: Role of pretreatment intravenous calcium. The American Journal of Emergency Medicine. 2025;February;88:23–8.
2. Fornage LB, O'Neil C, Dowker SR, Wanta ER, Lewis RS, Brown LH: Prehospital Intervention Improves Outcomes for Patients Presenting in Atrial Fibrillation with Rapid Ventricular Response. Prehospital Emergency Care. doi: 10.1080/10903127.2023.2283885 (Epub ahead of print).
3. Kolkebeck T, Abbrescia K, Pfaff J, Glynn T, Ward JA: Calcium chloride before i.v. diltiazem in the management of atrial fibrillation. The Journal of Emergency Medicine. 2004;May 1;26(4):395–400.
4. Chivers T: Everything Is Predictable: How Bayes' Remarkable Theorem Explains the World. Weidenfeld & Nicolson, 2024.
5. McGrayne SB: The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant From Two Centuries of Controversy. New Haven, CT, Yale University Press, 2011.

FAST25 | May 19-21, 2025 | Lexington, KY

The Yacht Report
#009 The Sinking & Salvage of S/Y Bayesian

May 30, 2025 · 56:16

The 56m sailing yacht Bayesian sank in August 2024 with 7 fatalities, including the tech millionaire Mike Lynch. After the sinking, the CEO of Italian Sea Group blamed the crew, saying that his yacht was 'unsinkable if operated properly'. In May 2025, the salvage of the yacht, which lies in 50m of water off the coast of Sicily, began. We talk about the entire thing, including the interim report from the UK Marine Accident Investigation Branch (MAIB).

Learning Bayesian Statistics
#133 Making Models More Efficient & Flexible, with Sean Pinkney & Adrian Seyboldt

May 28, 2025 · 72:12 · Transcription available

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
• Zero-sum constraints allow for better sampling and estimation in hierarchical models.
• Understanding the difference between population and sample means is crucial.
• A library for zero-sum normal effects would be beneficial.
• Practical solutions can yield decent predictions even with limitations.
• Cholesky parameterization can be adapted for positive correlation matrices.
• Understanding the geometry of sampling spaces is crucial.
• The relationship between eigenvalues and sampling is complex.
• Collaboration and sharing knowledge enhance research outcomes.
• Innovative approaches can simplify complex statistical problems.

Chapters:
03:35 Sean Pinkney's Journey to Bayesian Modeling
11:21 The Zero-Sum Normal Project Explained
18:52 Technical Insights on Zero-Sum Constraints
32:04 Handling New Elements in Bayesian Models
36:19 Understanding Population Parameters and Predictions
49:11 Exploring Flexible Cholesky Parameterization
01:07:23 Closing Thoughts and Future Directions

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary...

JournalFeed Podcast
The Ten Test | Best Migraine Treatment

JournalFeed Podcast

Play Episode Listen Later May 24, 2025 10:10


The JournalFeed podcast for the week of May 19-23, 2025. These are summaries from just 2 of the 5 articles we cover every week! For access to more, please visit JournalFeed.org for details about becoming a member. Monday Spoon Feed: The Ten Test is a quick, reliable, no-equipment sensory exam that performed as well as or better than traditional methods in assessing hand and finger injuries – with none of the cost. Friday Spoon Feed: In this Bayesian network meta-analysis, researchers compared pharmacologic interventions for migraine treatment. There was no clear superior choice for single-agent pain control, but chlorpromazine IV/IM was among the most effective for adequate pain relief at two hours, and IV/IM ketorolac was possibly among the worst.

iCritical Care: All Audio
SCCM Pod-540: Advancing ARDS Care Through Precision Medicine

iCritical Care: All Audio

Play Episode Listen Later May 22, 2025 30:23


In this forward-looking episode of the SCCM Podcast, Daniel F. McAuley, MD, explores how the clinical and research communities are rethinking acute respiratory distress syndrome (ARDS), shifting from a one-size-fits-all model to a focus on identifying and targeting modifiable traits. Building on his Thought Leader Session at the 2024 Critical Care Congress, Dr. McAuley unpacks the major thematic shift toward precision medicine in critical care. Instead of treating ARDS as a single, homogeneous condition, researchers are increasingly identifying biologically distinct subgroups—especially hyper- and hypoinflammatory phenotypes—that may respond differently to therapies. These insights are fueling a new generation of trials that aim to prospectively apply this knowledge to treatment strategies. Central to this evolution is the Precision medicine Adaptive platform Network Trial in Hypoxaemic acutE Respiratory failure (PANTHER), of which Dr. McAuley is a team member. PANTHER is a Bayesian adaptive platform randomized clinical trial studying novel interventions to improve outcomes for patients with acute hypoxemic respiratory failure. Designed to be adaptive and biomarker informed, PANTHER will test therapies such as simvastatin and baricitinib, based on real-time phenotyping of patients with ARDS. Throughout the episode, Dr. McAuley reflects on how advances in machine learning and biomarker identification are making precision treatment more feasible. He discusses the importance of maintaining evidence-based supportive care, such as lung-protective ventilation and prone positioning, while integrating new targeted therapies. Discover the latest investigations into potential therapeutic agents—including mesenchymal stromal cells, statins, and extracorporeal carbon dioxide removal—as Dr. McAuley aims to translate early findings into tangible improvements in patient outcomes. This episode offers critical insights into the changing landscape of ARDS research and patient care, as Dr. McAuley articulates a hopeful vision for the future—one in which targeted, individualized treatments can improve outcomes for patients with one of critical care's most challenging conditions. Dr. McAuley is a consultant and professor in intensive care medicine in the regional intensive care unit at the Royal Victoria Hospital and Queen's University of Belfast. He is program director for the Medical Research Council/National Institute for Health and Care Research (MRC/NIHR) Efficacy and Mechanism Evaluation Program and scientific director for programs in NIHR. Access Dr. McAuley's Congress Thought Leader Session, ARDS: From Treating a Syndrome to Identifying Modifiable Traits, here.

Learning Bayesian Statistics
BITESIZE | How AI is Redefining Human Interactions, with Tom Griffiths

Learning Bayesian Statistics

Play Episode Listen Later May 21, 2025 22:06 Transcription Available


Today's clip is from episode 132 of the podcast, with Tom Griffiths.
Tom and Alex Andorra discuss the fundamental differences between human intelligence and artificial intelligence, emphasizing the constraints that shape human cognition, such as limited data, computational resources, and communication bandwidth. They explore how AI systems currently learn and the potential for aligning AI with human cognitive processes. The discussion also delves into the implications of AI in enhancing human decision-making and the importance of understanding human biases to create more effective AI systems.
Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

BOAT Briefing
252: Bayesian: new report gives fresh insight into what led to the tragic sinking

BOAT Briefing

Play Episode Listen Later May 21, 2025 38:18


In this week's episode, we dig into the recently published Marine Accident Investigation Branch (MAIB) interim report into the sinking of the 56-metre superyacht Bayesian in August last year, resulting in the loss of seven lives. For the first time, the narrative of what happened that night can be disclosed, and we also review the MAIB's comments on the weather conditions that fateful evening and Bayesian's stability vulnerabilities. Georgia, meanwhile, is newly returned from the Galapagos and tells us why these very special islands should be on every superyacht itinerary.  BOAT Pro: https://boatint.com/zg Subscribe: https://boatint.com/zh Contact us: podcast@boatinternationalmedia.com

RNZ: Checkpoint
Kiwi-captained superyacht Bayesian debris brought to surface

RNZ: Checkpoint

Play Episode Listen Later May 21, 2025 7:29


United Kingdom correspondent Alice Wilkins spoke to Lisa Owen about how the first pieces of a superyacht that capsized off the coast of Italy with Kiwis on board have been brought to the surface, and how a flight to the Spanish party island of Ibiza has been described as "hell" because of some rowdy passengers. She also spoke about how a British endurance athlete said he's broken the record for running across the width of Australia.

The Marketing Architects
Nerd Alert: The Bayesian Marketing Attribution Model

The Marketing Architects

Play Episode Listen Later May 15, 2025 12:03


Welcome to Nerd Alert, a series of special episodes bridging the gap between marketing academia and practitioners. We're breaking down highly involved, complex research into plain language and takeaways any marketer can use.
In this episode, Elena and Rob explore how Bayesian modeling offers a more nuanced approach to marketing attribution than traditional methods. They discuss why many marketers still rely on oversimplified attribution models despite their limitations.
Topics covered:
[01:00] "Bayesian Modeling of Marketing Attribution"
[03:00] Problems with traditional attribution models
[04:50] Why simple models persist despite their flaws
[06:00] Key components of Bayesian attribution
[08:00] Rapid decay of ad effects and negative interaction effects
[09:45] How this approach can offer deeper marketing insights
To learn more, visit marketingarchitects.com/podcast or subscribe to our newsletter at marketingarchitects.com/newsletter.
Resources: Sinha, R., Arbour, D., & Puli, A. (2022). Bayesian Modeling of Marketing Attribution. Available at arXiv:2205.15965
Get more research-backed marketing strategies by subscribing to The Marketing Architects on Apple Podcasts, Spotify, or wherever you listen to podcasts.
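For readers who want a feel for what a Bayesian attribution model can look like, here is a minimal sketch of a logistic conversion model with per-channel effects and a saturating "decay" term for repeated exposures. The channels, simulated data, priors, and functional form are illustrative assumptions, not the specification from Sinha, Arbour & Puli (2022).

```python
# Minimal sketch (assumed model form): Bayesian attribution as a logistic regression
# on touchpoint counts, with diminishing returns per additional touch.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
channels = ["search", "social", "tv"]
n_users = 500
X = rng.poisson(1.0, size=(n_users, len(channels)))           # touchpoint counts per user/channel
p_true = 0.08 + 0.04 * np.minimum(X[:, 0], 3)                  # simulated conversions, mostly "search"-driven
y = rng.binomial(1, p_true)

with pm.Model(coords={"channel": channels}) as attribution:
    intercept = pm.Normal("intercept", 0.0, 1.5)
    lift = pm.HalfNormal("lift", 1.0, dims="channel")          # non-negative channel effect
    decay = pm.Beta("decay", 2.0, 2.0)                         # controls diminishing returns per extra touch
    effect = (lift * (1 - decay ** X)).sum(axis=1)             # saturating response to repeated exposures
    p = pm.Deterministic("p", pm.math.sigmoid(intercept + effect))
    pm.Bernoulli("converted", p=p, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

The posterior over each channel's lift then gives a probabilistic answer to "which touchpoints drive conversion", rather than the single hard-coded split of a last-click or linear attribution rule.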

The Ocean Sailor Podcast
The Ocean Sailor Season 2, Ep2 - Bayesian MAIB Interim Report - Skipper and crew not to blame?

The Ocean Sailor Podcast

Play Episode Listen Later May 15, 2025 23:56


In August 2024, the sailing world was shaken by the tragic sinking of Bayesian, a British-flagged superyacht anchored off Sicily. Today, the UK's Marine Accident Investigation Branch has released its interim report on the sinking—and it's raising important points, especially about the actions of the crew. This is Ocean Sailor — and today we're diving into what the MAIB has revealed so far, and why it matters. #OceanSailing #BluewaterSailing #Seamanship #OffshoreSailing #SailingLife #SailboatLife #PassagePlanning #MaritimeSafety #SailingSafety #MarineAccident #MAIB #YachtSinking #SafetyAtSea #BayesianSinking #bayesianyacht #MAIBReport #SailingNews #SailingCommunity #SailingDiscussion #YachtDesign #SailboatConstruction #OceanGoingYachts #StructuralIntegrity #ModernYachts #OceanSailor #SailingChannel #SailingYouTube #SailingDocumentary #OceanSailorChannel

Learning Bayesian Statistics
#132 Bayesian Cognition and the Future of Human-AI Interaction, with Tom Griffiths

Learning Bayesian Statistics

Play Episode Listen Later May 13, 2025 90:15 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Check out Hugo's latest episode with Fei-Fei Li, on How Human-Centered AI Actually Gets Built
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Computational cognitive science seeks to understand intelligence mathematically.
Bayesian statistics is crucial for understanding human cognition.
Inductive biases help explain how humans learn from limited data.
Eliciting prior distributions can reveal implicit beliefs.
The wisdom of individuals can provide richer insights than averaging group responses.
Generative AI can mimic human cognitive processes.
Human intelligence is shaped by constraints of data, computation, and communication.
AI systems operate under different constraints than human cognition.
Human intelligence differs fundamentally from machine intelligence.
Generative AI can complement and enhance human learning.
AI systems currently lack intrinsic human compatibility.
Language training in AI helps align its understanding with human perspectives.
Reinforcement learning from human feedback can lead to misalignment of AI goals.
Representational alignment can improve AI's understanding of human concepts.
AI can help humans make better decisions by providing relevant information.
Research should focus on solving problems rather than just methods.
Chapters:
00:00 Understanding Computational Cognitive Science
13:52 Bayesian Models and Human Cognition
29:50 Eliciting Implicit Prior Distributions
38:07 The Relationship Between Human and AI Intelligence
45:15 Aligning Human and Machine Preferences
50:26 Innovations in AI and Human Interaction
55:35 Resource Rationality in Decision Making
01:00:07 Language Learning in AI Models
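The takeaway about eliciting prior distributions can be made concrete with a small sketch: ask someone for a couple of quantiles of a quantity they have beliefs about, then find a distribution that matches them. The Beta family, the stated quantiles, and the fitting approach below are illustrative assumptions, not the elicitation procedure discussed in the episode.

```python
# Minimal sketch (assumed setup): fit a Beta prior to two elicited quantiles
# for a proportion, e.g. "my median guess is 0.20 and I'm 90% sure it's below 0.40".
from scipy import optimize, stats

elicited = {0.5: 0.20, 0.9: 0.40}  # quantile level -> stated value

def loss(params):
    a, b = params
    if a <= 0 or b <= 0:
        return 1e6  # keep the search in the valid parameter region
    return sum((stats.beta.ppf(q, a, b) - v) ** 2 for q, v in elicited.items())

res = optimize.minimize(loss, x0=[2.0, 2.0], method="Nelder-Mead")
a, b = res.x
print(f"Beta({a:.2f}, {b:.2f}) approximately matches the stated quantiles")
print("implied prior mean:", stats.beta.mean(a, b))
```

The fitted parameters make the respondent's implicit beliefs explicit, which is exactly what makes elicited priors useful for checking and communication.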

Machine Learning Guide
MLG 035 Large Language Models 2

Machine Learning Guide

Play Episode Listen Later May 8, 2025 45:25


At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction. Links Notes and resources at ocdevel.com/mlg/mlg35 Build the future of multi-agent software with AGNTCY Try a walking desk stay healthy & sharp while you learn & code In-Context Learning (ICL) Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters. Types: Zero-shot: Direct query, no examples provided. One-shot: Single example provided. Few-shot: Multiple examples, balancing quantity with context window limitations. Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations. Emergent Properties: ICL is an "inference-time training" approach, leveraging the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples. Retrieval Augmented Generation (RAG) and Grounding Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data. Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge. Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information. RAG Workflow: Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models). Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant). Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing. Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation. Generation: The LLM generates responses informed by the augmented context. Advanced RAG: Includes agentic approaches—self-correction, aggregation, or multi-agent contribution to source ingestion, and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge). LLM Agents Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage. Key Components: Reasoning Engine (LLM Core): Interprets goals, states, and makes decisions. Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment. Memory: Short-term via context window; long-term via persistent storage like RAG-integrated databases or special memory systems. Tools and APIs: Agents select and use external functions—file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models. 
Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability. Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates. Multimodal Large Language Models (MLLMs) Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video). Architecture: Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images). Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content. Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format. Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models. Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation. Advanced LLM Architectures and Training Directions Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders). Patch-Level Training: Predicting larger “patches” of tokens to reduce sequence lengths and computation. Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model). Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture. Evaluation Benchmarks (as of 2025) Key Benchmarks Used for LLM Evaluation: GPQA (Diamond): Graduate-level STEM reasoning. SWE Bench Verified: Real-world software engineering, verifying agentic code abilities. MMMU: Multimodal, college-level cross-disciplinary reasoning. HumanEval: Python coding correctness. HLE (Human's Last Exam): Extremely challenging, multimodal knowledge assessment. LiveCodeBench: Coding with contamination-free, up-to-date problems. MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts. MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning. TAUBench/PFCL: Tool utilization in agentic tasks. TruthfulnessQA: Measures tendency toward factual accuracy/robustness against misinformation. Prompt Engineering: High-Impact Techniques Foundational Approaches: Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM. Chain of Thought: Instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality. Clarity and Structure: Use clear, detailed, and structured instructions—task definition, context, constraints, output format, use of delimiters or markdown structuring. Affirmative Directives: Phrase instructions positively (“write a concise summary” instead of “don't write a long summary”). Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality. System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., “You are an expert Python programmer”). Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve. 
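As a small illustration of the prompt engineering techniques listed above, the sketch below assembles a few-shot, chain-of-thought prompt with a system role. The role tags, wording, and worked examples are illustrative assumptions rather than an official template from any model provider.

```python
# Minimal sketch (assumed format): building a few-shot + chain-of-thought prompt string.
SYSTEM = "You are an expert data analyst. Think step by step, then give a final answer."

FEW_SHOT = [
    ("Is 91 prime?", "91 = 7 x 13, so it has divisors other than 1 and itself. Final answer: no."),
    ("Is 97 prime?", "97 is not divisible by 2, 3, 5, or 7 (and 11^2 > 97). Final answer: yes."),
]

def build_prompt(question: str) -> str:
    # Few-shot examples act as semantic priors; the explicit reasoning in each
    # answer nudges the model toward chain-of-thought behavior.
    parts = [f"[system] {SYSTEM}"]
    for q, a in FEW_SHOT:
        parts.append(f"[user] {q}\n[assistant] {a}")
    parts.append(f"[user] {question}\n[assistant]")
    return "\n\n".join(parts)

print(build_prompt("Is 119 prime?"))  # the resulting string would be sent to an LLM
```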
Trends and Research Outlook Inference-time compute is increasingly important for pushing the boundaries of LLM task performance. Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation. Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress. Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
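The RAG workflow summarized in these notes (embed, store, retrieve by cosine similarity, augment the prompt) can be sketched with plain NumPy. The toy character-count embedding and the example documents below are stand-ins for a real sentence-transformer model, vector database, and LLM call, none of which are part of the episode's own code.

```python
# Minimal sketch (assumed components): retrieve-then-augment with a toy embedding.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder encoder: normalized letter counts instead of a real embedding model.
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

documents = [
    "Retrieval augmented generation grounds LLM answers in external documents.",
    "Zero-sum constraints improve sampling in hierarchical models.",
    "Agents plan, call tools, and keep persistent memory across steps.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)           # cosine similarity (vectors are unit length)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How does RAG reduce hallucinations?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in practice this augmented prompt would be sent to the LLM
```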

Learning Bayesian Statistics
BITESIZE | Hacking Bayesian Models for Better Performance, with Luke Bornn

Learning Bayesian Statistics

Play Episode Listen Later May 7, 2025 13:35 Transcription Available


Today's clip is from episode 131 of the podcast, with Luke Bornn.
Luke and Alex discuss the application of generative models in sports analytics. They emphasize the importance of Bayesian modeling to account for uncertainty and contextual variations in player data. The discussion also covers the challenges of balancing model complexity with computational efficiency, the innovative ways to hack Bayesian models for improved performance, and the significance of understanding model fitting and discretization in statistical modeling.
Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Elucidations: A University of Chicago Podcast
Episode 151: Witold Więcek discusses statistics and academic research

Elucidations: A University of Chicago Podcast

Play Episode Listen Later May 3, 2025 46:14


Note: this episode was recorded in August of 2022. In the latest Elucidations episode, Matt talks to Witold Więcek about the difficulties that come up for researchers who would like to draw upon statistics. Lots of academic fields need to draw heavily on statistics, whether it's economics, psychology, sociology, linguistics, computer science, or data science. This means that a lot of people coming from different backgrounds often need to learn basic statistics in order to investigate whatever question they're investigating. But as we've discussed on this podcast, statistical reasoning is easy for beginners to mess up, and it's also easy for bad-faith parties to tamper with in undetectable ways. They can straight up fabricate data, they can cherry-pick it, they can keep changing the hypothesis they are testing until they find one that is supported by a trend in the data they have. So what should we do? We can't give up on statistics; it is simply too useful a tool. Witold Więcek argues that researchers have to be mindful of "p-hacking". Statistical significance, the gold standard of academic publishing, can easily be guaranteed by unscrupulous research or motivated reasoning: statistically speaking, even noise can look like signal if we keep asking more and more questions of our data. Modern statistical workflows require us to either adjust the results for the number of hypotheses tested or to follow principles of Bayesian inference. As a broader strategy, Więcek recommends that every research project making significant use of statistical arguments bring in an external consultant, who can productively stress-test those arguments in an adversarial way, given that they aren't part of the main team. It was a great conversation! I hope you enjoy it. Matt Teichman
Hosted on Acast. See acast.com/privacy for more information.
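Więcek's point that noise can look like signal under repeated testing is easy to demonstrate with a short simulation. The sample sizes, number of tests, and the choice of a Bonferroni correction below are my own illustrative assumptions, not anything from the episode.

```python
# Minimal sketch: many t-tests on pure noise produce "significant" results
# unless the threshold is corrected for multiple comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 50, 30
p_values = []
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)   # both groups drawn from the SAME distribution,
    b = rng.normal(size=n_per_group)   # so every "discovery" is a false positive
    p_values.append(stats.ttest_ind(a, b).pvalue)
p_values = np.array(p_values)

print("naive 'discoveries' at p < 0.05:", int((p_values < 0.05).sum()))
print("after Bonferroni correction:   ", int((p_values < 0.05 / n_tests).sum()))
```

On average the naive threshold flags a couple of spurious effects out of fifty tests, which is exactly the failure mode an external statistical consultant is meant to catch.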

Learning Bayesian Statistics
#131 Decision-Making Under High Uncertainty, with Luke Bornn

Learning Bayesian Statistics

Play Episode Listen Later Apr 30, 2025 91:46 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Takeaways:
Player tracking data revolutionized sports analytics.
Decision-making in sports involves managing uncertainty and budget constraints.
Luke emphasizes the importance of portfolio optimization in team management.
Clubs with high budgets can afford inefficiencies in player acquisition.
Statistical methods provide a probabilistic approach to player value.
Removing human bias is crucial in sports decision-making.
Understanding player performance distributions aids in contract decisions.
The goal is to maximize performance value per dollar spent.
Model validation in sports requires focusing on edge cases.
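One way to picture the "performance value per dollar" takeaway is a toy budget-constrained selection over uncertain player values. The players, salaries, posterior draws, and the LP relaxation below are invented for illustration; this is not Luke's model or any club's actual process.

```python
# Minimal sketch (assumed setup): pick players to maximize expected value under a salary cap,
# with value uncertainty represented by simulated posterior draws.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
players = ["P1", "P2", "P3", "P4", "P5"]
cost = np.array([12.0, 9.0, 7.5, 5.0, 3.0])                      # salaries in $M (made up)
draws = rng.normal(loc=[6.0, 5.0, 4.5, 2.5, 1.5],
                   scale=[2.0, 1.0, 3.0, 0.5, 0.5],
                   size=(4000, len(players)))                     # simulated posterior value draws
expected = draws.mean(axis=0)

budget = 20.0
# LP relaxation of the selection problem: maximize expected value subject to the budget,
# with selection variables allowed to vary continuously between 0 and 1.
res = linprog(c=-expected, A_ub=[cost], b_ub=[budget], bounds=[(0, 1)] * len(players))
for name, x, ev, c in zip(players, res.x, expected, cost):
    print(f"{name}: selected={x:.2f}, E[value]/$ = {ev / c:.2f}")
```

Working with the full posterior draws, rather than point estimates, is what lets this kind of portfolio view account for the uncertainty in each player's future performance.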

Learning Bayesian Statistics
BITESIZE | Real-World Applications of Models in Public Health, with Adam Kucharski

Learning Bayesian Statistics

Play Episode Listen Later Apr 23, 2025 16:26 Transcription Available


Today's clip is from episode 130 of the podcast, with epidemiological modeler Adam Kucharski.
This conversation explores the critical role of patient modeling during the COVID-19 pandemic, highlighting how these models informed public health decisions and the relationship between modeling and policy. The discussion emphasizes the need for improved communication and understanding of data among the public and policymakers.
Get the full discussion at https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.