The news this morning: the foreign ministers of Germany, France, and Great Britain meet their Iranian counterpart. Tesla launches its first robotaxi service. And the sunken superyacht "Bayesian" is to be salvaged.
ICYMI, I'll be in London next week for a live episode of the Learning Bayesian Statistics podcast.
Today's clip is from episode 134 of the podcast, with David Kohns. Alex and David discuss the future of probabilistic programming, focusing on advancements in time series modeling, model selection, and the integration of AI in prior elicitation. The discussion highlights the importance of setting appropriate priors, the challenges of computational workflows, and the potential of normalizing flows to enhance Bayesian inference. Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
This week we learn how to make more money gambling from the lessons of "The Reverend" Thomas Bayes. His theorem is the backbone of every successful sports bettor (even if they don't know it). This week we walk through examples of priors, posteriors, and general Bayesian betting hygiene. It's more electric than it sounds!

Andrew Mack's Book: Amazon

0:00 Bayesian Thinking Intro
10:05 Bayes in Sports Betting
51:23 News
1:07:30 SP v. DK Pick6
1:18:45 Q&A

Welcome to The Risk Takers Podcast, hosted by professional sports bettor John Shilling (GoldenPants13) and SportsProjections. This podcast is the best betting education available - PERIOD. And it's free - please share and subscribe if you like it.

My website: https://www.goldenpants.com/
Follow SportsProjections on Twitter: https://x.com/Sports__Proj
Want to work with my betting group?: john@goldenpants.com
Want 100s of +EV picks a day?: https://www.goldenpants.com/gp-picks
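The kind of prior-to-posterior update the episode walks through can be sketched with Bayes' theorem. Every number below is invented for illustration; these are not real betting figures:

```python
# Toy Bayesian update for a sports bet (all numbers are made up).
# Prior: we believe the home team wins 50% of the time.
# New evidence: the star player is injured. Assume he has been
# injured in 20% of past wins but 40% of past losses.
p_win = 0.50
p_inj_given_win = 0.20
p_inj_given_loss = 0.40

# Total probability of observing the injury, then Bayes' rule.
p_inj = p_inj_given_win * p_win + p_inj_given_loss * (1 - p_win)
posterior_win = p_inj_given_win * p_win / p_inj
print(f"posterior P(win) = {posterior_win:.3f}")

# Compare against the probability implied by decimal odds of 2.10.
implied = 1 / 2.10
print(f"implied P(win)  = {implied:.3f}")
print(f"edge            = {posterior_win - implied:+.3f}")
```

A bettor would compare the posterior against the bookmaker's implied probability and only bet when the edge is positive (with these invented numbers, it isn't).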
Train the Best. Change EMS.

Howdy, y'all, I'm Dr Jeff Jarvis. I'm the host of the EMS Lighthouse Project podcast, and I'm also the medical director for the new EMS system we're building in Fort Worth, Texas. We are looking for an experienced critical care paramedic who is an effective and inspiring educator to lead the initial and continuing training and credentialing of a new team of Critical Care Paramedics who will be responding to our highest-acuity calls. The salary is negotiable but starts between $65,000 and $80,000 a year for this office position. Whether y'all wear cowboy boots or Birkenstocks, Fort Worth can be a great place to live and work. So if you're ready to create a world-class EMS system and change the EMS world with us, give us a call at 817-953-3083. Take care, y'all.

The next time you go to intubate a patient, should you give the sedative before the paralytic or the paralytic before the sedative? Does it matter? And what the hell does Bayes have to do with any of this? Dr Jarvis reviews a paper that uses Bayesian statistics to estimate the association between drug sequence and first-attempt failure. Then he returns to Nerd Valley to talk about how to interpret 95% confidence intervals derived from frequentist statistics compared to the 95% credible intervals that come from Bayesian statistics.

Citations:
1. Catoire P, Driver B, Prekker ME, Freund Y: Effect of administration sequence of induction agents on first-attempt failure during emergency intubation: A Bayesian analysis of a prospective cohort. Academic Emergency Medicine. 2025;February;32(2):123–9.
2. Casey JD, Janz DR, Russell DW, Vonderhaar DJ, Joffe AM, Dischert KM, Brown RM, Zouk AN, Gulati S, Heideman BE, et al.: Bag-Mask Ventilation during Tracheal Intubation of Critically Ill Adults. N Engl J Med. 2019;February 28;380(9):811–21.
3. Greer A, Hewitt M, Khazaneh PT, Ergan B, Burry L, Semler MW, Rochwerg B, Sharif S: Ketamine Versus Etomidate for Rapid Sequence Intubation: A Systematic Review and Meta-Analysis of Randomized Trials. Critical Care Medicine. 2025;February;53(2):e374–83.
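The contrast Dr Jarvis draws can be illustrated with a conjugate Beta posterior: a 95% credible interval is a direct probability statement about the parameter, given the data and the prior. The counts below are invented for illustration and are not taken from the cited paper:

```python
import random

random.seed(0)

# Hypothetical data: 70 first-attempt successes in 100 intubations.
# With a flat Beta(1, 1) prior, the posterior for the success
# probability is Beta(71, 31) by conjugacy.
draws = sorted(random.betavariate(71, 31) for _ in range(100_000))

# 95% credible interval: the central 95% of posterior draws.
lo, hi = draws[2_500], draws[97_500]
print(f"95% credible interval for success rate: ({lo:.3f}, {hi:.3f})")
```

Unlike a frequentist confidence interval (a statement about the long-run behavior of the procedure), this interval can be read directly as "there is a 95% posterior probability the true success rate lies in this range."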
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Setting appropriate priors is crucial to avoid overfitting in models.
R-squared can be used effectively in Bayesian frameworks for model evaluation.
Dynamic regression can incorporate time-varying coefficients to capture changing relationships.
Predictively consistent priors enhance model interpretability and performance.
Identifiability is a challenge in time series models.
State space models provide structure compared to Gaussian processes.
Priors influence the model's ability to explain variance.
Starting with simple models can reveal interesting dynamics.
Understanding the relationship between states and variance is key.
State-space models allow for dynamic analysis of time series data.
AI can enhance the process of prior elicitation in statistical models.

Chapters:
10:09 Understanding State Space Models
14:53 Predictively Consistent Priors
20:02 Dynamic Regression and AR Models
25:08 Inflation Forecasting
50:49 Understanding Time Series Data and Economic Analysis
57:04 Exploring Dynamic Regression Models
01:05:52 The Role of Priors
01:15:36 Future Trends in Probabilistic Programming
01:20:05 Innovations in Bayesian Model Selection

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki...
Today's clip is from episode 133 of the podcast, with Sean Pinkney & Adrian Seyboldt. The conversation delves into the concept of Zero-Sum Normal and its application in statistical modeling, particularly in hierarchical models. Alex, Sean, and Adrian discuss the implications of using zero-sum constraints, the challenges of incorporating new data points, and the importance of distinguishing between sample and population effects. They also explore practical solutions for making predictions based on population parameters and the potential for developing tools to facilitate these processes. Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
We covered a paper in episode 81 that suggested treating atrial fibrillation with rapid ventricular response in the field could lower mortality. But it also drops BP a bit. Could pretreating these patients with calcium lower the risk of hypotension? Dr Jarvis puts on his nerd hat and uses Bayesian analysis to assess a new randomized, placebo-controlled study that looked at just this thing. Why is he going off on this Bayes thing? Because he's been reading a couple of books on it and wanted to take it for a spin.

Citations:
1. Az A, Sogut O, Dogan Y, Akdemir T, Ergenc H, Umit TB, Celik AF, Armagan BN, Bilici E, Cakmak S: Reducing diltiazem-related hypotension in atrial fibrillation: Role of pretreatment intravenous calcium. The American Journal of Emergency Medicine. 2025;February;88:23–8.
2. Fornage LB, O'Neil C, Dowker SR, Wanta ER, Lewis RS, Brown LH: Prehospital Intervention Improves Outcomes for Patients Presenting in Atrial Fibrillation with Rapid Ventricular Response. Prehospital Emergency Care. doi: 10.1080/10903127.2023.2283885 (Epub ahead of print).
3. Kolkebeck T, Abbrescia K, Pfaff J, Glynn T, Ward JA: Calcium chloride before i.v. diltiazem in the management of atrial fibrillation. The Journal of Emergency Medicine. 2004;May 1;26(4):395–400.
4. Chivers T: Everything Is Predictable: How Bayes' Remarkable Theorem Explains the World. Weidenfeld & Nicolson, 2024.
5. McGrayne SB: The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant from Two Centuries of Controversy. New Haven, CT, Yale University Press, 2011.

FAST25 | May 19-21, 2025 | Lexington, KY
The 56 m sailing yacht Bayesian sank in August 2024 with seven fatalities, including the tech millionaire Mike Lynch. After the sinking, the CEO of The Italian Sea Group blamed the crew, saying his yacht was 'unsinkable if operated properly'. In May 2025 the salvage of the yacht, which lies in 50 m of water off the coast of Sicily, began. We talk about the entire thing, including the interim report from the UK Marine Accident Investigation Branch, the MAIB.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Zero-sum constraints allow for better sampling and estimation in hierarchical models.
Understanding the difference between population and sample means is crucial.
A library for zero-sum normal effects would be beneficial.
Practical solutions can yield decent predictions even with limitations.
Cholesky parameterization can be adapted for positive correlation matrices.
Understanding the geometry of sampling spaces is crucial.
The relationship between eigenvalues and sampling is complex.
Collaboration and sharing knowledge enhance research outcomes.
Innovative approaches can simplify complex statistical problems.

Chapters:
03:35 Sean Pinkney's Journey to Bayesian Modeling
11:21 The Zero-Sum Normal Project Explained
18:52 Technical Insights on Zero-Sum Constraints
32:04 Handling New Elements in Bayesian Models
36:19 Understanding Population Parameters and Predictions
49:11 Exploring Flexible Cholesky Parameterization
01:07:23 Closing Thoughts and Future Directions

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary...
The JournalFeed podcast for the week of 19-23, 2025. These are summaries from just 2 of the 5 articles we cover every week! For access to more, please visit JournalFeed.org for details about becoming a member.

Monday Spoon Feed: The Ten Test is a quick, reliable, no-equipment sensory exam that performed as well as or better than traditional methods in assessing hand and finger injuries, with none of the cost.

Friday Spoon Feed: In this Bayesian network meta-analysis, researchers compared pharmacologic interventions for migraine treatment. There was no clear superior choice for single-agent pain control, but chlorpromazine IV/IM was among the most effective for adequate pain relief at two hours, and IV/IM ketorolac was possibly among the worst.
In this forward-looking episode of the SCCM Podcast, Daniel F. McAuley, MD, explores how the clinical and research communities are rethinking acute respiratory distress syndrome (ARDS), shifting from a one-size-fits-all model to a focus on identifying and targeting modifiable traits. Building on his Thought Leader Session at the 2024 Critical Care Congress, Dr. McAuley unpacks the major thematic shift toward precision medicine in critical care. Instead of treating ARDS as a single, homogenous condition, researchers are increasingly identifying biologically distinct subgroups—especially hyper- and hypoinflammatory phenotypes—that may respond differently to therapies. These insights are fueling a new generation of trials that aim to prospectively apply this knowledge to treatment strategies. Central to this evolution is the Precision medicine Adaptive platform Network Trial in Hypoxaemic acutE respiratory failure (PANTHER), of which Dr. McAuley is a team member. PANTHER is a Bayesian adaptive platform randomized clinical trial studying novel interventions to improve outcomes for patients with acute hypoxemic respiratory failure. Designed to be adaptive and biomarker-informed, PANTHER will test therapies such as simvastatin and baricitinib, based on real-time phenotyping of patients with ARDS. Throughout the episode, Dr. McAuley reflects on how advances in machine learning and biomarker identification are making precision treatment more feasible. He discusses the importance of maintaining evidence-based supportive care, such as lung-protective ventilation and prone positioning, while integrating new targeted therapies. Discover the latest investigations into potential therapeutic agents—including mesenchymal stromal cells, statins, and extracorporeal carbon dioxide removal—as Dr. McAuley aims to translate early findings into tangible improvements in patient outcomes.
This episode offers critical insights into the changing landscape of ARDS research and patient care, as Dr. McAuley articulates a hopeful vision for the future—one in which targeted, individualized treatments can improve outcomes for patients with one of critical care's most challenging conditions. Dr. McAuley is a consultant and professor in intensive care medicine in the regional intensive care unit at the Royal Victoria Hospital and Queen's University of Belfast. He is program director for the Medical Research Council/National Institute for Health and Care Research (MRC/NIHR) Efficacy and Mechanism Evaluation Program and scientific director for programs in NIHR. Access Dr. McAuley's Congress Thought Leader Session, ARDS: From Treating a Syndrome to Identifying Modifiable Traits here.
This episode is about the sinking of the luxury yacht "Bayesian" off Sicily in August 2024. Together with YACHT editor-in-chief Martin Hager, host Timm Kruse analyzes the background of the accident, in which seven people lost their lives, among them the British tech billionaire Mike Lynch. Martin Hager explains the findings of the first official investigation report by the British Marine Accident Investigation Branch and discusses the technical particularities of the yacht. It becomes clear how weather extremes such as the "cold drop" (Spanish: gota fría), sudden, violent thunderstorms in the Mediterranean, played a decisive role. They discuss why the Bayesian was so vulnerable to severe heeling in bad weather and what the differences in her capsize angle compared to other modern sailing yachts mean. Drawing on the investigation findings, the expert also examines the ship's stability, its keel, and their influence on safety. The conversation with Timm Kruse also touches on the human factor, the prominent owner Mike Lynch, and the crew's professional conduct during the emergency. Finally, they discuss the importance of facts over speculation in media coverage and what lessons the yachting industry can draw from this tragedy.
More YACHT articles on the sinking of the "Bayesian":
[Erster Untersuchungsbericht zum tragischen Untergang der Luxusyacht](https://www.yacht.de/yachten/superyachten/bayesian-erster-untersuchungsbericht-zum-tragischen-untergang-der-luxusyacht/)
[Taucher stirbt bei Bergung](https://www.yacht.de/yachten/superyachten/bayesian-taucher-stirbt-bei-bergung/)
[Einschätzungen des Ex-Kapitäns](https://www.yacht.de/yachten/superyachten/bayesian-havarie-einschaetzungen-des-ex-kapitaens/)
[56 Meter lange „Bayesian" sinkt vor Palermo, sechs Personen vermisst](https://www.yacht.de/special/seenot/havarie-nach-wasserhose-56-meter-lange-bayesian-sinkt-vor-palermo-sechs-personen-vermisst/)
Today's clip is from episode 132 of the podcast, with Tom Griffiths. Tom and Alex Andorra discuss the fundamental differences between human intelligence and artificial intelligence, emphasizing the constraints that shape human cognition, such as limited data, computational resources, and communication bandwidth. They explore how AI systems currently learn and the potential for aligning AI with human cognitive processes. The discussion also delves into the implications of AI in enhancing human decision-making and the importance of understanding human biases to create more effective AI systems. Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
In this week's episode, we dig into the recently published Marine Accident Investigation Branch (MAIB) interim report into the sinking of the 56-metre superyacht Bayesian in August last year, resulting in the loss of seven lives. For the first time, the narrative of what happened that night can be disclosed, and we also review the MAIB's comments on the weather conditions that fateful evening and Bayesian's stability vulnerabilities. Georgia, meanwhile, is newly returned from the Galapagos and tells us why these very special islands should be on every superyacht itinerary.

BOAT Pro: https://boatint.com/zg
Subscribe: https://boatint.com/zh
Contact us: podcast@boatinternationalmedia.com
United Kingdom correspondent Alice Wilkins spoke to Lisa Owen about how the first pieces of a superyacht that capsized off the coast of Italy with Kiwis on board have been brought to the surface, and how a flight to the Spanish party island of Ibiza has been described as "hell" because of some rowdy passengers. She also spoke about a British endurance athlete who says he has broken the record for running across the width of Australia.
In this episode of the Pre-Hospital Care Podcast, we explore the rapidly evolving role of artificial intelligence in trauma care, focusing on the AI Risk Prediction and Decision Support System (AI-TRiPS), a cutting-edge AI tool designed to enhance decision-making in high-pressure trauma settings. AI-TRiPS is built on Bayesian networks for clinical decision support, bridging the gap between AI development and real-world application. But how do we ensure AI tools are accurate, usable, and trusted by frontline clinicians? We cover:
Join Kim Sweers, The Boat Boss, and Rick Thomas from Sunrise Harbor Marina (a Bradford Marine facility) as they unpack the latest headlines shaping the marine industry. From the halls of Capitol Hill to the docks of Sunrise Harbor, this episode dives into Kim's recent experience at the American Boating Congress in Washington, D.C., including critical updates on tariffs, the Sport Fish Restoration Fund, and future-focused fuel alternatives. They also cover the tragic sinking of the Bayesian, shifting economic tides, new yacht launches, and insights into U.S. innovation in yacht tech. Plus, a post-Palma Yacht Show catch-up and conversations on industry leadership, mentorship, and engaging youth in the boating world.
Welcome to Nerd Alert, a series of special episodes bridging the gap between marketing academia and practitioners. We're breaking down highly involved, complex research into plain language and takeaways any marketer can use.

In this episode, Elena and Rob explore how Bayesian modeling offers a more nuanced approach to marketing attribution than traditional methods. They discuss why many marketers still rely on oversimplified attribution models despite their limitations.

Topics covered:
[01:00] "Bayesian Modeling of Marketing Attribution"
[03:00] Problems with traditional attribution models
[04:50] Why simple models persist despite their flaws
[06:00] Key components of Bayesian attribution
[08:00] Rapid decay of ad effects and negative interaction effects
[09:45] How this approach can offer deeper marketing insights

To learn more, visit marketingarchitects.com/podcast or subscribe to our newsletter at marketingarchitects.com/newsletter.

Resources: Sinha, R., Arbour, D., & Puli, A. (2022). Bayesian Modeling of Marketing Attribution. Available at arXiv:2205.15965

Get more research-backed marketing strategies by subscribing to The Marketing Architects on Apple Podcasts, Spotify, or wherever you listen to podcasts.
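The "rapid decay of ad effects" the episode mentions is commonly modeled with a geometric adstock transform, one building block of Bayesian attribution models. This is a minimal sketch, assuming a made-up 50% per-period decay rate and invented spend figures:

```python
def adstock(spend, decay=0.5):
    """Carryover-adjusted ad effect: each period keeps a `decay`
    fraction of the previous period's accumulated effect."""
    effect, out = 0.0, []
    for s in spend:
        effect = s + decay * effect
        out.append(effect)
    return out

# One burst of spend, then nothing: the effect decays geometrically.
weekly_spend = [100, 0, 0, 0]
print(adstock(weekly_spend, decay=0.5))  # [100.0, 50.0, 25.0, 12.5]
```

In a full Bayesian attribution model, the decay rate itself would get a prior and be estimated from data rather than fixed by hand.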
In August 2024, the sailing world was shaken by the tragic sinking of Bayesian, a British-flagged superyacht anchored off Sicily. Now, the UK's Marine Accident Investigation Branch has released its interim report on the sinking—and it's raising important points, especially about the actions of the crew. This is Ocean Sailor — and today we're diving into what the MAIB has revealed so far, and why it matters.

#OceanSailing #BluewaterSailing #Seamanship #OffshoreSailing #SailingLife #SailboatLife #PassagePlanning #MaritimeSafety #SailingSafety #MarineAccident #MAIB #YachtSinking #SafetyAtSea #BayesianSinking #bayesianyacht #MAIBReport #SailingNews #SailingCommunity #SailingDiscussion #YachtDesign #SailboatConstruction #OceanGoingYachts #StructuralIntegrity #ModernYachts #OceanSailor #SailingChannel #SailingYouTube #SailingDocumentary #OceanSailorChannel
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Check out Hugo's latest episode with Fei-Fei Li, on How Human-Centered AI Actually Gets Built.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Computational cognitive science seeks to understand intelligence mathematically.
Bayesian statistics is crucial for understanding human cognition.
Inductive biases help explain how humans learn from limited data.
Eliciting prior distributions can reveal implicit beliefs.
The wisdom of individuals can provide richer insights than averaging group responses.
Generative AI can mimic human cognitive processes.
Human intelligence is shaped by constraints of data, computation, and communication.
AI systems operate under different constraints than human cognition.
Human intelligence differs fundamentally from machine intelligence.
Generative AI can complement and enhance human learning.
AI systems currently lack intrinsic human compatibility.
Language training in AI helps align its understanding with human perspectives.
Reinforcement learning from human feedback can lead to misalignment of AI goals.
Representational alignment can improve AI's understanding of human concepts.
AI can help humans make better decisions by providing relevant information.
Research should focus on solving problems rather than just methods.

Chapters:
00:00 Understanding Computational Cognitive Science
13:52 Bayesian Models and Human Cognition
29:50 Eliciting Implicit Prior Distributions
38:07 The Relationship Between Human and AI Intelligence
45:15 Aligning Human and Machine Preferences
50:26 Innovations in AI and Human Interaction
55:35 Resource Rationality in Decision Making
01:00:07 Language Learning in AI Models
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE-bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction.

Links
Notes and resources at ocdevel.com/mlg/mlg35
Build the future of multi-agent software with AGNTCY
Try a walking desk to stay healthy & sharp while you learn & code

In-Context Learning (ICL)
Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters.
Types:
Zero-shot: direct query, no examples provided.
One-shot: a single example provided.
Few-shot: multiple examples, balancing quantity with context-window limitations.
Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations.
Emergent properties: ICL is an "inference-time training" approach that leverages the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples.

Retrieval Augmented Generation (RAG) and Grounding
Grounding: connecting LLMs with external knowledge bases to supplement or update static training data.
Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge.
Benefit: reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information.
RAG Workflow:
Embedding: documents are converted into vector embeddings (using sentence transformers or representation models).
Storage: vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant).
Retrieval: when a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing.
Augmentation: retrieved chunks are added to the prompt to provide up-to-date context for generation.
Generation: the LLM generates responses informed by the augmented context.
Advanced RAG: includes agentic approaches (self-correction, aggregation, or multi-agent contribution to source ingestion) and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge).

LLM Agents
Overview: agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage.
Key components:
Reasoning engine (LLM core): interprets goals and states, and makes decisions.
Planning module: breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment.
Memory: short-term via the context window; long-term via persistent storage like RAG-integrated databases or special memory systems.
Tools and APIs: agents select and use external functions: file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models.
Capabilities: support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability.
Current trends: research and development are shifting toward these agentic paradigms as LLM core scaling saturates.

Multimodal Large Language Models (MLLMs)
Definition: models capable of ingesting and generating across different modalities (text, image, audio, video).
Architecture:
Modality-specific encoders: convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images).
Fusion/alignment layer: embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content.
Unified transformer backbone: processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format.
Recent advances: unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models.
Functionality: enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation.

Advanced LLM Architectures and Training Directions
Predictive abstract representation: incorporating latent concept prediction alongside token prediction (e.g., via autoencoders).
Patch-level training: predicting larger "patches" of tokens to reduce sequence lengths and computation.
Concept-centric modeling: moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model).
Multi-token prediction: training models to predict multiple future tokens for broader context capture.

Evaluation Benchmarks (as of 2025)
Key benchmarks used for LLM evaluation:
GPQA (Diamond): graduate-level STEM reasoning.
SWE-bench Verified: real-world software engineering, verifying agentic code abilities.
MMMU: multimodal, college-level cross-disciplinary reasoning.
HumanEval: Python coding correctness.
HLE (Humanity's Last Exam): extremely challenging, multimodal knowledge assessment.
LiveCodeBench: coding with contamination-free, up-to-date problems.
MLPerf Inference v5.0 Long Context: throughput/latency for processing long contexts.
MultiChallenge Conversational AI: multiturn dialogue, in-context reasoning.
TAUBench/PFCL: tool utilization in agentic tasks.
TruthfulQA: measures tendency toward factual accuracy and robustness against misinformation.

Prompt Engineering: High-Impact Techniques
Foundational approaches:
Few-shot prompting: provide pairs of inputs and desired outputs to steer the LLM.
Chain of thought: instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality.
Clarity and structure: use clear, detailed, and structured instructions: task definition, context, constraints, output format, and use of delimiters or markdown structuring.
Affirmative directives: phrase instructions positively ("write a concise summary" instead of "don't write a long summary").
Iterative self-refinement: prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality.
System prompt/role assignment: assign a persona or role to the LLM for tailored behavior (e.g., "You are an expert Python programmer").
Guideline: regularly consult official prompting guides from model developers as model capabilities evolve.

Trends and Research Outlook
Inference-time compute is increasingly important for pushing the boundaries of LLM task performance.
Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation.
Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress.
Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
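The retrieval step of the RAG workflow described in these notes (ranking stored chunks by cosine similarity to the query embedding) can be sketched in a few lines. The three-dimensional "embeddings" below are toy stand-ins; a real system would use a sentence-transformer model and a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document store: chunk text -> pre-computed embedding vector.
docs = {
    "intro to Bayes":   [0.9, 0.1, 0.0],
    "RAG pipelines":    [0.1, 0.8, 0.3],
    "vector databases": [0.0, 0.6, 0.7],
}
query = [0.2, 0.7, 0.4]  # embedding of the user's question

# Retrieval: rank chunks by similarity to the query embedding.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # most relevant chunk -> prepended to the prompt
```

The augmentation step then concatenates the top-ranked chunks with the user's question before calling the LLM, which is what grounds the generation in retrieved facts.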
Today's clip is from episode 131 of the podcast, with Luke Bornn.

Luke and Alex discuss the application of generative models in sports analytics. They emphasize the importance of Bayesian modeling to account for uncertainty and contextual variations in player data. The discussion also covers the challenges of balancing model complexity with computational efficiency, innovative ways to hack Bayesian models for improved performance, and the significance of understanding model fitting and discretization in statistical modeling.

Get the full discussion here.

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Note: this episode was recorded in August of 2022.

In the latest Elucidation, Matt talks to Witold Więcek about the difficulties that come up for researchers who would like to draw upon statistics. Lots of academic fields need to draw heavily on statistics, whether it's economics, psychology, sociology, linguistics, computer science, or data science. This means that a lot of people coming from different backgrounds often need to learn basic statistics in order to investigate whatever question they're investigating. But as we've discussed on this podcast, statistical reasoning is easy for beginners to mess up, and it's also easy for bad-faith parties to tamper with in undetectable ways. They can straight up fabricate data, they can cherry-pick it, they can keep changing the hypothesis they are testing until they find one that is supported by a trend in the data they have. So what should we do? We can't give up on statistics; it is simply too useful a tool.

Witold Więcek argues that researchers have to be mindful of "p-hacking". Statistical significance, the gold standard of academic publishing, can easily be guaranteed by unscrupulous research or motivated reasoning: statistically speaking, even noise can look like signal if we keep asking more and more questions of our data. Modern statistical workflows require us either to adjust the results for the number of hypotheses tested or to follow principles of Bayesian inference. As a broader strategy, Więcek recommends that every research project making significant use of statistical arguments bring in an external consultant, who can productively stress-test those arguments in an adversarial way, given that they aren't part of the main team.

It was a great conversation! I hope you enjoy it.

Matt Teichman

Hosted on Acast. See acast.com/privacy for more information.
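The "adjust for the number of hypotheses tested" remedy can be illustrated with the simplest such adjustment, the Bonferroni correction. The simulation below is a made-up sketch (not from the episode): under a true null hypothesis, p-values are uniform on [0, 1], so asking 100 questions of pure noise typically yields a few naive "significant" results that the correction suppresses.

```python
import random

def bonferroni(p_values, alpha=0.05):
    """Reject only hypotheses whose p-value clears alpha / m,
    where m is the number of hypotheses tested."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# Simulate "asking many questions of noise": 100 tests of true nulls.
random.seed(42)
p_values = [random.random() for _ in range(100)]

naive_hits = sum(p <= 0.05 for p in p_values)   # spurious "signal"
corrected_hits = sum(bonferroni(p_values))      # after adjustment
print(naive_hits, corrected_hits)
```

Bonferroni is conservative; false-discovery-rate procedures such as Benjamini-Hochberg are a common less strict alternative.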
In this episode Gudrun speaks with Nadja Klein and Moussa Kassem Sbeyti, who work at the Scientific Computing Center (SCC) at KIT in Karlsruhe. Since August 2024, Nadja has been a professor at KIT, leading the research group Methods for Big Data (MBD) there. She is an Emmy Noether Research Group Leader and a member of AcademiaNet and Die Junge Akademie, among others. In 2025, Nadja was awarded the Committee of Presidents of Statistical Societies (COPSS) Emerging Leader Award (ELA). The COPSS ELA recognizes early-career statistical scientists who show evidence of and potential for leadership and who will help shape and strengthen the field. She finished her doctoral studies in Mathematics at the Universität Göttingen before conducting a postdoc at the University of Melbourne as a Feodor Lynen fellow of the Alexander von Humboldt Foundation. Afterwards she was a Professor for Statistics and Data Science at the Humboldt-Universität zu Berlin before joining KIT. Moussa joined Nadja's lab as an associated member in 2023 and later as a postdoctoral researcher in 2024. He pursued a PhD at the TU Berlin while working as an AI Research Scientist at the Continental AI Lab in Berlin. His research primarily focuses on deep learning, developing uncertainty-based automated labeling methods for 2D object detection in autonomous driving. Prior to this, Moussa earned his M.Sc. in Mechatronics Engineering from the TU Darmstadt in 2021. The research of Nadja and Moussa is at the intersection of statistics and machine learning. In Nadja's MBD Lab the research spans theoretical analysis, method development and real-world applications. One of their key focuses is Bayesian methods, which allow researchers to incorporate prior knowledge, quantify uncertainties, and bring insights to the “black boxes” of machine learning. By fusing the precision and reliability of Bayesian statistics with the adaptability of machine and deep learning, these methods aim to leverage the best of both worlds.
The KIT offers a strong research environment, making it an ideal place to continue their work. They bring new expertise that can be leveraged in various applications, and Helmholtz in turn offers a great platform to explore new application areas. For example, Moussa decided to join the group at KIT as part of the Helmholtz Pilot Program Core-Informatics at KIT (KiKIT), an initiative focused on advancing fundamental research in informatics within the Helmholtz Association. Vision models typically depend on large volumes of labeled data, but collecting and labeling this data is both expensive and prone to errors. During his PhD, his research centered on data-efficient learning using uncertainty-based automated labeling techniques: estimating and using the models' uncertainty to select the most helpful samples for training, so that the models can label the rest themselves. Now, within KiKIT, his work has evolved to include knowledge-based approaches in multi-task models, e.g., detection and depth estimation — with the broader goal of enabling the development and deployment of reliable, accurate vision systems in real-world applications. Statistics and data science are fascinating fields, offering a wide variety of methods and applications that constantly lead to new insights. Within this domain, Bayesian methods are especially compelling, as they enable the quantification of uncertainty and the incorporation of prior knowledge. These capabilities contribute to making machine learning models more data-efficient, interpretable, and robust, which are essential qualities in safety-critical domains such as autonomous driving and personalized medicine. Nadja is also enthusiastic about the interdisciplinarity of the subject — repeatedly changing the focus from mathematics to economics to statistics to computer science.
The combination of theoretical fundamentals and practical applications makes statistics an agile and important field of research in data science. From a deep learning perspective, the focus is on making models both more efficient and more reliable when dealing with large-scale data and complex dependencies. One way to do this is by reducing the need for extensive labeled data. They also work on developing self-aware models that can recognize when they're unsure and even reject their own predictions when necessary. Additionally, they explore model pruning techniques to improve computational efficiency, and specialize in Bayesian deep learning, allowing machine learning models to better handle uncertainty and complex dependencies. Beyond the methods themselves, they also contribute by publishing datasets that help push the development of next-generation, state-of-the-art models. The learning methods are applied across different domains such as object detection, depth estimation, semantic segmentation, and trajectory prediction — especially in the context of autonomous driving and agricultural applications. As deep learning technologies continue to evolve, they're also expanding into new application areas such as medical imaging. Unlike traditional deep learning, Bayesian deep learning provides uncertainty estimates alongside predictions, allowing for more principled decision-making and reducing catastrophic failures in safety-critical applications. It has had a growing impact in several real-world domains where uncertainty really matters. Bayesian learning incorporates prior knowledge and updates beliefs as new data comes in, rather than relying purely on data-driven optimization. In healthcare, for example, Bayesian models help quantify uncertainty in medical diagnoses, which supports more risk-aware treatment decisions and can ultimately lead to better patient outcomes. In autonomous vehicles, Bayesian models play a key role in improving safety.
By recognizing when the system is uncertain, they help capture edge cases more effectively, reduce false positives and negatives in object detection, and navigate complex, dynamic environments — like bad weather or unexpected road conditions — more reliably. In finance, Bayesian deep learning enhances both risk assessment and fraud detection by allowing the system to assess how confident it is in its predictions. That added layer of information supports more informed decision-making and helps reduce costly errors. Across all these areas, the key advantage is the ability to move beyond just accuracy and incorporate trust and reliability into AI systems. Bayesian methods are traditionally more expensive, but modern approximations (e.g., variational inference or last-layer inference) make them feasible. Computational costs depend on the problem — sometimes Bayesian models require fewer data points to achieve better performance. The trade-off is between interpretability and computational efficiency, but hardware improvements are helping bridge this gap. Their research on uncertainty-based automated labeling is designed to make models not just safer and more reliable, but also more efficient. By reducing the need for extensive manual labeling, this improves the overall quality of the dataset while cutting down on human effort and potential labeling errors. Importantly, by selecting informative samples, the model learns from better data — which means it can reach higher performance with fewer training examples. This leads to faster training and better generalization without sacrificing accuracy. They also focus on developing lightweight uncertainty estimation techniques that are computationally efficient, so these benefits don't come with heavy resource demands.
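The core idea of uncertainty-based sample selection can be sketched in miniature (an illustrative toy, not the group's actual method): rank unlabeled samples by the entropy of the model's predicted class distribution, and route only the most uncertain ones to human annotators while the model self-labels the rest.

```python
import math

def predictive_entropy(probs):
    """Entropy of a predicted class distribution; higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions, k):
    """Pick the k samples the model is least sure about for human labeling."""
    ranked = sorted(range(len(predictions)),
                    key=lambda i: predictive_entropy(predictions[i]),
                    reverse=True)
    return ranked[:k]

preds = [
    [0.98, 0.01, 0.01],  # confident -> candidate for self-labeling
    [0.40, 0.35, 0.25],  # uncertain -> send to a human annotator
    [0.70, 0.20, 0.10],
]
print(select_for_labeling(preds, 1))  # [1]
```

Real pipelines refine this with calibrated uncertainties and cost-sensitive thresholds, but the ranking step is the common backbone.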
In short, this approach helps build models that are more robust, more adaptive to new data, and significantly more efficient to train and deploy — which is critical for real-world systems where both accuracy and speed matter. Statisticians and deep learning researchers often use distinct methodologies, vocabulary and frameworks, making communication and collaboration challenging. Unfortunately, there is a lack of interdisciplinary education: traditional academic programs rarely integrate both fields. Joint programs, workshops, and cross-disciplinary training can help bridge this gap. From Moussa's experience coming through an industrial PhD, he has seen how many industry settings tend to prioritize short-term gains — favoring quick wins in deep learning over deeper, more fundamental improvements. To overcome this, we need to build long-term research partnerships between academia and industry — ones that allow for foundational work to evolve alongside practical applications. That kind of collaboration can drive more sustainable, impactful innovation in the long run, something the Methods for Big Data group pursues. Looking ahead, one of the major directions for deep learning in the next five to ten years is the shift toward trustworthy AI. We're already seeing growing attention on making models more explainable, fair, and robust — especially as AI systems are being deployed in critical areas like healthcare, mobility, and finance. The group also expects to see more hybrid models — combining deep learning with Bayesian methods, physics-based models, or symbolic reasoning. These approaches can help bridge the gap between raw performance and interpretability, and often lead to more data-efficient solutions. Another big trend is the rise of uncertainty-aware AI. As AI moves into more high-risk, real-world applications, it becomes essential that systems understand and communicate their own confidence.
This is where uncertainty modeling will play a key role — helping to make AI not just more powerful, but also safer and more reliable. The lecture "Advanced Bayesian Data Analysis" covers fundamental concepts in Bayesian statistics, including parametric and non-parametric regression, computational techniques such as MCMC and variational inference, and Bayesian priors for handling high-dimensional data. Additionally, the lecturers offer a Research Seminar on Selected Topics in Statistical Learning and Data Science. The workgroup offers a variety of Master's thesis topics at the intersection of statistics and deep learning, focusing on Bayesian modeling, uncertainty quantification, and high-dimensional methods. Current topics include predictive information criteria for Bayesian models and uncertainty quantification in deep learning. Topics span theoretical, methodological, computational and applied projects. Students interested in rigorous theoretical and applied research are encouraged to explore our available projects and contact us for further details. The general advice of Nadja and Moussa for everybody interested in entering the field is: "Develop a strong foundation in statistical and mathematical principles, rather than focusing solely on the latest trends. Gain expertise in both theory and practical applications, as real-world impact requires a balance of both. Be open to interdisciplinary collaboration. Some of the most exciting and meaningful innovations happen at the intersection of fields — whether that's statistics and deep learning, or AI and domain-specific areas like medicine or mobility. So don't be afraid to step outside your comfort zone, ask questions across disciplines, and look for ways to connect different perspectives. That's often where real breakthroughs happen. With every new challenge comes an opportunity to innovate, and that's what keeps this work exciting. We're always pushing for more robust, efficient, and trustworthy AI.
And we're also growing — so if you're a motivated researcher interested in this space, we'd love to hear from you."

Literature and further information
- Webpage of the group
- G. Nuti, L. A. J. Rugama, A.-I. Cross: Efficient Bayesian Decision Tree Algorithm, arXiv:1901.03214 [stat.ML], 2019.
- Wikipedia: Expected value of sample information
- C. Howson & P. Urbach: Scientific Reasoning: The Bayesian Approach (3rd ed.). Open Court Publishing Company. ISBN 978-0-8126-9578-6, 2005.
- A. Gelman et al.: Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5, 2013.
- A. Yu: Introduction to Bayesian Decision Theory, cogsci.ucsd.edu, 2013.
- D. Soni: Introduction to Bayesian Networks, 2015.
- M. Carlan, T. Kneib and N. Klein: Bayesian conditional transformation models, Journal of the American Statistical Association, 119(546):1360-1373, 2024.
- N. Klein: Distributional regression for data analysis, Annual Review of Statistics and Its Application, 11:321-346, 2024.
- C. Hoffmann and N. Klein: Marginally calibrated response distributions for end-to-end learning in autonomous driving, Annals of Applied Statistics, 17(2):1740-1763, 2023.
- M. Kassem Sbeyti, M. Karg, C. Wirth, N. Klein and S. Albayrak: Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection. In Uncertainty in Artificial Intelligence (pp. 1890-1900), PMLR, 2024.
- M. K. Sbeyti, N. Klein, A. Nowzad, F. Sivrikaya and S. Albayrak: Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection (pdf). To appear in Transactions on Machine Learning Research, 2025.

Podcasts
- Learning, Teaching, and Building in the Age of AI, Ep 42 of Vanishing Gradients, Jan 2025.
- O. Beige, G. Thäter: Risikoentscheidungsprozesse, Gespräch im Modellansatz Podcast, Folge 193, Fakultät für Mathematik, Karlsruher Institut für Technologie (KIT), 2019.
In this episode of The Backstory on the Shroud of Turin, host Guy Powell interviews evangelical apologist and theologian Tom Dallis. The two dive deep into Jewish burial customs from the first century and how these practices offer compelling support for the authenticity of the Shroud of Turin. Dallis details how key figures like Nicodemus and Joseph of Arimathea honored Jesus Christ with kingly burial rites—including 75 pounds of burial spices and fine linen, exactly what we'd expect for a royal entombment.

But the discussion doesn't stop at tradition. Dallis explores how modern science can further bolster the case for the Shroud, applying odds calculus and Bayesian probability. By combining over 30 lines of evidence—from forensic blood analysis to image formation science—Dallis concludes that the probability of forgery is so low it borders on the impossible.

You'll also learn how the Shroud reflects not only the physical suffering of Jesus but also echoes the symbolic role of Christ as the High Priest in the Holy of Holies.

Whether you're grounded in faith or grounded in data, this episode challenges you to think deeper about the most famous burial cloth in history.
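The "odds calculus" mentioned here follows a simple mechanic: if the lines of evidence are treated as conditionally independent, each one multiplies the prior odds by its likelihood ratio. The numbers below are purely illustrative and are not taken from the episode; the independence assumption itself is the contentious part in practice.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent lines of evidence by multiplying likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Illustrative only: a skeptical 1-in-1000 prior and three pieces of
# evidence, each several times likelier under the authenticity hypothesis.
odds = posterior_odds(1 / 1000, [10, 5, 8])
prob = odds / (1 + odds)   # convert odds back to a probability
print(round(prob, 3))  # 0.286
```

Even modest likelihood ratios compound quickly, which is why the number of independent evidence lines matters so much to this style of argument.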
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick,
Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.

Takeaways:
- Player tracking data revolutionized sports analytics.
- Decision-making in sports involves managing uncertainty and budget constraints.
- Luke emphasizes the importance of portfolio optimization in team management.
- Clubs with high budgets can afford inefficiencies in player acquisition.
- Statistical methods provide a probabilistic approach to player value.
- Removing human bias is crucial in sports decision-making.
- Understanding player performance distributions aids in contract decisions.
- The goal is to maximize performance value per dollar spent.
- Model validation in sports requires focusing on edge cases.
New treatment alert! The FDA recently approved Tapinarof, applied as a cream, for kids 2 years and up. We ask Dr. Leon Kircik from the Icahn School of Medicine, NY, who led the clinical trials, about the safety, efficacy and side effects of Tapinarof. And because we are parents too, we ask: How quickly does it work? Can you start/stop it as needed? How easy will it be to access? And more. If you like our podcast, please consider supporting it with a tax-deductible donation.

Research discussed:
- Tapinarof Improved Outcomes and Sleep for Patients and Families in Two Phase 3 Atopic Dermatitis Trials in Adults and Children
- Maximal usage trial of tapinarof cream 1% once daily in pediatric patients down to 2 years of age with extensive atopic dermatitis
- Tapinarof cream 1% once daily: Significant efficacy in the treatment of moderate to severe atopic dermatitis in adults and children down to 2 years of age in the pivotal phase 3 ADORING trials
- Tapinarof cream in the treatment of atopic dermatitis in children and adults: a systematic review and meta-analysis
- Efficacy and safety of Ruxolitinib, Crisaborole, and Tapinarof for mild-to-moderate atopic dermatitis: a Bayesian network analysis of RCTs
Today's clip is from episode 130 of the podcast, with epidemiological modeler Adam Kucharski.

This conversation explores the critical role of epidemiological modeling during the COVID-19 pandemic, highlighting how these models informed public health decisions and the relationship between modeling and policy. The discussion emphasizes the need for improved communication and understanding of data among the public and policymakers.

Get the full discussion at https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick,
Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.

Takeaways:
- Epidemiology requires a blend of mathematical and statistical understanding.
- Models are essential for informing public health decisions during epidemics.
- The COVID-19 pandemic highlighted the importance of rapid modeling.
- Misconceptions about data can lead to misunderstandings in public health.
- Effective communication is crucial for conveying complex epidemiological concepts.
- Epidemic thinking can be applied to various fields, including marketing and finance.
- Public health policies should be informed by robust modeling and data analysis.
- Automation can help streamline data analysis in epidemic response.
- Understanding the limitations of models...
Agents of Innovation: AI-Powered Product Ideation with Synthetic Consumer Testing // MLOps Podcast #306 with Luca Fiaschi, Partner at PyMC Labs.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Traditional product development cycles require extensive consumer research and market testing, resulting in lengthy development timelines and significant resource investment. We've transformed this process by building a distributed multi-agent system that enables parallel quantitative evaluation of hundreds of product concepts. Our system combines three key components: an agentic innovation lab generating high-quality product concepts, synthetic consumer panels using fine-tuned foundational models validated against historical data, and an evaluation framework that correlates with real-world testing outcomes. We can talk about how this architecture enables rapid concept discovery and digital experimentation, delivering insights into product success probability before development begins. Through case studies and technical deep-dives, you'll learn how we built an AI-powered innovation lab that compresses months of product development and testing into minutes - without sacrificing the accuracy of insights.

// Bio
With over 15 years of leadership experience in AI, data science, and analytics, Luca has driven transformative growth in technology-first businesses. As Chief Data & AI Officer at Mistplay, he led the company's revenue growth through AI-powered personalization and data-driven pricing. Prior to that, he held executive roles at global industry leaders such as HelloFresh ($8B), Stitch Fix ($1.2B) and Rocket Internet ($1B).
Luca's core competencies include machine learning, artificial intelligence, data mining, data engineering, and computer vision, which he has applied to various domains such as marketing, logistics, personalization, product, experimentation and pricing.

He is currently a partner at PyMC Labs, a leading data science consultancy, providing insights and guidance on applications of Bayesian and causal inference techniques and generative AI to Fortune 500 companies. Luca holds a PhD in AI and Computer Vision from Heidelberg University and has more than 450 citations on his research work.

// Related Links
Website: https://www.pymc-labs.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Luca on LinkedIn: /lfiaschi
As regulatory expectations evolve under the FDA's Project Optimus oncology dosing initiative, biostatistics is emerging as a central pillar in designing and executing trials that move beyond the traditional maximum tolerated dose (MTD) approach.

In this fourth episode of our Project Optimus series, host Dr. Wael Harb is joined by biostatistics expert X. Q. Xue, PhD, Vice President and Global Head, Biostatistics at Syneos Health, to explore how statistical science is transforming dose optimization in oncology drug development. Dr. Xue discusses the limitations of legacy 3+3 dose-escalation designs and introduces innovative alternatives, including Bayesian modeling, adaptive trial strategies and randomized parallel dose-response studies, which support more precise dose selection and can ultimately improve patient outcomes and trial efficiency.

Together, Drs. Harb and Xue examine how smaller biotech companies can overcome barriers to implementation, the role of simulation and AI in trial planning and how a biostatistics-driven approach may increase the likelihood of late-phase success, reduce post-marketing adjustments and support faster regulatory approvals.

The views expressed in this podcast belong solely to the speakers and do not represent those of their organization. If you want access to more future-focused, actionable insights to help biopharmaceutical companies better execute and succeed in a constantly evolving environment, visit the Syneos Health Insights Hub. The perspectives you'll find there are driven by dynamic research and crafted by subject matter experts focused on real answers to help guide decision-making and investment. You can find it all at insightshub.health.

Like what you're hearing? Be sure to rate and review us! We want to hear from you! If there's a topic you'd like us to cover on a future episode, contact us at podcast@syneoshealth.com.
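To illustrate the kind of model-based reasoning that Bayesian designs substitute for the rule-based 3+3 scheme (a toy sketch under simple conjugate assumptions, not Syneos Health's methodology): a Beta-Binomial model yields a full posterior for a dose's toxicity rate, and escalation decisions can then be driven by the posterior probability that toxicity exceeds a target rate.

```python
import random

def toxicity_posterior(events, n, a=1.0, b=1.0, target=0.30,
                       draws=20000, seed=1):
    """Beta-Binomial update for a dose's toxicity rate.

    Prior Beta(a, b); observe `events` dose-limiting toxicities among
    `n` patients; estimate P(toxicity rate > target) by Monte Carlo.
    """
    rng = random.Random(seed)
    post_a, post_b = a + events, b + n - events
    exceed = sum(rng.betavariate(post_a, post_b) > target
                 for _ in range(draws))
    return post_a / (post_a + post_b), exceed / draws

# 1 dose-limiting toxicity in a cohort of 6 patients:
mean, p_over = toxicity_posterior(events=1, n=6, target=0.30)
print(round(mean, 3), round(p_over, 3))
```

A design might escalate only while P(toxicity > target) stays below a preset bound, a decision the 3+3 rule cannot express because it carries no probability model at all.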
Today's clip is from episode 129 of the podcast, with AI expert and researcher Vincent Fortuin.

This conversation delves into the intricacies of Bayesian deep learning, contrasting it with traditional deep learning and exploring its applications and challenges.

Get the full discussion at https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data.

They discuss two main ways AI can "think": one is like following specific rules or steps (like a computer program), and the other is more intuitive, like guessing based on patterns (as modern AI often does). They found that combining both methods works well for solving complex puzzles like ARC.

A key idea is "compositionality": building big ideas from small ones, like LEGOs. This is powerful but can also be overwhelming. Another important idea is "abstraction": understanding things simply, without getting lost in details, and knowing there are different levels of understanding.

Ultimately, they believe the best AI will need to explore, experiment, and build models of the world, much like humans do when learning something new.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT: https://www.dropbox.com/scl/fi/3ngggvhb3tnemw879er5y/BASIS.pdf?rlkey=lr2zbj3317mex1q5l0c2rsk0h&dl=0
Zenna Tavares: http://www.zenna.org/
Kevin Ellis: https://www.cs.cornell.edu/~ellisk/

TOC:
1. Compositionality and Learning Foundations
[00:00:00] 1.1 Compositional Search and Learning Challenges
[00:03:55] 1.2 Bayesian Learning and World Models
[00:12:05] 1.3 Programming Languages and Compositionality Trade-offs
[00:15:35] 1.4 Inductive vs Transductive Approaches in AI Systems
2. Neural-Symbolic Program Synthesis
[00:27:20] 2.1 Integration of LLMs with Traditional Programming and Meta-Programming
[00:30:43] 2.2 Wake-Sleep Learning and DreamCoder Architecture
[00:38:26] 2.3 Program Synthesis from Interactions and Hidden State Inference
[00:41:36] 2.4 Abstraction Mechanisms and Resource Rationality
[00:48:38] 2.5 Inductive Biases and Causal Abstraction in AI Systems
3. Abstract Reasoning Systems
[00:52:10] 3.1 Abstract Concepts and Grid-Based Transformations in ARC
[00:56:08] 3.2 Induction vs Transduction Approaches in Abstract Reasoning
[00:59:12] 3.3 ARC Limitations and Interactive Learning Extensions
[01:06:30] 3.4 Wake-Sleep Program Learning and Hybrid Approaches
[01:11:37] 3.5 Project MARA and Future Research Directions

REFS:
[00:00:25] DreamCoder, Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[00:01:10] Mind Your Step, Ryan Liu et al. https://arxiv.org/abs/2410.21333
[00:06:05] Bayesian inference, Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. https://psycnet.apa.org/record/2008-06911-003
[00:13:00] Induction and Transduction, Wen-Ding Li, Zenna Tavares, Yewen Pu, Kevin Ellis https://arxiv.org/abs/2411.02272
[00:23:15] Neurosymbolic AI, Garcez, Artur d'Avila et al. https://arxiv.org/abs/2012.05876
[00:33:50] Induction and Transduction (II), Wen-Ding Li, Kevin Ellis et al. https://arxiv.org/abs/2411.02272
[00:38:35] ARC, François Chollet https://arxiv.org/abs/1911.01547
[00:39:20] Causal Reactive Programs, Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares http://www.zenna.org/publications/autumn2022.pdf
[00:42:50] MuZero, Julian Schrittwieser et al. http://arxiv.org/pdf/1911.08265
[00:43:20] VisualPredicator, Yichao Liang https://arxiv.org/abs/2410.23156
[00:48:55] Bayesian models of cognition, Joshua B. Tenenbaum https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/
[00:49:30] The Bitter Lesson, Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[01:06:35] Program induction, Kevin Ellis, Wen-Ding Li https://arxiv.org/pdf/2411.02272
[01:06:50] DreamCoder (II), Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[01:11:55] Project MARA, Zenna Tavares, Kevin Ellis https://www.basis.ai/blog/mara/
Kevin Werbach speaks with Eric Bradlow, Vice Dean of AI & Analytics at Wharton. Bradlow highlights the transformative impacts of AI from his perspective as an applied statistician and quantitative marketing expert. He describes the distinctive approach of Wharton's analytics program, and its recent evolution with the rise of AI. The conversation highlights the significance of legal and ethical responsibility within the AI field, and the genesis of the new Wharton Accountable AI Lab. Werbach and Bradlow then examine the role of academic institutions in shaping the future of AI, and how institutions like Wharton can lead the way in promoting accountability, learning, and responsible AI deployment.

Eric Bradlow is the Vice Dean of AI & Analytics at Wharton, Chair of the Marketing Department, and also a professor of Economics, Education, Statistics, and Data Science. His research interests include Bayesian modeling, statistical computing, and developing new methodology for unique data structures with application to business problems. In addition to publishing in a variety of top journals, he has won numerous teaching awards at Wharton, including the MBA Core Curriculum teaching award, the Miller-Sherrerd MBA Core Teaching Award, and the Excellence in Teaching Award.

Episode Transcript
Wharton AI & Analytics Initiative
Eric Bradlow - Knowledge at Wharton

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
The hype around AI in science often fails to deliver practical results.
Bayesian deep learning combines the strengths of deep learning and Bayesian statistics.
Fine-tuning LLMs with Bayesian methods improves prediction calibration.
There is no single dominant library for Bayesian deep learning yet.
Real-world applications of Bayesian deep learning exist in various fields.
Prior knowledge is crucial for the effectiveness of Bayesian deep learning.
Data efficiency in AI can be enhanced by incorporating prior knowledge.
Generative AI and Bayesian deep learning can inform each other.
The complexity of a problem influences the choice between Bayesian and traditional deep learning.
Meta-learning enhances the efficiency of Bayesian models.
PAC-Bayesian theory merges Bayesian and frequentist ideas.
Laplace inference offers a cost-effective approximation.
Subspace inference can optimize parameter efficiency.
Bayesian deep learning is crucial for reliable predictions.
Effective communication of uncertainty is essential.
Realistic benchmarks are needed for Bayesian methods.
Collaboration and communication in the AI community are vital.

Chapters:
00:00 Introduction to Bayesian Deep Learning
04:24 Vincent Fortuin's Journey to Bayesian Deep Learning
11:52 Understanding Bayesian Deep Learning
16:29 Current Landscape of Bayesian Libraries
21:11 Real-World Applications of Bayesian Deep Learning
23:33 When to Use Bayesian Deep Learning
28:22 Data Efficiency in AI and Generative Modeling
30:18 Integrating Bayesian Knowledge into Generative Models
31:44 The Role of Meta-Learning in Bayesian Deep Learning
34:06 Understanding PAC-Bayesian Theory
37:55 Algorithms for Bayesian Deep Learning Models
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Matt emphasizes the importance of Bayesian statistics in scenarios with limited data.
Communicating insights to coaches is a crucial skill for data analysts.
Building a data team requires understanding the needs of the coaching staff.
Player recruitment is a significant focus in football analytics.
The integration of data science in sports is still evolving.
Effective data modeling must consider the practical application in games.
Collaboration between data analysts and coaches enhances decision-making.
Having a robust data infrastructure is essential for efficient analysis.
The landscape of sports analytics is becoming increasingly competitive.
Player recruitment involves analyzing various data models.
Biases in traditional football statistics can skew player evaluations.
Statistical techniques should leverage the structure of football data.
Tracking data opens new avenues for understanding player movements.
The role of data analysis in football will continue to grow.
Aspiring analysts should focus on curiosity and practical experience.

Chapters:
00:00 Introduction to Football Analytics and Matt's Journey
04:54 The Role of Bayesian Methods in Football
10:20 Challenges in Communicating Data Insights
17:03 Building Relationships with Coaches
22:09 The Structure of the Data Team at Como
26:18 Focus on Player Recruitment and Transfer Strategies
28:48 January Transfer Window Insights
30:54 Biases in Football Data Analysis
34:11 Comparative Analysis of Men's and Women's Football
36:55 Statistical Techniques in Football Analysis
42:48 The Impact of Tracking Data on Football Analysis
45:49 The Future of Data-Driven Football Strategies
47:27 Advice for Aspiring Football Analysts
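The point about Bayesian statistics shining with limited data can be illustrated with a toy Beta-Binomial sketch. The prior parameters and shot counts below are my own illustrative assumptions, not numbers from the episode:

```python
# Toy illustration: with only a handful of shots, a Beta prior keeps a
# player's estimated conversion rate sensible instead of chasing noise.

def posterior_mean(successes, attempts, prior_a=8.0, prior_b=72.0):
    """Beta-Binomial posterior mean for a conversion rate.

    A Beta(8, 72) prior encodes a (hypothetical) belief that conversion
    rates sit around 10%. The posterior mean blends prior and data.
    """
    return (prior_a + successes) / (prior_a + prior_b + attempts)

# Two goals from three shots: the raw rate is 67%, but the posterior
# stays close to the prior until more data arrives.
raw = 2 / 3
shrunk = posterior_mean(2, 3)  # (8 + 2) / (8 + 72 + 3) ≈ 0.12
```

As attempts accumulate, the data term dominates and the posterior mean converges to the raw rate, which is exactly the behaviour you want in a small-sample recruitment setting.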
Today's revolutionary idea is something a bit different: David talks to statistician David Spiegelhalter about how an eighteenth-century theory of probability emerged from relative obscurity in the twentieth century to reconfigure our understanding of the relationship between past, present and future. What was Thomas Bayes's original idea about doing probability in reverse: from effect to cause? What happened when this way of thinking passed through the vortex of the French Revolution? How has it come to lie behind recent innovations in political polling, AI, self-driving cars, medical research and so much more? Why does it remain controversial to this day? The latest edition of our free fortnightly newsletter is available: to get it in your inbox sign up now https://www.ppfideas.com/newsletter Next time: 1848: The Liberal Revolution w/Chris Clark Past Present Future is part of the Airwave Podcast Network Learn more about your ad choices. Visit megaphone.fm/adchoices
Are We Looking at the End of Atheism? Dr. Christopher Sernaque sits down with Otangelo Grassio, an advocate for Intelligent Design, to break down the biggest questions in science and faith! In the final episode of Current Topics in Science Season 6, we dive deep into his latest book, Unraveling the Theistic Worldview, tackling Bayesian probability, the limits of naturalism, and why the demand for definitive proof of God might be a trap! Don't miss this eye-opening discussion that could change the way you see science forever! Watch now and decide for yourself!
Today's episode is about a very different revolution from any we've discussed so far: David talks to historian Hank Gonzalez about the Haitian Revolution, which for the first time in history saw a slave revolt result in an independent free state. How did the Haitian Revolution intersect with the American and French Revolutions that preceded it? Why were European powers unable to reverse it despite massive military intervention? What is its legacy for the state of Haiti today? Tickets are still available for PPF Live at the Bath Curious Minds Festival: join us on Saturday 29th March to hear David in conversation with Robert Saunders about the legacy of Winston Churchill: The Politician with Nine Lives https://bit.ly/42GPp3X Out tomorrow the latest edition of our free fortnightly newsletter: to get it in your inbox sign up now https://www.ppfideas.com/newsletters Next time: The Bayesian revolution w/David Spiegelhalter Past Present Future is part of the Airwave Podcast Network Learn more about your ad choices. Visit megaphone.fm/adchoices
This episode covers:
Cardiology This Week: A concise summary of recent studies
AI and the future of the Electrocardiogram
The heart in rheumatic disorders and autoimmune diseases
Statistics Made Easy: Bayesian analysis

Host: Susanna Price
Guests: Carlos Aguiar, Paul Friedman, Maya Buch

Want to watch that episode? Go to: https://esc365.escardio.org/event/1801

Disclaimer: ESC TV Today is supported by Bristol Myers Squibb. The scientific content and opinions expressed in the programme have not been influenced in any way by its sponsor. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC.

Declarations of interests:
Stephan Achenbach, Antonio Greco, Nicolle Kraenkel and Susanna Price have declared to have no potential conflicts of interest to report.
Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Novo Nordisk, Pfizer, Sanofi, Servier, Takeda, Tecnimede.
Maya Buch has declared to have potential conflicts of interest to report: grant/research support paid to University of Manchester from Gilead and Galapagos; consultant and/or speaker with funds paid to University of Manchester for AbbVie, Boehringer Ingelheim, CESAS Medical, Eli Lilly, Galapagos, Gilead Sciences, Medistream and Pfizer Inc; member of the Speakers' Bureau for AbbVie with funds paid to University of Manchester.
Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo.
Paul Friedman has declared to have potential conflicts of interest to report: co-inventor of AI ECG algorithms.
Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada.
Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, AstraZeneca, Bayer, Bristol-Myers Squibb-Pfizer, Johnson & Johnson.
Did Jeffrey Epstein die by murder or suicide? In this episode, I argue that we should use Bayesian statistics to frame the debate. Indeed, we should use this approach to frame most "conspiracy theories". Most such theories are derided as compelling storytellers weaving half-truths to fit their narrative. Bayes offers a more analytical approach:

1. Make an educated guess about the probability of an event occurring: the likelihood of Epstein dying by suicide.
2. Identify authenticated clues that support that hypothesis.
3. Assess the odds of each clue happening independently, e.g. the jail cameras not working, the hyoid bone being broken, both guards falling asleep.
4. Then calculate the odds of those clues happening together.
5. Perform the calculation and update the original probability estimate based on the probability of those clues happening together.

Using Bayes with a "little" help from Grok, I identify the odds of murder versus suicide. I also identify ways that you should attack this analysis, and not just my use of Grok. This approach should be used more frequently as we try to resolve debates surrounding "conspiracies". I don't even really like that word: we're really trying to assess whether event X was caused by Y or Z.
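The five-step recipe above amounts to multiplying prior odds by a likelihood ratio per clue. The prior and the per-clue ratios in this sketch are hypothetical placeholders, not the estimates from the episode:

```python
# Sketch of the Bayesian odds-updating recipe described above.
# All numbers are illustrative placeholders, not real estimates.

def posterior_probability(prior_prob, likelihood_ratios):
    """Update a prior probability using independent clues.

    Each likelihood ratio is P(clue | hypothesis) / P(clue | alternative).
    Assuming the clues are independent (step 3), their ratios multiply
    (step 4), and converting odds back to probability gives step 5.
    """
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds
    for lr in likelihood_ratios:
        posterior_odds *= lr
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical inputs: prior P(suicide) = 0.9, and three clues each
# assumed 5x more likely under the murder hypothesis (ratio 0.2 for suicide).
p = posterior_probability(0.9, [0.2, 0.2, 0.2])  # well below the 0.9 prior
```

The independence assumption in step 3 is also the natural place to attack such an analysis: correlated clues (e.g. understaffing causing both sleeping guards and broken cameras) should not have their ratios multiplied naively.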
When billionaire British entrepreneur Mike Lynch drowned during the sinking of the superyacht Bayesian in August, it sent shockwaves around the world. Having just successfully fought off the US Justice Department on fourteen counts of fraud and conspiracy, he was celebrating his newfound freedom when he was tragically killed during a freak storm.

After months of work by our senior reporter, Henry Bodkin, the Daily T investigates what might have caused a boat previously described as unsinkable to vanish beneath the waves.

Clips in this episode from: BBC Newsnight, BBC News, University of Cambridge Judge Business School, BBC Radio 4, Sky News, AP

Planning Editor: Venetia Rainey
Executive Producer: Louisa Wells
Sound Design: Elliot Lampitt
Social Media Producer: Niamh Walsh
Studio Operator: Meghan Searle

Hosted on Acast. See acast.com/privacy for more information.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia and Michael Cao.

Takeaways:
Sharks play a crucial role in maintaining healthy ocean ecosystems.
Bayesian statistics are particularly useful in data-poor environments like ecology.
Teaching Bayesian statistics requires a shift in mindset from traditional statistical methods.
The shark meat trade is significant and often overlooked.
The ray meat trade is as large as the shark meat trade, with specific markets dominating.
Understanding the ecological roles of species is essential for effective conservation.
Causal language is important in ecological research and should be encouraged.
Evidence-driven decision-making is crucial in balancing human and ecological needs.
Expert opinions are...
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Marketing analytics is crucial for understanding customer behavior.
PyMC Marketing offers tools for customer lifetime value analysis.
Media mix modeling helps allocate marketing spend effectively.
Customer Lifetime Value (CLV) models are essential for understanding long-term customer behavior.
Productionizing models is essential for real-world applications.
Productionizing models involves challenges like model artifact storage and version control.
MLflow integration enhances model tracking and management.
The open-source community fosters collaboration and innovation.
Understanding time series is vital in marketing analytics.
Continuous learning is key in the evolving field of data science.

Chapters:
00:00 Introduction to Will Dean and His Work
10:48 Diving into PyMC Marketing
17:10 Understanding Media Mix Modeling
25:54 Challenges in Productionizing Models
35:27 Exploring Customer Lifetime Value Models
44:10 Learning and Development in Data Science

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz,...
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire and Mike Loncaric.

Takeaways:
The evolution of sports modeling is tied to the availability of high-frequency data.
Bayesian methods are valuable in handling messy, hierarchical data.
Communication between data scientists and decision-makers is crucial for effective model use.
Models are often wrong, and learning from mistakes is part of the process.
Simplicity in models can sometimes yield better results than complexity.
The integration of analytics in sports is still developing, with opportunities in various sports.
Transparency in research and development teams enhances decision-making.
Understanding uncertainty in models is essential for informed decisions.
The balance between point estimates and full distributions is a...
Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com
On Hands-On Tech, Mikah helps Lars, who is seeking help for a friend who is being flooded with spam emails and is looking for a way to filter them out! Mikah strongly recommends transitioning away from ISP-provided email addresses, as they often get shuffled between companies and can face service disruptions. There are also some options to consider when moving from an old email account, such as setting up auto-responses directing senders to the new address, or configuring your new email account to pull from your old one. Mikah suggests two things to do. The first is to create a Gmail account to leverage its excellent spam filtering capabilities, then configure it to pull mail from the Cox address and connect it to Outlook. The second option is to use local spam filtering software like Mailwasher, which offers Bayesian filtering that learns from user input to improve spam detection over time. Other software options to consider are Spambully, Spamfighter, and clean.io, though some may not work without MX record access. Also, be sure to stick around to the end of the episode for an upcoming change to Hands-On Tech. Send in your questions for Mikah to answer! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
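The Bayesian filtering idea Mikah describes, where the filter learns from messages the user labels, can be sketched as a tiny naive Bayes classifier. This is a generic illustration of the technique, not Mailwasher's actual implementation:

```python
# Minimal naive Bayes spam filter: it learns word frequencies from
# user-labelled messages, then scores new mail as spam or ham.
import math
from collections import Counter

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # "User input": each message the user marks as spam or ham.
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def spam_probability(self, text):
        # Log-space naive Bayes with add-one smoothing.
        total = sum(self.msg_counts.values())
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            score = math.log(self.msg_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word] + 1
                score += math.log(count / (n + len(vocab)))
            scores[label] = score
        # Convert log scores back to a normalised probability.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = NaiveBayesFilter()
f.train("win free money now", "spam")
f.train("meeting notes for tuesday", "ham")
```

Because the word counts grow with every correction the user makes, the filter's scores adapt to that user's particular mail over time, which is exactly the "learns from user input" behaviour described above.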
Please join my mailing list here