The JournalFeed podcast for the week of 19-23, 2025. These are summaries from just 2 of the 5 articles we cover every week! For access to more, please visit JournalFeed.org for details about becoming a member.
Monday Spoon Feed: The Ten Test is a quick, reliable, no-equipment sensory exam that performed as well as or better than traditional methods in assessing hand and finger injuries – with none of the cost.
Friday Spoon Feed: In this Bayesian network meta-analysis, researchers compared pharmacologic interventions for migraine treatment. There was no clear superior choice for single-agent pain control, but chlorpromazine IV/IM was among the most effective for adequate pain relief at two hours, and IV/IM ketorolac was possibly among the worst.
Today's clip is from episode 132 of the podcast, with Tom Griffiths.
Tom and Alex Andorra discuss the fundamental differences between human intelligence and artificial intelligence, emphasizing the constraints that shape human cognition, such as limited data, computational resources, and communication bandwidth. They explore how AI systems currently learn and the potential for aligning AI with human cognitive processes. The discussion also delves into the implications of AI in enhancing human decision-making and the importance of understanding human biases to create more effective AI systems.
Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
In this week's episode, we dig into the recently published Marine Accident Investigation Branch (MAIB) interim report into the sinking of the 56-metre superyacht Bayesian in August last year, resulting in the loss of seven lives. For the first time, the narrative of what happened that night can be disclosed, and we also review the MAIB's comments on the weather conditions that fateful evening and Bayesian's stability vulnerabilities. Georgia, meanwhile, is newly returned from the Galapagos and tells us why these very special islands should be on every superyacht itinerary. BOAT Pro: https://boatint.com/zg Subscribe: https://boatint.com/zh Contact us: podcast@boatinternationalmedia.com
United Kingdom correspondent Alice Wilkins spoke to Lisa Owen about how the first pieces of a superyacht that capsized off the coast of Italy with Kiwis on board have been brought to the surface, and how a flight to the Spanish party island of Ibiza has been described as "hell" because of some rowdy passengers. She also spoke about how a British endurance athlete said he's broken the record for running across the width of Australia.
Welcome to Nerd Alert, a series of special episodes bridging the gap between marketing academia and practitioners. We're breaking down highly involved, complex research into plain language and takeaways any marketer can use.
In this episode, Elena and Rob explore how Bayesian modeling offers a more nuanced approach to marketing attribution than traditional methods. They discuss why many marketers still rely on oversimplified attribution models despite their limitations.
Topics covered:
[01:00] "Bayesian Modeling of Marketing Attribution"
[03:00] Problems with traditional attribution models
[04:50] Why simple models persist despite their flaws
[06:00] Key components of Bayesian attribution
[08:00] Rapid decay of ad effects and negative interaction effects
[09:45] How this approach can offer deeper marketing insights
To learn more, visit marketingarchitects.com/podcast or subscribe to our newsletter at marketingarchitects.com/newsletter.
Resources: Sinha, R., Arbour, D., & Puli, A. (2022). Bayesian Modeling of Marketing Attribution. Available at arXiv:2205.15965
Get more research-backed marketing strategies by subscribing to The Marketing Architects on Apple Podcasts, Spotify, or wherever you listen to podcasts.
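For the curious, here is a minimal sketch of the kind of model discussed around the [08:00] mark, written in PyMC: each channel touch adds to the conversion log-odds an effect that decays exponentially with time since exposure. This illustrates the general idea only, not the Sinha et al. paper's actual model; all data, priors, and variable names below are hypothetical.

```python
# Hedged sketch of Bayesian attribution with exponentially decaying channel effects.
# Everything here is simulated placeholder data, not real campaign data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_users, n_channels = 500, 3
days_since_touch = rng.exponential(5.0, size=(n_users, n_channels))  # hypothetical exposures
converted = rng.binomial(1, 0.2, size=n_users)                       # placeholder outcomes

with pm.Model() as attribution:
    base = pm.Normal("base", 0.0, 1.0)                     # baseline conversion log-odds
    beta = pm.HalfNormal("beta", 1.0, shape=n_channels)    # per-channel effect size
    decay = pm.HalfNormal("decay", 1.0, shape=n_channels)  # per-channel decay rate
    # Each touch contributes beta * exp(-decay * time_since_touch) to the log-odds
    effect = (beta * pm.math.exp(-decay * days_since_touch)).sum(axis=-1)
    pm.Bernoulli("y", logit_p=base + effect, observed=converted)
    idata = pm.sample(1000, tune=1000, chains=2)  # posterior over effects and decay rates
```

The point of the Bayesian framing is that the posterior over `beta` and `decay` yields attribution with uncertainty attached, rather than a single deterministic credit split.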
In August 2024, the sailing world was shaken by the tragic sinking of Bayesian, a British-flagged superyacht anchored off Sicily. Today, the UK's Marine Accident Investigation Branch has released its interim report on the sinking—and it's raising important points, especially about the actions of the crew. This is Ocean Sailor — and today we're diving into what the MAIB has revealed so far, and why it matters. #OceanSailing #BluewaterSailing #Seamanship #OffshoreSailing #SailingLife #SailboatLife #PassagePlanning #MaritimeSafety #SailingSafety #MarineAccident #MAIB #YachtSinking #SafetyAtSea #BayesianSinking #bayesianyacht #MAIBReport #SailingNews #SailingCommunity #SailingDiscussion #YachtDesign #SailboatConstruction #OceanGoingYachts #StructuralIntegrity #ModernYachts #OceanSailor #SailingChannel #SailingYouTube #SailingDocumentary #OceanSailorChannel
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Check out Hugo's latest episode with Fei-Fei Li, on How Human-Centered AI Actually Gets Built
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Computational cognitive science seeks to understand intelligence mathematically.
Bayesian statistics is crucial for understanding human cognition.
Inductive biases help explain how humans learn from limited data.
Eliciting prior distributions can reveal implicit beliefs.
The wisdom of individuals can provide richer insights than averaging group responses.
Generative AI can mimic human cognitive processes.
Human intelligence is shaped by constraints of data, computation, and communication.
AI systems operate under different constraints than human cognition.
Human intelligence differs fundamentally from machine intelligence.
Generative AI can complement and enhance human learning.
AI systems currently lack intrinsic human compatibility.
Language training in AI helps align its understanding with human perspectives.
Reinforcement learning from human feedback can lead to misalignment of AI goals.
Representational alignment can improve AI's understanding of human concepts.
AI can help humans make better decisions by providing relevant information.
Research should focus on solving problems rather than just methods.
Chapters:
00:00 Understanding Computational Cognitive Science
13:52 Bayesian Models and Human Cognition
29:50 Eliciting Implicit Prior Distributions
38:07 The Relationship Between Human and AI Intelligence
45:15 Aligning Human and Machine Preferences
50:26 Innovations in AI and Human Interaction
55:35 Resource Rationality in Decision Making
01:00:07 Language Learning in AI Models
It really is time to speak of the curse of the Bayesian, a sort of "ghost ship": a diver engaged in the work to recover the ultramodern sailing yacht died during a dive.
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE-bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction.
Links
Notes and resources at ocdevel.com/mlg/mlg35
Build the future of multi-agent software with AGNTCY
Try a walking desk to stay healthy & sharp while you learn & code
In-Context Learning (ICL)
Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters.
Types:
Zero-shot: Direct query, no examples provided.
One-shot: Single example provided.
Few-shot: Multiple examples, balancing quantity with context window limitations.
Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations.
Emergent Properties: ICL is an "inference-time training" approach, leveraging the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples.
Retrieval Augmented Generation (RAG) and Grounding
Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data.
Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge.
Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information.
RAG Workflow:
Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models).
Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant).
Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing.
Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation.
Generation: The LLM generates responses informed by the augmented context.
Advanced RAG: Includes agentic approaches—self-correction, aggregation, or multi-agent contribution to source ingestion, and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge).
LLM Agents
Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage.
Key Components:
Reasoning Engine (LLM Core): Interprets goals, states, and makes decisions.
Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment.
Memory: Short-term via context window; long-term via persistent storage like RAG-integrated databases or special memory systems.
Tools and APIs: Agents select and use external functions—file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models.
Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability.
Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates.
Multimodal Large Language Models (MLLMs)
Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video).
Architecture:
Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images).
Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content.
Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format.
Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models.
Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation.
Advanced LLM Architectures and Training Directions
Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders).
Patch-Level Training: Predicting larger "patches" of tokens to reduce sequence lengths and computation.
Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model).
Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture.
Evaluation Benchmarks (as of 2025)
Key Benchmarks Used for LLM Evaluation:
GPQA (Diamond): Graduate-level STEM reasoning.
SWE-bench Verified: Real-world software engineering, verifying agentic code abilities.
MMMU: Multimodal, college-level cross-disciplinary reasoning.
HumanEval: Python coding correctness.
HLE (Humanity's Last Exam): Extremely challenging, multimodal knowledge assessment.
LiveCodeBench: Coding with contamination-free, up-to-date problems.
MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts.
MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning.
TauBench/BFCL: Tool utilization in agentic tasks.
TruthfulQA: Measures tendency toward factual accuracy/robustness against misinformation.
Prompt Engineering: High-Impact Techniques
Foundational Approaches:
Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM.
Chain of Thought: Instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality.
Clarity and Structure: Use clear, detailed, and structured instructions—task definition, context, constraints, output format, use of delimiters or markdown structuring.
Affirmative Directives: Phrase instructions positively ("write a concise summary" instead of "don't write a long summary").
Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality.
System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., "You are an expert Python programmer").
Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve.
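To make a few of these techniques concrete, here is a small illustrative prompt template (all wording and labels invented for the example) that combines role assignment, delimiters, one few-shot example, affirmative phrasing, and an explicit chain-of-thought instruction:

```python
# Illustrative prompt template, purely as a sketch of the techniques listed above.
PROMPT = """You are an expert data analyst.

Task: classify the sentiment of the review delimited by ### as positive or negative.
Think step by step, then give your verdict on the final line as "Answer: <label>".

Example:
Review: "Arrived late and the box was crushed."
Reasoning: Late delivery and damage are complaints.
Answer: negative

###
{review}
###
"""

# Usage: fill the delimited slot with the input to classify.
print(PROMPT.format(review="Setup took two minutes and it just works."))
```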
Trends and Research Outlook
Inference-time compute is increasingly important for pushing the boundaries of LLM task performance.
Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation.
Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress.
Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
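And the core of the RAG workflow described earlier fits in a few lines. In this sketch the `embed` function is a random placeholder standing in for a real embedding model (e.g., a sentence transformer), and the "vector database" is just a NumPy matrix:

```python
# Minimal sketch of RAG retrieval: embed documents, then pull the chunks
# most similar to the query by cosine similarity.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder encoder: deterministic random vectors. Swap in a real
    # embedding model in practice.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    return rng.normal(size=(len(texts), 384))

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    d = embed(docs)                  # "vector database" as a plain matrix
    q = embed([query])[0]
    # Cosine similarity = dot product of L2-normalized vectors
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    scores = d @ q
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]   # chunks to prepend to the LLM prompt

context = top_k("What is in-context learning?", ["doc one ...", "doc two ...", "doc three ..."])
```

Production systems add chunking, re-ranking, and a proper vector store (FAISS, ChromaDB, Qdrant), but the retrieval step itself is exactly this similarity lookup.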
Today's clip is from episode 131 of the podcast, with Luke Bornn.
Luke and Alex discuss the application of generative models in sports analytics. They emphasize the importance of Bayesian modeling to account for uncertainty and contextual variations in player data. The discussion also covers the challenges of balancing model complexity with computational efficiency, the innovative ways to hack Bayesian models for improved performance, and the significance of understanding model fitting and discretization in statistical modeling.
Get the full discussion here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Note: this episode was recorded in August of 2022.
In the latest Elucidation, Matt talks to Witold Więcek about the difficulties that come up for researchers who would like to draw upon statistics. Lots of academic fields need to draw heavily on statistics, whether it's economics, psychology, sociology, linguistics, computer science, or data science. This means that a lot of people coming from different backgrounds often need to learn basic statistics in order to investigate whatever question they're investigating. But as we've discussed on this podcast, statistical reasoning is easy for beginners to mess up, and it's also easy for bad-faith parties to tamper with in undetectable ways. They can straight up fabricate data, they can cherry-pick it, they can keep changing the hypothesis they are testing until they find one that is supported by a trend in the data they have. So what should we do? We can't give up on statistics; it is simply too useful a tool.
Witold Więcek argues that researchers have to be mindful of "p-hacking". Statistical significance, the gold standard of academic publishing, can easily be guaranteed by unscrupulous research or motivated reasoning: statistically speaking, even noise can look like signal if we keep asking more and more questions of our data. Modern statistical workflows require us either to adjust the results for the number of hypotheses tested or to follow principles of Bayesian inference. As a broader strategy, Więcek recommends that every research project making significant use of statistical arguments bring in an external consultant, who can productively stress-test those arguments in an adversarial way, given that they aren't part of the main team.
It was a great conversation! I hope you enjoy it.
Matt Teichman
Hosted on Acast. See acast.com/privacy for more information.
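Więcek's point about multiplicity is easy to demonstrate. The sketch below (a hypothetical simulation) runs 100 t-tests on pure noise: roughly five come out "significant" at the 0.05 level by chance alone, and a standard Bonferroni adjustment, one of the corrections he alludes to, typically removes all of them.

```python
# With enough hypotheses, pure noise yields "significant" results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, alpha = 100, 0.05
# 100 hypothetical "studies", each comparing two groups drawn from the SAME distribution
pvals = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(n_tests)
])
print("naive 'discoveries':", (pvals < alpha).sum())           # ~5 expected by chance
print("after Bonferroni:  ", (pvals < alpha / n_tests).sum())  # usually 0
```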
In this episode Gudrun speaks with Nadja Klein and Moussa Kassem Sbeyti, who work at the Scientific Computing Center (SCC) at KIT in Karlsruhe. Since August 2024, Nadja has been a professor at KIT, leading the research group Methods for Big Data (MBD) there. She is an Emmy Noether Research Group Leader and a member of AcademiaNet and Die Junge Akademie, among others. In 2025, Nadja was awarded the Committee of Presidents of Statistical Societies (COPSS) Emerging Leader Award (ELA). The COPSS ELA recognizes early-career statistical scientists who show evidence of and potential for leadership and who will help shape and strengthen the field. She finished her doctoral studies in Mathematics at the Universität Göttingen before conducting a postdoc at the University of Melbourne as a Feodor Lynen Fellow of the Alexander von Humboldt Foundation. Afterwards she was a Professor for Statistics and Data Science at the Humboldt-Universität zu Berlin before joining KIT. Moussa joined Nadja's lab as an associated member in 2023 and later as a postdoctoral researcher in 2024. He pursued a PhD at the TU Berlin while working as an AI Research Scientist at the Continental AI Lab in Berlin. His research primarily focuses on deep learning, developing uncertainty-based automated labeling methods for 2D object detection in autonomous driving. Prior to this, Moussa earned his M.Sc. in Mechatronics Engineering from the TU Darmstadt in 2021. The research of Nadja and Moussa is at the intersection of statistics and machine learning. In Nadja's MBD Lab the research spans theoretical analysis, method development and real-world applications. One of their key focuses is Bayesian methods, which make it possible to incorporate prior knowledge, quantify uncertainties, and bring insight into the "black boxes" of machine learning. By fusing the precision and reliability of Bayesian statistics with the adaptability of machine and deep learning, these methods aim to leverage the best of both worlds. KIT offers a strong research environment, making it an ideal place to continue their work. They bring new expertise that can be leveraged in various applications, and Helmholtz in turn offers a great platform to explore new application areas. For example, Moussa decided to join the group at KIT as part of the Helmholtz Pilot Program Core-Informatics at KIT (KiKIT), an initiative focused on advancing fundamental research in informatics within the Helmholtz Association. Vision models typically depend on large volumes of labeled data, but collecting and labeling this data is both expensive and prone to errors. During his PhD, his research centered on data-efficient learning using uncertainty-based automated labeling techniques: estimating and using model uncertainty to select the most informative samples for training, so that the models can label the rest themselves. Now, within KiKIT, his work has evolved to include knowledge-based approaches in multi-task models, e.g. detection and depth estimation, with the broader goal of enabling the development and deployment of reliable, accurate vision systems in real-world applications. Statistics and data science are fascinating fields, offering a wide variety of methods and applications that constantly lead to new insights. Within this domain, Bayesian methods are especially compelling, as they enable the quantification of uncertainty and the incorporation of prior knowledge. 
These capabilities contribute to making machine learning models more data-efficient, interpretable, and robust, which are essential qualities in safety-critical domains such as autonomous driving and personalized medicine. Nadja is also enthusiastic about the interdisciplinarity of the subject — repeatedly shifting focus between mathematics, economics, statistics and computer science. The combination of theoretical fundamentals and practical applications makes statistics an agile and important field of research in data science. From a deep learning perspective, the focus is on making models both more efficient and more reliable when dealing with large-scale data and complex dependencies. One way to do this is by reducing the need for extensive labeled data. They also work on developing self-aware models that can recognize when they're unsure and even reject their own predictions when necessary. Additionally, they explore model pruning techniques to improve computational efficiency, and specialize in Bayesian deep learning, allowing machine learning models to better handle uncertainty and complex dependencies. Beyond the methods themselves, they also contribute by publishing datasets that help push the development of next-generation, state-of-the-art models. The learning methods are applied across different domains such as object detection, depth estimation, semantic segmentation, and trajectory prediction — especially in the context of autonomous driving and agricultural applications. As deep learning technologies continue to evolve, they're also expanding into new application areas such as medical imaging. Unlike traditional deep learning, Bayesian deep learning provides uncertainty estimates alongside predictions, allowing for more principled decision-making and reducing catastrophic failures in safety-critical applications. It has had a growing impact in several real-world domains where uncertainty really matters. Bayesian learning incorporates prior knowledge and updates beliefs as new data comes in, rather than relying purely on data-driven optimization. In healthcare, for example, Bayesian models help quantify uncertainty in medical diagnoses, which supports more risk-aware treatment decisions and can ultimately lead to better patient outcomes. In autonomous vehicles, Bayesian models play a key role in improving safety. By recognizing when the system is uncertain, they help capture edge cases more effectively, reduce false positives and negatives in object detection, and navigate complex, dynamic environments — like bad weather or unexpected road conditions — more reliably. In finance, Bayesian deep learning enhances both risk assessment and fraud detection by allowing the system to assess how confident it is in its predictions. That added layer of information supports more informed decision-making and helps reduce costly errors. Across all these areas, the key advantage is the ability to move beyond just accuracy and incorporate trust and reliability into AI systems. Bayesian methods are traditionally more expensive, but modern approximations (e.g., variational inference or last-layer inference) make them feasible. Computational costs depend on the problem — sometimes Bayesian models require fewer data points to achieve better performance. The trade-off is between interpretability and computational efficiency, but hardware improvements are helping bridge this gap. 
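As one concrete illustration of a cheap uncertainty approximation in this spirit (a generic sketch, not the group's specific method): Monte Carlo dropout keeps dropout active at test time and uses the spread across several stochastic forward passes as an uncertainty signal, which can then drive the kind of automated-labeling loop described next.

```python
# Sketch: predictive entropy via Monte Carlo dropout, assuming `model` is a
# classifier that returns logits. High-entropy inputs are candidates for
# manual labeling; low-entropy ones can be auto-labeled.
import torch

def mc_dropout_entropy(model: torch.nn.Module, x: torch.Tensor, passes: int = 20) -> torch.Tensor:
    model.train()  # keeps dropout stochastic (in practice, freeze batchnorm separately)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    mean = probs.mean(dim=0)                                   # averaged predictive distribution
    return -(mean * torch.log(mean.clamp_min(1e-12))).sum(-1)  # entropy per input

# Usage sketch (names hypothetical):
# scores = mc_dropout_entropy(detector_head, batch)
# query_idx = scores.topk(k=32).indices   # most uncertain samples -> human labeler
```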
Their research on uncertainty-based automated labeling is designed to make models not just safer and more reliable, but also more efficient. By reducing the need for extensive manual labeling, one improves the overall quality of the dataset while cutting down on human effort and potential labeling errors. Importantly, by selecting informative samples, the model learns from better data — which means it can reach higher performance with fewer training examples. This leads to faster training and better generalization without sacrificing accuracy. They also focus on developing lightweight uncertainty estimation techniques that are computationally efficient, so these benefits don't come with heavy resource demands. In short, this approach helps build models that are more robust, more adaptive to new data, and significantly more efficient to train and deploy — which is critical for real-world systems where both accuracy and speed matter. Statisticians and deep learning researchers often use distinct methodologies, vocabulary and frameworks, making communication and collaboration challenging. Unfortunately, there is also a lack of interdisciplinary education: traditional academic programs rarely integrate both fields. Joint programs, workshops, and cross-disciplinary training can help bridge this gap. From Moussa's experience coming through an industrial PhD, he has seen how many industry settings tend to prioritize short-term gains — favoring quick wins in deep learning over deeper, more fundamental improvements. To overcome this, long-term research partnerships between academia and industry are needed — ones that allow foundational work to evolve alongside practical applications. That kind of collaboration can drive more sustainable, impactful innovation in the long run, something the Methods for Big Data group actively pursues. Looking ahead, one of the major directions for deep learning in the next five to ten years is the shift toward trustworthy AI. We're already seeing growing attention on making models more explainable, fair, and robust — especially as AI systems are being deployed in critical areas like healthcare, mobility, and finance. The group also expects to see more hybrid models — combining deep learning with Bayesian methods, physics-based models, or symbolic reasoning. These approaches can help bridge the gap between raw performance and interpretability, and often lead to more data-efficient solutions. Another big trend is the rise of uncertainty-aware AI. As AI moves into more high-risk, real-world applications, it becomes essential that systems understand and communicate their own confidence. This is where uncertainty modeling will play a key role — helping to make AI not just more powerful, but also safer and more reliable. The lecture "Advanced Bayesian Data Analysis" covers fundamental concepts in Bayesian statistics, including parametric and non-parametric regression, computational techniques such as MCMC and variational inference, and Bayesian priors for handling high-dimensional data. Additionally, the lecturers offer a Research Seminar on Selected Topics in Statistical Learning and Data Science. The workgroup offers a variety of Master's thesis topics at the intersection of statistics and deep learning, focusing on Bayesian modeling, uncertainty quantification, and high-dimensional methods. Current topics include predictive information criteria for Bayesian models and uncertainty quantification in deep learning. 
Topics span theoretical, methodological, computational and applied projects. Students interested in rigorous theoretical and applied research are encouraged to explore the available projects and contact the group for further details. The general advice of Nadja and Moussa for everybody interested in entering the field is: "Develop a strong foundation in statistical and mathematical principles, rather than focusing solely on the latest trends. Gain expertise in both theory and practical applications, as real-world impact requires a balance of both. Be open to interdisciplinary collaboration. Some of the most exciting and meaningful innovations happen at the intersection of fields — whether that's statistics and deep learning, or AI and domain-specific areas like medicine or mobility. So don't be afraid to step outside your comfort zone, ask questions across disciplines, and look for ways to connect different perspectives. That's often where real breakthroughs happen. With every new challenge comes an opportunity to innovate, and that's what keeps this work exciting. We're always pushing for more robust, efficient, and trustworthy AI. And we're also growing — so if you're a motivated researcher interested in this space, we'd love to hear from you."
Literature and further information
Webpage of the group
Wikipedia: Expected value of sample information
C. Howson & P. Urbach: Scientific Reasoning: The Bayesian Approach (3rd ed.). Open Court Publishing Company. ISBN 978-0-8126-9578-6, 2005.
A. Gelman et al.: Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5, 2013.
Yu, Angela: Introduction to Bayesian Decision Theory, cogsci.ucsd.edu, 2013.
Devin Soni: Introduction to Bayesian Networks, 2015.
G. Nuti, L. A. J. Rugama, A.-I. Cross: Efficient Bayesian Decision Tree Algorithm, arXiv:1901.03214 [stat.ML], 2019.
M. Carlan, T. Kneib and N. Klein: Bayesian conditional transformation models, Journal of the American Statistical Association, 119(546):1360-1373, 2024.
N. Klein: Distributional regression for data analysis, Annual Review of Statistics and Its Application, 11:321-346, 2024.
C. Hoffmann and N. Klein: Marginally calibrated response distributions for end-to-end learning in autonomous driving, Annals of Applied Statistics, 17(2):1740-1763, 2023.
Kassem Sbeyti, M., Karg, M., Wirth, C., Klein, N., & Albayrak, S.: Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection. In Uncertainty in Artificial Intelligence (pp. 1890-1900), PMLR, September 2024.
M. K. Sbeyti, N. Klein, A. Nowzad, F. Sivrikaya and S. Albayrak: Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection (pdf). To appear in Transactions on Machine Learning Research, 2025.
Podcasts
Learning, Teaching, and Building in the Age of AI, Ep 42 of Vanishing Gradients, Jan 2025.
O. Beige, G. Thäter: Risikoentscheidungsprozesse, Gespräch im Modellansatz Podcast, Folge 193, Fakultät für Mathematik, Karlsruher Institut für Technologie (KIT), 2019.
In this episode of The Backstory on the Shroud of Turin, host Guy Powell interviews evangelical apologist and theologian Tom Dallis. The two dive deep into Jewish burial customs from the first century and how these practices offer compelling support for the authenticity of the Shroud of Turin. Dallis details how key figures like Nicodemus and Joseph of Arimathea honored Jesus Christ with kingly burial rites—including 75 pounds of burial spices and fine linen, exactly what we'd expect for a royal entombment.
But the discussion doesn't stop at tradition. Dallis explores how modern science can further bolster the case for the Shroud, applying odds calculus and Bayesian probability. By combining over 30 lines of evidence—from forensic blood analysis to image formation science—Dallis concludes that the probability of forgery is so low it borders on the impossible.
You'll also learn how the Shroud reflects not only the physical suffering of Jesus but also echoes the symbolic role of Christ as the High Priest in the Holy of Holies.
Whether you're grounded in faith or grounded in data, this episode challenges you to think deeper about the most famous burial cloth in history.
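For reference, the "odds calculus" invoked here is the odds form of Bayes' theorem: a prior odds multiplied by one likelihood ratio per line of evidence, under the strong assumption that the lines are independent. The strength of any such conclusion rests entirely on how defensible each ratio and that independence assumption are.

```latex
% Odds form of Bayes' theorem with n lines of evidence assumed independent given H:
\frac{P(H \mid E_1,\ldots,E_n)}{P(\lnot H \mid E_1,\ldots,E_n)}
  = \frac{P(H)}{P(\lnot H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)}
```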
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Takeaways:
Player tracking data revolutionized sports analytics.
Decision-making in sports involves managing uncertainty and budget constraints.
Luke emphasizes the importance of portfolio optimization in team management.
Clubs with high budgets can afford inefficiencies in player acquisition.
Statistical methods provide a probabilistic approach to player value.
Removing human bias is crucial in sports decision-making.
Understanding player performance distributions aids in contract decisions.
The goal is to maximize performance value per dollar spent.
Model validation in sports requires focusing on edge cases.
New treatment alert! The FDA recently approved Tapinarof, applied as a cream, for kids 2 years and up. We ask Dr. Leon Kircik from Icahn School of Medicine, NY, who led the clinical trials, about the safety, efficacy and side effects of Tapinarof. And because we are parents too, we ask: How quickly does it work? Can you start/stop it as needed? How easy will it be to access? And more. If you like our podcast, please consider supporting it with a tax-deductible donation.
Research discussed:
Tapinarof Improved Outcomes and Sleep for Patients and Families in Two Phase 3 Atopic Dermatitis Trials in Adults and Children
Maximal usage trial of tapinarof cream 1% once daily in pediatric patients down to 2 years of age with extensive atopic dermatitis
Tapinarof cream 1% once daily: Significant efficacy in the treatment of moderate to severe atopic dermatitis in adults and children down to 2 years of age in the pivotal phase 3 ADORING trials
Tapinarof cream in the treatment of atopic dermatitis in children and adults: a systematic review and meta-analysis
Efficacy and safety of Ruxolitinib, Crisaborole, and Tapinarof for mild-to-moderate atopic dermatitis: a Bayesian network analysis of RCTs
Today's clip is from episode 130 of the podcast, with epidemiological modeler Adam Kucharski.
This conversation explores the critical role of epidemic modeling during the COVID-19 pandemic, highlighting how these models informed public health decisions and the relationship between modeling and policy. The discussion emphasizes the need for improved communication and understanding of data among the public and policymakers.
Get the full discussion at https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.
Takeaways:
Epidemiology requires a blend of mathematical and statistical understanding.
Models are essential for informing public health decisions during epidemics.
The COVID-19 pandemic highlighted the importance of rapid modeling.
Misconceptions about data can lead to misunderstandings in public health.
Effective communication is crucial for conveying complex epidemiological concepts.
Epidemic thinking can be applied to various fields, including marketing and finance.
Public health policies should be informed by robust modeling and data analysis.
Automation can help streamline data analysis in epidemic response.
Understanding the limitations of models...
Agents of Innovation: AI-Powered Product Ideation with Synthetic Consumer Testing // MLOps Podcast #306 with Luca Fiaschi, Partner of PyMC Labs.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Traditional product development cycles require extensive consumer research and market testing, resulting in lengthy development timelines and significant resource investment. We've transformed this process by building a distributed multi-agent system that enables parallel quantitative evaluation of hundreds of product concepts. Our system combines three key components: an agentic innovation lab generating high-quality product concepts, synthetic consumer panels using fine-tuned foundational models validated against historical data, and an evaluation framework that correlates with real-world testing outcomes. We can talk about how this architecture enables rapid concept discovery and digital experimentation, delivering insights into product success probability before development begins. Through case studies and technical deep-dives, you'll learn how we built an AI-powered innovation lab that compresses months of product development and testing into minutes - without sacrificing the accuracy of insights.
// Bio
With over 15 years of leadership experience in AI, data science, and analytics, Luca has driven transformative growth in technology-first businesses. As Chief Data & AI Officer at Mistplay, he led the company's revenue growth through AI-powered personalization and data-driven pricing. Prior to that, he held executive roles at global industry leaders such as HelloFresh ($8B), Stitch Fix ($1.2B) and Rocket Internet ($1B). Luca's core competencies include machine learning, artificial intelligence, data mining, data engineering, and computer vision, which he has applied to various domains such as marketing, logistics, personalization, product, experimentation and pricing. He is currently a partner at PyMC Labs, a leading data science consultancy, providing insights and guidance on applications of Bayesian and Causal Inference techniques and Generative AI to Fortune 500 companies. Luca holds a PhD in AI and Computer Vision from Heidelberg University and has more than 450 citations on his research work.
// Related Links
Website: https://www.pymc-labs.com/
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Luca on LinkedIn: /lfiaschi
As regulatory expectations evolve under the FDA's Project Optimus oncology dosing initiative, biostatistics is emerging as a central pillar in designing and executing trials that move beyond the traditional maximum tolerated dose (MTD) approach.
In this fourth episode of our Project Optimus series, host Dr. Wael Harb is joined by biostatistics expert X.Q. Xue, PhD, Vice President and Global Head, Biostatistics at Syneos Health, to explore how statistical science is transforming dose optimization in oncology drug development. Dr. Xue discusses the limitations of legacy 3+3 dose-escalation designs and introduces innovative alternatives, including Bayesian modeling, adaptive trial strategies and randomized parallel dose-response studies, which support more precise dose selection and can ultimately improve patient outcomes and trial efficiency.
Together, Drs. Harb and Xue examine how smaller biotech companies can overcome barriers to implementation, the role of simulation and AI in trial planning and how a biostatistics-driven approach may increase the likelihood of late-phase success, reduce post-marketing adjustments and support faster regulatory approvals.
The views expressed in this podcast belong solely to the speakers and do not represent those of their organization. If you want access to more future-focused, actionable insights to help biopharmaceutical companies better execute and succeed in a constantly evolving environment, visit the Syneos Health Insights Hub. The perspectives you'll find there are driven by dynamic research and crafted by subject matter experts focused on real answers to help guide decision-making and investment. You can find it all at insightshub.health.
Like what you're hearing? Be sure to rate and review us! We want to hear from you! If there's a topic you'd like us to cover on a future episode, contact us at podcast@syneoshealth.com.
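To illustrate what "model-based" means in contrast to the 3+3 rule, here is a deliberately simplified sketch of a CRM-style Bayesian update of a one-parameter dose-toxicity curve. The dose levels, prior, and target rate are hypothetical, and this is not Syneos Health methodology.

```python
# Grid-based Bayesian update of a one-parameter logistic dose-toxicity model.
# All numbers below are invented for illustration.
import numpy as np
from scipy.special import expit

doses = np.array([1.0, 2.0, 3.5, 5.0, 7.0])  # hypothetical (scaled) dose levels
target_tox = 0.25                             # hypothetical target toxicity rate

# Prior over the slope `a` in P(toxicity | dose) = expit(-3 + a * dose)
a_grid = np.linspace(0.05, 2.0, 400)
prior = np.exp(-0.5 * ((a_grid - 0.8) / 0.4) ** 2)  # informal normal prior
prior /= prior.sum()

def posterior(n_treated, n_tox):
    """Update the prior with toxicity counts observed per dose level."""
    like = np.ones_like(a_grid)
    for d, n, t in zip(doses, n_treated, n_tox):
        p = expit(-3 + a_grid * d)
        like *= p**t * (1 - p)**(n - t)
    post = prior * like
    return post / post.sum()

# Example: 6 patients so far, 1 dose-limiting toxicity at the second level
post = posterior([3, 3, 0, 0, 0], [0, 1, 0, 0, 0])
# Recommend the dose whose posterior-mean toxicity is closest to the target
mean_tox = np.array([(post * expit(-3 + a_grid * d)).sum() for d in doses])
next_dose = doses[np.argmin(np.abs(mean_tox - target_tox))]
```

Unlike the 3+3 rule, every observed patient reshapes the whole curve, so the recommendation uses all accumulated data rather than only the current cohort.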
Today's clip is from episode 129 of the podcast, with AI expert and researcher Vincent Fortuin. This conversation delves into the intricacies of Bayesian deep learning, contrasting it with traditional deep learning and exploring its applications and challenges. Get the full discussion at https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data. They discuss two main ways AI can "think": one is following specific rules or steps (like a computer program); the other is more intuitive, guessing based on patterns (like modern AI often does). They found that combining both methods works well for solving complex puzzles like ARC. A key idea is "compositionality": building big ideas from small ones, like LEGOs (a toy illustration of compositional search follows the reference list below). This is powerful but can also be overwhelming. Another important idea is "abstraction": understanding things simply, without getting lost in details, and knowing there are different levels of understanding. Ultimately, they believe the best AI will need to explore, experiment, and build models of the world, much like humans do when learning something new.
SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***
TRANSCRIPT: https://www.dropbox.com/scl/fi/3ngggvhb3tnemw879er5y/BASIS.pdf?rlkey=lr2zbj3317mex1q5l0c2rsk0h&dl=0
Zenna Tavares: http://www.zenna.org/
Kevin Ellis: https://www.cs.cornell.edu/~ellisk/
TOC:
1. Compositionality and Learning Foundations
[00:00:00] 1.1 Compositional Search and Learning Challenges
[00:03:55] 1.2 Bayesian Learning and World Models
[00:12:05] 1.3 Programming Languages and Compositionality Trade-offs
[00:15:35] 1.4 Inductive vs Transductive Approaches in AI Systems
2. Neural-Symbolic Program Synthesis
[00:27:20] 2.1 Integration of LLMs with Traditional Programming and Meta-Programming
[00:30:43] 2.2 Wake-Sleep Learning and DreamCoder Architecture
[00:38:26] 2.3 Program Synthesis from Interactions and Hidden State Inference
[00:41:36] 2.4 Abstraction Mechanisms and Resource Rationality
[00:48:38] 2.5 Inductive Biases and Causal Abstraction in AI Systems
3. Abstract Reasoning Systems
[00:52:10] 3.1 Abstract Concepts and Grid-Based Transformations in ARC
[00:56:08] 3.2 Induction vs Transduction Approaches in Abstract Reasoning
[00:59:12] 3.3 ARC Limitations and Interactive Learning Extensions
[01:06:30] 3.4 Wake-Sleep Program Learning and Hybrid Approaches
[01:11:37] 3.5 Project MARA and Future Research Directions
REFS:
[00:00:25] DreamCoder, Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[00:01:10] Mind Your Step, Ryan Liu et al. https://arxiv.org/abs/2410.21333
[00:06:05] Bayesian inference, Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. https://psycnet.apa.org/record/2008-06911-003
[00:13:00] Induction and Transduction, Wen-Ding Li, Zenna Tavares, Yewen Pu, Kevin Ellis https://arxiv.org/abs/2411.02272
[00:23:15] Neurosymbolic AI, Garcez, Artur d'Avila et al. https://arxiv.org/abs/2012.05876
[00:33:50] Induction and Transduction (II), Wen-Ding Li, Kevin Ellis et al. https://arxiv.org/abs/2411.02272
[00:38:35] ARC, François Chollet https://arxiv.org/abs/1911.01547
[00:39:20] Causal Reactive Programs, Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares http://www.zenna.org/publications/autumn2022.pdf
[00:42:50] MuZero, Julian Schrittwieser et al. http://arxiv.org/pdf/1911.08265
[00:43:20] VisualPredicator, Yichao Liang https://arxiv.org/abs/2410.23156
[00:48:55] Bayesian models of cognition, Joshua B. Tenenbaum https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/
[00:49:30] The Bitter Lesson, Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[01:06:35] Program induction, Kevin Ellis, Wen-Ding Li https://arxiv.org/pdf/2411.02272
[01:06:50] DreamCoder (II), Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[01:11:55] Project MARA, Zenna Tavares, Kevin Ellis https://www.basis.ai/blog/mara/
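As a deliberately tiny illustration of the compositional search the episode describes, the sketch below enumerates compositions of a few list primitives and keeps those consistent with input-output examples. The primitives and the task are invented for illustration; real systems like DreamCoder add learned priors and neural guidance on top of this kind of enumerative search.

```python
# Toy compositional program search over a tiny DSL (all examples invented).
from itertools import product

PRIMITIVES = {
    "reverse":   lambda xs: xs[::-1],
    "double":    lambda xs: [2 * x for x in xs],
    "increment": lambda xs: [x + 1 for x in xs],
    "sort":      lambda xs: sorted(xs),
}

# Hypothetical task: recover a program from input-output examples.
examples = [([3, 1, 2], [3, 2, 1]),
            ([5, 4],    [5, 4])]

def compose(fns):
    def program(xs):
        for f in fns:          # apply the primitives left to right
            xs = f(xs)
        return xs
    return program

def search(max_depth=3):
    names = list(PRIMITIVES)
    for depth in range(1, max_depth + 1):
        for combo in product(names, repeat=depth):
            program = compose([PRIMITIVES[n] for n in combo])
            if all(program(i) == o for i, o in examples):
                yield combo

print(next(search()))  # ('sort', 'reverse'): sort ascending, then flip
```

With four primitives, depth d means 4^d candidates; this exponential blow-up is the "overwhelming" side of compositionality the guests describe, and it is what learned priors over programs are meant to tame.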
Kevin Werbach speaks with Eric Bradlow, Vice Dean of AI & Analytics at Wharton. Bradlow highlights the transformative impacts of AI from his perspective as an applied statistician and quantitative marketing expert. He describes the distinctive approach of Wharton's analytics program and its recent evolution with the rise of AI. The conversation highlights the significance of legal and ethical responsibility within the AI field, and the genesis of the new Wharton Accountable AI Lab. Werbach and Bradlow then examine the role of academic institutions in shaping the future of AI, and how institutions like Wharton can lead the way in promoting accountability, learning and responsible AI deployment. Eric Bradlow is the Vice Dean of AI & Analytics at Wharton, Chair of the Marketing Department, and also a professor of Economics, Education, Statistics, and Data Science. His research interests include Bayesian modeling, statistical computing, and developing new methodology for unique data structures with application to business problems. In addition to publishing in a variety of top journals, he has won numerous teaching awards at Wharton, including the MBA Core Curriculum teaching award, the Miller-Sherrerd MBA Core Teaching Award and the Excellence in Teaching Award.
Episode Transcript
Wharton AI & Analytics Initiative
Eric Bradlow - Knowledge at Wharton
Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
The hype around AI in science often fails to deliver practical results.
Bayesian deep learning combines the strengths of deep learning and Bayesian statistics.
Fine-tuning LLMs with Bayesian methods improves prediction calibration.
There is no single dominant library for Bayesian deep learning yet.
Real-world applications of Bayesian deep learning exist in various fields.
Prior knowledge is crucial for the effectiveness of Bayesian deep learning.
Data efficiency in AI can be enhanced by incorporating prior knowledge.
Generative AI and Bayesian deep learning can inform each other.
The complexity of a problem influences the choice between Bayesian and traditional deep learning.
Meta-learning enhances the efficiency of Bayesian models.
PAC-Bayesian theory merges Bayesian and frequentist ideas.
Laplace inference offers a cost-effective approximation (sketched after this entry).
Subspace inference can optimize parameter efficiency.
Bayesian deep learning is crucial for reliable predictions.
Effective communication of uncertainty is essential.
Realistic benchmarks are needed for Bayesian methods.
Collaboration and communication in the AI community are vital.
Chapters:
00:00 Introduction to Bayesian Deep Learning
04:24 Vincent Fortuin's Journey to Bayesian Deep Learning
11:52 Understanding Bayesian Deep Learning
16:29 Current Landscape of Bayesian Libraries
21:11 Real-World Applications of Bayesian Deep Learning
23:33 When to Use Bayesian Deep Learning
28:22 Data Efficiency in AI and Generative Modeling
30:18 Integrating Bayesian Knowledge into Generative Models
31:44 The Role of Meta-Learning in Bayesian Deep Learning
34:06 Understanding PAC-Bayesian Theory
37:55 Algorithms for Bayesian Deep Learning Models
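Among the approximations mentioned in the takeaways, the Laplace approximation is the easiest to show in a few lines. Here it is for a toy Bayesian logistic regression (synthetic data and prior settings invented for illustration): find the MAP weights by Newton's method, then treat the posterior as a Gaussian centered there, with covariance given by the inverse Hessian of the negative log-posterior.

```python
# Minimal Laplace approximation for Bayesian logistic regression (toy data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.5, -2.0]) + rng.normal(scale=0.5, size=100) > 0).astype(float)

prior_var = 10.0  # isotropic Gaussian prior on the weights

# MAP by Newton's method (gradient and Hessian of the neg. log-posterior).
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))               # predicted probabilities
    grad = -X.T @ (y - p) + w / prior_var
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(2) / prior_var
    w -= np.linalg.solve(H, grad)

# Laplace posterior: N(w_MAP, H^{-1}) -- a Gaussian fit at the mode.
cov = np.linalg.inv(H)
print("MAP weights:", w.round(3))
print("posterior std devs:", np.sqrt(np.diag(cov)).round(3))
```

The appeal the episode alludes to: one optimization plus one Hessian gives calibrated-ish uncertainty at a fraction of the cost of full posterior sampling.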
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Matt emphasizes the importance of Bayesian statistics in scenarios with limited data.
Communicating insights to coaches is a crucial skill for data analysts.
Building a data team requires understanding the needs of the coaching staff.
Player recruitment is a significant focus in football analytics.
The integration of data science in sports is still evolving.
Effective data modeling must consider the practical application in games.
Collaboration between data analysts and coaches enhances decision-making.
Having a robust data infrastructure is essential for efficient analysis.
The landscape of sports analytics is becoming increasingly competitive.
Player recruitment involves analyzing various data models.
Biases in traditional football statistics can skew player evaluations.
Statistical techniques should leverage the structure of football data.
Tracking data opens new avenues for understanding player movements.
The role of data analysis in football will continue to grow.
Aspiring analysts should focus on curiosity and practical experience.
Chapters:
00:00 Introduction to Football Analytics and Matt's Journey
04:54 The Role of Bayesian Methods in Football
10:20 Challenges in Communicating Data Insights
17:03 Building Relationships with Coaches
22:09 The Structure of the Data Team at Como
26:18 Focus on Player Recruitment and Transfer Strategies
28:48 January Transfer Window Insights
30:54 Biases in Football Data Analysis
34:11 Comparative Analysis of Men's and Women's Football
36:55 Statistical Techniques in Football Analysis
42:48 The Impact of Tracking Data on Football Analysis
45:49 The Future of Data-Driven Football Strategies
47:27 Advice for Aspiring Football Analysts
Today's revolutionary idea is something a bit different: David talks to statistician David Spiegelhalter about how an eighteenth-century theory of probability emerged from relative obscurity in the twentieth century to reconfigure our understanding of the relationship between past, present and future. What was Thomas Bayes's original idea about doing probability in reverse: from effect to cause? What happened when this way of thinking passed through the vortex of the French Revolution? How has it come to lie behind recent innovations in political polling, AI, self-driving cars, medical research and so much more? Why does it remain controversial to this day? The latest edition of our free fortnightly newsletter is available: to get it in your inbox, sign up now at https://www.ppfideas.com/newsletter
Next time: 1848: The Liberal Revolution w/ Chris Clark
Past Present Future is part of the Airwave Podcast Network.
Learn more about your ad choices. Visit megaphone.fm/adchoices
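Bayes's "probability in reverse" fits in a few lines. A worked toy example, with numbers invented purely for illustration: a screening test described by its effect-given-cause probabilities, inverted to give the cause-given-effect probability.

```python
# Bayes' rule "in reverse": P(cause | effect) from P(effect | cause).
# All numbers are illustrative assumptions, not from the episode.
prevalence = 0.01          # P(disease)
sensitivity = 0.95         # P(positive test | disease)
false_positive = 0.10      # P(positive test | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # ~0.088: still under 9%
```

The counterintuitive smallness of that posterior, driven by the low prior, is a good part of why the approach stayed controversial for so long.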
Are We Looking at the End of Atheism? Dr. Christopher Sernaque sits down with Otangelo Grassio, an advocate for Intelligent Design, to break down the biggest questions in science and faith! In the final episode of Current Topics in Science Season 6, we dive deep into his latest book, Unraveling the Theistic Worldview, tackling Bayesian probability, the limits of naturalism, and why the demand for definitive proof of God might be a trap! Don't miss this eye-opening discussion that could change the way you see science forever! Watch now and decide for yourself!
Today's episode is about a very different revolution from any we've discussed so far: David talks to historian Hank Gonzalez about the Haitian Revolution, which for the first time in history saw a slave revolt result in an independent free state. How did the Haitian Revolution intersect with the American and French Revolutions that preceded it? Why were European powers unable to reverse it despite massive military intervention? What is its legacy for the state of Haiti today? Tickets are still available for PPF Live at the Bath Curious Minds Festival: join us on Saturday 29th March to hear David in conversation with Robert Saunders about the legacy of Winston Churchill: The Politician with Nine Lives https://bit.ly/42GPp3X
Out tomorrow: the latest edition of our free fortnightly newsletter. To get it in your inbox, sign up now at https://www.ppfideas.com/newsletters
Next time: The Bayesian Revolution w/ David Spiegelhalter
Past Present Future is part of the Airwave Podcast Network.
Learn more about your ad choices. Visit megaphone.fm/adchoices
This episode covers:
Cardiology This Week: a concise summary of recent studies
AI and the future of the electrocardiogram
The heart in rheumatic disorders and autoimmune diseases
Statistics Made Easy: Bayesian analysis
Host: Susanna Price
Guests: Carlos Aguiar, Paul Friedman, Maya Buch
Want to watch that episode? Go to: https://esc365.escardio.org/event/1801
Disclaimer: ESC TV Today is supported by Bristol Myers Squibb. This scientific content and the opinions expressed in the programme have not been influenced in any way by its sponsor. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC.
Declarations of interests: Stephan Achenbach, Antonio Greco, Nicolle Kraenkel and Susanna Price have declared to have no potential conflicts of interest to report. Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Novo Nordisk, Pfizer, Sanofi, Servier, Takeda, Tecnimede. Maya Buch has declared to have potential conflicts of interest to report: grant/research support paid to University of Manchester from Gilead and Galapagos; consultant and/or speaker with funds paid to University of Manchester for AbbVie, Boehringer Ingelheim, CESAS Medical, Eli Lilly, Galapagos, Gilead Sciences, Medistream and Pfizer Inc; member of the Speakers' Bureau for AbbVie with funds paid to University of Manchester. Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo. Paul Friedman has declared to have potential conflicts of interest to report: co-inventor of AI ECG algorithms. Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada. Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, AstraZeneca, Bayer, Bristol-Myers Squibb-Pfizer, Johnson & Johnson.
Did Jeffrey Epstein die by murder or suicide? In this episode, I argue that we should use Bayesian statistics to frame the debate. Indeed, we should use this approach to frame most "conspiracy theories". Most such theories are derided as compelling storytellers weaving half-truths to fit their narrative. Bayes offers a more analytical approach:
1. Make an educated guess about the probability of an event occurring, e.g. the likelihood of Epstein dying by suicide.
2. Identify authenticated clues that support that hypothesis.
3. Assess the odds of each clue happening independently, i.e. jail cameras not working, the hyoid bone being broken, both guards falling asleep.
4. Then calculate the odds of those happening together.
5. Perform the calculation and update the original probability estimate based upon the probability of those clues happening together.
Using Bayes with a "little" help from Grok, I identify the odds of murder versus suicide. I also identify ways that you should attack this analysis, and not just my use of Grok. This approach should be used more frequently as we try to resolve debates surrounding "conspiracies". I don't even really like that word. We're really trying to assess whether event X was caused by Y or Z. (A toy version of the calculation appears below.)
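A minimal sketch of the update the episode walks through, using the odds form of Bayes' rule. All the clue probabilities below are placeholders invented for illustration, not the figures used in the episode; the structure (multiplying independent likelihood ratios into the prior odds) is the point.

```python
# Odds-form Bayesian update over independent clues (illustrative numbers only).
prior_p_suicide = 0.9                          # step 1: educated prior guess
prior_odds = prior_p_suicide / (1 - prior_p_suicide)

# Steps 2-3: for each clue, the ratio P(clue | suicide) / P(clue | murder).
# These likelihood ratios are placeholders, not the episode's figures.
likelihood_ratios = {
    "cameras not working": 0.10,
    "hyoid bone broken":   0.30,
    "both guards asleep":  0.20,
}

# Step 4: treating the clues as independent means the ratios multiply.
posterior_odds = prior_odds
for clue, lr in likelihood_ratios.items():
    posterior_odds *= lr

# Step 5: convert the updated odds back to a probability.
posterior_p_suicide = posterior_odds / (1 + posterior_odds)
print(f"P(suicide | clues) = {posterior_p_suicide:.3f}")
# With these made-up numbers: 9 * 0.1 * 0.3 * 0.2 = 0.054 -> ~0.051.
```

The independence assumption in step 4 is exactly where such an analysis invites attack: correlated clues (for example, general jail dysfunction explaining several of them at once) shrink the combined ratio.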
When billionaire British entrepreneur Mike Lynch drowned during the sinking of the superyacht Bayesian in August, it sent shockwaves around the world. Having just successfully fought off the US Justice Department on fourteen counts of fraud and conspiracy, he was celebrating his newfound freedom when he was tragically killed during a freak storm. After months of work by our senior reporter, Henry Bodkin, the Daily T investigates what might have caused a boat previously described as unsinkable to vanish beneath the waves.
Clips in this episode from: BBC Newsnight, BBC News, University of Cambridge Judge Business School, BBC Radio 4, Sky News, AP
Planning Editor: Venetia Rainey
Executive Producer: Louisa Wells
Sound Design: Elliot Lampitt
Social Media Producer: Niamh Walsh
Studio Operator: Meghan Searle
Hosted on Acast. See acast.com/privacy for more information.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia and Michael Cao.
Takeaways:
Sharks play a crucial role in maintaining healthy ocean ecosystems.
Bayesian statistics are particularly useful in data-poor environments like ecology.
Teaching Bayesian statistics requires a shift in mindset from traditional statistical methods.
The shark meat trade is significant and often overlooked.
The ray meat trade is as large as the shark meat trade, with specific markets dominating.
Understanding the ecological roles of species is essential for effective conservation.
Causal language is important in ecological research and should be encouraged.
Evidence-driven decision-making is crucial in balancing human and ecological needs.
Expert opinions are...
Plus: AI Robots Help Seniors
Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us
Teleperformance Introduces AI to Neutralize Indian Call Center Accents
Teleperformance, the world's largest call center operator, has implemented AI technology developed by Sanas to "neutralize" the accents of Indian customer service agents in real time. This initiative aims to enhance clarity and improve customer interactions. While the company asserts that this will foster better connections between customers and agents, critics express concerns about potential impacts on cultural identity and authenticity in customer service.
AI Robots May Hold Key to Nursing Japan's Ageing Population
Japan faces a critical shortage of aged-care workers due to its rapidly ageing population and declining birth rate. Researchers in Tokyo have developed AIREC, an AI-driven humanoid robot capable of assisting with tasks like patient movement and household chores. While promising, these robots require significant advancements in precision and safety before widespread adoption, anticipated around 2030.
Tencent's Hunyuan Turbo S AI Model Outpaces DeepSeek R1
Tencent has unveiled its latest AI model, Hunyuan Turbo S, which delivers responses in under a second, surpassing the speed of DeepSeek's R1 model. This development intensifies the AI competition among Chinese tech giants, with Tencent emphasizing Turbo S's cost-efficiency and advanced capabilities in knowledge, mathematics, and reasoning.
Majority of Small Businesses Now Embracing Artificial Intelligence
A 2024 survey by Goldman Sachs 10,000 Small Businesses reveals that 69% of small businesses have integrated AI into their operations, a significant increase from 56% in 2023. Business owners report that AI adoption has led to time and cost savings, with applications ranging from coding assistance and content creation to application screening and inventory management.
A Breakthrough in AI-Designed Lightweight, Strong Materials
Researchers have utilized AI to develop nanostructured carbon materials that combine the compressive strength of carbon steel (180–360 MPa) with the low density of Styrofoam (125–215 kg/m³). This advancement holds promise for applications in aviation and other industries where the strength-to-weight ratio is critical. The materials were created using a multi-objective Bayesian optimization algorithm and fabricated through two-photon polymerization photolithography.
Google's AI Summaries Face Legal Battle Over Search Traffic
Chegg has sued Google, claiming its AI-generated search summaries are cutting traffic to its site and harming revenue. The lawsuit argues that Google's AI Overviews provide direct answers, discouraging users from clicking external links. This case highlights rising tensions between content creators and AI-driven search models.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Marketing analytics is crucial for understanding customer behavior.
PyMC Marketing offers tools for customer lifetime value analysis.
Media mix modeling helps allocate marketing spend effectively (a minimal sketch follows this entry).
Customer Lifetime Value (CLV) models are essential for understanding long-term customer behavior.
Productionizing models is essential for real-world applications.
Productionizing models involves challenges like model artifact storage and version control.
MLflow integration enhances model tracking and management.
The open-source community fosters collaboration and innovation.
Understanding time series is vital in marketing analytics.
Continuous learning is key in the evolving field of data science.
Chapters:
00:00 Introduction to Will Dean and His Work
10:48 Diving into PyMC Marketing
17:10 Understanding Media Mix Modeling
25:54 Challenges in Productionizing Models
35:27 Exploring Customer Lifetime Value Models
44:10 Learning and Development in Data Science
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz,...
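For a feel of what a media mix model actually contains, here is a deliberately minimal, hand-rolled version in raw PyMC: a fixed geometric adstock (carryover) followed by a saturating response whose parameters are inferred. Everything here (the synthetic data, the single channel, the assumed-known decay) is a simplifying assumption for illustration; the pymc-marketing package wraps a much richer version of the same ingredients.

```python
# Minimal hand-rolled media-mix model: adstock + saturation (synthetic data).
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 104
spend = rng.gamma(2.0, 1.0, size=n)        # one channel's weekly spend

# Geometric adstock with a fixed, assumed-known decay (a simplification).
alpha, l_max = 0.5, 8
weights = alpha ** np.arange(l_max)
adstocked = np.convolve(spend, weights)[:n]

# Synthetic ground truth: baseline + saturating channel response + noise.
sales = 2.0 + 3.0 * (1 - np.exp(-0.7 * adstocked)) + rng.normal(0, 0.3, n)

with pm.Model():
    intercept = pm.Normal("intercept", 0, 5)
    beta = pm.HalfNormal("beta", 5)        # channel effectiveness
    lam = pm.HalfNormal("lam", 2)          # saturation rate
    sigma = pm.HalfNormal("sigma", 1)
    mu = intercept + beta * (1 - pm.math.exp(-lam * adstocked))
    pm.Normal("obs", mu=mu, sigma=sigma, observed=sales)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=1)

print(az.summary(idata, var_names=["intercept", "beta", "lam"]))
```

The carryover-plus-saturation structure is why naive last-click attribution misallocates spend: the marginal return of a channel depends on both its recent history and where it sits on the saturation curve.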
This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai In this episode of the Eye on AI podcast, Pedro Domingos, renowned AI researcher and author of The Master Algorithm, joins Craig Smith to explore the evolution of machine learning, the resurgence of Bayesian AI, and the future of artificial intelligence. Pedro unpacks the ongoing battle between Bayesian and Frequentist approaches, explaining why probability is one of the most misunderstood concepts in AI. He delves into Bayesian networks, their role in AI decision-making, and how they powered Google's ad system before deep learning. We also discuss how Bayesian learning is still outperforming humans in medical diagnosis, search & rescue, and predictive modeling, despite its computational challenges. The conversation shifts to deep learning's limitations, with Pedro revealing how neural networks might be just a disguised form of nearest-neighbor learning. He challenges conventional wisdom on AGI, AI regulation, and the scalability of deep learning, offering insights into why Bayesian reasoning and analogical learning might be the future of AI. We also dive into analogical learning—a field championed by Douglas Hofstadter—exploring its impact on pattern recognition, case-based reasoning, and support vector machines (SVMs). Pedro highlights how AI has cycled through different paradigms, from symbolic AI in the '80s to SVMs in the 2000s, and why the next big breakthrough may not come from neural networks at all. From theoretical AI debates to real-world applications, this episode offers a deep dive into the science behind AI learning methods, their limitations, and what's next for machine intelligence. Don't forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of innovation! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction (02:55) The Five Tribes of Machine Learning Explained (06:34) Bayesian vs. Frequentist: The Probability Debate (08:27) What is Bayes' Theorem & How AI Uses It (12:46) The Power & Limitations of Bayesian Networks (16:43) How Bayesian Inference Works in AI (18:56) The Rise & Fall of Bayesian Machine Learning (20:31) Bayesian AI in Medical Diagnosis & Search and Rescue (25:07) How Google Used Bayesian Networks for Ads (28:56) The Role of Uncertainty in AI Decision-Making (30:34) Why Bayesian Learning is Computationally Hard (34:18) Analogical Learning – The Overlooked AI Paradigm (38:09) Support Vector Machines vs. Neural Networks (41:29) How SVMs Once Dominated Machine Learning (45:30) The Future of AI – Bayesian, Neural, or Hybrid? (50:38) Where AI is Heading Next
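The Bayesian networks discussed above factor a joint distribution into local conditionals and then invert it with Bayes' rule. A classic toy network (rain, sprinkler, wet grass, with made-up probabilities), queried by brute-force enumeration, which is what exact inference amounts to on small networks:

```python
# Toy Bayesian network (rain -> sprinkler -> wet grass), exact inference
# by enumerating the joint. All probabilities are illustrative.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {  # P(sprinkler | rain): sprinklers rarely run in the rain
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_wet = {  # P(wet grass | sprinkler, rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet = P_wet[(sprinkler, rain)]
    return p * (p_wet if wet else 1 - p_wet)

# Query: P(rain | wet grass) -- sum out the sprinkler, then normalize.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet) = {num / den:.3f}")  # ~0.358 with these numbers
```

The computational hardness Domingos mentions shows up as soon as the network grows: exact enumeration is exponential in the number of variables, which is what pushed practitioners toward approximate inference.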
Evan Wimpey joins me to chat about working as a professional comedian, some Bayesian jokes, and other stuff that happened before ;)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire and Mike Loncaric.
Takeaways:
The evolution of sports modeling is tied to the availability of high-frequency data.
Bayesian methods are valuable in handling messy, hierarchical data.
Communication between data scientists and decision-makers is crucial for effective model use.
Models are often wrong, and learning from mistakes is part of the process.
Simplicity in models can sometimes yield better results than complexity.
The integration of analytics in sports is still developing, with opportunities in various sports.
Transparency in research and development teams enhances decision-making.
Understanding uncertainty in models is essential for informed decisions.
The balance between point estimates and full distributions is a...
Sports analytics is a booming industry, with new technologies allowing for the parsing of ever more sophisticated statistics. Analysts can now examine the height and force of a gymnast's tumbling pass, the probability that going for it on 4th down in football actually works out, and the arc of the best swing for a baseball player. Analytics are also used in the conditioning of athletes, particularly for all the baseball players preparing for the start of the MLB's spring training. Analytics is the focus of this episode of Stats and Stories with guest Alexandre Andorra. Alexandre Andorra is a Senior Applied Scientist for the Miami Marlins, a Bayesian modeler at PyMC Labs, the consultancy firm he co-founded, and the host of the podcast dedicated to Bayesian inference, "Learning Bayesian Statistics." His areas of expertise include hierarchical models, Gaussian processes and causal inference.
Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Bayesian statistics offers a robust framework for econometric modeling.
State space models provide a comprehensive way to understand time series data.
Gaussian random walks serve as a foundational model in time series analysis.
Innovations represent external shocks that can significantly impact forecasts.
Understanding the assumptions behind models is key to effective forecasting.
Complex models are not always better; simplicity can be powerful.
Forecasting requires careful consideration of potential disruptions.
Understanding observed and hidden states is crucial in modeling.
Latent abilities can be modeled as Gaussian random walks.
State space models can be highly flexible and diverse.
Composability allows for the integration of different model components.
Trends in time series should reflect real-world dynamics.
Seasonality can be captured through Fourier bases.
AR components help model residuals in time series data.
Exogenous regression components can enhance state space models.
Causal analysis in time series often involves interventions and counterfactuals.
Time-varying regression allows for dynamic relationships between variables.
Kalman filters were originally developed for tracking rockets in space.
The Kalman filter iteratively updates beliefs based on new data (see the sketch after this entry).
Missing data can be treated as hidden states in the Kalman filter framework.
The Kalman filter is a practical application of Bayes' theorem in a sequential context.
Understanding the dynamics of systems is crucial for effective modeling.
The state space module in PyMC simplifies complex time series modeling tasks.
Chapters:
00:00 Introduction to Jesse Krabowski and Time Series Analysis
04:33 Jesse's Journey into Bayesian Statistics
10:51 Exploring State Space Models
18:28 Understanding State Space Models and Their Components
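To connect the Gaussian-random-walk and Kalman-filter takeaways: below is a one-dimensional Kalman filter tracking a latent random walk through noisy observations. The data and noise levels are synthetic and illustrative; PyMC's statespace module automates the multivariate version of exactly this predict-then-update loop.

```python
# 1-D Kalman filter over a Gaussian random walk (synthetic, illustrative).
import numpy as np

rng = np.random.default_rng(42)
T, q, r = 100, 0.1, 1.0            # steps, process variance, observation variance

# Simulate the hidden state (a random walk) and noisy observations of it.
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x + rng.normal(0, np.sqrt(r), T)

mean, var = 0.0, 1.0               # prior belief about the initial state
filtered = np.empty(T)
for t in range(T):
    # Predict: the random walk adds process noise to the state.
    var += q
    # Update: Bayes' rule for Gaussians, weighted by the Kalman gain.
    K = var / (var + r)
    mean += K * (y[t] - mean)
    var *= (1 - K)
    filtered[t] = mean

print("RMSE raw observations:", np.sqrt(np.mean((y - x) ** 2)).round(3))
print("RMSE filtered estimate:", np.sqrt(np.mean((filtered - x) ** 2)).round(3))
```

The filtered RMSE comes out well below the raw-observation RMSE: each step is just a conjugate Gaussian Bayes update, which is why the filter is the "sequential Bayes' theorem" the takeaways describe.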
On Hands-On Tech, Mikah helps Lars, who is seeking help for a friend who is being flooded with spam emails and is looking for a way to filter them out! Mikah strongly recommends transitioning away from ISP-provided email addresses, as they often get shuffled between companies and can face service disruptions. There are also some options to consider when moving from an old email account, such as setting up auto-responses directing to the new address or configuring your new email account to pull from your old account. Mikah suggests two things to do. The first is to create a Gmail account to leverage its excellent spam filtering capabilities, then configure it to pull mail from the Cox address and connect it to Outlook. The second option is to use local spam filtering software like Mailwasher, which offers Bayesian filtering that learns from user input to improve spam detection over time (a toy sketch of that idea follows these notes). Other software options to consider are Spambully, Spamfighter, and clean.io, though some may not work without MX record access. Also, be sure to stick around to the end of the episode for an upcoming change to Hands-On Tech. Send in your questions for Mikah to answer! hot@twit.tv
Host: Mikah Sargent
Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech
Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
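Bayesian spam filtering of the kind described here is, at its core, naive Bayes over word counts: each word nudges the spam odds, and user corrections update the counts. A minimal sketch, with a tiny invented corpus (real filters add headers, phrase features and smarter smoothing):

```python
# Minimal naive Bayes spam filter (toy corpus; Laplace smoothing).
from collections import Counter
import math

spam_docs = ["win money now", "free money offer", "claim your prize now"]
ham_docs  = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def spam_probability(message):
    # Start from the class prior odds; work in log-odds for stability.
    log_odds = math.log(len(spam_docs) / len(ham_docs))
    for w in message.split():
        p_w_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_w_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        log_odds += math.log(p_w_spam / p_w_ham)
    return 1 / (1 + math.exp(-log_odds))

print(spam_probability("free money"))       # high
print(spam_probability("project meeting"))  # low
```

"Learning from user input" is just updating the counts: marking a message as spam adds its words to the spam counter, shifting the odds for future mail.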
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
BART models are non-parametric Bayesian models that approximate functions by summing trees (a minimal PyMC-BART sketch follows this entry).
BART is recommended for quick modeling without extensive domain knowledge.
PyMC-BART allows mixing BART models with various likelihoods and other models.
Variable importance can be easily interpreted using BART models.
PreliZ aims to provide better tools for prior elicitation in Bayesian statistics.
The integration of BART with Bambi could enhance exploratory modeling.
Teaching Bayesian statistics involves practical problem-solving approaches.
Future developments in PyMC-BART include significant speed improvements.
Prior predictive distributions can aid in understanding model behavior.
Interactive learning tools can enhance understanding of statistical concepts.
Integrating PreliZ with PyMC improves workflow transparency.
ArviZ 1.0 is being completely rewritten for better usability.
Prior elicitation is crucial in Bayesian modeling.
Point intervals and forest plots are effective for visualizing complex data.
Chapters:
00:00 Introduction to Osvaldo Martin and Bayesian Statistics
08:12 Exploring Bayesian Additive Regression Trees (BART)
18:45 Prior Elicitation and the PreliZ Package
29:56 Teaching Bayesian Statistics and Future Directions
45:59 Exploring Prior Predictive Distributions
52:08 Interactive Modeling with PreliZ
54:06 The Evolution of ArviZ
01:01:23 Advancements in ArviZ 1.0
01:06:20 Educational Initiatives in Bayesian Statistics
01:12:33 The Future of Bayesian Methods
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin...
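As a minimal illustration of the "sum of trees" idea, here is the canonical pymc-bart usage pattern on synthetic data. The API shown matches recent pymc-bart releases, but check the project docs for your version; the data and settings are invented.

```python
# Minimal BART regression with pymc-bart (synthetic, illustrative data).
import numpy as np
import pymc as pm
import pymc_bart as pmb

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, size=(200, 1)), axis=0)
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)  # nonlinear ground truth

with pm.Model():
    # A sum of m regression trees approximates the unknown function;
    # no functional form is specified by hand.
    mu = pmb.BART("mu", X, y, m=50)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(draws=1000, tune=1000, random_seed=0)

# Posterior mean of the latent function at the first few inputs.
print(idata.posterior["mu"].mean(dim=("chain", "draw")).values[:5])
```

This is what makes BART attractive for "quick modeling without extensive domain knowledge": the trees learn the shape of the response, while the rest of the model (likelihood, extra terms) stays ordinary PyMC.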
Please join my mailing list here
Contributor: Ricky Dhaliwal MD
Educational Pearls:
Etomidate was previously the drug of choice for rapid sequence intubation (RSI). However, it carries a risk of adrenal insufficiency as an adverse effect, through inhibition of mitochondrial 11-β-hydroxylase activity.
A recent meta-analysis of etomidate as an induction agent showed the following:
11 randomized controlled trials with 2704 patients.
The number needed to harm is 31; i.e. for every 31 patients who receive etomidate for induction, there is one additional death.
The probability of any mortality increase was 98.1%.
Ketamine is preferable due to a better adverse-effect profile:
Laryngeal spasms and bronchorrhea are the most common adverse effects after IV push.
Beneficial effects on hemodynamics via catecholamine surge, albeit not as pronounced in shock patients.
A 2023 meta-analysis compared ketamine and etomidate for RSI:
Ketamine's probability of reducing mortality is cited as 83.2% (a toy illustration of how such a probability is computed follows the references).
Overall, induction with ketamine demonstrates a reduced risk of mortality compared with etomidate.
The dosage of each medication for induction:
Etomidate: 20 mg, based on 0.3 mg/kg for a 70 kg adult.
Ketamine: 1-2 mg/kg (or 0.5-1 mg/kg in patients with shock).
Patients with asthma and/or COPD also benefit from ketamine induction due to putative bronchodilatory properties.
References
Goyal S, Agrawal A. Ketamine in status asthmaticus: A review. Indian J Crit Care Med. 2013;17(3):154-161. doi:10.4103/0972-5229.117048
Koroki T, Kotani Y, Yaguchi T, et al. Ketamine versus etomidate as an induction agent for tracheal intubation in critically ill adults: a Bayesian meta-analysis. Crit Care. 2024;28(1):1-9. doi:10.1186/s13054-024-04831-4
Kotani Y, Piersanti G, Maiucci G, et al. Etomidate as an induction agent for endotracheal intubation in critically ill patients: A meta-analysis of randomized trials. J Crit Care. 2023;77(April 2023):154317. doi:10.1016/j.jcrc.2023.154317
Summarized & Edited by Jorge Chalit, OMS3
Donate: https://emergencymedicalminute.org/donate/
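Statements like "an 83.2% probability of reducing mortality" come from posterior probabilities rather than p-values. A toy illustration of the mechanics, with invented counts that are emphatically not the trials' data: beta-binomial posteriors for two arms, compared by Monte Carlo.

```python
# Toy illustration of a "probability of reduced mortality" statement:
# beta-binomial posteriors for two arms, compared by Monte Carlo.
# Counts are invented and NOT the trials' data.
import numpy as np

rng = np.random.default_rng(7)
deaths_a, n_a = 30, 200   # hypothetical etomidate arm
deaths_b, n_b = 22, 200   # hypothetical ketamine arm

# A Beta(1, 1) prior updated with binomial data gives a Beta posterior.
p_a = rng.beta(1 + deaths_a, 1 + n_a - deaths_a, size=100_000)
p_b = rng.beta(1 + deaths_b, 1 + n_b - deaths_b, size=100_000)

print("P(ketamine mortality < etomidate):", np.mean(p_b < p_a).round(3))
```

The published figure comes from a far richer Bayesian meta-analysis across trials; the sketch only shows why the result is phrased as a probability of benefit rather than a significance test.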
When we gain new information about beliefs we hold, it's good practice to update our viewpoints accordingly to avoid incoherence in our thinking. On today's ID The Future, host Jonathan McLatchie invites professor and author Dr. Tim McGrew to the show to discuss how Bayesian reasoning can help us maintain coherence across our set of beliefs. The pair also apply Bayesian logic to the debate over Darwinian evolution to show that confidence in design arguments can be mathematically rigorous and logically sound. Bayesian logic provides a mathematical way to combine prior probabilities with the likelihood ratios of new evidence, producing updated posterior probabilities. And when it comes to evaluating different hypotheses, small pieces of evidence can add up.