Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

-------------------------
Love the insights from this episode? Make sure you never miss a beat with Chatpods! Whether you're commuting, working out, or just on the go, Chatpods lets you capture and summarize key takeaways effortlessly. Save time, stay organized, and keep your thoughts at your fingertips. Download Chatpods directly from the App Store or Google Play and use it to listen to this podcast today! https://www.chatpods.com/?fr=LearningBayesianStatistics
-------------------------

Takeaways:
- Epidemiology focuses on health at various scales, while biology often looks at micro-level details.
- Bayesian statistics helps connect models to data and quantify uncertainty.
- Recent advancements in data collection have improved the quality of epidemiological research.
- Collaboration between domain experts and statisticians is essential for effective research.
- The COVID-19 pandemic has led to increased data availability and international cooperation.
- Modeling infectious diseases requires understanding complex dynamics and statistical methods.
- Challenges in coding and communication between disciplines can hinder progress.
- Innovations in machine learning and neural networks are shaping the future of epidemiology.
- The importance of understanding the context and limitations of data in research.

Chapters:
00:00 Introduction to Bayesian Statistics and Epidemiology
03:35 Guest Backgrounds and Their Journey
10:04 Understanding Computational Biology vs. Epidemiology
16:11 The Role of Bayesian Statistics in Epidemiology
21:40 Recent Projects and Applications in Epidemiology
31:30...
Takeaways:
- Bob's research focuses on corruption and political economy.
- Measuring corruption is challenging due to the unobservable nature of the behavior.
- The challenge of studying corruption lies in obtaining honest data.
- Innovative survey techniques, like randomized response, can help gather sensitive data.
- Non-traditional backgrounds can enhance statistical research perspectives.
- Bayesian methods are particularly useful for estimating latent variables.
- Bayesian methods shine in situations with prior information.
- Expert surveys can help estimate uncertain outcomes effectively.
- Bob's novel, 'The Bayesian Heatman,' explores academia through a fictional lens.
- Writing fiction can enhance academic writing skills and creativity.
- The importance of community in statistics is emphasized, especially in the Stan community.
- Real-time online surveys could revolutionize data collection in social science.

Chapters:
00:00 Introduction to Bayesian Statistics and Bob Kubinec
06:01 Bob's Academic Journey and Research Focus
12:40 Measuring Corruption: Challenges and Methods
18:54 Transition from Government to Academia
26:41 The Influence of Non-Traditional Backgrounds in Statistics
34:51 Bayesian Methods in Political Science Research
42:08 Bayesian Methods in COVID Measurement
51:12 The Journey of Writing a Novel
01:00:24 The Intersection of Fiction and Academia

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell,...
Dr. Norman Fenton and Dr. Martin Neil are mathematicians from Queen Mary University of London and experts in the unreasonable power of mathematics. For example, it is possible to produce an algorithm that predicts the likelihood that a piece of hardware or software will fail, and then to use that information to predict the stability of much larger systems: military vehicles, fly-by-wire software for aircraft, medical technologies. Along the way, they developed a rare intuition for statistics and probability, which allowed them to start to see places where statistical analysis was being done in such a slapdash way that it was leading people to believe things that… didn't make a lot of sense. At first they attributed this simply to ignorance, but over the last few years they underwent a dramatic transformation: they went from believing in the standard narrative to questioning most of it. Their journey on this path is detailed in the book "Fighting Goliath," and the full conversation is too hot for this platform, so it can be heard wherever you listen to podcasts by looking for DemystifySci #295.

Sign up for our Patreon and get episodes early + join our weekly Patron Chat: https://bit.ly/3lcAasB
Rock some Demystify Gear to spread the word: https://demystifysci.myspreadshop.com/
Or do your Amazon shopping through this link: https://amzn.to/4g2cPVV

(00:00:00) Go!
(00:05:05) Academic Background and Bayesian Applications
(00:09:10) History and Basics of Bayesian Statistics
(00:15:34) Skepticism and Statistical Reasoning
(00:20:29) Bayesian Applications in Real-World Problems
(00:23:26) Engineering Complexity
(00:27:09) Probabilities in Legal Cases
(00:34:05) Challenges in Legal Reasoning
(00:45:56) Monty Hall Problem and Probability Misunderstanding
(00:50:18) Flaws in Traditional Statistical Education
(00:55:22) Misinterpretation and Issues with Statistical Testing

#Statistics, #BayesianAnalysis, #Probability, #LegalReasoning, #DataMisinterpretation, #StatisticalFallacies, #BayesianStatistics, #MontyHallProblem, #PValueProblems, #RealWorldApplications, #ComplexSystems, #ProbabilisticThinking, #EngineeringReliability, #StatisticalEducation, #MisunderstoodStatistics, #BayesianLogic, #LegalStatistics, #MathematicalReasoning, #sciencepodcast, #longformpodcast

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience
And our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics
Join our mailing list: https://bit.ly/3v3kz2S

PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia, studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC:
- Shilo Delay: https://g.co/kgs/oty671
Bayesian statistics allows combining prior information about a population with the current experimental sample to create stronger inferences. Dr. Taylor Winter, Senior Lecturer in Mathematics and Statistics at the University of Canterbury, uses Bayesian methods to investigate a range of societal and group factors (social psychology).

Dr. Winter takes us through some of the basic ideas behind Bayesian statistics and how it differs from traditional methods of hypothesis testing in research. We discuss examples from his work on authoritarianism and social identity theory, and learn about the differences between his time working in industry vs. academia. Lastly, we discuss his culture-focused projects, including Dungeons and Dragons and how Māori culture can drive behavioural change.

Support the show
Support us and reach out!
https://smoothbrainsociety.com
Instagram: @thesmoothbrainsociety
TikTok: @thesmoothbrainsociety
Twitter/X: @SmoothBrainSoc
Facebook: @thesmoothbrainsociety
Merch and all other links: Linktree
Email: thesmoothbrainsociety@gmail.com
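The prior-plus-data logic described above can be sketched with the simplest conjugate example, a beta-binomial update. The numbers below are hypothetical, chosen purely for illustration, not taken from Dr. Winter's work:

```python
# Hypothetical illustration of combining prior information with new data.
# A Beta(a, b) prior on a population proportion, updated with a small
# sample, yields a Beta(a + successes, b + failures) posterior.
prior_a, prior_b = 8, 2        # prior belief: success rate around 80%
successes, failures = 3, 7     # new sample pointing the other way

post_a = prior_a + successes
post_b = prior_b + failures

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)

print(f"prior mean:     {prior_mean:.2f}")   # 0.80
print(f"posterior mean: {post_mean:.2f}")    # 0.55
```

The posterior mean lands between the prior mean (0.80) and the sample proportion (0.30), with the balance set by the relative weight of the prior pseudo-counts and the sample size; this is the sense in which the prior and the current sample combine into one stronger inference.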
Tom Chivers is a journalist who writes a lot about science and applied statistics. We talk about his new book on Bayesian statistics, the biography of Thomas Bayes, the history of probability theory, how Bayes can help with the replication crisis, how Tom became a journalist, and much more. BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: Tom's book about Bayes & Bayesian statistics relates to many of my previous episodes and much of my own research
0:03:12: A brief biography of Thomas Bayes (about whom very little is known)
0:11:00: The history of probability theory
0:36:23: Bayesian songs
0:43:17: Bayes & the replication crisis
0:57:27: How Tom got into science journalism
1:08:32: A book or paper more people should read
1:10:05: Something Tom wishes he'd learnt sooner
1:14:36: Advice for PhD students/postdocs/people in a transition period

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Tom's links
Website: https://geni.us/chivers-web
Twitter: https://geni.us/chivers-twt
Podcast: https://geni.us/chivers-pod

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References and links
Episode with Stuart Ritchie: https://geni.us/bjks-ritchie
Scott Alexander: https://www.astralcodexten.com/
Bayes (1731). Divine benevolence, or an attempt to prove that the principal end of the divine providence and government is the happiness of his creatures. Being an answer to a pamphlet entitled Divine Rectitude or an inquiry concerning the moral perfections of the deity with a refutation of the notions therein advanced concerning beauty and order, the reason of punishment and the necessity of a state of trial antecedent to perfect happiness.
Bayes (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London.
Bellhouse (2004). The Reverend Thomas Bayes, FRS: A biography to celebrate the tercentenary of his birth. Project Euclid.
Bem (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology.
Chivers (2024). Everything Is Predictable: How Bayesian Statistics Explain Our World.
Chivers & Chivers (2021). How to Read Numbers: A Guide to Statistics in the News (and Knowing When to Trust Them).
Chivers (2019). The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future.
Clarke [not Black, as Tom said] (2020). Piranesi.
Goldacre (2009). Bad Science.
Goldacre (2014). Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients.
Simmons, Nelson & Simonsohn (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science.
Takeaways:
- Communicating Bayesian concepts to non-technical audiences in sports analytics can be challenging, but it is important to provide clear explanations and address limitations.
- Understanding the model and its assumptions is crucial for effective communication and decision-making.
- Involving domain experts, such as scouts and coaches, can provide valuable insights and improve the model's relevance and usefulness.
- Customizing the model to align with the specific needs and questions of the stakeholders is essential for successful implementation.
- Understanding the needs of decision-makers is crucial for effectively communicating and utilizing models in sports analytics.
- Predicting the impact of training loads on athletes' well-being and performance is a challenging frontier in sports analytics.
- Identifying discrete events in team sports data is essential for analysis and development of models.

Chapters:
00:00 Bayesian Statistics in Sports Analytics
18:29 Applying Bayesian Stats in Analyzing Player Performance and Injury Risk
36:21 Challenges in Communicating Bayesian Concepts to Non-Statistical Decision-Makers
41:04 Understanding Model Behavior and Validation through Simulations
43:09 Applying Bayesian Methods in Sports Analytics
48:03 Clarifying Questions and Utilizing Frameworks
53:41 Effective Communication of Statistical Concepts
57:50 Integrating Domain Expertise with Statistical Models
01:13:43 The Importance of Good Data
01:18:11 The Future of Sports Analytics

Thank you to my Patrons for making this episode possible!
Have you ever wondered how we make crucial decisions in the early phases of clinical trials? How can a Bayesian framework enhance these decisions? Today, I talk with Audrey Yeo from Roche to answer these questions. Fresh from the PSI conference, Audrey, a seasoned statistical software engineer, introduces us to an innovative R package designed for early-phase clinical trials. This tool promises to revolutionize decision-making with its Bayesian approach. Join us as we explore the development, features, and impact of this groundbreaking tool, and discover the collaborative efforts that drive its evolution.
On episode 214, we welcome Tom Chivers to discuss Bayesian statistics, how its counterintuitive nature tends to turn people off, the philosophical disagreements between Bayesians and frequentists, why "priors" aren't purely subjective and why all theories should be considered as priors, the difficulty of quantifying emotional states in psychological research, how priors are used and misused to inform interpretations of new data, our innate tendency toward black-and-white thinking, the replication crisis, and why statistically significant research is often wrong.

Tom Chivers is an author and the award-winning science writer for Semafor. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He is the co-host of The Studies Show podcast alongside Stuart Ritchie. His books include The Rationalist's Guide to the Galaxy and How to Read Numbers. His newest book, available now, is called Everything Is Predictable: How Bayesian Statistics Explain Our World.

| Tom Chivers |
► Website | https://tomchivers.com
► Twitter | https://x.com/TomChivers
► Semafor | https://www.semafor.com/author/tom-chivers
► Podcast | https://www.thestudiesshowpod.com
► Everything Is Predictable Book | https://amzn.to/3UJTOxD

Where you can find us: | Seize The Moment Podcast |
► Facebook | https://www.facebook.com/SeizeTheMoment
► Twitter | https://twitter.com/seize_podcast
► Instagram | https://www.instagram.com/seizethemoment
► TikTok | https://www.tiktok.com/@seizethemomentpodcast
Everything Is Predictable: How Bayesian Statistics Explain Our World by Tom Chivers: https://amzn.to/3wxZAKu

A captivating and user-friendly tour of Bayes's theorem and its global impact on modern life from the acclaimed science writer and author of The Rationalist's Guide to the Galaxy. At its simplest, Bayes's theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. But in Everything Is Predictable, Tom Chivers lays out how it affects every aspect of our lives. He explains why highly accurate screening tests can lead to false positives, and how a failure to account for this in court has put innocent people in jail. Many argue that Bayes's theorem, a cornerstone of rational thought, is a description of almost everything. But who was the man who lent his name to this theorem? How did an 18th-century Presbyterian minister and amateur mathematician uncover a theorem that would affect fields as diverse as medicine, law, and artificial intelligence? Fusing biography, razor-sharp science writing, and intellectual history, Everything Is Predictable is an entertaining tour of Bayes's theorem and its impact on modern life, showing how a single compelling idea can have far-reaching consequences.
At its simplest, Bayes's theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. But in Everything Is Predictable, Tom Chivers lays out how it affects every aspect of our lives. He explains why highly accurate screening tests can lead to false positives, and how a failure to account for this in court has put innocent people in jail. Many argue that Bayes's theorem, a cornerstone of rational thought, is a description of almost everything. But who was the man who lent his name to this theorem? How did an 18th-century Presbyterian minister and amateur mathematician uncover a theorem that would affect fields as diverse as medicine, law, and artificial intelligence? Fusing biography and intellectual history, Everything Is Predictable is an entertaining tour of Bayes's theorem and its impact on modern life, showing how a single compelling idea can have far-reaching consequences.

Tom Chivers is an author and the award-winning science writer for Semafor. Previously he was the science editor at UnHerd.com and BuzzFeed UK. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He was awarded the Royal Statistical Society's "Statistical Excellence in Journalism" awards in 2018 and 2020, and was declared science writer of the year by the Association of British Science Writers in 2021. His books include The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future, and How to Read Numbers: A Guide to Stats in the News (and Knowing When to Trust Them). His new book is Everything Is Predictable: How Bayesian Statistics Explain Our World.

Shermer and Chivers discuss:
- Thomas Bayes, his equation, and the problem it solves
- Bayesian decision theory vs. statistical decision theory
- Popperian falsification vs. Bayesian estimation
- Sagan's ECREE principle
- Bayesian epistemology and family resemblance
- the paradox of the heap
- reality as controlled hallucination
- human irrationality
- superforecasting
- mystical experiences and religious truths
- the Replication Crisis in science
- Statistical Detection Theory and Signal Detection Theory
- the medical diagnosis problem and why most people get it wrong.
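The "medical diagnosis problem" mentioned above comes down to one application of Bayes's theorem. A minimal sketch with hypothetical numbers (a 99% accurate test for a 1-in-1,000 condition; these figures are illustrative, not taken from the book):

```python
# Hypothetical numbers for the classic screening-test base-rate puzzle.
prevalence = 0.001          # P(disease): 1 person in 1,000
sensitivity = 0.99          # P(test positive | disease)
false_positive_rate = 0.01  # P(test positive | no disease)

# Law of total probability for a positive result, then Bayes's theorem.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # 9.0%
```

Even with a highly accurate test, a positive result implies only about a 9% chance of disease, because the roughly 1 true positive per 1,000 people is swamped by the roughly 10 false positives. That base-rate neglect is exactly "why most people get it wrong".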
Summary
Tom Chivers discusses his book 'Everything Is Predictable: How Bayesian Statistics Explain Our World' and the applications of Bayesian statistics in various fields. He explains how Bayesian reasoning can be used to make predictions and evaluate the likelihood of hypotheses, and touches on the intersection of AI and ethics, particularly in relation to AI-generated art. The conversation explores the history of Bayes' theorem and its role in science, law, and medicine, highlighting the power and implications of Bayesian statistics in understanding and navigating the world. It also examines the role of AI in prediction: the progress of AI in image classification and the challenges it still faces, such as accurately depicting fine details like hands; how predictions go wrong, particularly in the context of conspiracy theories; and the Bayesian nature of human beliefs, where prior probabilities shape how beliefs are updated with new evidence. The conversation concludes with a discussion of the relevance of Bayesian statistics across fields and the need for beliefs to have probabilities and predictions attached to them.

Takeaways
- Bayesian statistics can be used to make predictions and evaluate the likelihood of hypotheses.
- Bayes' theorem has applications in various fields, including science, law, and medicine.
- The intersection of AI and ethics raises complex questions about AI-generated art and the predictability of human behavior.
- Understanding Bayesian reasoning can enhance decision-making and critical-thinking skills.
- AI has made significant progress in image classification, but still faces challenges in accurately depicting fine details.
- Predictions can go wrong due to the influence of prior beliefs and the interpretation of new evidence.
- Beliefs should have probabilities and predictions attached to them, allowing for updates with new information.
- Bayesian thinking is crucial in various fields, including AI, pharmaceuticals, and decision-making.
- Defining your predictions and probabilities matters when engaging in debates and discussions.
What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:26 - What is singular learning theory?
0:16:00 - Phase transitions
0:35:12 - Estimating the local learning coefficient
0:44:37 - Singular learning theory and generalization
1:00:39 - Singular learning theory vs other deep learning theory
1:17:06 - How singular learning theory hit AI alignment
1:33:12 - Payoffs of singular learning theory for AI alignment
1:59:36 - Does singular learning theory advance AI capabilities?
2:13:02 - Open problems in singular learning theory for AI alignment
2:20:53 - What is the singular fluctuation?
2:25:33 - How geometry relates to information
2:30:13 - Following Daniel Murfet's work

The transcript: https://axrp.net/episode/2024/05/07/episode-31-singular-learning-theory-dan-murfet.html
Daniel Murfet's twitter/X account: https://twitter.com/danielmurfet
Developmental interpretability website: https://devinterp.com
Developmental interpretability YouTube channel: https://www.youtube.com/@Devinterp

Main research discussed in this episode:
- Developmental Landscape of In-Context Learning: https://arxiv.org/abs/2402.02364
- Estimating the Local Learning Coefficient at Scale: https://arxiv.org/abs/2402.03698
- Simple versus Short: Higher-order degeneracy and error-correction: https://www.lesswrong.com/posts/nWRj6Ey8e5siAEXbK/simple-versus-short-higher-order-degeneracy-and-error-1

Other links:
- Algebraic Geometry and Statistical Learning Theory (the grey book): https://www.cambridge.org/core/books/algebraic-geometry-and-statistical-learning-theory/9C8FD1BDC817E2FC79117C7F41544A3A
- Mathematical Theory of Bayesian Statistics (the green book): https://www.routledge.com/Mathematical-Theory-of-Bayesian-Statistics/Watanabe/p/book/9780367734817
- In-context learning and induction heads: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
- Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity: https://arxiv.org/abs/2106.15933
- A mathematical theory of semantic development in deep neural networks: https://www.pnas.org/doi/abs/10.1073/pnas.1820226116
- Consideration on the Learning Efficiency Of Multiple-Layered Neural Networks with Linear Units: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404877
- Neural Tangent Kernel: Convergence and Generalization in Neural Networks: https://arxiv.org/abs/1806.07572
- The Interpolating Information Criterion for Overparameterized Models: https://arxiv.org/abs/2307.07785
- Feature Learning in Infinite-Width Neural Networks: https://arxiv.org/abs/2011.14522
- A central AI alignment problem: capabilities generalization, and the sharp left turn: https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization
- Quantifying degeneracy in singular models via the learning coefficient: https://arxiv.org/abs/2308.12108

Episode art by Hamish Doodles: hamishdoodles.com
Summary
Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free!
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action?
Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human".

Interview
- Introduction
- How did you get involved in machine learning?
- Can you start by unpacking the idea of "human-like" AI?
- How does that contrast with the conception of "AGI"?
- The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
- The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high-quality models?
- What are the opportunities and limitations of causal modeling techniques for generalized AI models?
- As AI systems gain more sophistication, there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
- What are the practical/architectural methods necessary to build more cognitive AI systems?
- How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
- What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
- What are the most interesting, unexpected, or challenging lessons that you have learned while designing and developing cognitive AI systems?
- When is cognitive AI the wrong choice?
- What do you have planned for the future of cognitive AI applications at Aigo?
Contact Info
- LinkedIn (https://www.linkedin.com/in/vosspeter/)
- Website (http://optimal.org/voss.html)

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.

Links
- Aigo.ai (https://aigo.ai/)
- Artificial General Intelligence (https://aigo.ai/what-is-real-agi/)
- Cognitive AI (https://aigo.ai/cognitive-ai/)
- Knowledge Graph (https://en.wikipedia.org/wiki/Knowledge_graph)
- Causal Modeling (https://en.wikipedia.org/wiki/Causal_model)
- Bayesian Statistics (https://en.wikipedia.org/wiki/Bayesian_statistics)
- Thinking Fast & Slow (https://amzn.to/3UJKsmK) by Daniel Kahneman (affiliate link)
- Agent-Based Modeling (https://en.wikipedia.org/wiki/Agent-based_model)
- Reinforcement Learning (https://en.wikipedia.org/wiki/Reinforcement_learning)
- DARPA 3 Waves of AI (https://www.darpa.mil/about-us/darpa-perspective-on-ai) presentation
- Why Don't We Have AGI Yet? (https://arxiv.org/abs/2308.03598) whitepaper
- Concepts Is All You Need (https://arxiv.org/abs/2309.01622) whitepaper
- Helen Keller (https://en.wikipedia.org/wiki/Helen_Keller)
- Stephen Hawking (https://en.wikipedia.org/wiki/Stephen_Hawking)

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!My Intuitive Bayes Online Courses1:1 Mentorship with meIn this episode, Andy Aschwanden and Doug Brinkerhoff tell us about their work in glaciology and the application of Bayesian statistics in studying glaciers. They discuss the use of computer models and data analysis in understanding glacier behavior and predicting sea level rise, and a lot of other fascinating topics.Andy grew up in the Swiss Alps, and studied Earth Sciences, with a focus on atmospheric and climate science and glaciology. After his PhD, Andy moved to Fairbanks, Alaska, and became involved with the Parallel Ice Sheet Model, the first open-source and openly-developed ice sheet model.His first PhD student was no other than… Doug Brinkerhoff! Doug did an MS in computer science at the University of Montana, focusing on numerical methods for ice sheet modeling, and then moved to Fairbanks to complete his PhD. While in Fairbanks, he became an ardent Bayesian after “seeing that uncertainty needs to be embraced rather than ignored”. Doug has since moved back to Montana, becoming faculty in the University of Montana's computer science department.Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !Thank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero and Will Geary. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses. 1:1 Mentorship with me. Listen to the full episode: https://learnbayesstats.com/episode/99-exploring-quantum-physics-bayesian-stats-chris-ferrie/ Watch the interview: https://www.youtube.com/watch?v=pRaT6FLF7A8 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie and Cory Kiser. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)
Summary Software systems power much of the modern world. For applications that impact the safety and well-being of people there is an extra set of precautions that need to be addressed before deploying to production. If machine learning and AI are part of that application then there is a greater need to validate the proper functionality of the models. In this episode Erez Kaminski shares the work that he is doing at Ketryx to make that validation easier to implement and incorporate into the ongoing maintenance of software and machine learning products. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Erez Kaminski about using machine learning in safety critical and highly regulated medical applications Interview Introduction How did you get involved in machine learning? Can you start by describing some of the regulatory burdens placed on ML teams who are building solutions for medical applications? How do these requirements impact the development and validation processes of model design and development? What are some examples of the procedural and record-keeping aspects of the machine learning workflow that are required for FDA compliance? What are the opportunities for automating pieces of that overhead? Can you describe what you are doing at Ketryx to streamline the development/training/deployment of ML/AI applications for medical use cases? What are the ideas/assumptions that you had at the start of Ketryx that have been challenged/updated as you work with customers? What are the most interesting, innovative, or unexpected ways that you have seen ML used in medical applications? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Ketryx? When is Ketryx the wrong choice? What do you have planned for the future of Ketryx? 
Contact Info Email (mailto:info@ketryx.com) LinkedIn (https://www.linkedin.com/in/erezkaminski/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. Links Ketryx (https://www.ketryx.com/) Wolfram Alpha (https://www.wolframalpha.com/) Mathematica (https://www.wolfram.com/mathematica/) Tensorflow (https://www.tensorflow.org/) SBOM == Software Bill Of Materials (https://www.cisa.gov/sbom) Air-gapped Systems (https://en.wikipedia.org/wiki/Air_gap_(networking)) AlexNet (https://en.wikipedia.org/wiki/AlexNet) Shapley Values (https://c3.ai/glossary/data-science/shapley-values/) SHAP (https://github.com/shap/shap) Podcast.__init__ Episode (https://www.pythonpodcast.com/shap-explainable-machine-learning-episode-335/) Bayesian Statistics (https://en.wikipedia.org/wiki/Bayesian_inference) Causal Modeling (https://en.wikipedia.org/wiki/Causal_inference) Prophet (https://facebook.github.io/prophet/) FDA Principles Of Software Validation (https://www.fda.gov/regulatory-information/search-fda-guidance-documents/general-principles-software-validation) The intro and outro music is from Hitman's Lovesong feat.
Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Everything and more one might ever want to know about the topic...that other epistemology people often talk about. The central project is to distinguish between 4 "species" of what is often called "Bayesianism": 1. Bayes' Theorem. 2. Bayesian Statistics. 3. Bayesian Reasoning. 4. Bayesian Epistemology. Actual timestamps and chapters are: 00:00 Introduction to this podcast 02:55 Epistemology 11:30 Substrate Independence 12:30 Inexplicit Knowledge/Knowledge without a knower 21:30 Explanatory Universality and Supernaturalism 24:30 When we lack good explanations 29:00 Rational Decision Theory 33:39 Bayes' Theorem 41:40 Bayesian Statistics 1:07:50 Bayesian Reasoning 1:16:00 Bayesian “Epistemology” 1:20:13 Quick Recap 1:20:49 A question from Stephen Mix 1:21:50 “Confidence” in epistemology 1:26:13 Measurement and Uncertainty 1:31:50 Confidence and experimental replication Join the conversation https://getairchat.com/s/p3ql7kNB
How a once-derided approach to statistics paved the way for AI. Jim Al-Khalili talks to pioneering mathematician, Professor Sir Adrian Smith. Accused early in his career of ‘trying to destroy the processes of science', Adrian went on to prove that a branch of statistics (invented by the Reverend Thomas Bayes in 1764) could be used by computers to analyse vast sets of data and to learn from that data. His mathematical proofs showed that Bayesian statistics could be applied to all sorts of real world problems, from improving survival rates for kidney transplant patients to tracking Russian submarines, and paved the way for a dramatic explosion in machine learning and AI. Working as a civil servant (2008-2012), he helped to protect the science budget in 2010, transforming the landscape for scientific research in the UK. And he has been vocal, over many years, about the urgent need to make sure children in the UK leave school more mathematically able. In 2020, he became President of the UK's prestigious national science academy, The Royal Society. Producer: Anna Buckley
Marginal Bayesian Statistics Using Masked Autoregressive Flows and Kernel Density Estimators with Examples in Cosmology by Harry Bevins et al. on Monday 28 November Cosmological experiments often employ Bayesian workflows to derive constraints on cosmological and astrophysical parameters from their data. It has been shown that these constraints can be combined across different probes such as Planck and the Dark Energy Survey and that this can be a valuable exercise to improve our understanding of the universe and quantify tension between multiple experiments. However, these experiments are typically plagued by differing systematics, instrumental effects and contaminating signals, which we collectively refer to as 'nuisance' components, that have to be modelled alongside target signals of interest. This leads to high dimensional parameter spaces, especially when combining data sets, with > 20 dimensions of which only around 5 correspond to key physical quantities. We present a means by which to combine constraints from different data sets in a computationally efficient manner by generating rapid, reusable and reliable marginal probability density estimators, giving us access to nuisance-free likelihoods. This is possible through the unique combination of nested sampling, which gives us access to Bayesian evidences, and the marginal Bayesian statistics code MARGARINE. Our method is lossless in the signal parameters, resulting in the same posterior distributions as would be found from a full nested sampling run over all nuisance parameters, and typically quicker than evaluating full likelihoods. We demonstrate our approach by applying it to the combination of posteriors from the Dark Energy Survey and Planck. arXiv: http://arxiv.org/abs/2207.11457v3
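The core trick in the abstract above — marginalizing out nuisance parameters by fitting a reusable density estimator to posterior samples of just the signal parameters — can be sketched in a few lines. This is a hedged illustration on toy samples using SciPy's Gaussian KDE, a simple stand-in for the masked autoregressive flows used in the authors' MARGARINE code:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy posterior: 5 parameters, of which only the first 2 are "signal"
# (in a real analysis these samples would come from a nested sampling run).
samples = rng.normal(size=(5000, 5))
signal = samples[:, :2]  # dropping the nuisance columns IS the marginalization

# Fit a reusable density estimator to the marginal signal posterior.
marginal = gaussian_kde(signal.T)  # gaussian_kde expects shape (dim, n_samples)

# The fitted density now acts as a "nuisance-free likelihood": cheap to
# re-evaluate when combining this experiment's constraint with another one.
theta = np.array([0.1, -0.2])
print(np.log(marginal(theta)))  # log marginal posterior density at theta
```

The payoff is that combining two experiments only requires multiplying their fitted marginal densities over the shared signal parameters, instead of re-running a full sampler over every experiment's nuisance dimensions.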
More than a decade ago, Bruno Boulanger made a big bet on applying Bayesian statistics in clinical trials. At the time, very few in the industry thought the method, which applies probabilities to statistical problems, had a place in clinical development. Boulanger saw an opportunity, founding a company that quickly grew and was acquired by CRO PharmaLex in 2018, where he now serves as global head of statistics and data science. In this episode, Boulanger explains how Bayesian statistics uses probability and prediction to solve challenges in the increasingly complex world of clinical research and clinical trial design. Bayesian statistics allows researchers to expand decision making for clinical trials beyond its participants, which is imperative for trials targeting rare diseases. Looking forward, Boulanger is optimistic about the expansion of therapeutic innovation combined with digitalization and data science to meet the unmet needs of patients. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
In this interview from the Department of Statistical Science at UCL, we speak with Dr Mine Dogucu, a Lecturer in the department. Dr Dogucu shares with us her experiences of teaching both frequentist and Bayesian statistics to undergraduates. She also explains what accessibility means in education and in the context of statistics, including being part of changing knitr and R Markdown to improve accessibility with image alternative text. Bayes Rules! book: www.bayesrulesbook.com/ New in knitr: Improved Accessibility with Image Alt Text: www.rstudio.com/blog/knitr-fig-alt/ Teach Access: teachaccess.org/ BrailleR: cran.r-project.org/web/packages/BrailleR/ Writing Alt Text for Data Visualization, Amy Cesal: medium.com/nightingale/writing…zation-2a218ef43f81 gradetools R package: federicazoe.github.io/gradetools/ Papers: Framework for Accessible and Inclusive Teaching Materials for Statistics and Data Science Courses: arxiv.org/abs/2110.06355 Teaching Visual Accessibility in Introductory Data Science Classes with Multi-Modal Data Representations: arxiv.org/abs/2208.02565 Date of episode recording: 2022-09-29 Duration: 00:25:37 Language of episode: English Presenter: Nathan Green Guests: Mine Dogucu Producer: Nathan Green
Greg says our guest's book, “Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science” is “a bombshell in a sense,” making some very, very bold claims. Aubrey Clayton is an applied mathematical researcher, lecturer, and writer. He currently teaches graduate courses in the philosophy of probability at the Harvard Extension School, and has written for publications like the New York Times, Boston Globe, and Nautilus. Additionally, Aubrey says he technically “worked on Wall Street” but only in the same sense that a hot dog vendor does. Greg and Aubrey dive deep into the radical ideas behind Aubrey's book, the merits of the scientific method as a process, Bayesian Statistics, and the replication crisis in this conversation. Episode Quotes: Probability and information: “We have to come up with a form of probability that has all the mathematical properties that we want it to have. But that also is usable in the sense of, you know, applies to all these different settings where you need to assign probabilities to things. And I think that the answer probably has to do with information.” The essence of Bernoulli's Fallacy: “It gets back to a desire to make probabilities observable and measurable in the form of frequency.” Bernoulli's Fallacy: Bernoulli's Fallacy is the idea that you can make good decisions about hypotheses, scientific hypotheses or statistical hypotheses, or just research theories in general, using the language of probabilities.
But focusing entirely on probabilities that are oriented in the direction of: “if a hypothesis is true, then what is the probability of some observation or some data.” Show Links: Resources: Statistical Rethinking | Richard McElreath; Daryl Bem; Ronald Fisher; Thomas Bayes; Edwin Thompson Jaynes; P-value. Guest Profile: Professional Profile on Moody's Analytics; Aubrey Clayton Website; Aubrey Clayton on LinkedIn; Aubrey Clayton on Twitter; Aubrey Clayton on YouTube. His Work: Aubrey Clayton's Articles; Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science
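Clayton's point about the direction of probability — reasoning only from P(data | hypothesis) instead of inverting to P(hypothesis | data) — is easy to make concrete with Bayes' rule. All numbers below are invented for illustration and are not taken from the book:

```python
# Illustrative numbers (not from the book): a "significant" result with
# P(data | H0) = 0.05 can still leave the null hypothesis more probable
# than not, which is exactly why the forward probability alone misleads.
p_d_given_h0 = 0.05   # the quantity Bernoulli's Fallacy fixates on
p_d_given_h1 = 0.10   # the alternative barely predicts the data better
prior_h0 = 0.80       # suppose most hypotheses of this kind are null

# Bayes' rule: P(H0 | data) = P(data | H0) P(H0) / P(data)
posterior_h0 = (p_d_given_h0 * prior_h0) / (
    p_d_given_h0 * prior_h0 + p_d_given_h1 * (1 - prior_h0)
)
print(round(posterior_h0, 3))  # 0.667: H0 still favoured despite "p = 0.05"
```

The inversion requires a prior and the probability of the data under the alternative — the two ingredients the fallacy quietly discards.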
In this episode, Dr. Thomas Wiecki, Core Developer of the PyMC Library and CEO of PyMC Labs, joins Jon for a masterclass in Bayesian statistics. Tune in to hear about PyMC, and discover why Bayesian statistics can be more powerful and interpretable than any other data modeling approach. In this episode you will learn: • What Bayesian statistics is [7:30] • Why Bayesian statistics can be more powerful and interpretable than any other data modeling approach [17:20] • How PyMC was developed [20:41] • Commercial applications of Bayesian stats [43:07] • How to build a successful company culture [1:03:14] • What Thomas looks for when hiring [1:11:13] • Thomas's top resources for learning Bayesian stats yourself [1:13:57] Additional materials: www.superdatascience.com/585
In a wide-ranging conversation, Max talks to Will Kurt, author of "Bayesian Statistics the Fun Way" about math, writing, philosophy, truth, and of course Bayesian techniques!
In this week's episode Greg and Patrick discuss the critical distinction between sample distributions and sampling distributions, and explore all the different ways in which sampling distributions are foundational to how we conduct research. Along the way they also mention Starbucks jazz, one item tests, hot pockets, delusions of grandeur, Tetris and Pong, drawing inappropriate distributions, magical properties, texting pictures of kindle pages, Roman arches, 1970s graphics, never saying never, mumbling, Greenday, ignoring Roy Levy, real life bootstrap, and Goodnight Gracie.
Let's be honest: evolution is awesome! I started reading Improbable Destinies: Fate, Chance, and the Future of Evolution, by Jonathan Losos, and I'm utterly fascinated. So I'm thrilled to welcome Florian Hartig on the show. Florian is a professor of Theoretical Ecology at the University of Regensburg, Germany. His research concentrates on theory, computer simulations, statistical methods and machine learning in ecology & evolution. He is also interested in open science and open software development, and maintains, among other projects, the R packages DHARMa and BayesianTools. Among other things, we talked about approximate Bayesian computation, best practices when building models and the big pain points that remain in the Bayesian pipeline. Most importantly, Florian's main hobbies are whitewater kayaking, snowboarding, badminton and playing the guitar. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ (https://bababrinkman.com/) ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. 
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones and Daniel Lindroth. Visit https://www.patreon.com/learnbayesstats (https://www.patreon.com/learnbayesstats) to unlock exclusive Bayesian swag ;) Links from the show: Florian's website: https://theoreticalecology.wordpress.com/ (https://theoreticalecology.wordpress.com/) Florian on Twitter: https://twitter.com/florianhartig (https://twitter.com/florianhartig) Florian on GitHub: https://github.com/florianhartig (https://github.com/florianhartig) DHARMa -- Residual Diagnostics for Hierarchical Regression Models: https://cran.r-project.org/web/packages/DHARMa/index.html (https://cran.r-project.org/web/packages/DHARMa/index.html) BayesianTools -- General-Purpose MCMC and SMC Samplers and Tools for Bayesian Statistics: https://cran.r-project.org/web/packages/BayesianTools/index.html (https://cran.r-project.org/web/packages/BayesianTools/index.html) Statistical inference for stochastic simulation inference -- theory and application: https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1461-0248.2011.01640.x (https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1461-0248.2011.01640.x) ArviZ plot rank function: https://arviz-devs.github.io/arviz/api/generated/arviz.plot_rank.html (https://arviz-devs.github.io/arviz/api/generated/arviz.plot_rank.html) Rank-normalization, folding, and localization -- An improved R-hat for assessing convergence of MCMC: https://arxiv.org/abs/1903.08008 (https://arxiv.org/abs/1903.08008) LBS #51 Bernoulli's Fallacy & the Crisis of Modern Science, with Aubrey Clayton: https://www.learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton (https://www.learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton) 
LBS #50 Ta(l)king Risks & Embracing Uncertainty, with David Spiegelhalter: https://www.learnbayesstats.com/episode/50-talking-risks-embracing-uncertainty-david-spiegelhalter... Support this podcast
This episode of PharmaLex Talks features Bruno Boulanger, Senior Director, Global Head Statistics and Data Science at PharmaLex and an award-winning author on Global Statistics and Data Science (link https://www.pharmalex.com/bookauthority-announce-the-winner-of-the-best-new-bayesian-statistics-books-best-new-biostatistics-books-and-best-statistics-ebooks-of-all-time/). Bruno's visionary approach to statistics and his efforts to streamline predictive models and facilitate decision making in pharma are of considerable value for the industry. Clement Laloux, Specialist Statistics & Data Sciences at PharmaLex, supports the discussion and shares his thoughts on the influence of prior usage, helping to predict recruitment in Clinical Trials. This episode focuses on the smooth running of patient enrolment as a key determinant of success for Clinical Trials. The reality is that many Clinical Trials fail to complete on time due to delays in patient recruitment (more than 80% of trials do not reach recruitment targets on schedule, Huang et al., 2018). And despite efforts over multiple decades to identify and address barriers, recruitment challenges persist. In this episode, we shed light on patient enrolment as a key determinant of success in clinical trials, challenges with delays in drug submission and even shortages in hospitals. We touch upon factors that can impact the decision making and facilitate the choice of procedures to adapt models of prediction, particularly Bayesian models. An important question is, how many additional centres should be opened in order to ensure completion within timelines? More practically, the proposed methodology is carried out under the Bayesian framework and aims at predicting the randomisation dates of future patients in the context of ongoing multicentre clinical trials.
If this topic is of interest to you, visit our webpage and check out our free webinar on the same topic: https://go.pharmalex.com/pharmalex_webinar_bayesianapproachsupplychain_ondemand
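As a rough illustration of the kind of Bayesian enrolment forecasting discussed in this episode, a Poisson-Gamma model for per-centre recruitment rates is a common textbook starting point. This is a generic conjugate-model sketch with made-up numbers, not PharmaLex's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: patients recruited per centre over the first 3 months.
recruited = np.array([4, 7, 2, 5, 6])
t_obs = 3.0

# Gamma prior on each centre's recruitment rate (patients/month), conjugate
# to Poisson counts: the posterior is Gamma(a + count, b + t_obs).
# The prior values a, b are an assumption of this sketch.
a, b = 2.0, 1.0
post_shape = a + recruited
post_rate = b + t_obs

# Posterior-predictive simulation: total patients over the next 6 months,
# propagating uncertainty in each centre's rate into the forecast.
t_future = 6.0
n_sims = 10_000
rates = rng.gamma(post_shape, 1.0 / post_rate, size=(n_sims, len(recruited)))
future = rng.poisson(rates * t_future).sum(axis=1)

print(np.percentile(future, [5, 50, 95]))  # predictive interval for the total
```

The predictive interval, rather than a single point forecast, is what lets a sponsor ask questions like the one in the episode — whether the current centres are likely enough to hit the target, or additional centres should be opened.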
On this ID the Future, Baylor University computer engineering professor Robert J. Marks hosts Ola Hössjer of Stockholm University and Daniel Díaz of the University of Miami to discuss a recent research paper the three contributed to the Journal of Cosmology and Astroparticle Physics, “Is Cosmological Tuning Fine or Coarse?” It turns out that's no easy question to answer rigorously, but this is where the new paper comes in. In this episode the three unpack the long answer. What about the short answer? It's akin to a description in The Hitchhiker's Guide to the Galaxy: “Space,” it informs us, “is big. Really big.” Measuring how finely tuned our universe is for life is all about searching large spaces of possibilities.
Rob Trangucci joins us to discuss his work and study in Bayesian statistics and how he applies it to real-world problems. In this episode you will learn: • Getting Rob on the show [8:12] • Stan [9:34] • Gradients [18:15] • What is Bayesian statistics? [23:05] • Multi-modal deep learning [45:20] • Stan package [53:46] • Applications of Bayesian stats [1:09:47] • The day-to-day of a PhD in stats [1:21:56] • What does the future hold? [1:42:37] Additional materials: www.superdatascience.com/507
Do you model professionally? Would you like to? Or are you uncertain? These are the topics of this episode: Bayesian statistician (among other official roles that are way less fun to say) Dr. Elea Feit joined the gang to discuss how we, as analysts, think about data and put it to use. Things got pretty deep, including the exploration of questions such as, "If you run a test that includes a holdout group, is that an A/B test?" This episode ran a little long, but our confidence level is quite high that you will be totally fine with that. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page. This episode originally aired on March 13, 2018.
Our guest is Paul Sabin, a sports data scientist and an analytics writer at ESPN. Paul has worked on predictive and descriptive models for sports performance including ESPN's proprietary metrics such as BPI, FPI, and Strength of Record (SOR). Paul explains how he ended up working for ESPN, why he is a Bayesian instead of a frequentist, and how a Bayesian approach to the real world can make you more informed and better off. We also discuss the applications of sports analytics in the major US sports leagues including the NFL and the NBA. Paul references a book for those who want to learn more. The book is called The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy.
This is a teaser from our first Patreon episode. Subscribe here (https://www.patreon.com/exhaust) to listen to the full thing! We talk with Canada Mike about the statistical efficacy of vaccine success studies, how mRNA vaccines are produced, and the downfall of the CDC. Bibliography (https://exhaust.fireside.fm/articles/ep27bib). Twitter (https://twitter.com/ex_haustpodcast). Special Guest: Mike.
What exciting data science problems emerge when you try to forecast an election? Many, it turns out! We're very excited to turn our DataCafé lens on the current Presidential race in the US as an exemplar of statistical modelling right now. Typically state election polls are asking around 1000 people in a state of maybe 12 million people how they will vote (or even if they have voted already) and return a predictive result with an estimated polling error of about 4%. In this episode, we look at polling as a data science activity and discuss how issues of sampling bias can have dramatic impacts on the outcome of a given poll. Elections are a fantastic use-case for Bayesian modelling where pollsters have to tackle questions like "What's the probability that a voter in Florida will vote for President Trump, given that they are white, over 60 and college educated". There are many such questions as each electorate feature (gender, age, race, education, and so on) potentially adds another multiplicative factor to the size of demographic sample needed to get a meaningful result out of an election poll. Finally, we even hazard a quick piece of psephological analysis ourselves and show how some naive Bayes techniques can at least get a foot in the door of these complex forecasting problems. (Caveat: correlation is still very important and can be a source of error if not treated appropriately!) Further reading: Article: Ensemble Learning to Improve Machine Learning Results (https://bit.ly/34MW3HO via statsbot.co); Paper: Combining Forecasts: An Application to Elections (https://bit.ly/3efx5nm via researchgate.net); Interactive map: Explore The Ways Trump Or Biden Could Win The Election (https://53eig.ht/2TIlAvh via fivethirtyeight.com); Podcast: 538 Politics Podcast (https://53eig.ht/2HSkwCA via fivethirtyeight.com); Update US polling map: Consensus Forecast Electoral Map (https://bit.ly/2HY1FWk via 270towin.com). Some links above may require payment or login.
We are not endorsing them or receiving any payment for mentioning them. They are provided as is. Often free versions of papers are available and we would encourage you to investigate. Recording date: 30 October 2020. Intro music by Music 4 Video Library (Patreon supporter)
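The naive Bayes trick mentioned in this episode — assuming demographic features are independent given the vote, so pollsters can combine one-feature-at-a-time crosstabs instead of needing samples for every joint demographic cell — can be sketched like this (all probabilities invented for illustration):

```python
# Hypothetical conditional probabilities, as might be estimated from
# separate single-feature poll crosstabs (numbers invented for illustration).
p_vote = 0.48  # prior: P(votes for the candidate) among all voters
# For each feature: (P(feature | votes for), P(feature | votes against)).
likelihoods = {
    "white":            (0.70, 0.55),
    "over_60":          (0.35, 0.25),
    "college_educated": (0.30, 0.40),
}

def naive_bayes_posterior(features, prior=p_vote):
    """P(vote | features), assuming features independent given the vote."""
    num = prior        # accumulates P(features | vote) * P(vote)
    den = 1.0 - prior  # accumulates P(features | no vote) * P(no vote)
    for f in features:
        p_given_yes, p_given_no = likelihoods[f]
        num *= p_given_yes
        den *= p_given_no
    return num / (num + den)

p = naive_bayes_posterior(["white", "over_60", "college_educated"])
print(round(p, 3))  # 0.552
```

As the episode's caveat notes, the independence assumption ignores correlation between features (age and education are not independent given the vote), which is exactly where this naive approach can mislead.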
I don't know about you, but I'm starting to really miss traveling and just talking to people without having to think about masks, social distance and activating the covid tracking app on my phone. In the coming days, there is one event that, granted, won't make all of that disappear, but will remind me how enriching it is to meet new people — this event is PyMCon, the first-ever conference about the PyMC ecosystem! To talk about the conference format, goals and program, I had the pleasure to host Ravin Kumar and Quan Nguyen on the show. Quan is a PhD student in computer science at Washington University in St Louis, USA, researching Bayesian machine learning, and is one of the PyMCon program committee chairs. He is also the author of several programming books on Python and scientific computing. Ravin is a core contributor to ArviZ and PyMC, and is leading the PyMCon conference. He holds a Bachelors in Mechanical Engineering and a Masters in Manufacturing Engineering. As a Principal Data Scientist he has used Bayesian Statistics to characterize and aid decision making at organizations like SpaceX and Sweetgreen. Ravin is also currently co-authoring a book with Ari Hartikainen, Osvaldo Martin, and Junpeng Lao on Bayesian Statistics due for release in February. We talked about why they became involved in the conference, parsed through the numerous, amazing talks that are planned, and detailed who the keynote speakers will be… So, if you're interested, the link to register is in the show notes, and there are even two ways to get a free ticket: either by applying to a diversity scholarship, or by being a community partner, which is anyone or any organization working towards diversity and inclusion in tech — all links are in the show notes. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Links from the show: PyMCon speakers: https://pymc-devs.github.io/pymcon/speakers Register to PyMCon: https://www.eventbrite.com/e/pymcon-2020-tickets-121404065829 PyMCon Diversity Scholarship: https://bit.ly/2J3Vb9d PyMCon Community Partner Form: https://bit.ly/35yq90L PyMC3 -- Probabilistic Programming in Python: https://docs.pymc.io Donate to PyMC3: https://numfocus.org/donate-to-pymc3 PyMC3 for enterprise: https://bit.ly/3jo9jq9 Ravin on Twitter: https://twitter.com/canyon289 Quan on the web: https://krisnguyen135.github.io/ Quan's author page: https://amzn.to/37JsB7r Alex talks about polls on the "Local Maximum" podcast: https://bit.ly/3e1Ro7O Support "Learning Bayesian Statistics" on Patreon: https://www.patreon.com/learnbayesstats --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message
Andrew is an American statistician, professor of statistics and political science, and director of the Applied Statistics Center at Columbia University. He frequently writes about Bayesian statistics, displaying data, and interesting trends in social science. He's also well known for writing posts sharing his thoughts on best statistical practices in the sciences, with a frequent emphasis on what he sees as the absurd and unscientific. FIND ANDREW ONLINE Website: https://statmodeling.stat.columbia.edu/ Twitter: https://twitter.com/StatModeling QUOTES [00:04:16] "We've already passed peak statistics..." [00:05:13] "One thing that we sometimes like to say is that big data need big model because big data are available data. They're not designed experiments, they're not random samples. Often big data means these are measurements." [00:22:05] "If you design an experiment, you want to know what you're going to do later. So most obviously, you want your sample size to be large enough so that given the effect size that you expect to see, you'll get a strong enough signal that you can make a strong statement." [00:31:00] "The alternative to good philosophy is not no philosophy, it's bad philosophy." SHOW NOTES [00:03:12] How Dr. Gelman got interested in statistics [00:04:09] How much more hyped has statistical and machine learning become since you first broke into the field? [00:04:44] Where do you see the field of statistical machine learning headed in the next two to five years? [00:06:12] What do you think the biggest positive impact machine learning will have on society in the next two to five years? [00:07:24] What do you think would be some of our biggest concerns in the future? [00:09:07] The three parts of Bayesian inference [00:12:05] What's the main difference between the frequentist and the Bayesian? [00:13:02] What is a workflow? [00:16:21] Iteratively building models [00:17:50] How does the Bayesian workflow differ from the frequentist workflow?
[00:18:32] Why is it that what makes this statistical method effective is not what it does with the data, but what data it uses? [00:20:48] Why do Bayesians tend to be a little bit more skeptical in their thought processes? [00:21:47] Your method of evaluation can be inspired by the model, or the model can be inspired by your method of evaluation [00:24:38] What is the usual story when it comes to statistics? And why don't you like it? [00:30:16] Why should statisticians and data scientists care about philosophy? [00:35:04] How can we solve all of our statistics problems using p-values? [00:36:14] Is there a difference in interpretations of p-values between Bayesians and frequentists? [00:36:54] Do you feel like the p-value is a difficult concept for a lot of people to understand? And if so, why do you think it's a bit challenging? [00:38:22] Why the least important part of data science is statistics [00:40:09] Why is it that Americans vote the way they do? [00:42:40] What's the one thing you want people to learn from your story? [00:44:48] The lightning round Special Guest: Andrew Gelman, PhD.
The world is nowhere near as binary as we think. In our current machine learning models, we have adopted a statistical theory of the world called Bayesian statistics. This model is fundamentally flawed, so why is it so widely adopted? It boils down to a false perspective of reality and a lack of understanding of natural law within reason, leading us all down an unreasonable path of observational analysis. Tcast is an education, business, and technology video podcast that informs listeners and viewers about best practices, theory, and the technical functions of the TARTLE data marketplace system, and how it is designed to serve society with the highest and best intentions. Tcast is brought to you by TARTLE, a global personal data marketplace that allows users to sell their personal information anonymously when they want to, while allowing buyers to access clean, ready-to-analyze data sets on digital identities from all across the globe. The show is hosted by Co-Founder and Source Data Pioneer Alexander McCaig and Head of Conscious Marketing Jason Rigby. What's your data worth? Find out at ( https://tartle.co/ ) Watch the podcast on YouTube ( https://www.youtube.com/channel/UC46qT-wHaRzUZBDTc9uBwJg ) Like our Facebook Page ( https://www.facebook.com/TARTLEofficial/ ) Follow us on Instagram ( https://www.instagram.com/tartle_official/ ) Follow us on Twitter ( https://twitter.com/TARTLEofficial ) Spread the word!
Lecture 26 of the Sports Biomechanics Lecture Series #SportsBiomLS
Bayesian statistics helps decide which of the emails you get are spam. It can assess security and medical risks, decode DNA, enhance blurry pictures, help explain stock market volatility, and predict the spread of an infectious disease, and its methods were even used by Alan Turing in World War II to crack the Nazi Enigma code. In this episode, we explore the history of Bayes' theorem, the ideas behind it, and why it's really becoming a powerful statistical tool in the 21st century. The Random Sample is a podcast by the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers. In this show, we share stories about mathematics, statistics, and the people involved. To learn more about ACEMS, visit https://acems.org.au. See omnystudio.com/listener for privacy information.
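As a rough sketch of the spam-filtering idea mentioned above (all probabilities below are made-up illustrative values, not from the episode), Bayes' theorem updates the prior probability that a message is spam once a particular word is observed:

```python
# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
# All numbers below are hypothetical, chosen only for illustration.

p_spam = 0.4             # prior: fraction of all mail that is spam
p_word_given_spam = 0.6  # the word "free" appears in 60% of spam
p_word_given_ham = 0.05  # "free" appears in 5% of legitimate mail

# Law of total probability: chance of seeing the word at all
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior probability the message is spam, given that the word appeared
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # 0.889
```

Seeing one spam-associated word moves the belief from 40% to roughly 89%; real filters combine evidence from many words the same way.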
A librarian, a philosopher and a statistician walk into a bar — and they can’t find anybody to talk to; nobody seems to understand what they are talking about. Nobody? No! There is someone, and this someone is Will Kurt! Will Kurt is the author of ‘Bayesian Statistics the Fun Way’ and ‘Get Programming With Haskell’. Currently the lead Data Scientist for the pricing and recommendations team at Hopper, he also blogs about stats and probability at countbayesie.com. In this episode, he’ll tell us how a Boston librarian can become a Data Scientist and work with Bayesian models every day. He’ll also explain the value of Bayesian inference from a philosophical standpoint, why it’s useful in the travel industry, and how his latest book came to life. Finally, Will is also a big fan of the “mind projection fallacy”, an informal fallacy first described by physicist and Bayesian philosopher Edwin Thompson Jaynes. Does that intrigue you? Well, stay tuned, he’ll tell us more in the episode… Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Links from the show: Will's Blog: https://www.countbayesie.com Will on Twitter: https://twitter.com/willkurt Bayesian Statistics the Fun Way -- Understanding Statistics and Probability with Star Wars, LEGO, and Rubber Ducks: https://nostarch.com/learnbayes Get Programming with Haskell: https://www.amazon.com/Get-Programming-Haskell-Will-Kurt/dp/1617293768 The Mind Projection Fallacy: https://en.wikipedia.org/wiki/Mind_projection_fallacy Probability Theory -- The Logic of Science by E.T. Jaynes: https://www.cambridge.org/core/books/probability-theory/9CA08E224FF30123304E6D8935CF1A99 Wittgenstein's Lectures on the Foundations of Mathematics: https://www.amazon.com/Wittgensteins-Lectures-Foundations-Mathematics-Cambridge/dp/0226904261 --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message
Episode: 1876 In which Thomas Bayes mixes prior knowledge with a priori deduction. Today, we learn how to hedge bets.
This time we talk with Thomas Wiecki about quantitative finance, probabilistic programming, and the corona pandemic. Thomas, by the way, has his own podcast called PyData Deep Dive, which we warmly recommend. From around minute 36, the audio crackles also get better :). Show notes: Our email for questions, suggestions & comments: hallo@python-podcast.de Quantitative Finance Quantopian Backtesting Quantopian on GitHub zipline (backtesting library) Linear Regression statsmodels ARIMA Probabilistic Programming pymc Markov chain Monte Carlo The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3 Bayesian Statistics COVID-19 Thomas's covid-19 repository Some covid19 dashboards Compartmental models in epidemiology Student's t-distribution Using epidemiological models is like counting spoons tweet by @nntaleb "Thousands of lines of undocumented C code" tweet by @neil_ferguson Johns Hopkins data from WHO PDFs European Centre for Disease Prevention and Control Our World in Data (coronavirus) Hackathon Coronavirus COVID19 Global Forecasting Kaggle competition COVID-19 Open Research Dataset Challenge Kaggle competition CCC Cert information collection CERT Bulletin Datasette Datasette query for Italy on the covid-19 Datasette Glitch Public tag on konektom
What do you do when your brain gets full? Time for some Intentional Unlearning. This is a valuable skill, and you need it. I mention an article on Bayesian statistics by Liv Boeree. Here it is.
Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.
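To make the degree-of-belief idea concrete, here is a minimal sketch (the coin, the prior, and the data are invented for illustration) of updating a prior belief about a coin's heads probability after observing some flips, and how the resulting Bayesian estimate differs from the frequentist relative frequency:

```python
# Beta-Binomial updating: a Beta(a, b) prior over a coin's heads
# probability, combined with binomial data, yields a Beta posterior.
# The prior and the data below are invented for illustration.

a, b = 2, 2          # prior belief: roughly fair, but weakly held
heads, tails = 7, 3  # observed data: 7 heads in 10 flips

# Conjugacy: the posterior is simply Beta(a + heads, b + tails)
post_a, post_b = a + heads, b + tails

posterior_mean = post_a / (post_a + post_b)  # Bayesian point estimate
relative_freq = heads / (heads + tails)      # frequentist estimate

print(round(posterior_mean, 3))  # 0.643
print(round(relative_freq, 3))   # 0.7
```

The posterior mean sits between the prior belief (0.5) and the observed frequency (0.7): with little data, prior knowledge matters; as flips accumulate, the two estimates converge.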
In today’s episode, Greg and Eric field listener questions about carbohydrate intake, sodium intake, training to improve speed or strength-endurance, experimenting with training styles and variables to find out what works for you, the minimum necessary volume per session, and more. To finish off the episode, Greg and Eric discuss Bayesian statistics, and how to start a fitness career without a formal academic background in exercise or nutrition. If you want your questions answered on a future episode, you can submit them using the following link: tiny.cc/sbsqa TIME STAMPS 0:01:55 What is the best approach for increasing strength endurance (that is, increasing maximum reps for a given exercise)? 0:16:36 Two questions combined: Is it ever beneficial to lift weights in a fasted state? I train very early in the morning and drink a protein shake prior to training. Anything else you recommend to do or eat before working out in a fasted state? 0:29:00 What is the relationship between training frequency and recoverable volume? Spreading work across more sessions seems as if it would allow more to be done, but is a minimum volume per session necessary to get sufficient stimulus? 0:37:32 What effects does sugar intake have on performance and composition? 0:49:44 What are the best ways to improve speed using resistance training? 0:52:29 Does the relative split of daily dietary intake of carbs and fat really matter for hypertrophy, strength, and body composition? 1:08:46 How important is delayed onset muscle soreness (DOMS)? I've been powerlifting for a little over 18 months and have never experienced any significant amount of soreness, but my program contains reasonably high training volume and frequency. 1:13:46 What are your thoughts on sodium intake for lifters, whether in absolute terms or relative to potassium intake? 1:29:23 How important is it for trainees to experiment with different training styles to see what methods may work best for them?
How would you recommend organizing an experimental period of training to see if for example you respond better to speed or power training and what should be measured/benchmarked against?1:41:10 Do you think that Bayesian Statistics will be used in future studies for analysis?1:53:41 As someone who went the standard business route after college and is getting minimal satisfaction from their current career, how possible is it to get proper certifications for nutrition and personal training to make a career out of something I am more passionate about?
I am beyond excited to share this first episode of the PyData podcast with you. The idea is to have a free-form discussion with interesting guests that does not shy away from more advanced topics. In this episode I talk to Chris Fonnesbeck: professor of biostatistics at Vanderbilt University and, more recently, Data Scientist at the New York Yankees. We start off this discussion by talking about Bayesian statistics and probabilistic programming. Chris then talks about the history of PyMC and what the current status of PyMC4 is. We then dive more into his background and how he moved from marine biology to become a data scientist in sports analytics, and the lessons he learned along the way. Special thanks to my Patreon supporters Andrew Ng, Daniel Gerlanc, and Richard Craib. If you would like to support the podcast, go to: https://patreon.com/twiecki Follow Chris on Twitter: https://twitter.com/fonnesbeck Support the show (https://www.patreon.com/twiecki)
The Book of Mormon is one in a billion. Actually, more accurately, it's one in one thousand billion, billion, billion, billion. Through Bayesian statistical analysis, Distinguished Professor Bruce E. Dale from Michigan State University makes a case for the historical and ancient authenticity of the Book of Mormon, responding directly to critics and proposing a real-world geographical setting in ancient Mesoamerica. Read his research article here https://bit.ly/31WKOrJ for your personal studies. For more information, please visit us online at www.BookofMormonHistory.com Support the show (http://www.BookofMormonHistory.com)
On today's episode of #Growth, host Matt Bilotti is talking all about testing – A/B testing to be exact. He breaks down the best ways to test, teaches us all about Bayesian Statistics and chats through this and more with Guy Yalif, co-founder and CEO of Intellimize.
Tommy discusses Bayesian statistics.
Do you model professionally? Would you like to? Or are you uncertain? These are the topics of this episode: Bayesian statistician (among other official roles that are way less fun to say) Dr. Elea Feit joined the gang to discuss how we, as analysts, think about data and put it to use. Things got pretty deep, including the exploration of questions such as, "If you run a test that includes a holdout group, is that an A/B test?" This episode ran a little long, but our confidence level is quite high that you will be totally fine with that. For complete show notes, including links to items mentioned in this show and a transcript of the show, visit the show page.
EJ Wagenmakers (University of Amsterdam) gives a talk for the Oxford Reproducibility School.
In this lecture, Prof. Rigollet talked about Bayesian confidence regions and Bayesian estimation.
In this lecture, Prof. Rigollet talked about the Bayesian approach, Bayes' rule, the posterior distribution, and non-informative priors.
In the second part of their discussion, Pedro Ferreira and Jerome Martin consider ways to build the naturalness of an inflationary model into our expectations for observing it. They debate the feasibility of measuring the degree to which an inflationary model is inspired by considerations from other parts of physics, and describe the applicability of Bayesian methods when we have background knowledge. This discussion was conducted at the University of Oxford on March 15, 2017.
In the first part of their discussion, Pedro Ferreira and Jerome Martin talk about the variety of inflationary models. They discuss methods for distinguishing between them based on evidence and describe the application of Bayesian statistics to inflation. This discussion was conducted at the University of Oxford on March 15, 2017.
Software Process and Measurement Cast 383 features our essay on peer reviews. Peer reviews are a tool to remove defects before we need to either test them out or ask our customers to find them for us. While the data about the benefits of peer reviews is unambiguous, they are rarely practiced well and often turn into a blame-apportionment tool. The essay discusses how to do peer reviews, whether you are using Agile or not, so that you get the benefits you expect! Our second segment is a visit to the QA Corner. Jeremy Berriault discusses a piece of advice he got from a mentor that continues to pay dividends. This installment of the QA Corner discusses how a QA leader can generate and leverage responsibility without formal authority. Steve Tendon anchors this week’s SPaMCAST, discussing Chapter 8 of Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross. Chapter 8 is titled “Creating A Shared Vision At The Team Level”. We discuss why it is important for the team to have a shared vision, the downside of not having a shared vision, and most importantly, how to get a shared vision. Remember, Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach, and Its Application to Scrum and Kanban at 40% off the list price. Re-Read Saturday News This week we are back with Chapter 10 of How to Measure Anything: Finding the Value of “Intangibles in Business”, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 10 we looked at how to use Bayesian statistics to account for having prior knowledge before we begin measuring. Most common statistics assume that we don’t have prior knowledge of the potential range of what we are measuring or the shape of the distribution. This is often a gross simplification, with ramifications!
Upcoming Events I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea, which will be presented at the CMMI Institute’s Capability Counts 2016 conference. The next CMMI Capability Challenge session will be held on March 15th at 1 PM EST. http://cmmiinstitute.com/conferences#thecapabilitychallenge I will be at QAI Quest 2016 in Chicago beginning April 18th through April 22nd. I will be teaching a full-day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th. Register now! Upcoming Webinars Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion March 1, 2016, 11 AM EST There are many levels of estimation, including budgeting, high-level estimation, and task planning (detailed estimation). This webinar challenges the listener to consider estimation as a form of planning. Register Here Next SPaMCAST The next Software Process and Measurement Cast features our interview with Gwen Walsh. Gwen is the President of TechEdge LLC. We discussed leadership and why leadership is important. We also discussed the topic of performance appraisals and how classic methods can hurt your organization. Gwen’s advice both redefines industry standards and provides you with an idea of what is truly possible. Shameless Ad for my book! Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.
Welcome TRC Family to another jam-packed episode! Adam kicks off the show this week by looking into the brouhaha over whether Caitlyn Jenner really beat out Noah Galloway to win an ESPN Arthur Ashe Courage Award. Our guest panelist, statistician Alex Demarsh, introduces us to Bayesian statistics, demonstrating that some of us would have enjoyed stats way more in high school if he had been at the chalkboard. Finally, Cristina unearths the dung in Biodynamic Farming. Special shout out to TRC’er David for sending in such a great parody suggestion + lyrics for “How Deep Is Your Woo” that even Baritone Pat couldn’t resist!
Karen Price talks about all things Bayesian in the world of pharmaceutical development: methodology, the DIA Bayesian Scientific Working Group, and the special Bayesian issue of Pharmaceutical Statistics.
The seven articles in the series each address one aspect of a multi-phase project to define sediment quality objectives, including a new sediment quality guideline (SQG) index.
Dr. David Barton, Guest Editor of the special series Bayesian Networks in Environmental and Resource Management discusses the basics of Bayesian approaches in environmental management.
LISA: Laboratory for Interdisciplinary Statistical Analysis - Short Courses
Some of you may have come across a growing number of publications in your field that perform their statistical analyses using an alternative paradigm called Bayesian statistics. The goal of this talk is to explain some of the basic terminology of Bayesian statistics (prior distributions, posterior distributions, credible intervals, conjugacy, etc.), some options regarding software to perform the analyses, and how interpretations of results change in this new paradigm. We’ll use the R software language to run some examples of multiple linear regression and probit regression using the bayesm package that will illustrate these concepts. Hopefully you'll come away with a better sense of what these researchers are doing the next time you read one of their papers, and possibly an interest in performing such analyses yourself. Course files available here: www.lisa.stat.vt.edu/?q=node/907.
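For readers curious about the "conjugacy" and "credible interval" terms mentioned above, here is a minimal, hypothetical sketch (not from the talk, which used R and the bayesm package): with a conjugate Beta prior on a success probability, the posterior has a closed form, and a credible interval can be approximated by sampling from it:

```python
import random

random.seed(42)  # make the Monte Carlo draws reproducible

# Conjugacy: Beta prior + binomial data -> Beta posterior, in closed form.
# Uniform prior Beta(1, 1); hypothetical data: 12 successes, 8 failures.
a, b = 1 + 12, 1 + 8  # posterior is Beta(13, 9)

# Draw from the posterior and take empirical quantiles as an
# approximate central 95% credible interval.
draws = sorted(random.betavariate(a, b) for _ in range(100_000))
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]

print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

Unlike a frequentist confidence interval, this interval is a direct probability statement: given the prior and the data, the parameter lies in it with 95% posterior probability. For exact quantiles one would use the Beta inverse CDF instead of sampling.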