Causal inference is the branch of statistics concerned with inferring causal relationships between variables.
Carly Brantner is an assistant professor of Biostatistics & Bioinformatics at Duke University and the Duke Clinical Research Institute.

Resources from this episode:
- multicate: R package for estimating conditional average treatment effects across one or more studies using machine learning methods
- PCORnet® Front Door: access point for potential investigators, patient groups, and other stakeholders to connect with PCORnet and get support for potential research studies
- Patient-Centered Outcomes Data Repository (PDOCR): de-identified data from 24 (and counting) PCORI-funded studies

Follow along on Bluesky:
- Carly: @carlybrantner.bsky.social
- Ellie: @epiellie.bsky.social
- Lucy: @lucystats.bsky.social
Send us a text

*Causal Inference From Human Behavior, Reproducibility Crisis & The Power of Causal Graphs*

Is Jonathan Haidt right that social media causes the mental health crisis in young people? If so, how can we be sure? Can other disciplines learn something from the reproducibility crisis in psychology, and what is multiverse analysis? Join us for a conversation on causal inference from human behavior, the reproducibility crisis in the sciences, and the power of causal graphs!
------------------------------------------------------------------------------------------------------
Audio version available on YouTube: https://youtu.be/YQetmI-y5gM
Recorded on May 16, 2025, in Leipzig, Germany.
------------------------------------------------------------------------------------------------------
*About The Guest*
Julia Rohrer, PhD, is a researcher and personality psychologist at the University of Leipzig. She's interested in the effects of birth order, age patterns in personality, human well-being, and causal inference. Her work has been published in top journals, including Nature Human Behaviour. She has been an active advocate for increased research transparency, and she continues this mission as a senior editor of Psychological Science. Julia frequently gives talks about good practices in science and causal inference. You can read Julia's blog at https://www.the100.ci/

*Links*
Papers:
- Rohrer, J. (2024). "Causal inference for psychologists who think that causal inference is not for them" (https://compass.onlinelibrary.wiley.com/doi/10.1111/spc3.12948)
- Bailey, D., ..., Rohrer, J., et al. (2024). "Causal inference on human behaviour" (https://www.nature.com/articles/s41562-024-01939-z.epdf)
- Rohrer, J. et al. (2024). "The Effects of Satisfaction with Different Domains of Life on Gen

Support the show

Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Andrew Heiss is an assistant professor in the Department of Public Management and Policy at the Andrew Young School of Policy Studies at Georgia State University.

Vincent's "What is your estimand" section in his {marginaleffects} book: https://marginaleffects.com/chapters/challenge.html#sec-goals_estimand
Article on defining estimands: https://doi.org/10.1177/00031224211004187
Andrew's marginal effects post: https://www.andrewheiss.com/blog/2022/05/20/marginalia/
Andrew's post on "fixed effects" and marginal effects across different disciplines: https://www.andrewheiss.com/blog/2022/11/29/conditional-marginal-marginaleffects/

Follow along on Bluesky:
- Andrew: @andrew.heiss.phd
- Ellie: @epiellie.bsky.social
- Lucy: @lucystats.bsky.social
Correlation is not causation, and anyone who wants to make well-founded decisions needs more than good predictions. This episode covers confounders, spurious correlations, and the question of when machine learning can deliver causal insights. Also featured: DoubleML as a bridge between classical statistics and machine learning.

**Summary**
- Distinguishing prediction from intervention: only causality answers the "what if?" question
- Practical examples: bugs & discounts, ice cream consumption & crime, salinity & river flow
- Key point: identify confounders and adjust for them, e.g., via time-series decomposition
- Introduction to Double ML: ML models for the response and the treatment, effect estimation via residuals
- Challenges: overfitting bias, regularization, distorted effect estimates at high model complexity
- Alternatives & complements: A/B tests, structural equation models, causal diagrams
- Takeaway: beware of spurious correlations, ceteris paribus traps, and naive feature interpretation; causality needs context and method

**Links**
- Blog post by Scott Lundberg: "Be Careful When Interpreting Predictive Models in Search of Causal Insights" https://medium.com/data-science/be-careful-when-interpreting-predictive-models-in-search-of-causal-insights-e68626e664b6
- ICECREAM dataset (available via the tsapp R package): https://search.r-project.org/CRAN/refmans/tsapp/html/ICECREAM.html
- Victor Chernozhukov et al. (2018): "Double/debiased machine learning for treatment and structural parameters", The Econometrics Journal, Volume 21, Issue 1. https://doi.org/10.1111/ectj.12097
- Matheus Facure Alves (2022): Causal Inference for The Brave and True (free online book) https://matheusfacure.github.io/python-causality-handbook/landing-page.html
- DoubleML (Python & R): https://docs.doubleml.org/stable/index.html
- EconML (Microsoft Research): https://econml.azurewebsites.net/index.html
- Causal ML (Uber Engineering): https://causalml.readthedocs.io/en/latest/
- Slides by Prof. Dr. Steffen Wagner: "Navigating the Ocean of Correlations to the Islands of Causality – Time Series Analyses at its Best", presented at Machine Learning Week Munich 2024 https://de.slideshare.net/secret/aArFURFQSBxrzB
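The Double ML recipe described in the episode (ML models for the response and the treatment, effect estimation via residuals) can be sketched in a few lines. This is a minimal illustration on simulated data, not code from the episode; in practice the DoubleML package linked above handles cross-fitting, scores, and inference for you.

```python
# Minimal "double/debiased ML" sketch (partialling-out, after Chernozhukov et al. 2018).
# Data is simulated for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                       # confounders
T = X[:, 0] + rng.normal(size=n)                  # treatment depends on confounders
Y = 2.0 * T + X[:, 0] ** 2 + rng.normal(size=n)   # true causal effect = 2.0

# Step 1: ML models predict outcome and treatment from confounders (cross-fitted
# out-of-fold predictions avoid the overfitting bias mentioned in the episode).
y_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), X, Y, cv=5)
t_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), X, T, cv=5)

# Step 2: regress outcome residuals on treatment residuals to estimate the effect.
y_res, t_res = Y - y_hat, T - t_hat
theta = (t_res @ y_res) / (t_res @ t_res)
print(round(theta, 2))  # should land roughly at the true effect of 2.0
```

The point of the residual-on-residual step is that both nuisance predictions absorb the confounding, so what remains is (approximately) the treatment effect itself.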
Summary
In this episode, Andy talks with Dr. Joe Sutherland, co-author of the new book Analytics the Right Way: A Business Leader's Guide to Putting Data to Productive Use. Joe is a leader in AI policy and practice, serving as the founding director of the Emory Center for AI Learning and lead principal investigator for the U.S. AI Safety Institute Consortium. Andy and Joe explore what it really takes to make better decisions in a world drowning in data and exploding with AI hype. They discuss the myths of data collection, how randomized controlled trials and causal inference impact decision quality, and Joe's "two magic questions" that help project managers stay focused on outcomes. They also dive into recent AI breakthroughs like DeepSeek, and why executives may be paralyzed when it comes to implementing AI strategy. If you're looking for insights on how to use data and AI more effectively to support leadership and project decision-making, this episode is for you!

Sound Bites
- "What are we trying to achieve? And how would we know if we achieved it?"
- "Sometimes we're measuring success by handing out coupons to people who already had the product in their cart."
- "AI doesn't replace decision-making—it demands better decisions from us."
- "Causality is important for really big decisions because you want to know with a level of certainty that if I make this choice, this outcome is going to happen."
- "Too often, we make decisions based on bad causal inference and wonder why the outcomes don't match our expectations."
- "The ladder of evidence helps you decide how much certainty you need before making a decision—and how much it'll cost to climb higher."
- "The truth is, we're not ready for human-out-of-the-loop AI—we're barely asking the right questions yet."
- "Leadership isn't about replacing people with AI. It's about using AI to make your people more productive and happier."
- "We're starting to see some evidence that when you use large language models in education, test scores go up in excess of 60%."
- "This may be the first time the kids feel more behind than the parents when it comes to a new technology."

Chapters
00:00 Introduction
02:00 Start of Interview
02:09 What Are Some Myths About Data?
03:49 What Is the Potential Outcomes Framework?
08:50 What Are Counterfactuals?
13:00 How Do You Personally Evaluate Causality?
18:22 What Are the Two Magic Questions for Projects?
20:45 What's Getting Traction From the Book?
24:26 What Can We Learn From DeepSeek's Disruption?
27:30 Human In or Out of the AI Loop?
30:41 How Joe Uses AI Personally and Professionally
33:33 What Is the Future of Agentic AI?
35:37 Will AI Replace Jobs?
37:18 How Can Parents Prepare Kids for the AI Future?
41:19 End of Interview
41:46 Andy Comments After the Interview
45:07 Outtakes

Learn More
You can learn more about Joe and his book at AnalyticsTRW.com. For more learning on this topic, check out:
- Episode 381 with Jim Loehr about how to make wiser decisions.
- Episode 372 with Annie Duke on knowing when to quit.
- Episode 437 with Nada Sanders about future-prepping your career in the age of AI.

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills
Topics: Leadership, Decision Making, Data Analytics, Artificial Intelligence, Project Management, Strategic Thinking, Causal Inference, Agile, AI Ethics, AI in Education, Machine Learning, Career Development, Future of Work

The following music was used for this episode:
Music: Ignotus by Agnese Valmaggia. License (CC BY 4.0): https://filmmusic.io/standard-license
Music: Synthiemania by Frank Schroeter. License (CC BY 4.0): https://filmmusic.io/standard-license
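The coupon sound bite above has a precise causal-inference reading: comparing people who were targeted with people who weren't confounds the coupon's effect with purchase intent. A tiny simulation (all numbers invented, not from the episode) shows the naive comparison versus a randomized test:

```python
# Toy potential-outcomes simulation of the "coupons in the cart" problem.
# intent is a hidden confounder; y0/y1 are potential outcomes without/with a coupon.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
intent = rng.random(n)                                             # latent purchase intent
y0 = (rng.random(n) < intent).astype(float)                        # buys without coupon
y1 = (rng.random(n) < np.minimum(intent + 0.05, 1)).astype(float)  # buys with coupon (+5 pt lift)

# Targeted campaign: coupons go to high-intent shoppers, so the naive
# treated-vs-untreated comparison is wildly optimistic.
targeted = intent > 0.7
naive = y1[targeted].mean() - y0[~targeted].mean()

# Randomized assignment breaks the link to intent and recovers the true ~5 point lift.
coupon = rng.random(n) < 0.5
rct = y1[coupon].mean() - y0[~coupon].mean()
print(round(naive, 2), round(rct, 2))
```

The naive estimate lands near 0.5 even though the coupon only moves purchases by about 0.05, which is exactly the gap between "measuring success" and measuring impact.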
In this week's episode Patrick and Greg have some serious fun with song lyrics they misunderstood at some point in their personal lives. They then use this as a thinly veiled excuse to explore some very basic statistical things that they have also misunderstood at some point in their professional lives. Along the way they discuss over-engineered front ends, mumbling, Scaramouche, mondegreens, Tony Danza, Bingo Jed, word salad, containers, sitting next to Kurt Cobain, kicking cats, tiddles, ears ringing, the Dunder Chief, wrinkles in the space time continuum, naked or not, missing data bouncer, colite gas, and dying on the dance floor. Stay in contact with Quantitude! Web page: quantitudepod.org TwitterX: @quantitudepod YouTube: @quantitudepod Merch: redbubble.com
Agents of Innovation: AI-Powered Product Ideation with Synthetic Consumer Testing // MLOps Podcast #306 with Luca Fiaschi, Partner at PyMC Labs.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Traditional product development cycles require extensive consumer research and market testing, resulting in lengthy development timelines and significant resource investment. We've transformed this process by building a distributed multi-agent system that enables parallel quantitative evaluation of hundreds of product concepts. Our system combines three key components: an agentic innovation lab generating high-quality product concepts, synthetic consumer panels using fine-tuned foundational models validated against historical data, and an evaluation framework that correlates with real-world testing outcomes. We discuss how this architecture enables rapid concept discovery and digital experimentation, delivering insights into product success probability before development begins. Through case studies and technical deep dives, you'll learn how we built an AI-powered innovation lab that compresses months of product development and testing into minutes, without sacrificing the accuracy of insights.

// Bio
With over 15 years of leadership experience in AI, data science, and analytics, Luca has driven transformative growth in technology-first businesses. As Chief Data & AI Officer at Mistplay, he led the company's revenue growth through AI-powered personalization and data-driven pricing. Prior to that, he held executive roles at global industry leaders such as HelloFresh ($8B), Stitch Fix ($1.2B), and Rocket Internet ($1B). Luca's core competencies include machine learning, artificial intelligence, data mining, data engineering, and computer vision, which he has applied to domains such as marketing, logistics, personalization, product, experimentation, and pricing. He is currently a partner at PyMC Labs, a leading data science consultancy, providing insights and guidance on applications of Bayesian and causal inference techniques and generative AI to Fortune 500 companies. Luca holds a PhD in AI and Computer Vision from Heidelberg University and has more than 450 citations on his research work.

// Related Links
Website: https://www.pymc-labs.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter (@mlopscommunity, https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Luca on LinkedIn: /lfiaschi
Send us a text

*Agents, Causal AI & The Future of DoWhy*

The idea of agentic systems taking over more complex human tasks is compelling. New "production-grade" frameworks for building agentic systems keep popping up, suggesting that we're close to fully automating these challenging multi-step tasks. But is the underlying agentic technology itself ready for production? And if not, can LLM-based systems help us make better decisions? Recent developments in the DoWhy/PyWhy ecosystem might bring some answers. Will they, combined with the new methods for validating causal models now available in DoWhy, change the way we build and interact with causal models in industry?
------------------------------------------------------------------------------------------------------
Video version available on YouTube: https://youtu.be/8yWKQqNFrmY
Recorded on Mar 12, 2025 in Bengaluru, India.
------------------------------------------------------------------------------------------------------
*About The Guest*
Amit Sharma is a Principal Researcher at Microsoft Research and one of the original creators of the open-source Python library DoWhy, considered the "scikit-learn of causal inference." He holds a PhD in Computer Science from Cornell University. His research focuses on causality and its intersection with LLM-based and agentic systems. Amit deeply cares about the social impact of machine learning systems and sees causality as one of the main drivers of more useful and robust systems.

Connect with Amit:
- Amit on LinkedIn: https://www.linkedin.com/in/amitshar/
- Amit on BlueSky:
- Amit's web page: http://amitsharma.in/
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick,
Ronald Legere, Sergio Dolia and Michael Cao.

Takeaways:
- Sharks play a crucial role in maintaining healthy ocean ecosystems.
- Bayesian statistics are particularly useful in data-poor environments like ecology.
- Teaching Bayesian statistics requires a shift in mindset from traditional statistical methods.
- The shark meat trade is significant and often overlooked.
- The ray meat trade is as large as the shark meat trade, with specific markets dominating.
- Understanding the ecological roles of species is essential for effective conservation.
- Causal language is important in ecological research and should be encouraged.
- Evidence-driven decision-making is crucial in balancing human and ecological needs.
- Expert opinions are...
Lucy chats with Len Testa about a recent analysis he did which combined over 150 publicly available data sources to answer a question about the affordability of Disney World.

Len's Deep Dive Post on the Touring Plans Blog [Blog Post]
Wall Street Journal Article, "Even Disney Is Worried About the High Cost of a Disney Vacation" [Article]

Follow along on Bluesky:
- Len: @lentesta.bsky.social
- Ellie: @EpiEllie.bsky.social
- Lucy: @LucyStats.bsky.social
Sports analytics is a booming industry, with new technologies allowing for the parsing of ever more sophisticated statistics. Analysts can now examine the height and force of a gymnast's tumbling pass, the probability that going for it on 4th down in football will actually work out, and the arc of the best swing for a baseball player. Analytics are also used in the conditioning of athletes, particularly for all the baseball players preparing for the start of the MLB's spring training. Analytics is the focus of this episode of Stats and Stories with guest Alexandre Andorra. Alexandre Andorra is a Senior Applied Scientist for the Miami Marlins, as well as a Bayesian modeler at PyMC Labs, the consultancy firm he co-founded, and the host of the podcast dedicated to Bayesian inference, "Learning Bayesian Statistics." His areas of expertise include hierarchical models, Gaussian processes, and causal inference.
Send us a text

From Quantum Causal Models to Causal AI at Spotify

Ciarán loved Lego. Fascinated by the endless possibilities offered by the blocks, he once asked his parents what he could do as an adult to keep building with them. The answer: engineering. As he delved deeper into engineering, Ciarán noticed that its rules relied on a deeper structure. This realization inspired him to pursue quantum physics, which eventually brought him face-to-face with fundamental questions about causality. Today, Ciarán blends his deep understanding of physics and quantum causal models with applied work at Spotify, solving complex problems in innovative ways. Recently, while collaborating with one of his students, he stumbled upon an interesting new question: could we learn something about the early history of the universe by applying causal inference methods in astrophysics? Could we? Hear it from Ciarán himself. Join us for this one-of-a-kind conversation!
------------------------------------------------------------------------------------------------------
Video version and episode links available on YouTube.
Recorded on Nov 6, 2024 in Dublin, Ireland.
------------------------------------------------------------------------------------------------------
About The Guest
Ciarán Gilligan-Lee is Head of the Causal Inference Research Lab at Spotify and Honorary Associate Professor at University College London. He got interested in causality during his studies in quantum physics. This interest led him to study quantum causal models. He has published in Nature Machine Intelligence, Nature Quantum Information, Physical Review Letters, the New Journal of Physics, and more. In his free time, he writes for New Scientist and helps his students apply causal methods in new fields (e.g., astrophysics).

Connect with Ciarán:
- Ciarán on LinkedIn: https://www.linkedin.com/in/ciaran-gilligan-lee/
- Ciarán's web page: https://www.ciarangilliganlee.com/

About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, and entrepreneur.
Send us a text

Stefan Feuerriegel is the Head of the Institute of AI in Management at LMU. His team consistently publishes work on causal machine learning at top AI conferences, including NeurIPS, ICML, and more. At the same time, they help businesses implement causal methods in practice. They have worked on projects with companies such as ABB, Hitachi, and Booking.com. Stefan believes his team thrives because of its diversity, and he aims to bring more causal machine learning to medicine. I had a great conversation with him, and I hope you'll enjoy it too!

>> Guest info:
Stefan Feuerriegel is a professor and the Head of the Institute of AI in Management at LMU. Previously, he worked as a consultant at McKinsey & Co. and ran his own AI startup.

>> Episode Links:
Papers
- Feuerriegel, S. et al. (2024) - Causal machine learning for predicting treatment outcomes (https://www.nature.com/articles/s41591-024-02902-1)
- Kuzmanivic, M. et al. (2024) - Causal Machine Learning for Cost-Effective Allocation of Development Aid (https://arxiv.org/abs/2401.16986)
- Schröder, M. et al. (2024) - Conformal Prediction for Causal Effects of Continuous Treatments (https://arxiv.org/abs/2407.03094)

>> WWW: https://www.som.lmu.de/ai/
>> LinkedIn: https://www.linkedin.com/in/stefan-feuerriegel/
Today Brook Santangelo and John Sterrett
Send us a text

Causal Bandits at cAI 2024 (The Royal Society, London)

The cAI Conference in London slammed the door on baseless claims that causality cannot be used in industrial practice. In this episode of Causal Bandits Extra, we interview participants and speakers at the Causal AI Conference London, who share their main insights from the event and the challenges they face in applying causal methods in their everyday work.

Time codes:
00:29 - Eyal Kazin (Zimmer Biomet)
01:44 - Athanasios Vlontzos (Spotify)
04:02 - Mimie Liotsiou (Dunnhumby)
06:13 - Fernanda Hinze (Croud)
09:00 - Clara Higuera Cabañes (BBVA)
10:28 - Javier Moral Hernández (BBVA)
11:25 - Álvaro Ibraín Rodríguez (BBVA)
12:10 - Hugo Proença (Booking.com)
13:21 - Debora Andrade (Seamless AI)
15:09 - Puneeth Nikin (Croud)
17:54 - Puneet Gupta (Cisco)
19:43 - Arthur Mello (Sephora)
=============================
Welcome to the latest episode of The Mixtape with Scott! This week's guest on the podcast is Jann Spiess. Many of you probably know Jann from his work with Kirill Borusyak and Xavier Jaravel on diff-in-diff. Others may know him for his work on machine learning. Now you get to know him for a third reason, which is contained on this podcast! Jann is an assistant professor at Stanford. He's one of a younger cohort of talented econometricians who have been making practically helpful contributions to the toolkit in causal inference and machine learning, including work on synthetic control with Guido Imbens and much more. This was a great interview, and I learned a lot about Jann I didn't know before. I hope you enjoy it too!

Thanks again for all your support! Share this video or podcast with whoever you think would like it!

Scott's Mixtape Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Get full access to Scott's Mixtape Substack at causalinf.substack.com/subscribe
Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and incorporating these ethical ideas in practical business environments. She emphasizes the importance of collaboration, transparency, and diversity in embedding a culture of accountable AI at Sony, showing other organizations how they can do the same.

Alice Xiang manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. She also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice previously served on the leadership team of the Partnership on AI. She was a Visiting Scholar at Tsinghua University's Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. Her work has been quoted in a variety of high-profile journals and published in top machine learning conferences, journals, and law reviews.

Sony AI Flagship Project
Augmented Datasheets for Speech Datasets and Ethical Decision-Making by Alice Xiang and Others
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

My Intuitive Bayes Online Courses
1:1 Mentorship with me

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Bob's research focuses on corruption and political economy.
- Measuring corruption is challenging due to the unobservable nature of the behavior.
- The challenge of studying corruption lies in obtaining honest data.
- Innovative survey techniques, like randomized response, can help gather sensitive data.
- Non-traditional backgrounds can enhance statistical research perspectives.
- Bayesian methods are particularly useful for estimating latent variables.
- Bayesian methods shine in situations with prior information.
- Expert surveys can help estimate uncertain outcomes effectively.
- Bob's novel, 'The Bayesian Heatman,' explores academia through a fictional lens.
- Writing fiction can enhance academic writing skills and creativity.
- The importance of community in statistics is emphasized, especially in the Stan community.
- Real-time online surveys could revolutionize data collection in social science.

Chapters:
00:00 Introduction to Bayesian Statistics and Bob Kubinec
06:01 Bob's Academic Journey and Research Focus
12:40 Measuring Corruption: Challenges and Methods
18:54 Transition from Government to Academia
26:41 The Influence of Non-Traditional Backgrounds in Statistics
34:51 Bayesian Methods in Political Science Research
42:08 Bayesian Methods in COVID Measurement
51:12 The Journey of Writing a Novel
01:00:24 The Intersection of Fiction and Academia

Thank you to my Patrons for making this episode possible!
Send us a text

Which models work best for causal discovery and double machine learning?

In this extra episode, we present 4 more conversations with researchers presenting their work at the CLeaR 2024 conference in Los Angeles, California.

What you'll learn:
- Which causal discovery models perform best with their default hyperparameters?
- How to tune your double machine learning model?
- Does putting your paper on arXiv early increase its chances of being accepted at a conference?
- How to deal with causal representation learning with multiple latent interventions?

Time codes:
00:24 Damian Machlanski - Hyperparameter Tuning for Causal Discovery
08:52 Oliver Schacht - Hyperparameter Tuning for DML
14:41 Yanai Elazar - Causal Effect of Early ArXiving on Paper Acceptance
18:53 Simon Bing - Identifying Linearly-Mixed Causal Representations from Multi-Node Interventions
=============================
In Episode 112 of Bionic Planet, titled "Fantasy Football and Dynamic Baselines: New Tools for Impact Assessment," we unpack the often misunderstood concept of dynamic baselines and its origin in synthetic controls, using fantasy football as an analogy. The episode begins with a clear and relatively simple explanation of dynamic baselines, which have emerged as a valuable tool in climate finance. Unlike traditional static baselines, which rely on fixed reference points, dynamic baselines adapt to changing conditions and provide a more accurate measure of impact. We discuss the importance of data and the need for robust methodologies to ensure that we can effectively attribute changes in deforestation and other environmental metrics to specific interventions. Our guests for this episode are Lynn Riley from the American Forest Foundation and David Schoch from TerraCarbon, both of whom have played pivotal roles in advancing the application of synthetic controls in carbon markets. They share insights from their work on the Family Forest Carbon Program, which aims to engage small family landowners in sustainable forest management practices. Through their collaboration, they have developed methodologies that not only improve the accuracy of carbon accounting but also empower landowners to adapt their practices based on real-time feedback. Throughout the episode, we examine the challenges of establishing effective baselines in diverse contexts, particularly in the United States. We highlight the significance of the Forest Inventory and Analysis (FIA) data, which provides a rich source of information for modeling deforestation risk and assessing the impact of various interventions. The conversation also touches on the importance of addressing confounding variables and ensuring that methodologies are applicable across different forest types and ownership structures. 
As we wrap up, we reflect on the broader implications of dynamic baselines for climate finance and the potential for these innovative approaches to drive meaningful change in forest management. By fostering a more responsive and data-driven framework, we can better understand the impacts of our actions and work towards a more sustainable future. Join us for this engaging episode as we bridge the worlds of sports and environmental science, uncovering the lessons that can be learned from both fields in our quest to navigate the Anthropocene.
Timestamps:
00:00:00 - Introduction to Bionic Planet and Episode Overview
00:01:03 - Justin Fields and the NFL Draft Dynamics
00:02:14 - Caleb Williams vs. Justin Fields: A Season Comparison
00:04:27 - Troy Aikman on Rookie Quarterback Struggles
00:05:53 - Sam Darnold's Journey Through the NFL
00:06:58 - Kurt Warner's Unlikely Rise to Success
00:07:48 - Connecting Sports Performance to Climate Impact Assessment
00:08:31 - Challenges in Measuring Success in Climate Finance
00:09:12 - Dynamic Baselines vs. Traditional Baselines
00:10:32 - Introduction of Guests: Lynn Riley and David Schoch
00:11:18 - Overview of the Family Forest Carbon Program
00:11:59 - The Green Municipalities Program in Brazil
00:12:53 - Evaluating the Impact of the Green Municipalities Program
00:13:58 - Synthetic Control Method Explained
00:15:30 - Causal Inference and Its Importance
00:16:52 - Fantasy Football as an Analogy for Synthetic Controls
00:19:00 - Comparison of Real and Synthetic Outcomes
00:20:58 - The Role of Data in Impact Assessment
00:21:31 - Discussion on the Synthetic Control Method Paper
00:22:30 - David Schoch's Contributions to the Research
00:25:05 - Weighting in Synthetic Control Methodology
00:26:32 - Eliminating Uncertainty in Climate Finance
00:28:13 - Linking Methodologies to Improved Forest Management
00:30:59 - Data Sufficiency and Methodology Applicability
00:31:39 - Engaging Small Landowners in Carbon Markets
00:33:43 - The Role of the U.S. Forest Service Data
00:35:41 - Public Consultation and Methodology Development
00:36:09 - Interventions for Improved Forest Management
00:38:36 - Risk Sharing in Carbon Credit Projects
00:40:56 - The Importance of Monitoring and Feedback
00:42:05 - Evolution of the Family Forest Carbon Program
00:50:07 - Challenges in Data Collection and Stakeholder Engagement
Quotes:
"Bionic Planet is the longest-running program in any medium devoted to navigating the Anthropocene, the new epoch defined by man's impact on Earth." - 00:00:10
"Football fans, like all sports fans, love arguing about who is better and who's just lucky." - 00:01:25
"Different people, different circumstances. And how do you tell who's better?" - 00:06:04
"We can restore it. Make it better, greener, more resilient, more sustainable. But how?" - 00:08:09
"Dynamic baselines adapt to shifting conditions and update more frequently." - 00:09:34
"The fundamental concept of synthetic controls is something we all use every day." - 00:16:52
"To see if an intervention works, you can synthetically model a control unit or an imaginary city where the variables are similar." - 00:16:09
"The ultimate goal in both cases is comparison." - 00:19:00
"It's not that the introduction of these methods eliminates uncertainty, but it did eliminate an important source of uncertainty and confounding." - 00:26:42
"There's always going to be a gap between a scenario that you model and what happens in real life, because no models are perfect." - 00:46:45
Genevieve Hayes Consulting Episode 47: Leveraging Causal Inference to Drive Business Value in Data Science
For most people, data science is synonymous with machine learning, and many see the role of the data scientist as simply being to build predictive models. Yet, predictive analytics can only get you so far. Predicting what will happen next is great, but what good is knowing the future if you don't know how to change it? That's where causal analytics can help. However, causal inference is rarely taught as part of traditional prediction-centric data science training. Where it is taught, though, is in the social sciences. In this episode, Joanne Rodrigues joins Dr Genevieve Hayes to discuss how techniques drawn from the social sciences, in particular causal inference, can be combined with data science techniques to give data scientists the ability to understand and change consumer behaviour at scale.
Guest Bio
Joanne Rodrigues is an experienced data scientist with master's degrees in mathematics, political science and demography. She is the author of Product Analytics: Applied Data Science Techniques for Actionable Consumer Insights and the founder of health technology company ClinicPriceCheck.com.
Highlights
(00:49) Combining social sciences with data science
(02:01) Joanne's journey from social sciences to data science
(04:15) Understanding causal inference
(07:40) Real-world applications of causal inference
(12:22) Challenges in causal inference
(19:41) Correlation vs. causation in data science
(26:12) Operationalising randomness in experiments
(27:16) Observational experiments vs. medical trials
(27:47) Designing experiments with existing data
(28:50) Challenges in natural experiments
(29:55) Ethical considerations in experimentation
(31:50) Qualitative frameworks in causal inference
(35:58) Integrating causal inference with machine learning
(38:59) Common techniques in causal inference
(41:02) Marketing causal inference to management
(43:48) Ethical implications of predictive modelling
(48:08) Final advice for data scientists
Links
Connect with Joanne on LinkedIn
Joanne's website
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE
The post Episode 47: Leveraging Causal Inference to Drive Business Value in Data Science first appeared on Genevieve Hayes Consulting and is written by Dr Genevieve Hayes.
Root cause analysis, model explanations, causal discovery. Are we facing a missing benchmark problem? Or not anymore? In this special episode, we travel to Los Angeles to talk with researchers at the forefront of causal research, exploring their projects, key insights, and the challenges they face in their work.
Time codes:
00:15 - 02:40 Kevin Debeire
02:41 - 06:37 Yuchen Zhu
06:37 - 10:09 Konstantin Göbler
10:09 - 17:05 Urja Pawar
17:05 - 23:16 William Orchard
Enjoy!
Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
*Causal Bandits at AAAI 2024 || Part 2*
In this special episode we interview researchers who presented their work at AAAI 2024 in Vancouver, Canada.
Time codes:
00:12 - 04:18 Kevin Xia (Columbia University) - Transportability
04:19 - 09:53 Patrick Altmeyer (Delft) - Explainability & black-box models
09:54 - 12:24 Lokesh Nagalapatti (IIT Bombay) - Continuous treatment effects
12:24 - 16:06 Golnoosh Farnadi (McGill University) - Causality & responsible AI
16:06 - 17:37 Markus Bläser (Saarland University) - Fast identification of causal parameters
17:37 - 22:37 Devendra Singh Dhami (TU/e) - The future of causal AI
From being an English teacher in Central Lesotho to being a Peace Corps Volunteer in Madagascar to working with the World Bank and the United Nations, Assistant Professor Molly Offer-Westort chose to experience various opportunities before embarking on an academic life. Now, she uses data science and statistical tools to understand people's online behaviors and help policymakers make better decisions. Tune in to hear Professor Offer-Westort talk about her childhood dreams and how her research now contributes to the public in understanding the 2024 U.S. Presidential Election.
Send us a text Causal Bandits at AAAI 2024 || Part 1In this special episode we interview researchers who presented their work at AAAI 2024 in Vancouver, Canada and participants of our workshop on causality and large language models (LLMs)Time codes:00:00 Intro00:20 Osman Ali Mian (CISPA) - Adaptive causal discovery for time series04:35 Emily McMilin (Independent/Meta) - LLMs, causality & selection bias07:36 Scott Mueller (UCLA) - Causality for EV incentives12:41 Andrew Lampinen (Google DeepMind) - Causality from passive data15:16 Ali Edalati (Huawei) - About Causal Parrots workshop15:26 Adbelrahman Zayed (MILA) - About Causal Parrots workshop Support the showCausal Bandits PodcastCausal AI || Causal Machine Learning || Causal Inference & DiscoveryWeb: https://causalbanditspodcast.comConnect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4
Meet the Godfather of Modern Causal Inference. His work has quite literally changed the course of my life, and I am honored and incredibly grateful we could meet for this great conversation in his home in Los Angeles. To anybody who knows something about modern causal inference, he needs no introduction. He loves history, philosophy and music, and I believe it's fair to say that he's the godfather of modern causality. Ladies and gentlemen, please welcome professor Judea Pearl. Subscribe to never miss an episode.
About The Guest
Judea Pearl is a computer scientist and the creator of the Structural Causal Model (SCM) framework for causal inference. In 2011, he was awarded the Turing Award, the highest distinction in computer science, for his pioneering work on Bayesian networks and graphical causal models and "fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning".
Connect with Judea:
Judea on Twitter/X
Judea's webpage
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.
Connect with Alex:
Alex on the Internet
Links
Pearl, J. - "The Book of Why"
Kahneman, D. - "Thinking, Fast and Slow"
Should we build the Causal Experts Network? Share your thoughts in the survey.
Can we say something about YOUR personal treatment effect? The estimation of individual treatment effects is the Holy Grail of personalized medicine. It's also extremely difficult. Yet, Scott is not discouraged from studying this topic. In fact, he quit a pretty successful business to study it. In a series of papers, Scott describes how combining experimental and observational data can help us understand individual causal effects. Although this sounds enigmatic to many, the intuition behind this mechanism is simpler than you might think. In the episode we discuss:
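The experimental-plus-observational combination Scott studies has a classic concrete instance: Tian and Pearl's bounds on the probability of necessity and sufficiency (PNS). A sketch with made-up probabilities (the variable names are mine, not from the episode):

```python
# Sketch: bounding the probability of necessity and sufficiency (PNS)
# by combining experimental and observational data (Tian & Pearl, 2000).
# All probabilities below are hypothetical.

# Experimental (interventional) quantities
p_y_do_x  = 0.7   # P(y | do(x))  - outcome rate under treatment
p_y_do_x_ = 0.3   # P(y | do(x')) - outcome rate under no treatment

# Observational joint probabilities P(x,y), P(x,y'), P(x',y), P(x',y')
p_xy, p_xy_, p_x_y, p_x_y_ = 0.35, 0.15, 0.1, 0.4

lower = max(
    0.0,
    p_y_do_x - p_y_do_x_,
    (p_xy + p_x_y) - p_y_do_x_,   # P(y) - P(y | do(x'))
    p_y_do_x - (p_xy + p_x_y),    # P(y | do(x)) - P(y)
)
upper = min(
    p_y_do_x,
    1.0 - p_y_do_x_,              # P(y' | do(x'))
    p_xy + p_x_y_,                # P(x, y) + P(x', y')
    p_y_do_x - p_y_do_x_ + p_xy_ + p_x_y,
)
```

With these numbers the observational data tightens the upper bound (0.65 instead of 0.7), which is exactly the kind of gain the episode alludes to.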
Video version of this episode is available here. Causal personalization? Dima did not love computers enough to forget about his passion for understanding people. His work at Booking.com focuses on recommender systems and personalization, and their intersection with A/B testing, constrained optimization and causal inference. Dima's passion for building things started early in his childhood and continues to this day, but recent events in his life also bring new opportunities to learn. In the episode, we discuss: What can we learn about human psychology from building causal recommender systems? What is it like to work in a culture of radical experimentation? Why should you not skip your operations research classes? Ready to dive in?
About The Guest
Dima Goldenberg is a Senior Machine Learning Manager at Booking.com, Tel Aviv, where he leads machine learning efforts in recommendations and personalization utilizing uplift modeling. Dima obtained his MSc at Tel Aviv University and is currently pursuing a PhD on causal personalization at Ben-Gurion University of the Negev.
He has led multiple conference workshops and tutorials on causality and personalization, and his research has been published in top journals and conferences including WWW, CIKM, WSDM, SIGIR, KDD and RecSys.
Connect with Dima:
Dima on LinkedIn
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex:
- Alex on the Internet
Links
The full list of links is available here
#machinelearning #causalai #causalinference #causality
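As a rough illustration of the uplift modeling mentioned in Dima's bio: uplift is the treated-minus-control difference in outcome rates, estimated per segment or per user. A toy sketch with invented A/B-test logs (real systems use meta-learners over many features):

```python
# Minimal uplift-modeling sketch (hypothetical data): estimate how much a
# "recommendation shown" treatment changes conversion rates per user segment,
# i.e., the difference between treated and control outcome rates.

# (segment, treated?, converted?) - made-up logged A/B test data
logs = [
    ("mobile", True, 1), ("mobile", True, 1), ("mobile", True, 0),
    ("mobile", False, 1), ("mobile", False, 0), ("mobile", False, 0),
    ("desktop", True, 0), ("desktop", True, 1),
    ("desktop", False, 1), ("desktop", False, 1),
]

def uplift(segment):
    """Treated conversion rate minus control conversion rate for a segment."""
    treated = [y for s, t, y in logs if s == segment and t]
    control = [y for s, t, y in logs if s == segment and not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

mobile_uplift = uplift("mobile")    # positive: showing helps here
desktop_uplift = uplift("desktop")  # negative: showing hurts here
```

Personalization then means targeting the treatment only where the estimated uplift is positive.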
Was the Deep Learning Revolution Bad for Causal Inference? Did the deep learning revolution slow down progress in causal research? Can causality help in finding drug repurposing candidates? What are the main challenges in using causal inference at scale? Ehud Karavani, the author of the Causallib Python library and a researcher at IBM Research, shares his experiences and thoughts on these challenging questions. Ehud believes in the power of good code, but for him code is not only about software development. He sees coding as an inseparable part of modern-day research. A powerful conversation for anyone interested in applied causal modeling.
In this episode we discuss:
- Can causality help in finding drug repurposing candidates?
- Challenges in data processing for causal inference at scale
- The motivation behind the Python causal inference library Causallib
- Working at IBM Research
Ready to dive in?
About The Guest
Ehud Karavani, MSc, is a Research Staff Member at IBM Research in the Causal Machine Learning for Healthcare & Life Sciences group. He focuses on high-throughput causal inference for finding new indications for existing drugs using electronic health records and insurance claims data. He's the original author of Causallib - one of the first Python libraries specialized in causal inference.
Connect with Ehud:
Ehud on Twitter/X
Ehud on LinkedIn
Ehud's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.
Connect with Alex:
Alex on the Internet
Links
Links for this episode can be found here
Video version of this episode can be found here.
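For readers new to the kind of estimator libraries like Causallib package up, here is a generic inverse probability weighting (IPW) sketch in plain Python: hypothetical data, propensities taken as known, and no claim to reflect Causallib's actual API.

```python
# Generic inverse probability weighting (IPW) sketch. Each unit is weighted
# by 1 / P(received its own treatment), which rebalances treated and control
# groups as if treatment had been assigned independently of covariates.
# Records and propensities below are made up; a real pipeline fits a
# propensity model from covariates.

# (treated?, outcome, propensity P(T=1 | covariates))
records = [
    (1, 1.0, 0.8), (1, 0.9, 0.5), (0, 0.4, 0.8),
    (0, 0.5, 0.5), (1, 0.7, 0.2), (0, 0.3, 0.2),
]

def ipw_mean(treated_flag):
    """Weighted mean outcome for one treatment arm."""
    num = den = 0.0
    for t, y, e in records:
        if t == treated_flag:
            w = 1 / e if t == 1 else 1 / (1 - e)
            num += w * y
            den += w
    return num / den

ate = ipw_mean(1) - ipw_mean(0)  # average treatment effect estimate
```

At the "high-throughput" scale Ehud describes, the same estimator is run across thousands of candidate drug-outcome pairs.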
Sheree Bekker & Stephen Mumford are Co-directors of the Feminist Sport Lab and have a book coming in Spring 2025: "Open Play: the case for feminist sport". Reaktion Books (UK), University of Chicago Press (US). Sheree Bekker: Associate Professor, University of Bath, Department for Health, Centre for Qualitative Research, Centre for Health and Injury and Illness Prevention in Sport. Stephen Mumford: Professor of Metaphysics, Durham University. Author of Dispositions (Oxford, 1998), Russell on Metaphysics (Routledge, 2003), Laws in Nature (Routledge, 2004), David Armstrong (Acumen, 2007), Watching Sport: Aesthetics, Ethics and Emotion (Routledge, 2011), Getting Causes from Powers (Oxford, 2011, with Rani Lill Anjum), Metaphysics: A Very Short Introduction (Oxford, 2012) and Causation: A Very Short Introduction (Oxford, 2013, with Rani Lill Anjum). Editor of George Molnar's posthumous Powers: A Study in Metaphysics (Oxford, 2003) and Metaphysics and Science (Oxford, 2013, with Matthew Tugby).
Episode notes:
Feminist Sport Lab: https://www.feministsportlab.com
Causation: A Very Short Introduction by Stephen Mumford & Rani Lill Anjum: https://academic.oup.com/book/616
Faye Norby, Iditarod champion & epidemiologist: https://www.kfyrtv.com/2024/03/28/faye-norby-finishes-iditarod-trail-womens-foot-champion/?outputType=amp
Follow along on Twitter:
The American Journal of Epidemiology: @AmJEpi
Ellie: @EpiEllie
Lucy: @LucyStats
Greetings listeners! It is a pleasure to introduce this week's guest on the podcast, Ashesh Rambachan, an assistant professor of economics at MIT. I wanted to talk to Ashesh for two main reasons. First, because I wanted to, and second, because I was aware of some of his recent work in econometrics. His recent article on evaluating the fragility of parallel trends in difference-in-differences just came out in the Review of Economic Studies. I'm also intrigued by his work with Sendhil Mullainathan on machine learning and algorithmic fairness, as well as generative AI. Having a specialist in causal inference, artificial intelligence and machine learning is rare, so I thought sitting down with him to learn more about his story would be a lot of fun, not just for me but for others too. With that said, here you go! I hope you enjoy the interview! Thank you again for all your support! Scott's Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Get full access to Scott's Substack at causalinf.substack.com/subscribe
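For context on the parallel-trends work mentioned above: the basic 2x2 difference-in-differences estimate is a one-liner, and parallel trends is the assumption that makes it causal. The group means below are invented.

```python
# Minimal 2x2 difference-in-differences sketch (hypothetical group means).
# DiD assumes parallel trends: absent treatment, both groups' outcomes would
# have moved in parallel. Rambachan's work is about evaluating how fragile
# conclusions are when that assumption is only approximately true.

treat_pre, treat_post = 5.0, 9.0   # treated group means, before/after
ctrl_pre, ctrl_post = 4.0, 6.0     # control group means, before/after

# Subtract the control group's change (the shared trend) from the
# treated group's change to isolate the treatment effect.
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
```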
Editor's Summary by Preeti Malani, MD, MSJ, Deputy Editor of JAMA, the Journal of the American Medical Association, for the June 4, 2024, issue.
Causal AI: The Melting Pot. Can Physics, Math & Biology Help Us? What is the relationship between physics and causal models? What can the science of non-human animal behavior teach causal AI researchers? Bernhard Schölkopf's rich background and experience allow him to combine perspectives from computation, physics, mathematics, biology, the theory of evolution, psychology and ethology to build a deep understanding of the underlying principles that govern complex systems and intelligent behavior. His pioneering work in causal machine learning has revolutionized the field, providing new insights that enhance our ability to understand causal relationships and mechanisms in both natural and artificial systems.
In the episode we discuss:
- Does evolution favor causal inference over correlation-based learning?
- Can differential equations help us generalize structural causal models?
- What new book is Bernhard working on?
- Can ethology inspire causal AI researchers?
Ready to dive in?
About The Guest
Bernhard Schölkopf, PhD is a Director at the Max Planck Institute for Intelligent Systems. He's one of the cofounders of the European Lab for Learning & Intelligent Systems (ELLIS) and a recipient of the ACM Allen Newell Award, the BBVA Foundation Frontiers of Knowledge Award, and more. His contributions to modern machine learning are hard to overestimate. He's an affiliated professor at ETH Zürich and an honorary professor at the University of Tübingen and the Technical University of Berlin. His pioneering work on causal inference and causal machine learning has inspired thousands of researchers and practitioners worldwide.
Connect with Bernhard:
Bernhard on Twitter/X
June 18th is "Maya Petersen" day in San Francisco, in honor of her work building disease models that guided the region through the early days of COVID and saved countless lives. With projects spanning from developing HIV prevention strategies in East Africa to shaping new Medicaid models in California, the UC Berkeley epidemiologist is building a future where local public health leaders have the tools and data to ask and answer complex policy decisions in real time. Now that's a world I want to live in.
We discuss:
- How much better our pandemic response would have been if Public Health had access to integrated and linked data
- Her work to bring sophisticated data tools to the point of decision in East Africa
- How California is building population management infrastructure
San Francisco's Director of Health, Grant Colfax, taught her an important lesson about showing up and helping:
"I remember… saying, 'You know what? You really need to find somebody who's an expert in this, I'm not an expert in this.' And he said, 'Okay, Maya, but if you're gonna find me someone it needs to be in the next 24 hours, because I need help.' And it was just a reminder that, you know, you're not always going to be an expert, sometimes you just need to show up, do your best… be clear about your uncertainty and communicate well, and that can be… a big service"
Relevant Links:
- Local Epidemic Modeling for the San Francisco Department of Public Health
- San Francisco's COVID strategy
- Multi-sectorial Approach to HIV in East Africa
- Maya Petersen Day in San Francisco
- Maya's UC Berkeley page
About Our Guest:
Dr. Maya L. Petersen is Professor of Biostatistics and Epidemiology at the University of California, Berkeley. Dr.
Petersen's methodological research focuses on the development and application of novel causal inference methods to problems in health, with an emphasis on longitudinal data and adaptive treatment strategies (dynamic regimes), machine learning methods, adaptive designs, and study design and analytic strategies for cluster randomized trials. She is a Founding Editor of the Journal of Causal Inference and serves on the editorial board of Epidemiology. Her applied work focuses on developing and evaluating improved HIV prevention and care strategies. She currently serves as co-PI (with Dr. Diane Havlir and Dr. Moses Kamya) for the Sustainable East Africa Research in Community Health consortium, and as co-PI (with Dr. Elvin Geng) for the ADAPT-R study (a sequential multiple assignment randomized trial of behavioral interventions to optimize retention in HIV care).
Source: https://publichealth.berkeley.edu/people/maya-petersen
Connect With Us:
For more information on The Other 80 please visit our website - www.theother80.com. To connect with our team, please email
What makes two tech giants collaborate on an open source causal AI package? Emre's adventure with causal inference and causal AI started before it was trendy. He's one of the original core developers of DoWhy - one of the most popular and powerful Python libraries for causal inference - and a researcher focused on the intersection of causal inference, causal discovery, generative modeling and social impact. His unique perspective, inspired by his experience with low-level programming combined with his vivid interest in how humans interact with technology, is driven by a deep-seated desire to solve problems that matter to people. In the episode we discuss:
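A hand-rolled version of the backdoor adjustment that libraries like DoWhy automate: stratify on a confounder, compare treated and control outcomes within each stratum, then average over the confounder's distribution. Records below are made up, and this is a sketch of the general technique, not DoWhy's API.

```python
# Backdoor adjustment by stratification (hypothetical data).
# (w, t, y) records: binary confounder, binary treatment, outcome.
data = [
    (0, 0, 0.2), (0, 0, 0.4), (0, 1, 0.7), (0, 1, 0.9),
    (1, 0, 0.5), (1, 1, 0.8), (1, 1, 1.0), (1, 0, 0.7),
]

def adjusted_effect(records):
    """Average treated-vs-control gap, weighted by each stratum's share."""
    total = 0.0
    for w in {w for w, _, _ in records}:
        treated = [y for w2, t, y in records if w2 == w and t == 1]
        control = [y for w2, t, y in records if w2 == w and t == 0]
        share = sum(1 for w2, _, _ in records if w2 == w) / len(records)
        total += share * (sum(treated) / len(treated) - sum(control) / len(control))
    return total

ate = adjusted_effect(data)
```

Causal-inference libraries wrap the same logic behind graph-based identification, so the user declares the causal model and the adjustment set is derived rather than hand-picked.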
Recorded on Jan 17, 2024 in London, UK. Video version available here. What makes so many predictions about the future of AI wrong? And what's possible with the current paradigm? From medical imaging to song recommendations, the association-based paradigm of learning can be helpful, but it is not sufficient to answer our most interesting questions. Meet Athanasios (Thanos) Vlontzos, who looks for inspiration everywhere around him to build causal machine learning and causal inference systems at Spotify's Advanced Causal Inference Lab.
In the episode we discuss:
- Why is causal discovery a better riddle than causal inference?
- Will radiologists be replaced by AI in 2024 or 2025?
- What are causal AI skeptics missing?
- Can causality emerge in Euclidean latent space?
Ready to dive in?
About The Guest
Athanasios (Thanos) Vlontzos, PhD is a Research Scientist at the Advanced Causal Inference Lab at Spotify. Previously, he worked at Apple and at the SETI Institute with NASA stakeholders, and published in some of the best scientific journals, including Nature Machine Intelligence. He specializes in causal modeling, causal inference, causal discovery and medical imaging.
Connect with Athanasios:
- Athanasios on Twitter/X
- Athanasios on LinkedIn
- Athanasios's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.
Connect with Alex:
- Alex on the Internet
Links
The full list of links can be found here.
Miguel A. Hernán, MD, DrPH, professor of epidemiology, Harvard T.H. Chan School of Public Health, discusses Target Trial Emulation: A Framework for Causal Inference From Observational Data with JAMA Statistical Editor Roger J. Lewis, MD, PhD. Related Content: Target Trial Emulation
Ingrid is a doctoral student in Epidemiology at the Dalla Lana School of Public Health at the University of Toronto. Winning cookie recipe Follow along on Twitter: The American Journal of Epidemiology: @AmJEpi Ellie: @EpiEllie Lucy: @LucyStats
Video version available here. Are markets efficient, and if not, can causal models help us leverage the inefficiencies? Do we really need to understand what we're modeling? What's the role of symmetry in modeling financial markets? What are the main challenges in applying causal models in finance? Ready to dive in?
About The Guest
Alexander Denev is the CEO of Turnleaf Analytics. He's an author of multiple books on financial modeling and a former Head of AI (Financial Services) at Deloitte. He lectures at the University of Oxford and has worked for organizations like IHS Markit, The Royal Bank of Scotland (RBS), and the European Investment Bank. He has over 20 years of experience in finance, data science, and modeling. His first book about causal models was published well ahead of its time.
Connect with Alexander:
- Alexander on LinkedIn
- Alexander's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.
Connect with Alex:
- Alex on the Internet
Full list of links can be found here.
#machinelearning #causalai #causalinference #causality #finance #CausalBanditsPodcast
Love Causal Bandits Podcast? Help us bring more quality content: Support the show. Video version of this episode is available here.
Causal Inference with LLMs and Reinforcement Learning Agents? Do LLMs have a world model? Can they reason causally? What's the connection between LLMs, reinforcement learning, and causality? Andrew Lampinen, PhD (Google DeepMind) shares insights from his research on LLMs, reinforcement learning, causal inference and generalizable agents. We also discuss the nature of intelligence, rationality, and how they play with evolutionary fitness. Join us in the journey! Recorded on Dec 1, 2023 in London, UK.
About The Guest
Andrew Lampinen, PhD is a Senior Research Scientist at Google DeepMind. He holds a PhD in Cognitive Psychology from Stanford University. He's interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment.
Connect with Andrew:
- Andrew on Twitter/X
- Andrew's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex:
- Alex on the Internet
Links
Papers
- Lampinen et al. (2023) - "Passive learning of active causal strategies in agents and language models" (https://arxiv.org/pdf/2305.16183.pdf)
- Dasgupta, Lampinen, et al. (2022) - "Language models show human-like content effects on reasoning tasks" (https://arxiv.org/abs/2207.07051)
- Santoro, Lampinen, et al. (2021) - "Symbolic behaviour in artificial intelligence"
Lucy and Ellie chat about immortal time bias, discussing a new paper Ellie co-authored on clone-censor-weights. The Clone-Censor-Weight Method in Pharmacoepidemiologic Research: Foundations and Methodological Implementation: https://link.springer.com/article/10.1007/s40471-024-00346-2 Immortal time in pregnancy: https://pubmed.ncbi.nlm.nih.gov/36805380/ Follow along on Twitter: The American Journal of Epidemiology: @AmJEpi Ellie: @EpiEllie Lucy: @LucyStats
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses | 1:1 Mentorship with me

Structural Equation Modeling (SEM) is a key framework in causal inference. As I'm diving deeper and deeper into these topics to teach them and, well, finally understand them, I was delighted to host Ed Merkle on the show. A professor of psychological sciences at the University of Missouri, Ed discusses his work on Bayesian applications to psychometric models and model estimation, particularly in the context of Bayesian SEM (BSEM). He explains the importance of BSEM in psychometrics and the challenges encountered in its estimation. Ed also introduces his blavaan package in R, which enhances researchers' capabilities in BSEM and has been instrumental in the dissemination of these methods. Additionally, he explores the role of Bayesian methods in forecasting and crowdsourcing wisdom. When he's not thinking about stats and psychology, Ed can be found running, playing the piano, or playing 8-bit video games.

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser and Julio.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Takeaways:
- Bayesian SEM is a powerful framework in psychometrics that allows for the estimation of complex models involving multiple variables and causal relationships.
- ...
Support the show. Video version available on YouTube.

Do We Need Probability? Causal inference lies at the very heart of the scientific method. Randomized controlled trials (RCTs; also known as randomized experiments or A/B tests) are often called "the gold standard for causal inference". It's a less known fact that randomized trials have their limitations in answering causal questions. What are the most common myths about randomization? What causal questions can and cannot be answered with randomized experiments? Finally, why do we need probability? Join me on a fascinating journey into clinical trials, randomization, and generalization. Ready to meet Stephen Senn?

About The Guest: Stephen Senn, PhD, is a statistician and consultant specializing in clinical trials for drug development. He is a former Group Head at Ciba-Geigy and has served as a professor at the University of Glasgow and University College London (UCL). He is the author of "Statistical Issues in Drug Development," "Crossover Trials in Clinical Research," and "Dicing with Death". Connect with Stephen: - Stephen on Twitter/X - Stephen on LinkedIn - Stephen's web page

About The Host: Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality. Connect with Alex: - Alex on the Internet

Links: Find the links here

Causal Bandits Team: Project Coordinator: Taiba Malik
Mark van der Laan is a professor of statistics at the University of California, Berkeley. His research focuses on developing statistical methods to estimate causal and non-causal parameters of interest, based on potentially complex and high-dimensional data from randomized clinical trials, observational longitudinal studies, or cross-sectional studies.

- Center for Targeted Learning, Berkeley: https://ctml.berkeley.edu/
- A causal roadmap: https://pubmed.ncbi.nlm.nih.gov/37900353/
- Short course on causal learning: https://ctml.berkeley.edu/introduction-causal-inference
- Handbook on the TLverse (Targeted Learning in R): https://ctml.berkeley.edu/publications/targeted-learning-handbook-causal-machine-learning-and-inference-tlverse-r-software
- Mark on Twitter: @mark_vdlaan

Follow along on Twitter: The American Journal of Epidemiology: @AmJEpi | Ellie: @EpiEllie | Lucy: @LucyStats
Support the show. Video version available on YouTube. Recorded on Nov 12, 2023 in an undisclosed location.

From Systems Biology to Causality. Robert always loved statistics. He went to study systems biology, driven by his desire to model natural systems. His perspective on causal inference encompasses graphical models, Bayesian inference, reinforcement learning, generative AI, and cognitive science. It allows him to think broadly about the problems we encounter in modern AI research. Is reward enough, and what's the next big thing in causal (generative) AI? Let's see!

About The Guest: Robert Osazuwa Ness is a Senior Researcher at Microsoft Research. He explores how to combine causal discovery, causal inference, deep probabilistic modeling, and programming languages in search of new capabilities for AI systems. Connect with Robert: - Robert on Twitter/X - Robert on LinkedIn - Robert's web page

About The Host: Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality. Connect with Alex: - Alex on the Internet

Links: Find the links here

Causal Bandits Team: Project Coordinator: Taiba Malik | Video and Audio Editing: Navneet Sharma, Aleksander Molak

#causalai #causalinference #causality
Recorded on Sep 27, 2023 in München, Germany. Video version available on YouTube.

From supply chain to large language models and back. Ishansh realized the potential of data when he was just 10 years old, during his time as a junior cricket player. His journey led him to ask questions about the mechanisms behind observed events. Can large language models (LLMs) help in building an industrial causal graph? What inspires stakeholders to share their knowledge, and which causal discovery algorithms have been most effective for Ishansh's supply chain use case? Hear the insights from one of the BMW Group's fastest-rising young data science talents. Ready?

About The Guest: Ishansh Gupta is a Lead Data Scientist at BMW Group. Previously, he worked for several companies, including the legendary German sports club SV Werder Bremen. He studied Computer Science and co-founded an educational startup during his study years. He has supervised or supported students at various universities, including the Munich-based TUM and MIT. Connect with Ishansh: - Ishansh on Twitter/X - Ishansh on LinkedIn

About The Host: Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality. Connect with Alex: - Alex on the Internet

Links: Papers: - Kıcıman et al. (2023) - "Causal Reasoning and Large Language Models" - Peters et al. (2014) - "Causal Discovery with Continuous Additive Noise Models" [RESIT algorithm]
Books: - Molak (2023) - "Causal Inference and Discovery in Python" - Pearl & Mackenzie (2019) - "The Book of Why"
Other: - causaLens

Causal Bandits Team: Project Coordinator: Taiba Malik | Video and Audio Editing
Correlation does not equal causation, as anyone who has studied statistics or data science knows. But understanding causality isn't just important when you're developing models. If you're working in business and want to be recognised for your work, it's essential to be able to demonstrate causality between what you do and the benefit flowing through to the business. In this episode, Mark Stouse joins Dr Genevieve Hayes to discuss how data science can be used to comprehend the underlying cause-and-effect relationships in business data.

Guest Bio: Mark Stouse is the CEO of Proof Analytics, an AI-driven marketing analytics platform. Prior to becoming an analytics software CEO, Mark had a successful career in B2B marketing, and in 2014 he was named Innovator of the Year at the Holmes Report In2 SABRE Awards for his work in tying marketing and communication investment to key business performance metrics.

Talking Points:
- The benefits to organisations of understanding causality.
- How such techniques can be applied to use cases and disciplines beyond marketing analytics.
- How data scientists can drive conversations about analytics at the C-suite level to maximise their impact.
- The potential future impact of generative AI on data science and the world in general.

Links: Connect with Mark on LinkedIn | Proof Analytics | Connect with Genevieve on LinkedIn. Be among the first to hear about the release of each new podcast episode by signing up HERE.
Recorded on Oct 15, 2023 in São Paulo, Brazil. Video version of this episode is available on YouTube.

Causal Inference in Fintech? For Brave and True Only. From rural Brazil to one of the country's largest banks, Matheus' journey could inspire many. Similarly to our previous guest, Iyar Lin, Matheus was interested in politics but switched to economics, where he fell in love with math. Observing the state of the industry, he quickly realized that without causality, we cannot answer some of the most interesting business questions. His popular online book "Causal Inference for The Brave and True" was a side effect of his strong drive to learn causal inference and causal machine learning, while collecting as much feedback as possible along the way. Did he succeed?

About The Guest: Matheus Facure is a Staff Data Scientist at Nubank and the author of "Causal Inference for The Brave and True" and "Causal Inference in Python". Connect with Matheus: - Matheus on Twitter/X - Matheus on LinkedIn - Matheus's web page

About The Host: Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality. Connect with Alex: - Alex on the Internet

Links: Books: - Facure (2023) - "Causal Inference in Python" - Molak (2023) - "Causal Inference and Discovery in Python"
Webcasts: - AMA Webcasts

Causal Bandits Team: Project Coordinator: Taiba Malik | Video and Audio Editing: Navneet Sharma, Aleksander Molak

#machinelearning #causalai #causalinference #causality #fintech
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Listen to the full episode: https://learnbayesstats.com/episode/97-probably-overthinking-statistical-paradoxes-allen-downey/
Watch the interview: https://www.youtube.com/watch?v=KgesIe3hTe0

Thank you to my Patrons for making this episode possible! Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)