Do you want to know more about novel methods in epidemiology but don't have the time to read a bunch of papers on the topic? Do you want to keep current on the latest developments but can't go back to school for another degree? Do you just want the big-picture understanding so you can follow along? SERious EPI is a new podcast from the Society for Epidemiologic Research hosted by Hailey Banack and Matt Fox. The podcast will include interviews with leading epidemiology researchers who are experts on cutting-edge and novel methods. Interviews will focus on why these methods are so important, what problems they solve, and how they are currently being used. The podcast is targeted towards current students as well as practicing epidemiologists who want to learn more from experts in the field.
Sue Bevan - Society for Epidemiologic Research
In this episode we talk to Dr. Timothy Lash of Emory University about Quantitative Bias Analysis (QBA). We talk about how QBA is any method that quantifies the impact of non-random error. We talk about direction, magnitude, and uncertainty. We differentiate QBA from sensitivity analysis, and we talk about how to identify key sources of bias. We talk about bias models and bias parameters and how we draw inferences from bias analyses. We talk about validation data and where you can get it. We talk about why predictive values often aren't as useful as classification values (sensitivity and specificity) for bias analysis. We talk about how bias analysis can strengthen your results and how our intuition about the impact of biases isn't always great. And we talk about how bias analysis can guide your future research. We differentiate between simple and probabilistic bias analysis. And we end with some examples of cases where bias analysis is really helpful.
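For anyone who wants to see what the simple bias analysis we discuss actually looks like on paper, here is a minimal sketch of our own (not code from the episode or from any textbook) correcting a 2x2 table for non-differential exposure misclassification, with sensitivity and specificity as the bias parameters. All counts and parameter values below are made up purely for illustration.

```python
# Minimal sketch of a simple (fixed-parameter) bias analysis for
# non-differential exposure misclassification. All counts and bias
# parameters are hypothetical, chosen only to show the arithmetic.

def correct_counts(exposed, unexposed, se, sp):
    """Back-calculate 'true' exposed/unexposed counts from observed counts,
    given assumed sensitivity (se) and specificity (sp) of exposure
    classification. Derived from:
    observed_exposed = true_exposed*se + true_unexposed*(1 - sp)."""
    total = exposed + unexposed
    true_exposed = (exposed - total * (1 - sp)) / (se + sp - 1)
    return true_exposed, total - true_exposed

# Observed 2x2 table (hypothetical): cases and non-cases by exposure
a, b = 40, 60        # exposed cases, unexposed cases
c, d = 20, 80        # exposed non-cases, unexposed non-cases
se, sp = 0.85, 0.95  # assumed bias parameters (the "validation data" step)

A, B = correct_counts(a, b, se, sp)   # corrected case counts
C, D = correct_counts(c, d, se, sp)   # corrected non-case counts

print(f"observed OR  = {(a * d) / (b * c):.2f}")   # ~2.67
print(f"corrected OR = {(A * D) / (B * C):.2f}")   # ~3.37, farther from the null
```

A probabilistic bias analysis would replace the single sensitivity/specificity pair with distributions, repeat the correction many times, and summarize the resulting distribution of corrected estimates.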
In this episode Hailey and Matt talk about Matt's technology troubles (including having his computer just decide not to let him log on) before discussing regression discontinuity and difference-in-difference approaches as part of quasi-experimental methods. We focus on what quasi-experimental means and encompasses and its relation to natural experiments. We talk about who owns interrupted time series (epidemiologists, economists, other social scientists?). Matt again admits he can't define exogeneity. We talk about how both designs exploit a threshold where there is a rapid change in the probability of being exposed, how we think of those on either side of the discontinuity close to the threshold as exchangeable, and how we can estimate effects in that population under a set of assumptions. And we talk about how difference-in-difference takes this same approach but adds a control group. And we debate whether the last difference is singular or plural.
In this episode we talk to Dr. Usama Bilal of Drexel University about Regression Discontinuity Design (RDD) and Difference-in-Differences (DiD), two quasi-experimental methods that fall under the instrumental variables framework we discussed in previous episodes. We talk about what RDD is, the different types (fuzzy vs. sharp), and what we are actually estimating (LATE vs. CACE). We talk about the bias-variance tradeoff in how far from the threshold we choose to draw inferences. We talk about the assumptions that are needed for these methods to give valid estimates of effects. Then we talk about DiD and how this is a form of RDD with a second group that does not experience the discontinuity serving as a control. And we talk about the additional assumptions needed for this approach (e.g., parallel trends).
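As a toy illustration of the DiD logic (our own made-up numbers, nothing from the episode): under the parallel-trends assumption, the control group's change over time stands in for the change the treated group would have experienced without the intervention.

```python
# Toy difference-in-differences calculation with hypothetical group means.
# Under the parallel-trends assumption, the control group's pre-to-post
# change estimates the counterfactual change in the treated group.

treated_pre, treated_post = 30.0, 20.0   # hypothetical outcome means
control_pre, control_post = 28.0, 26.0

treated_change = treated_post - treated_pre     # -10
control_change = control_post - control_pre     # -2  (secular trend)
did_estimate = treated_change - control_change  # -8  (change attributed to the exposure)
print(did_estimate)
```

A sharp RDD, by contrast, compares units just above and just below the threshold at a single point in time.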
In this episode, Hailey and Matt discuss whether IVs are rebellious or magical or the midlife crisis of methods. We talk about how they deal with confounding problems. We talk about how they are used to attempt to mimic randomization and the assumptions for IVs. We talk about why it's so helpful to think about who gets the exposure and why for causal inference. We talk about how IVs fit in with the target trial framework and what it might tell us about how to teach intro epi. We talk about what estimand IVs estimate. And we relitigate the soda vs pop discussion.
In this episode, we discuss instrumental variables with Dr. Rita Hamad of Harvard's TH Chan School of Public Health. This episode is focused on the first part of Chapter 28 of Modern Epidemiology, 4th edition, on quasi-experimental methods. We start with what quasi-experimental designs are and why we would want to use them (and whether more epidemiologists are being exposed to them). We also talk about why these methods are more common in economics than in epi. We talk about how these methods try to take advantage of something that approximates randomization to estimate causal effects. We talk about what instrumental variables are and the conditions that must be met for a variable to be an instrument. We focus on the strengths and limitations of the methods and when it makes the most sense to use them. We talk about what happens when you violate the assumptions of IV. We talk about weak and strong IVs, and we talk about Mendelian randomization and its role in epi. And we ask the age-old question: how do you find the elusive instrumental variable?
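For listeners who like to see the arithmetic, the simplest IV estimator is the Wald ratio, sketched below with entirely hypothetical numbers of our own: it is just the instrument's effect on the outcome divided by the instrument's effect on the exposure.

```python
# Wald (ratio) instrumental variable estimator for a binary instrument Z.
# Numerator: effect of the instrument on the outcome (intention-to-treat-like).
# Denominator: effect of the instrument on the exposure (instrument strength).
# All numbers are hypothetical.

risk_y_z1, risk_y_z0 = 0.12, 0.09   # outcome risk by instrument level
prob_x_z1, prob_x_z0 = 0.70, 0.30   # exposure prevalence by instrument level

wald_estimate = (risk_y_z1 - risk_y_z0) / (prob_x_z1 - prob_x_z0)
# 0.03 / 0.40 = 0.075: roughly a risk difference for the exposure among
# "compliers", under the usual IV assumptions.
print(wald_estimate)
```

The denominator makes the weak-instrument problem visible: when the instrument barely moves the exposure, the ratio becomes unstable and very sensitive to violations of the IV conditions.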
In this episode we follow up on our conversation about mediation. We talk about what mediation is and when it is useful. We talk about the history of these methods. We debate what direct and indirect effects are. We describe natural and controlled effects. We discuss the importance of the number 666 in Matt's life. We talk about exposure-mediator interaction. Matt learns what kinesiology is. We discuss proportion mediated and proportion eliminated. And we talk about the confounding assumptions needed for mediation analysis.
In this episode, Matt and Hailey talk with Dr. Kara Rudolph and Dr. Ivan Diaz about mediation analysis. We talk through what it is, what it means, and when we want to do it. We talk about mechanisms of causation and how mediation can help. We cover things like natural direct and indirect effects and controlled direct effects (and why there isn't a controlled indirect effect – a thing that stumped Matt for some time). And we discuss the different assumptions needed to draw valid inferences in a mediation analysis, like all the many no-confounding assumptions and the cross-world assumption. And we talk about what Matt refers to as mediated moderation (interaction in the effect on the outcome between the exposure and mediator).
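To make the decomposition concrete, here is a small sketch of our own (hypothetical mean potential outcomes, not results from any study) showing how a total effect splits into a natural direct and natural indirect effect on the difference scale.

```python
# Natural effect decomposition on the difference scale, using hypothetical
# mean potential outcomes. Y(a, M(a*)) = outcome under exposure a with the
# mediator set to the value it would take under exposure a*.

y1_m1 = 10.0   # E[Y(1, M(1))]: exposed, mediator as it would be when exposed
y1_m0 = 8.0    # E[Y(1, M(0))]: exposed, mediator as it would be when unexposed
y0_m0 = 6.0    # E[Y(0, M(0))]: unexposed

total_effect = y1_m1 - y0_m0        # 4.0
natural_direct = y1_m0 - y0_m0      # 2.0: exposure effect holding the mediator at its "unexposed" value
natural_indirect = y1_m1 - y1_m0    # 2.0: effect transmitted through the mediator
proportion_mediated = natural_indirect / total_effect   # 0.5
print(total_effect, natural_direct, natural_indirect, proportion_mediated)
```

None of these quantities come for free: they are only identified under the no-confounding and cross-world assumptions discussed in the episode.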
In this episode, Hailey and Matt continue their discussion on study efficiency and realize that we think about efficiency in very different ways. We talk about the difference between statistical efficiency and cost efficiency, and we each make our case for one of them being the driving force in how we design and analyze studies. It may be the biggest disagreement we've had yet (though maybe that was interaction). We talk about matching and its impact on efficiency and also why we do matching. And we try to understand when matching is useful. Studies mentioned in the podcast: Rothman KJ, Poole C. A strengthening programme for weak associations. Int J Epidemiol. 1988 Dec;17(4):955-9. doi: 10.1093/ije/17.4.955. PMID: 3225112.
In this episode we are joined by Professor Robert Platt of McGill University to talk about study efficiency and the ways we can think about this in terms of study design. We talk about hierarchies of evidence and their relationship to things like target validity. We get into why we think case control studies are so often misunderstood, particularly with respect to missing that they should be nested within a cohort. We talk about the varying definitions of efficiency (variance, efficiency of confounding control, cost efficiency, etc.) and how they relate to different study designs, and we disagree about which definition is the most useful. And we talk about sampling and how it affects study efficiency and also what question we are asking. The paper that Rob reads over and over is: Kurth T, Walker AM, Glynn RJ, Chan KA, Gaziano JM, Berger K, Robins JM. Results of multivariable logistic regression, propensity matching, propensity adjustment, and propensity-based weighting under conditions of nonuniform effect. Am J Epidemiol. 2006;163:262-70. We also referenced: Westreich D, Edwards JK, Lesko CR, Cole SR, Stuart EA. Target Validity and the Hierarchy of Study Designs. Am J Epidemiol. 2019;188:438-443. Kramer MS, Guo T, Platt RW, Shapiro S, Collet JP, Chalmers B, Hodnett E, Sevkovskaya Z, Dzikovich I, Vanilovich I; PROBIT Study Group. Breastfeeding and infant growth: biology or bias? Pediatrics. 2002;110(2 Pt 1):343-7.
We kick off season 4 by reminiscing about the origins of the podcast and previewing what's upcoming for season 4, where we will continue our review of Modern Epidemiology, 4th edition, from last season. We touch on a few of the topics we are most excited about for the coming season and we preview some small formatting changes. But then we put each other through the fun questions that we ask our guests so you all can get to know us better (spoiler: Matt has no idea what the word non-fiction means). We are excited for our upcoming guests this season and the fun conversations we have in store.
It's hard to believe this is the final episode of season 3! In this season finale episode, we continue our discussion of topics related to Chapter 26 in Modern Epidemiology (4th Edition) with Dr. Eric Tchetgen Tchetgen. In this conversation we ask Dr. Tchetgen Tchetgen to help us better understand several issues related to interaction, including why it's so important to study interaction. He provides a helpful framework for thinking about interaction: start simple and then move on to more complex questions. As part of this framework, he emphasizes the distinction between total effects and main effects, how confounding plays into conversations about interaction, and the role of scale dependence when interpreting interaction.
Matt and Hailey take a deep dive into Chapter 26 in Modern Epidemiology, 4th Edition, Analysis of Interaction. This episode needs a content warning: it is among the most advanced and conceptually complex topics we have ever covered on SERious Epi. Interaction occurs when the effect of one exposure on an outcome depends in some way on the presence or absence of another exposure. Seems like a simple enough concept, right? However, as you'll see in this episode, there are many different layers of complexity to consider related to terminology, scale, and interpretation of interaction analyses. A note from Matt and Hailey: since this material is very complex, we reached out to Dr. Jay Kaufman for his perspective on the episode before releasing it. He had some very helpful thoughts, and we would like to share them with you (paraphrasing with his permission): Part of what is confusing about this topic is the terminology differences, with Hailey using terminology (“interaction”) that lines up with that used by VanderWeele, ME4, and the Hernán and Robins textbook chapter and Matt using terminology (“interdependence”) from other articles in the literature, such as Greenland and Poole (1988). When there are joint effects that are exactly multiplicative, or supermultiplicative, you know it's a causal interaction (i.e., synergistic or biologic interaction) because multiplicativity is necessarily super-additive as long as both exposures meet consistency, exchangeability, and positivity assumptions. However, knowing that joint effects are submultiplicative is not informative about additive interaction or synergism. It is also not possible to make a conclusion about additive interaction when a results section tells you only that in a logistic or Cox regression analysis there is “no significant interaction effect” (i.e., a non-significant p-value).
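To make the scale issue concrete, here is a small worked example of our own (made-up risks, not from the chapter) in which the joint effects are exactly multiplicative, so there is "no interaction" on the ratio scale, yet the additive-scale interaction (RERI) is clearly positive.

```python
# Hypothetical risks by joint exposure status (both exposures binary).
r00, r10, r01, r11 = 0.01, 0.02, 0.03, 0.06

# Multiplicative scale: ratio of the joint risk ratio to the product of the
# separate risk ratios.
rr10, rr01, rr11 = r10 / r00, r01 / r00, r11 / r00   # 2, 3, 6
multiplicative_interaction = rr11 / (rr10 * rr01)    # 1.0 -> exactly multiplicative

# Additive scale: relative excess risk due to interaction (RERI).
reri = rr11 - rr10 - rr01 + 1                        # 6 - 2 - 3 + 1 = 2 > 0 -> super-additive
print(multiplicative_interaction, reri)
```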
In this episode, we are joined by Dr. Sonia Hernandez-Diaz for a discussion on Chapter 25 in Modern Epidemiology, 4th edition. This chapter is focused on methods for causal inference in longitudinal settings, with a particular focus on time-varying exposures. Dr. Hernandez-Diaz helps to explain some of the conceptual and methodological challenges related to time-varying exposures, including the advanced analytic strategies required and the careful conceptual considerations about defining the exposure of interest and causal questions. Papers referenced in this episode: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3731075/ https://academic.oup.com/aje/article/183/8/758/1739860
This episode is focused on Chapter 25 of Modern Epidemiology 4th edition, Causal Inference with Time Varying Exposures. In this episode, Matt and Hailey talk about how we should think about exposures that change over time. We discuss the concept of feedback loops: scenarios where the exposure affects the outcome, which affects exposure at a later time point, which in turn affects a later outcome. We think about whether biologic (mechanistic) conceptualizations of feedback loops are the same as the epidemiologic notion presented in the chapter. We then follow the chapter to continue our discussion about how time-varying exposures change our frameworks for thinking about causal inference and analytic strategies (e.g., marginal structural models, g-formula, and structural mean models). A historical note about Andrew James Rhodes, whose picture is hanging up in the conference room that Hailey was recording from: https://discoverarchives.library.utoronto.ca/index.php/rhodes-andrew-james
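For a flavor of what the marginal structural model machinery actually does with a time-varying exposure, here is a minimal sketch of our own showing how stabilized inverse-probability weights are built as a product over time points; the predicted probabilities are hypothetical stand-ins for what fitted treatment models would produce.

```python
import numpy as np

# One hypothetical person observed at two time points, treated at both.
# p_num[t] = P(A_t = 1 | past treatment)                          (numerator model)
# p_den[t] = P(A_t = 1 | past treatment and time-varying covariates) (denominator model)
# In a real analysis these would come from fitted (e.g., pooled logistic) models.

a = np.array([1, 1])            # observed treatment at t = 0, 1
p_num = np.array([0.50, 0.70])  # hypothetical predicted probabilities
p_den = np.array([0.40, 0.90])

# Contribution at each time: probability of the treatment actually received
num = np.where(a == 1, p_num, 1 - p_num)
den = np.where(a == 1, p_den, 1 - p_den)

stabilized_weight = np.prod(num / den)   # (0.5/0.4) * (0.7/0.9) ≈ 0.97
print(stabilized_weight)

# In the weighted "pseudo-population", treatment at each time no longer depends
# on the measured time-varying covariates, so a simple weighted regression of
# the outcome on treatment history (the marginal structural model) can be fit.
```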
Recording from across the globe, in Melbourne, Australia, Dr. Margarita Moreno-Betancur joins us for an episode on Chapter 22 in Modern Epidemiology (4th edition) on Time-to-Event Analyses. This is a chapter focused on the methods we use when the timing of the occurrence of the event is of central importance. Dr. Moreno-Betancur answers all our questions about these types of analyses, including the importance of the time scale, defining the origin (time zero), and censoring vs. truncation. We also ask Dr. Moreno-Betancur to weigh in on a hot take about whether the Cox Proportional Hazards model is overused in the health sciences literature.
In this episode Matt and Hailey discuss Chapter 22 of the 4th edition of Modern Epidemiology. This is a chapter focused on time-to-event analyses, including core concepts related to time scales, censoring, and understanding rates. We discuss the issues and challenges related to time-to-event analyses and analytic approaches in this setting, including Kaplan-Meier, Cox proportional hazards, and other types of fancy models that are frequently taught in advanced epi courses (e.g., Weibull, accelerated failure time) but infrequently used in the real world. The chapter ends with a brief discussion of competing risks. It's clear that Matt and Hailey need to brush up on concepts related to competing risks and semi-competing risks, and fortunately next month we'll have an expert join us to answer all of our questions!
In this episode we discuss Chapter 18 in the Modern Epidemiology (4th Ed) textbook focused on stratification and standardization with Dr. Rich MacLehose. We invited the illustrious Dr. MacLehose to be the guest for this chapter because it is one of the most important in the book, linking the theoretical concepts discussed in the early chapters with the advanced analytic techniques discussed in subsequent chapters. In this episode we cover topics such as standardization, stratification, pooling, the use and interpretation of relative and absolute effect estimates, and p-values to evaluate effect heterogeneity.
This is an episode focused on ME4 Chapter 18 (Stratification and Standardization). This is a pretty formula-heavy chapter, and I'm sure all of our listeners are tuning in to hear Matt's voice read the formulas to you: “The sum of M1i times T0i….”. So sorry to disappoint, but instead we focused this episode on the big-picture conceptual issues discussed in the chapter. Matt and Hailey talk about the importance of stratification, compare pooling and standardization, discuss Mantel-Haenszel and maximum likelihood estimation, and then finish the episode talking about homogeneity and heterogeneity.
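For anyone who does want one formula: standardization boils down to a weighted average of stratum-specific rates, with the same weights applied to each group being compared (generic notation of our own below, not the textbook's exact symbols):

\[
R_{\text{std}} = \frac{\sum_i w_i R_i}{\sum_i w_i},
\qquad
\text{standardized rate ratio} = \frac{\sum_i w_i R_{1i}}{\sum_i w_i R_{0i}},
\]

where \(R_{1i}\) and \(R_{0i}\) are the rates among the exposed and unexposed in stratum \(i\), and the weights \(w_i\) come from whatever standard population you choose.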
In this episode we feature a super expert on all things related to selection bias, Dr. Chanelle Howe. There are a lot of confusing issues related to selection bias: how it's defined, how it relates to collider stratification bias, whether it's a threat to internal or external validity (or both!). Chanelle helps us understand many of the nuances related to selection bias and provides helpful resources for readers interested in learning more about the topic. Is a lack of exchangeability related to confounding bias or selection? How can DAGs help us decipher the difference between confounding bias and selection? Can you have selection bias in a prospective cohort study? Join us to find out the answers to all of these questions and much more! Resources: Hernán MA. Invited Commentary: Selection Bias Without Colliders. Am J Epidemiol. 2017 Jun 1;185(11):1048-1050. doi: 10.1093/aje/kwx077. PMID: 28535177; PMCID: PMC6664806. Lu H, Cole SR, Howe CJ, Westreich D. Toward a Clearer Definition of Selection Bias When Estimating Causal Effects. Epidemiology. 2022 Sep 1;33(5):699-706. doi: 10.1097/EDE.0000000000001516. Epub 2022 Jun 6. PMID: 35700187; PMCID: PMC9378569. Howe CJ, Cole SR, Chmiel JS, Muñoz A. Limitation of inverse probability-of-censoring weights in estimating survival in the presence of strong selection bias. Am J Epidemiol. 2011 Mar 1;173(5):569-77. doi: 10.1093/aje/kwq385. Epub 2011 Feb 2. PMID: 21289029; PMCID: PMC3105434.
In this episode, Matt and Hailey discuss all things selection bias. This chapter on selection bias and generalizability is the shortest of the bias chapters in the Modern Epidemiology textbook. Does that mean it's the simplest? Listen to this episode and decide for yourself!
In this episode we have a conversation with Patrick Bradshaw about issues related to measurement error, misclassification, and information bias. We ask him to help define and clarify the differences between these concepts. We chat about dependent and differential forms of misclassification and how helpful DAGs can be for identifying these sources of bias. Patrick helps explain the problem with over-reliance on the expectation that non-differential misclassification produces bias toward the null, and the related concern about being “anchored to the null” in epidemiologic analyses. This episode will also serve to provide you with the most up-to-date information from Patrick on his recommendations about excellent new TV shows to stream (Wednesday on Netflix; WandaVision on Disney+). Two thumbs up.
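To see why the "toward the null" expectation holds on average, and why leaning on it too hard can still mislead, here is a small expected-value calculation of our own with made-up numbers: starting from true cell counts, non-differential exposure misclassification pulls the expected odds ratio toward 1.

```python
# Expected 2x2 cell counts after non-differential exposure misclassification.
# True counts and sensitivity/specificity below are hypothetical.

def misclassify(exposed, unexposed, se, sp):
    """Expected observed exposed/unexposed counts given the true counts."""
    obs_exposed = exposed * se + unexposed * (1 - sp)
    return obs_exposed, (exposed + unexposed) - obs_exposed

A, B = 50, 50      # true exposed/unexposed cases
C, D = 20, 80      # true exposed/unexposed non-cases
se, sp = 0.80, 0.90

a, b = misclassify(A, B, se, sp)   # expected observed case counts
c, d = misclassify(C, D, se, sp)   # expected observed non-case counts

print(f"true OR     = {(A * D) / (B * C):.2f}")   # 4.00
print(f"expected OR = {(a * d) / (b * c):.2f}")   # ~2.59, attenuated toward 1
```

This is a statement about expected values; in any single study, random error and imperfect knowledge of the misclassification can push an individual estimate in either direction, which is part of the "anchored to the null" concern.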
In the season three premiere Matt and Hailey discuss Chapter 13 in Modern Epidemiology, 4th edition. For the third season of the SERious Epi podcast, we are going to continue our close reading of the newest version of the Modern Epi textbook. This chapter is focused on measurement error and misclassification. In this episode we discuss issues related to the mis-measurement of exposure, outcome, and covariates. We also debate whether misclassification is just an analytic issue (i.e., putting people into the wrong categories) or an analytic + conceptual issue (i.e., putting people into the wrong categories and having an incorrect definition for those categories). We also talk about measurement error DAGs and why we wish more people used analytic approaches to correct for measurement error, and Matt explains the concept of email bankruptcy.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt connect with Dr. Jon Huang for a discussion on precision and study size. We wade into whether or not we should use p-values. We discuss whether the debates on p-values are real or just on Twitter and whether they should be used in observational epi or just in trials. We ask whether p-values do more harm than good in observational studies or whether the harm is really around null hypothesis significance testing. We talk about misconceptions about p-values. And Jon tells us how he's going to win a gold medal in the Winter Olympics, despite living in a tropical climate.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt finally start talking about random error. We explore the deep philosophical (as deep as we are capable of) meaning behind randomness, whether the universe is random (and, hey, while we are at it, is there even free will?), and how we think about random error. We talk about p-hacking and p-curves and anything p, really. And we talk about precision and accuracy in epidemiologic research. And Hailey aces Matt's quiz.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt connect with Dr. Maya Mathur for a discussion on confounding. We talk about different ways of thinking about confounding and we discuss how different sources of bias can come together. We talk about overadjustment bias, a topic we all feel needs more attention. We discuss e-values, and have Dr. Mathur explain their practical utility and also how complicated they are to interpret. And we discuss bias analysis for meta-analyses. Article mentioned in this episode: Schisterman EF, Cole SR, Platt RW. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology. 2009 Jul;20(4):488-95. doi: 10.1097/EDE.0b013e3181a819a1. PMID: 19525685; PMCID: PMC2744485.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt discuss confounding and whether confounding is hogging the spotlight in epi methods and epi teaching. We debate the value of all the different terms for confounding in the world of epi and beyond and struggle to define them all. We talk about different definitions for confounding and we differentiate between confounders and confounding. We talk about the 10% change in estimate of effect approach and its limitations and we talk about different strategies for confounder control. And Hailey coins the term “DAGmatist”. We reference the paper below: VanderWeele, T.J. and Shpitser, I. (2011). A new criterion for confounder selection. Biometrics, 67:1406-1413.
In this episode of Season 2 of SERious Epidemiology (recorded back when we were getting COVID booster shots), Hailey and Matt connect with Dr. Ellie Matthay for a discussion on Chapter 8 on case-control studies. We finally answer whether it is spelled with a hyphen or not (and Hailey and Ellie disagree with Matt about semicolons). We discuss how cohort studies and case-control studies differ and overlap. We talk about whether case-control studies are more biased than cohort studies. And Hailey reveals her dreams for releasing Modern Epidemiology: the Audiobook (with possible singing).
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get into the humble case control study. We discuss the ins and outs of this much maligned study design that has so flummoxed so many in epidemiology. We ask the hard questions about the best way to sample in a case control study, whether we spend too much or not enough time on it in our teaching, whether a case control study always has to be nested within some hypothetical cohort, whether the design is inherently more biased than cohort studies (spoiler: no, but…), why some people refer to cases and controls when they are not referring to a case control study, and, if it were on a famous TV show, which character the case control study would be (and more importantly, why Hailey has never seen said TV show). Papers referenced in this episode: Selection of Controls in Case-Control Studies: I. Principles. Sholom Wacholder, Joseph K. McLaughlin, Debra T. Silverman, Jack S. Mandel. American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1019–1028, https://doi.org/10.1093/oxfordjournals.aje.a116396 Selection of Controls in Case-Control Studies: II. Types of Controls. Sholom Wacholder, Debra T. Silverman, Joseph K. McLaughlin, Jack S. Mandel. American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1029–1041, https://doi.org/10.1093/oxfordjournals.aje.a116397 Selection of Controls in Case-Control Studies: III. Design Options. Sholom Wacholder, Debra T. Silverman, Joseph K. McLaughlin, Jack S. Mandel. American Journal of Epidemiology, Volume 135, Issue 9, 1 May 1992, Pages 1042–1050, https://doi.org/10.1093/oxfordjournals.aje.a116398
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get some real-world experience with cohort studies in a conversation with Dr. Vasan Ramachandran, PI of the Framingham Heart Study (FHS). FHS is a very well-known cohort study and the model that many of us have in mind when we think of cohort studies. We get a bit of history on FHS, and Hailey and Matt have a chance to ask the questions we have struggled with around cohort studies, including the role of representativeness. And, spoiler alert, we learn that FHS did not invent the term “risk factor” as Matt has been telling his students for years.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt get into cohort studies. We spend a lot of time confessing our limitations, both personally, and as a field, in assigning person time. We talk about the end of the large cohort study and the challenges in determining when to consider a person as exposed. We talk about issues of immortal person time and whether it is technically acceptable to include those who already have the outcome in a cohort study.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt connect with Dr. Katie Lesko for a discussion on Chapter 5 on measures of association and measures of effect. We confess our challenge with working with person time. We talk about the importance of a well specified time zero. We talk about why epidemiology is complicated by free will. We ponder what the counterfactual model looks like with time to event models. We talk about the challenges of real world data vs idealized studies. We discuss the challenges of interpreting effect measure modification. And we learn that Katie was a rower in college and is concerned that her daughter may never win an Olympic medal in gymnastics. A few papers that are mentioned in the episode: Hernán MA. Invited Commentary: Selection Bias Without Colliders. Am J Epidemiol. 2017 Jun 1;185(11):1048-1050. doi: 10.1093/aje/kwx077. PMID: 28535177; PMCID: PMC6664806. Edwards JK, Cole SR, Westreich D. All your data are always missing: incorporating bias due to measurement error into the potential outcomes framework. Int J Epidemiol. 2015 Aug;44(4):1452-9. doi: 10.1093/ije/dyu272. Epub 2015 Apr 28. PMID: 25921223; PMCID: PMC4723683. Cole SR, Hudgens MG, Brookhart MA, Westreich D. Risk. Am J Epidemiol. 2015 Feb 15;181(4):246-50. doi: 10.1093/aje/kwv001. Epub 2015 Feb 5. PMID: 25660080; PMCID: PMC4325680.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt record, then re-record due to a technical error (oops!), a discussion of Chapter 5 on measures of association and measures of effect. We say whether we prefer risks or rates. We talk about the counterfactual, causal contrasts, valid inferences, and good comparison groups. We use the phrase “living your best epi life”. And we define the difference between associations and effects. We answer whether smoking cessation programs increase the risk of being hit by a drunk driver (and if so, whether that's causal). There is a mystery involving a death in the desert. Matt explains why he almost dropped out of intro epi. Oh, and if you are wondering why this is the donut episode, Hailey sent Matt donuts after this episode after realizing (60 minutes in….) that she never pressed ‘record’, and Matt's wife almost sent them back thinking it was a mistake since she had no idea who they were for. In the episode we mention two papers: Greenland S, Robins JM. Identifiability, exchangeability, and epidemiological confounding. Int J Epidemiol. 1986;15(3):413-419. And Greenland S, Morgenstern H. Confounding in health research. Annu Rev Public Health. 2001;22:189-212.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt go back to chapter 4 of Modern Epidemiology, but this time with Dr. Liz Stuart (who may not have trained as an epidemiologist but definitely thinks like an epidemiologist), who has so many insights on what seem like simple concepts. We also get into some of the differences in the way biostatisticians and epidemiologists think about these ideas. And she helps us with some of the disagreements Hailey and Matt had in the previous episode.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt dig into chapter 4 of Modern Epidemiology. We focus on some of the basic building blocks of epidemiology: rates, proportions, and prevalence. We find lots to discuss about defining open and closed populations and the differences (or similarities?) between populations and cohorts. And we debate whether or not this is the “eat your vegetables” chapter. And Matt displays his ignorance of Olympic sports.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt go back to Chapters 2 and 3 of Modern Epidemiology but this time with guest Dr. Jay Kaufman of McGill University. We focused on the causal inference revolution and how our thinking on some of the issues in the chapter have changed over time as we learn more about these topics.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt try to finish off Chapter 3 of Modern Epidemiology given they couldn't get it all into one episode as originally promised. We talked about potential outcomes, sufficient causes models and DAGs (very hard to do in audio only). We focus on the assumptions for causal inference. And we make a pitch for a Modern Epidemiology Audio Book…read by James Earl Jones.
In this episode of Season 2 of SERious Epidemiology, Hailey and Matt take on Chapters 2 and 3 of Modern Epidemiology… at least that was the plan, we really only got to chapter 2 so we'll be back again in our next episode for Chapter 3. But in this episode we focused on some key insights around replicability and reproducibility. And camp color wars. You'll have to listen to understand.
We are going in a new direction for Season 2 of SERious Epidemiology. This season Hailey and Matt are focusing exclusively on the new fourth edition of the textbook Modern Epidemiology. The textbook has played such an important role in the training of epidemiologists since the first edition was released and has taken on an even larger role within the field as more editions have come out. We will work through each chapter and talk about what key insights we got from them and we will talk to guests about their experiences with the text. In this first episode of the season, we are delighted to present our interview with Dr. Kenneth Rothman, author of the first edition and co-author of editions two through four. Show notes: Link to Modern Epidemiology: https://www.amazon.com/Modern-Epidemiology-Kenneth-Rothman/dp/1451193289 Link to Epidemiology: An Introduction https://www.amazon.com/Epidemiology-Introduction-Kenneth-J-Rothman/dp/0199754551/ref=sr_1_1?dchild=1&keywords=Epidemiology%3A+An+Introduction&qid=1630253351&s=books&sr=1-1
Join Matt Fox and Hailey Banack for our final episode of the first season of SERious Epidemiology, a season which happened to take place entirely during the COVID-19 pandemic. The pandemic has raised countless public health issues for us all to consider, from virus testing to health disparities to safe classrooms to vaccine distribution. For the first time (maybe ever), nearly everyone knows what epidemiology is, and we are all hopefully done with having to explain that we are not a group of skin doctors (“we study epidemics… not the epidermis”). In this episode we discuss a few pandemic-related issues particularly relevant for epidemiologists, such as whether we’ll ever have to wear work pants again, the use of pre-prints and the value of peer review, and issues related to confirmation bias.
In this journal club episode, Dr. Matt Fox and Dr. Hailey Banack discuss a paper recently published in the New England Journal of Medicine by Dagan et al. on the Pfizer COVID-19 vaccine. Listen in for a real-world example of the concept of emulating a target trial and a discussion of how an epidemiologic study can be described as truly beautiful. Reference: Dagan N, Barda N, Kepten E, Miron O, Perchik S, Katz MA, Hernán MA, Lipsitch M, Reis B, Balicer RD. BNT162b2 mRNA Covid-19 Vaccine in a Nationwide Mass Vaccination Setting. N Engl J Med. 2021 Feb 24:NEJMoa2101765. doi: 10.1056/NEJMoa2101765. Epub ahead of print. PMID: 33626250; PMCID: PMC7944975.
The topic of this episode is lifecourse epidemiology, defined by Dr. Paola Gilsanz as the biological, behavioural and social processes that influence an individual’s health outcomes throughout their life. Join us as we discuss models commonly used in lifecourse epidemiology, such as the early life critical period model, accumulation model, and pathway model. Is lifecourse epidemiology different than social epidemiology? Is all epidemiology lifecourse epidemiology because we study individuals at some point in their lifetime? Dr. Gilsanz answers these questions for us and also highlights the importance of using different data sources depending on your question of interest and the specific types of bias that are particularly prevalent in lifecourse epidemiology. Show notes: Brazilian cheese bread recipe: https://braziliankitchenabroad.com/brazilian-cheese-bread/
Ask yourself these true-or-false questions: (1) Generalizability, transportability, and external validity are all the same thing. (2) Generalizability is a secondary concern to internal validity. (3) We spend too much time in epi training programs teaching internal validity and not enough teaching external validity. (4) Worrying about external validity is largely an academic exercise that doesn’t really have much in the way of real-world impact. In this episode of SERious Epi we discuss these questions and more with Dr. Megha Mehrotra. While internal and external validity are familiar to nearly all epidemiologists, the concept of transportability is less familiar. Listen in to this episode for a clear description of how concepts related to validity, generalizability, and transportability are similar to, and different from, each other.
Matching is something we learn about in our intro to epidemiology classes, and yet we probably spend little time thinking about it after that; we just do it. But when should we match, when does it help us, and when does it hurt us? What do we need to consider before we match? Dr. Anusha Vable joins us to help us understand matching in detail. For those of you looking to do more reading on matching, see: Ho, D., Imai, K., King, G., & Stuart, E. (2007). Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference. Political Analysis, 15(3), 199-236. doi:10.1093/pan/mpl013 Stuart EA. Matching methods for causal inference: A review and a look forward. Stat Sci. 2010 Feb 1;25(1):1-21. doi: 10.1214/09-STS313. PMID: 20871802; PMCID: PMC2943670. Vable AM, Kiang MV, Glymour MM, Rigdon J, Drabo EF, Basu S. Performance of Matching Methods as Compared With Unmatched Ordinary Least Squares Regression Under Constant Effects. Am J Epidemiol. 2019 Jul 1;188(7):1345-1354. doi: 10.1093/aje/kwz093. PMID: 30995301; PMCID: PMC6601529. Iacus, S., King, G., & Porro, G. (2012). Causal Inference without Balance Checking: Coarsened Exact Matching. Political Analysis, 20(1), 1-24. doi:10.1093/pan/mpr013
Perhaps the biggest challenge we all face in epidemiologic research is recruitment of study participants. And recruiting a diverse population for our studies, one that allows for broad generalizability and transportability of effect estimates, is something we haven’t done a good enough job of, and as a consequence our work has suffered. While we may not think of this as a methods issue, Dr. Jonathan Jackson helps us understand why representativeness affects our work and how we can do better.
Do you, like us, understand that competing risks are important to account for and yet are not 100% sure exactly what they are and when they matter? Do you stay up at night wondering if competing risks regressions are necessary for valid inference in your study? If so, this episode is for you. Dr. Brian Lau gives us the details on this important method. After listening to this podcast, if you’re interested in learning more about some of the topics we discussed, here are links for you to check out: Koller MT, Raatz H, Steyerberg EW, Wolbers M. Competing risks and the clinical community: irrelevance or ignorance? Stat Med. 2012 May 20;31(11-12):1089-97. Andersen PK, Geskus RB, de Witte T, Putter H. Competing risks in epidemiology: possibilities and pitfalls. Int J Epidemiol. 2012 Jun;41(3):861-70. Allignol A, Schumacher M, Wanner C, Drechsler C, Beyersmann J. Understanding competing risks: a simulation point of view. BMC Med Res Methodol. 2011 Jun 3;11:86. Grambauer N, Schumacher M, Dettenkofer M, Beyersmann J. Incidence densities in a competing events analysis. Am J Epidemiol. 2010 Nov 1;172(9):1077-84. Lau B, Cole SR, Gange SJ. Competing risk regression models for epidemiologic data. Am J Epidemiol. 2009 Jul 15;170(2):244-56.
What are instrumental variables? Should I be using them in my research? And if so, how do I do that? In this episode of SERious Epidemiology, we talk with Dr. Sonja Swanson about what instrumental variables are and what’s so great (and not so great) about them. After listening to this podcast, if you’re interested in learning more about some of the topics we discussed, here are links for you to check out: Greenland S. An introduction to instrumental variables for epidemiologists. Int J Epidemiol. 2018;47(1):358. Swanson SA, Labrecque J, Hernán MA. Causal null hypotheses of sustained treatment strategies: What can be tested with an instrumental variable? Eur J Epidemiol. 2018;33(8):723-728. Brookhart MA, Wang PS, Solomon DH, Schneeweiss S. Instrumental variable analysis of secondary pharmacoepidemiologic data. Epidemiology. 2006;17(4):373-4. Hernán MA, Robins JM. Instruments for causal inference: an epidemiologist's dream? Epidemiology. 2006;17(4):360-72. Swanson SA, Hernán MA. Commentary: how to report instrumental variable analyses (suggestions welcome). Epidemiology. 2013;24(3):370-4.
In honor of the Society for Epidemiologic Research 2020 Meeting, the hosts of four epidemiology podcasts came together to record the first ever “crossover event” to talk about their experiences recording their shows and what podcasting can bring to the table for the field of epidemiology. Join the hosts of Epidemiology Counts (Bryan James), SERiousEPi (Matt Fox, Hailey Banack), Casual Inference (Lucy D’Agostino McGowan), and Shiny Epi People (Lisa Bodnar) as they engage in a fun and informative (we hope!) conversation about the burgeoning field of epidemiology podcasting, emceed by Geetika Kalloo. The audio podcast will be released on some of our pod feeds, and the video recording will be available to watch on the SER website.
Episode Title: The need for theory in epidemiology with Dr. Nancy Krieger. This episode features an interview with Dr. Nancy Krieger, Professor of Social Epidemiology at the T.H. Chan School of Public Health and author of Epidemiology and the People’s Health: Theory and Context. Dr. Krieger discusses the importance of using conceptual frameworks to improve people’s health and the role of population-level determinants of health (including social determinants) in population health research. We discuss a range of topics, including the differences between biomedical and analytics-driven approaches to population health research and theory-driven research, as well as the importance of descriptive epidemiology.
What puts the quasi in quasi-experimental designs? What makes a quasi-experimental study different from a “real” experiment? Ever wondered about the difference between regression discontinuity, difference-in-differences, and synthetic control methods? Dr. Tarik Benmarhnia joins us on this episode of SERious Epidemiology to talk us through a range of quasi-experimental designs. He makes a strong case for why we should integrate these designs in a variety of settings in epidemiology ranging from public health policy to clinical epidemiology. After listening to this podcast, if you are interested in learning more about quasi-experimental designs, you can check out some of the resources below: Abadie A, Diamond A, Hainmueller J. (2010) Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program, Journal of the American Statistical Association, 105:490, 493-505, DOI: 10.1198/jasa.2009.ap08746 Chen H, Li Q, Kaufman JS, Wang J, Copes R, Su Y, Benmarhnia T. Effect of air quality alerts on human health: a regression discontinuity analysis in Toronto, Canada. Lancet Planet Health. 2018 Jan;2(1):e19-e26. doi: 10.1016/S2542-5196(17)30185-7. Epub 2018 Jan 9. PMID: 29615204. Auger N, Kuehne E, Goneau M, Daniel M. Preterm birth during an extreme weather event in Québec, Canada: a "natural experiment". Matern Child Health J. 2011 Oct;15(7):1088-96. doi: 10.1007/s10995-010-0645-0. PMID: 20640493. Hernán MA, Robins JM. Instruments for causal inference: an epidemiologist's dream? Epidemiology. 2006 Jul;17(4):360-72. doi: 10.1097/01.ede.0000222409.00878.37. Erratum in: Epidemiology. 2014 Jan;25(1):164. PMID: 16755261. Courtemanche, C., Marton, J., Ukert, B., Yelowitz, A. and Zapata, D. (2017), Early Impacts of the Affordable Care Act on Health Insurance Coverage in Medicaid Expansion and Non‐Expansion States. J. Pol. Anal. Manage., 36: 178-210. https://doi.org/10.1002/pam.21961 Bor J, Fox MP, Rosen S, Venkataramani A, Tanser F, Pillay D, Bärnighausen T. Treatment eligibility and retention in clinical HIV care: A regression discontinuity study in South Africa. PLoS Med. 2017 Nov 28;14(11):e1002463. doi: 10.1371/journal.pmed.1002463. PMID: 29182641; PMCID: PMC5705070. Bor J, Moscoe E, Mutevedzi P, Newell ML, Bärnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014 Sep;25(5):729-37. doi: 10.1097/EDE.0000000000000138. PMID: 25061922; PMCID: PMC4162343. Elder TE. The importance of relative standards in ADHD diagnoses: evidence based on exact birth dates. J Health Econ. 2010;29(5):641-656. doi:10.1016/j.jhealeco.2010.06.003 Smith LM, Kaufman JS, Strumpf EC, Lévesque LE. Effect of human papillomavirus (HPV) vaccination on clinical indicators of sexual behaviour among adolescent girls: the Ontario Grade 8 HPV Vaccine Cohort Study. CMAJ. 2015;187(2):E74-E81. doi:10.1503/cmaj.140900
In most introductory epidemiology courses, students are taught about three categories of bias: confounding, information bias, and selection bias. On this episode of the podcast, we talk to Dr. Elizabeth Rose Mayeda about where collider stratification bias fits in to the framework of biases in epidemiology. Is collider stratification bias the same as selection bias? Why is collider bias so hard to understand, conceptually and empirically? Does collider stratification bias even matter? Listen in for some great conversation explaining these topics and others. After listening to this podcast, if you are interested in learning more about selection bias and collider stratification bias some resources are included below: Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15:615-625. Howe CJ, Cole SR, Lau B, Napravnik S, Eron JJJ. Selection Bias Due to Loss to Follow Up in Cohort Studies. Epidemiology. 2016;27:91-97. Hernán MA. Invited Commentary: Selection Bias Without Colliders. American journal of epidemiology. 2017;185:1048-1050. Greenland S. Response and follow-up bias in cohort studies. Am J Epidemiol. 1977 Sep;106(3):184-7. doi: 10.1093/oxfordjournals.aje.a112451. Kleinbaum D, Morgenstern H, Kupper L. Selection bias in epidemiologic studies. Am J Epidemiol. 1981;113:452-463. Greenland S, Pearl J, Robins JM. Causal Diagrams for Epidemiologic Research. Epidemiology. 1999;10:37-48. Mayeda ER, Banack HR, Bibbins-Domingo K, Zeki Al Hazzouri A, Marden JR, Whitmer RA, et al. Can Survival Bias Explain the Age Attenuation of Racial Inequalities in Stroke Incidence?: A Simulation Study. Epidemiology. 2018;29:525-532.
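As a quick numerical illustration of the collider idea (a simulation of our own, not something from the episode): two variables that are independent in the full population become associated once you restrict to a group whose selection they both influence.

```python
import numpy as np

# Simulate two independent binary variables that both increase the chance of
# selection S. Restricting to S = 1 (conditioning on the collider) induces an
# association between them: the structure behind many selection biases.
rng = np.random.default_rng(0)
n = 200_000
e = rng.binomial(1, 0.5, n)             # "exposure"
d = rng.binomial(1, 0.5, n)             # independent second cause of selection
p_select = 0.05 + 0.45 * e + 0.45 * d   # selection probability depends on both
s = rng.binomial(1, p_select)

def odds_ratio(x, y):
    a = np.sum((x == 1) & (y == 1)); b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1)); d_ = np.sum((x == 0) & (y == 0))
    return (a * d_) / (b * c)

print(f"OR(e, d) overall:        {odds_ratio(e, d):.2f}")               # ~1.0
print(f"OR(e, d) among selected: {odds_ratio(e[s == 1], d[s == 1]):.2f}")  # below 1.0
```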
Given the COVID-19 pandemic there is an urgent need for us to better understand how scientific evidence generated in epidemiologic research gets translated into information that can be used to create public health policy. In this episode of SERious Epidemiology, we talk with Dr. Laura Rosella about data-driven public health, the role of epidemiology in public health, and, more broadly, the importance of knowledge translation for epidemiologists. After listening to this podcast, if you are interested in learning more about the intersection of epidemiology and public health, some resources are included below: How’s my flattening: A centralized data analytics and visualization hub monitoring Ontario's response to COVID-19 Link: howsmyflattening.ca Definitions of epidemiology, including references to the definition Dr. Rosella mentioned from MacMahon and Pugh’s epidemiology textbook (1970): Frérot M, Lefebvre A, Aho S, Callier P, Astruc K, Aho Glélé LS. What is epidemiology? Changing definitions of epidemiology 1978-2017. PLoS One. 2018;13(12):e0208442. doi:10.1371/journal.pone.0208442 Terris, M. Approaches to an Epidemiology of Health. Am J Public Health. 1975; 65(10) https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.65.10.1037 The use of scientific evidence for public health decision making: Rosella LC, Wilson K, Crowcroft NS, Chu A, Upshur R, Willison D, Deeks SL, Schwartz B, Tustin J, Sider D, Goel V. Pandemic H1N1 in Canada and the use of evidence in developing public health policies--a policy analysis. Soc Sci Med. 2013 Apr;83:1-9. doi: 10.1016/j.socscimed.2013.02.009. Agent-based modeling: Tracy M, Cerdá M, Keyes KM. Agent-Based Modeling in Public Health: Current Applications and Future Directions. Annu Rev Public Health. 2018 Apr 1;39:77-94. doi: 10.1146/annurev-publhealth-040617-014317. Additional info on agent-based modeling: https://www.publichealth.columbia.edu/research/population-health-methods/agent-based-modeling