Prepare to unlock a new dimension of personal insight with Elwin Robinson, the visionary Founder of Genetic Insights. This groundbreaking platform offers a seamless, direct-to-consumer experience, allowing individuals to upload their raw DNA data from any ancestry service. With access to over 350 reports covering Health Risks, Nutritional Requirements, Allergies, Intolerances, Sensitivities, and even Personality Traits, Genetic Insights delivers unparalleled accuracy and personalized recommendations tailored to your unique genetic makeup.

Elwin's passion extends beyond genetic insights; he is deeply committed to helping people reverse the effects of premature aging. Having felt much older than his years in his 20s, Elwin's transformation to feeling youthful and vibrant at 43 is a testament to his approach.

Explore Elwin's wealth of knowledge and resources through the Rejuvenate Podcast and the website FeelYounger.net. These platforms are dedicated to providing both educational and practical tools to empower individuals on their journey to feeling younger and living their best lives.

Unlock limitless access to 300+ Genetic Insights reports by visiting https://geneticinsights.co/ today. Gain personalized risk scores and recommendations that can revolutionize your understanding of health and well-being.
This week join us as Kate, Mark, Henry and Gary discuss a clinical prediction tool for patients using DOACs for atrial fibrillation, the benefits of cognitively enhanced tai chi, whether high-dose recombinant flu vaccine is useful in adults aged 50-64, and watchful waiting for patients with symptomatic gallstone disease.
You may be familiar with polygenic risk scores (PRS), but have you ever heard of methylation risk scores (MRS)? MRS are crucial to understand: they quantify DNA methylation levels at specific genomic regions linked to particular conditions, shedding light on the potential impact of epigenetic modifications on disease susceptibility.

In contrast, PRS estimate an individual's genetic disease risk by combining multiple genetic variants across the genome, often identified through genome-wide association studies. While PRS offer valuable insights into genetic predisposition for complex diseases such as heart disease and diabetes, they have limitations, including the risk of false positives and challenges in clinical interpretation. The choice between MRS and PRS depends on the specific disease or research context and the available data, as the two scores provide unique perspectives on disease risk.

In this week's Everything Epigenetics podcast, Dr. Michael Thompson and I chat about the importance and benefits of MRS, how to calculate such scores, and how these scores compare to PRS. For example, in his recent paper, Mike discovered that MRS significantly improved the imputation of 139 outcomes, whereas PRS improved only 22.

We focus on the results of a study Mike published last year showing that MRS are associated with a collection of phenotypes within electronic health record (EHR) systems. Mike's work added significant MRS to state-of-the-art EHR imputation methods that leverage the entire set of medical records, and found that including MRS as a medical feature in the algorithm significantly improves EHR imputation in 37% of the lab tests examined (median R2 increase 47.6%). His publicly available results show promise for methylation risk scores as clinical and scientific tools.

Mike is currently in Barcelona working on using artificial intelligence to map and learn the biological effects of mutating everything (and anything) in every single position, from a genetic variant to a change in splicing or some other interesting phenotype.

In this episode of Everything Epigenetics, you'll learn about:
- How Mike got into the field of epigenetics
- What epigenetics means to Mike
- Mike's background, from his undergraduate journey to his graduate and postgraduate studies
- The importance and limitations of electronic health records (EHR)
- The importance and benefits of methylation risk scores (MRS)
- The importance and limitations of polygenic risk scores (PRS)
- How MRS compare to polygenic risk scores
- Mike's paper, "Methylation risk scores are associated with a collection of phenotypes within electronic health record systems," and what prompted this investigation
- How you create an MRS
- Why we don't see MRS commercialized quite yet
- The EHR-derived phenotypes spanning medications, labs, and diagnoses that Mike investigated
- Future applications of MRS
- The future of Mike's career

Where to find Mike:
Google Scholar: https://scholar.google.com/citations?user=lFjujsAAAAAJ&hl=en
Mike's MRS study, "Methylation risk scores are associated with a collection of phenotypes within electronic health record systems": https://www.nature.com/articles/s41525-022-00320-1

Support the show. Thank you for joining us at the Everything Epigenetics Podcast, and remember: you have control over your epigenetics, so tune in next time to learn more about how.
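To make the EHR-imputation idea concrete, here is a minimal sketch in Python. It is not Thompson et al.'s actual pipeline, and all data, feature counts, and effect sizes are synthetic stand-ins; it simply fits a regression that imputes a lab value from generic EHR features, with and without an MRS column, and compares test-set R².

```python
# Minimal sketch: does adding an MRS feature improve imputation of a lab value?
# Synthetic data throughout; not the pipeline from the paper discussed above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ehr = rng.normal(size=(n, 10))   # stand-ins for age, diagnoses, medications
mrs = rng.normal(size=n)         # hypothetical methylation risk score
# Simulated lab value that genuinely depends on the MRS:
lab = ehr @ rng.normal(size=10) + 2.0 * mrs + rng.normal(size=n)

X_base = ehr
X_full = np.column_stack([ehr, mrs])
idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

for name, X in [("EHR only", X_base), ("EHR + MRS", X_full)]:
    model = Ridge().fit(X[idx_train], lab[idx_train])
    r2 = r2_score(lab[idx_test], model.predict(X[idx_test]))
    print(f"{name}: test R^2 = {r2:.3f}")
```

In the study itself the imputation models and feature sets are far richer; the sketch only shows where an MRS slots in as one more feature alongside the medical record.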
To scope out the danger, insurance companies are turning to a variety of tools that use algorithms to measure and predict wildfire risk. As that risk grows, insurance is becoming harder for homeowners to come by.
Commentary by Dr. Eman Rashed
Join Patrick Short and Professor Clare Turnbull, Professor in Translational Cancer Genetics at the Institute of Cancer Research, as they discuss polygenic risk scores and their application in healthcare. Delve into the complexities of predicting disease, the challenges of screening programs, and the potential impact of integrating genomics into healthcare systems. Discover the limitations and potential of polygenic risk scores and gain valuable insights into the future of personalized medicine. 0:00 Intro 1:00 Clare's path to becoming a clinical geneticist and her research in uncovering genetic links to cancer 3:20 How do Polygenic Risk Scores help to predict disease, particularly breast cancer? 10:00 The influence of environmental and genetic effects on breast cancer presentation 11:30 Next clinical steps after determining genetic risk for breast cancer 17:30 How effective and accurate are polygenic risk scores in predicting various types of cancer, given the potential for false positives or negatives? 25:00 The potential for integrating genetic screenings and polygenic risk scores into early cancer diagnosis 27:20 How do monogenic risk scores like BRCA 1 and 2 fit into the paradigm of cancer research? 31:30 Using both monogenic and polygenic to explain population prevalence of disease 35:00 Integration of genomics and genetic screenings into the UK healthcare system 40:30 What comes after the genetic test? What is the use in identifying risk for a disease if nothing is subsequently done to prevent it? 44:50 Clare's upcoming work in remodeling NHS systems for evidence protocols and clinical use of genetic tests 46:50 Closing remarks
Hello and welcome to HBR News, where we talk about the news of the week! This week we talk about the smart home that Amazon shut down over "racist language", the UN Secretary General's proposal for a "Global Digital Compact" to push laws against "hate", the Department of Homeland Security's bid to assign "risk scores" to social media accounts, and more! This show is part of the Spreaker Prime Network; if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4148711/advertisement
Genomics is leading a revolution in our understanding of disease. But the ways we pursue genomics research, and the use we make of that knowledge, demand careful thinking.

Anna is a researcher at The Edmond & Lily Safra Center for Ethics at Harvard; she holds a PhD in Systems Biology from Oxford (where we met) and has worked in medtech startups. As someone who has looked at genomics from multiple perspectives, she's an excellent guide to this rocky terrain.

Anna emphasizes the challenges and importance of polygenic traits and polygenic risk scores (PRS). While they are key tools in understanding and predicting traits, they are subject to misinterpretation and misuse if not properly defined. The concepts of 'race' and, more recently, 'continental ancestry group' often used in the calculation of PRS can lead to misguided or even harmful assumptions, potentially propagating racist ideologies. Instead, Anna suggests the use of Ancestral Recombination Graphs (ARG) to better represent an individual's genetic ancestry. Through ARG, we can achieve a more scientifically accurate and ethically sound basis for research.

As we continue to make leaps in genomics and potentially influence traits like intelligence or strength, the ethical, legal, and social implications become increasingly crucial. As we learn to wield our scientific tools, we need to understand how we should use them.

Anna's Twitter: @ACFLewis
Show notes on multiverses.xyz
Anna's website: acflewis.com
Undark article on genetic ancestry
Anna et al. on getting genetic ancestry right for science and society
These scores — composite measures of a person's autism-linked common genetic variants — cannot predict an autism diagnosis but could help researchers better understand the condition's underlying biology.
How Artificial Intelligence is Powering the Future of Risk with Matt Weaver of Gradient AI

In this episode of "Self-Funded with Spencer," host Spencer Smith chats with Matt Weaver, Director of Sales for Gradient AI, about artificial intelligence in underwriting tools for self-funding and other stop-loss initiatives. Gradient AI helps underwriters make better assessments of risk, especially when limited loss and claims history exists, or even in the potential absence of claims data altogether. This solution can be helpful in the world of self-funding when it comes to underwriting stop-loss insurance, especially for smaller employers.

Matt, who brings his expertise in the legal and sales domains, shares his experience working at large insurance defense firms after law school, selling insurance despite no prior knowledge of the industry, and eventually finding himself working with Gradient AI. He explains how the Gradient AI tool uses advanced data analytics and artificial intelligence to accurately assess risk profiles for small and mid-sized business groups. Their platform has revolutionized the underwriting process, making it faster and much more efficient. They also discuss the challenges of predicting outcomes for the "messy middle" groups and how the pricing model can lead to a rollercoaster ride for employers.

Listeners will learn the importance of data to support the recommendation of self-funding and how Gradient AI is using AI to make more informed decisions in the insurance industry. Join Spencer and guest Matt Weaver as they share their insights into how AI is changing the future of employee benefits and the accident and health (A&H) space.

Timestamps:
[00:00:09] Artificial Intelligence in Self-Funding Underwriting Tools
[00:03:04] Experience working at an insurance defense firm
[00:06:06] Transitioning from Insurance Defense to Benefits Consulting
[00:09:01] Starting in Insurance Sales
[00:14:42] Transitioning from traditional brokering to consulting
[00:17:53] From PEO Consulting to Joining Gradient AI
[00:21:04] Gradient AI's Role in Predicting Risk for Insurance
[00:27:37] Challenges in Underwriting Self-Funded Businesses
[00:30:45] Distinguishing Factors in Predictive Analytics
[00:37:14] Risk Scores and Cost Estimates for Health Insurance
[00:40:30] Success of Data-Driven Underwriting Tool
[00:43:35] Revolutionizing Underwriting in Insurance Industry
[00:46:33] Innovative Pricing Model & AI in Health Benefits
[00:52:56] The Power of Claims Data in Insurance Consulting
[00:55:47] Maximizing Benefits of Self-funded Healthcare Plans
[00:59:02] Challenges of Self-Funding for Employers
[01:01:59] Procurement processes for clients and use of data
[01:05:26] The Future of Artificial Intelligence in Insurance
[01:11:47] Risk Analytics and Artificial Intelligence in Insurance

Support this podcast: https://podcasters.spotify.com/pod/show/spencer-harlan-smith/support
Relationship between polygenic risk scores and symptom dimensions of schizophrenia and schizotypy in multiplex families with schizophrenia – Mohammad Ahangari et al. Audio: https://psychiatry.dev/wp-content/uploads/speaker/post-11160.mp3?cb=1670869074.mp3
Have you heard of scientific wellness? In this episode, Nathan Price and Joe discuss whether the conventional approach of using pharmaceuticals to treat chronic diseases is effective, or if a preventative approach to complex conditions such as cardiovascular disease and Alzheimer's is the best choice. Nathan explains how genes can help predict the success of lifestyle-based interventions and talks about the accuracy of polygenic risk scores vs single variants for disease prediction, and how genetics can be used in healthcare. Nathan and Joe also talk about nicotinamide mononucleotide (NMN), nicotinamide riboside (NR), DHEA, and other supplements. Plus, Nathan explains the controversy of choline, carnitine, and TMAO, and if you should be worried. Dr. Nathan Price is the Chief Scientific Officer of Thorne HealthTech and author of The Age of Scientific Wellness. He was named one of the 10 Emerging Leaders in Health and Medicine by the National Academy of Medicine and was appointed to the Board on Life Sciences of the National Academies of Sciences, Engineering, and Medicine. - Preorder The Age of Scientific Wellness - Check out SelfDecode - Join Joe's online community - Follow Joe on Instagram & TikTok
#HealthCast episode about policies and procedures: they are not sexy, but they are essential to any health plan's operation and have a profound impact on Star Ratings and risk scores.
Penetrance and Pleiotropy of Polygenic Risk Scores for Schizophrenia, Bipolar Disorder, and Depression Among Adults in the US Veterans Affairs Health Care System – PubMed. Audio: https://psychiatry.dev/wp-content/uploads/speaker/post-9631.mp3?cb=1663176228.mp3
Association Between Polygenic Risk Scores and Outcome of ECT – PubMed. Robert Sigström et al., American Journal of Psychiatry, 2022. Audio: https://psychiatry.dev/wp-content/uploads/speaker/post-9469.mp3?cb=1662567099.mp3
This week Patrick is joined by Sir Peter Donnelly, CEO of Genomics PLC and Professor of Statistical Science at the University of Oxford. They discuss how to get from data to implementation in the clinic, the challenges of polygenic risk scores including prediction across different ethnic backgrounds, and the role of genomics in drug discovery.
In this episode of the Heart podcast, Digital Media Editor, Dr James Rudd, is joined by Dr Christopher Fordyce from the University of British Columbia. They discuss how good physicians are at judging the nature of chest pain. If you enjoy the show, please leave us a podcast review at https://itunes.apple.com/gb/podcast/heart-podcast/id445358212?mt=2 Link to published paper: https://heart.bmj.com/content/108/11/860
In "Dosing Discrimination: Regulating PDMP Risk Scores," Professor Jennifer D. Oliva explores how risk scores from Prescription Drug Monitoring Programs can deter treatment for patients who are deemed to be at high risk of drug misuse, exacerbating discrimination against certain marginalized populations. Author: Jennifer D. Oliva is the Associate Dean for Faculty Research and Development, Professor of Law, and Director of the Center for Health & Pharmaceutical Law at Seton Hall University School of Law. Host: Carter Jansen Technology Editors: NoahLani Litwinsella (Volume 110 Senior Technology Editor), Carter Jansen (Volume 110 Technology Editor), Hiep Nguyen (Volume 111 Senior Technology Editor), Taylor Graham (Volume 111 Technology Editor), Benji Martinez (Volume 111 Technology Editor) Other Editors: Ximena Velazquez-Arenas (Volume 111 Senior Diversity Editor), Jacob Binder (Volume 111 Associate Editor), Michaela Park (Volume 111 Associate Editor), Kat King (Volume 111 Publishing Editor) Soundtrack: Composed and performed by Carter Jansen Article Abstract: Prescription drug monitoring program (PDMP) predictive surveillance platforms were designed for—and funded by—law enforcement agencies. PDMPs use proprietary algorithms to determine a patient's risk for prescription drug misuse, diversion, and overdose. The proxies that PDMPs utilize to calculate patient risk scores likely produce artificially inflated scores for marginalized patients, including women and racial minorities with complex, pain-related conditions; poor, uninsured, under-insured, and rural individuals; and patients with co-morbid disabilities or diseases, including substance use disorder and mental health conditions. Law enforcement conducts dragnet sweeps of PDMP data to target providers that the platform characterizes as “overprescribers” and patients that it deems as high risk of drug diversion, misuse, and overdose. Research demonstrates that PDMP risk scoring coerces clinicians to force medication tapering, discontinue prescriptions, and even abandon patients without regard for the catastrophic collateral consequences that attend to those treatment decisions. PDMPs, therefore, have the potential to exacerbate discrimination against patients with complex and stigmatized medical conditions by generating flawed, short-cut assessment tools that incentivize providers to deny these patients indicated treatment. The Federal Food and Drug Administration (FDA) is authorized to regulate PDMP predictive diagnostic software platforms as medical devices, and the agency recently issued guidance that provides a framework for such oversight. Thus far, however, the FDA has failed to regulate PDMP platforms. This Article contends that the FDA should exercise its regulatory authority over PDMP risk scoring software to ensure that such predictive diagnostic tools are safe and effective for patients.
In Episode 7 of Tattoos, Code, and Data Flows, Matt Rose interviews Walter Haydock, Director of Product Management at Privacera. Walter is a Naval Academy grad, Marine Corps veteran, and has a ton of experience in security product management. He is an active member of the security community and contributes regularly to his own blog, Deploying Securely (haydock.substack.com). He also contributes to the tech and veteran communities by offering free 30-minute sessions to veterans looking to get into tech (see this link: https://www.linkedin.com/posts/walter-haydock_marine-corps-veterans-looking-to-get-into-activity-6873953034469171200-LLUp/).

Walter and Matt talk about:
- The importance of managing risk across the SDLC
- Why money talks: the impact of risk measured in dollars
- How business context should be applied to risk
- How a poor interpretation of risk scores causes businesses to overcorrect
And so much more. Be sure to listen to this episode, and so many of our other great episodes, by hitting the follow button. We hope you enjoy it!
Shai Carmi is Professor of Statistical and Medical Genetics at Hebrew University (Jerusalem). Carmi Lab: https://scarmilab.org/ Twitter: https://twitter.com/ShaiCarmi

Topics and links:
- Shai's educational background: from statistical physics and network theory to genomics.
- Shai's paper on embryo selection: schizophrenia risk, modeling synthetic sibling genomes, variance among sibs vs the general population, RRR vs ARR, family history and elevated polygenic risk. (Link to paper: https://www.biorxiv.org/content/10.1101/2020.11.05.370478v3)
- Response to the ESHG opinion piece on embryo selection: https://twitter.com/ShaiCarmi/status/1487694576458481664
- Pleiotropy, Health Index scores.
- Genetic genealogy and DNA forensics: solving cold cases, Othram, etc. (Link to paper: https://www.science.org/doi/10.1126/science.aau4832)
- Healthcare in Israel. Application of PRS in adult patients.

Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.

Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU.

Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on Twitter @hsu_steve.
Enter a giveaway on our social media! Win free enrollment to a 3-hour course in the Allelica PRS Clinical Academy covering everything from the research behind PRS to clinical applications. You can enter by looking for us on Twitter, LinkedIn, and Instagram. This has been posted at 9am on January 21st and will end on February 4th.

Our guest this week is Giordano Bottà, a biologist and bioinformatician, who is joining us to discuss polygenic risk scores. Giordano earned a PhD in Public Health and has extensive experience in the analysis of large genomics datasets. During his career he has had the opportunity to work with some of the top genomics experts in the world at the University of Oxford, publishing in the journal Nature. He is a co-founder and CEO of Allelica, which created software to help clinical genetics labs perform polygenic risk score analysis.

On this episode we discuss:
- Defining polygenic risk scores (PRS)
- How PRS are empowering the next generation of clinical genomics
- Types of conditions that PRS can be calculated for
- Who can benefit the most from PRS
- How Allelica is addressing the underrepresentation of people of non-European descent in genetic studies with PRS
- Using PRS to assess risk for heart disease and cancer

To learn more about Giordano, check him out on Twitter, LinkedIn and Instagram, and stay up to date with Allelica on Twitter and LinkedIn. Stay tuned for the next new episode of DNA Today on January 28, 2022 where we'll be discussing cytogenomics! New episodes are released on the first and third Friday of the month. In the meantime, you can binge over 165 other episodes on Apple Podcasts, Spotify, streaming on the website, or any other podcast player by searching "DNA Today". All episodes in 2021 and 2022 are also recorded with video which you can watch on our YouTube channel. See what else we are up to on Twitter, Instagram, Facebook, YouTube and our website, DNApodcast.com. Questions/inquiries can be sent to info@DNApodcast.com.

Do you want to connect with other people who have the same genetic variant as you? You should check out "Connect My Variant", an online resource that allows you to do just that. "Connect My Variant" also provides different avenues for informing your family of possible inherited risk of disease, including helping you find where your variant came from and finding distant cousins who may also be at risk. The University of Washington has supported the "Connect My Variant" project in an effort to help patients and families understand where their unique genetic variants come from. Check it out at ConnectMyVariant.org. (SPONSORED)

Are you interested in the rapidly growing field of genetics and want to learn more about clinical genetics, molecular genetics, and laboratory science? Then you should check out the Genetic Assistant Online Training Program at Johns Hopkins University School of Medicine! By taking part in the program, you will be joining both national and international learners with the same passion for genetics. Interact directly with your Johns Hopkins instructors and fellow students throughout the program. Applications are closing for the spring cohort, but there are still spots available for summer and fall 2022. (SPONSORED)

PerkinElmer Genomics is a state-of-the-art biochemical and molecular genetics laboratory that provides newborn screening and genomic testing services around the world.
With over seven million newborns screened since 1994, PerkinElmer Genomics' laboratory pairs decades of newborn screening experience with a leading-edge clinical genomics program to offer one of the world's most comprehensive programs for detecting clinically significant genomic changes. Learn more at PerkinElmerGenomics.com (SPONSORED)
In this podcast, we are joined by Dr. Alex Neumann, of the VIB Centre for Molecular Neurology at the University of Antwerp, and Professor Henning Tiemeier, Professor of Social and Behavioural Science at the Harvard T.H. Chan School of Public Health in Boston and Professor of Psychiatric Epidemiology at Erasmus University Medical Centre, Rotterdam. The focus is on their co-authored JCPP paper 'Combined polygenic risk scores of different psychiatric traits predict general and specific psychopathology in childhood' (doi.org/10.1111/jcpp.13501). Alex and Henning begin by providing a quick insight into how they became interested in the field of child and adolescent mental health, before explaining what their JCPP paper looks at and why they chose to explore this area. Alex and Henning then describe the methodology used in the research and share some of the findings, including that polygenic risk scores associated with school-age psychopathology tended to be associated either with general psychopathology only or with both general and specific psychopathology, but not with specific psychopathology alone, except in the case of anxiety. Alex and Henning explain the importance of this finding and what it means for assessment and diagnosis. Furthermore, they describe the implications of their findings for professionals working with young people and their families, the message they have for researchers in this field, and what they concluded in the paper.
Machine Learning can improve decision making in a big way -- but it can also reproduce human biases and discrimination. Solon Barocas joins Vasant Dhar in episode 24 of Brave New World to discuss the challenges of solving this problem.

Useful resources:
1. Solon Barocas at his website, Cornell, Google Scholar and Twitter.
2. Fairness and Machine Learning -- Solon Barocas, Moritz Hardt and Arvind Narayanan.
3. Danger Ahead: Risk Assessment and the Future of Bail Reform -- John Logan Koepke and David G. Robinson.
4. Fairness and Utilization in Allocating Resources with Uncertain Demand -- Kate Donahue and Jon Kleinberg.
5. Profiles, Probabilities, and Stereotypes -- Frederick Schauer.
6. Thinking Like a Lawyer: A New Introduction to Legal Reasoning -- Frederick Schauer.
7. Measuring the predictability of life outcomes with a scientific mass collaboration -- Matthew Salganik and others.
8. Inherent Trade-Offs in the Fair Determination of Risk Scores -- Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.
9. Limits to Prediction -- Arvind Narayanan and Matthew Salganik.
10. The Fragile Families Challenge.
11. Daniel Kahneman on How Noise Hampers Judgement -- Episode 21 of Brave New World.
12. Noise: A Flaw in Human Judgment -- Daniel Kahneman.
13. Dissecting "Noise" -- Vasant Dhar.
14. Nudge: Improving Decisions About Health, Wealth, and Happiness -- Richard Thaler and Cass Sunstein.
Readmission risk has been on the radar for a very long time and from many different vantage points. Our team of experts from Collective Medical will talk about all things readmission risk scores: how to think about them conceptually, current and future use cases, and how readmission risk scores can be used to change behaviors and therefore outcomes. This podcast is sponsored by PointClickCare.
With Ben Freedman & David Brieger, Sydney Medical School, University of Sydney; Head of Vascular Biology, ANZAC Research Institute, Australia. Link to paper. Link to editorial.
As artificial intelligence gets more and more powerful, the need becomes greater to ensure that machines do the right thing. But what does that even mean? Brian Christian joins Vasant Dhar in episode 13 of Brave New World to discuss, as the title of his new book goes, the alignment problem.

Useful resources:
1. Brian Christian's homepage.
2. The Alignment Problem: Machine Learning and Human Values -- Brian Christian.
3. Algorithms to Live By: The Computer Science of Human Decisions -- Brian Christian and Tom Griffiths.
4. The Most Human Human -- Brian Christian.
5. How Social Media Threatens Society -- Episode 8 of Brave New World (w Jonathan Haidt).
6. Are We Becoming a New Species? -- Episode 12 of Brave New World (w Molly Crockett).
7. The Nature of Intelligence -- Episode 7 of Brave New World (w Yann LeCun).
8. Some Moral and Technical Consequences of Automation -- Norbert Wiener.
9. Superintelligence: Paths, Dangers, Strategies -- Nick Bostrom.
10. Human Compatible: AI and the Problem of Control -- Stuart Russell.
11. OpenAI.
12. Center for Human-Compatible AI.
13. Concrete Problems in AI Safety -- Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané.
14. Machine Bias -- Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner.
15. Inherent Trade-Offs in the Fair Determination of Risk Scores -- Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan.
16. Algorithmic Decision Making and the Cost of Fairness -- Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq.
17. Predictions Put Into Practice -- Jessica Saunders, Priscillia Hunt, John S. Hollywood.
18. An Engine, Not a Camera: How Financial Models Shape Markets -- Donald MacKenzie.
19. An Anthropologist on Mars -- Oliver Sacks.
20. Deep Reinforcement Learning from Human Preferences -- Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei for OpenAI & DeepMind.
We spoke with Noor Siddiqui about how Orchid is using polygenic risk scores for common diseases for preimplantation screening. The Bioinformatics CRO is a fully distributed contract research company that serves the computational biology needs of biotechnology companies, with a focus on genomics. https://www.bioinformaticscro.com/
Recently, I’ve been thinking about risk scores, and whether they work or not. www.AdamRoxby.co.uk --- Send in a voice message: https://anchor.fm/adamroxby/message
Polygenic risk scores (PRS) rely on genome-wide association studies (GWAS) to predict a phenotype from a genotype. However, prediction accuracy suffers when GWAS from one population are used to calculate PRS within a different population, which is a problem because the majority of GWAS are done on cohorts of European ancestry. In this episode, Bárbara Bitarello helps us understand how PRS work and why they don't transfer well across populations.
Links:
Polygenic Scores for Height in Admixed Populations (Bárbara D. Bitarello, Iain Mathieson)
What is ancestry? (Iain Mathieson, Aylwyn Scally)
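As a concrete anchor for that discussion, here is a minimal sketch of how a PRS is typically computed: a weighted sum of a person's allele dosages, with weights taken from GWAS effect-size estimates. All numbers are synthetic stand-ins, and the toy deliberately ignores the linkage-disequilibrium and allele-frequency differences between populations that drive the poor transferability discussed in the episode.

```python
# Minimal PRS sketch with synthetic data: score = sum of (dosage * effect size).
import numpy as np

rng = np.random.default_rng(42)
m = 1000                                     # variants included in the score
gwas_beta = rng.normal(scale=0.05, size=m)   # per-allele effect estimates
dosages = rng.binomial(2, 0.3, size=m)       # one person's 0/1/2 genotypes

prs = float(dosages @ gwas_beta)             # the score: a weighted sum
print(f"PRS = {prs:.3f}")

# Scores are usually interpreted relative to a reference cohort:
cohort_scores = rng.binomial(2, 0.3, size=(10_000, m)) @ gwas_beta
percentile = (cohort_scores < prs).mean() * 100
print(f"Percentile within reference cohort: {percentile:.1f}")
```

In real pipelines the weights come from published GWAS summary statistics and the reference cohort is matched to the target population; mismatch at that step is exactly where cross-population accuracy is lost.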
The pandemic has had a tremendous impact on the business of healthcare. With states canceling elective procedures and people deferring care for fear of being exposed to the virus, hospital and medical practice revenue is down. On the flip side, many health plans are sitting on a mountain of premiums that aren't being spent because of this deferred care, possibly leading to rebates in some cases and a ton of uncertainty in pretty much all cases. One less obvious outcome of all of this may fall on Medicare Advantage plans in 2021, and it threatens to lower payments by 4-6% in 2021. Medicare Advantage, of course, is the rapidly growing model that'll cover more than 24 million Americans this year. According to a recent Avalere report, these plans may be looking at both a sicker population and reduced payments in 2021 because of this deferred utilization. Here to help us understand why, and to share some advice for how Medicare Advantage plans can weather the storm, is Dr. Matt Lambert, a practicing ER clinician and Chief Medical Officer at Curation Health.

A few actions Dr. Lambert suggests are:
- Focus on the long game and be patient. For example, don't pay out more to shareholders and, instead, place revenue in short-term investments that they can access without penalty.
- Prioritize virtual care/telemedicine enablement/reimbursement now and moving forward. This will enable more members to access care while avoiding in-person treatment risks.
- Lead with interventions and the type of claim vs. volume of claims. MA plans will be best served to focus on capturing the key conditions that map specifically to chronic conditions as they drive the most improved outcomes, utilization and costs.

There's a lot of nuance to this story and the way Medicare Advantage payments are calculated. Dr. Lambert breaks it all down for us. Enjoy!

Dr. Matt Lambert
Dr. Matt Lambert brings more than 20 years of experience as a clinician, CMIO, and change leader in value-based care, ensuring that patients receive more comprehensive care and that payers and providers better capture the value of their services. He is a practicing, board-certified emergency medicine provider who previously founded his own physician staffing company. Dr. Lambert was one of the founding members of Clinovations. During his time there he served as part of the leadership team for several electronic health record implementations at the nation's largest public health system in New York City, the University of Washington in Seattle, Johns Hopkins, Barnabas Health, Medstar, and Broward Health. He is also the author of two healthcare books: Unrest Insured and Close to Change: Perspectives on Change and Healthcare for a Doctor, a Town, and a Country. mlambert@curationhealth.com

Curation Health
Curation Health was founded by a team of healthcare veterans and clinicians to help providers and health plans effectively navigate the transition from fee-for-service to value-based care. Their advanced clinical decision support platform for value-based care drives more accurate risk adjustment and improved quality program performance by curating relevant insights from disparate sources and delivering them in real time to clinicians and care teams. With Curation Health, clinicians enjoy a streamlined, comprehensive clinical documentation process that enables better clinical and financial outcomes while simultaneously reducing clinical administrative burdens on providers.
Curation Health takes pride in combining the flexibility and speed of a startup with decades of leadership experience and know-how from roles in leading companies including Clinovations, Evolent Health, and The Advisory Board Company. Web: curationhealthcare.com. LinkedIn: https://www.linkedin.com/company/curationhealth/ Twitter: https://twitter.com/curationhealth Case Study: https://curationhealthcare.com/a-case-study-on-curation-health-and-a-physician-group-in-the-midwest/

Links and Resources
- Report: COVID-19 Pandemic May Reduce MA Risk Scores and Payments (Avalere)
- Unrest Insured by Dr. Matt Lambert
- Close to Change: Perspectives on Change and Healthcare for a Doctor, a Town, and a Country by Dr. Matt Lambert
- Episode #122: Headwinds Impacting the Shift to Value-Based Care with Kyle Swarts and Dr. Matt Lambert

The #HCBiz Show! is produced by Glide Health IT, LLC in partnership with Netspective Media. Music by StudioEtar
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.06.371286v1?rss=1
Authors: Paliwal, D., McInerney, T. W., Pa, J., Swerdlow, R. H., Easteal, S., Andrews, S. J.
Abstract:
INTRODUCTION: Genetic, animal and epidemiological studies involving biomolecular and clinical endophenotypes implicate mitochondrial dysfunction in Alzheimer's disease (AD) pathogenesis. Polygenic risk scores (PRS) provide a novel approach to assess biological pathway-associated disease risk by combining the effects of variation at multiple, functionally related genes.
METHODS: We investigated associations of PRS for genes involved in 12 mitochondrial pathways (pathway-PRS) related to AD in 854 participants from the Alzheimer's Disease Neuroimaging Initiative.
RESULTS: Pathway-PRS for four mitochondrial pathways are significantly associated with increased AD risk: (i) response to oxidative stress (OR: 2.01 [95% CI: 1.71, 2.37]); (ii) mitochondrial transport (OR: 1.81 [95% CI: 1.55, 2.13]); (iii) hallmark oxidative phosphorylation (OR: 1.23 [95% CI: 1.07, 1.41]); and (iv) mitochondrial membrane potential regulation (OR: 1.18 [95% CI: 1.03, 1.36]).
DISCUSSION: Therapeutic approaches targeting these pathways may have potential for modifying AD pathogenesis. Further investigation is required to establish a causal role for these pathways in AD pathology.
Copyright belongs to the original authors. Visit the link for more info
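The distinguishing move in a pathway-PRS, as described in the abstract above, is restricting the score to variants mapped to one functionally related gene set. Here is a minimal sketch of that restriction, not the authors' pipeline; the pathway names echo the abstract, but the variant-to-pathway assignments, weights, and genotypes are all synthetic stand-ins.

```python
# Minimal pathway-PRS sketch: the usual weighted sum of allele dosages,
# computed only over variants assigned to a given pathway. Synthetic data.
import numpy as np

rng = np.random.default_rng(5)
m = 2000
beta = rng.normal(scale=0.05, size=m)            # GWAS effect estimates
genotypes = rng.binomial(2, 0.3, size=(100, m))  # 100 people x m variants

# Hypothetical mapping of variant indices to pathways:
pathway_members = {
    "oxidative_stress_response": rng.choice(m, 150, replace=False),
    "mitochondrial_transport": rng.choice(m, 120, replace=False),
}

for name, idx in pathway_members.items():
    pathway_prs = genotypes[:, idx] @ beta[idx]  # per-person pathway score
    print(f"{name}: mean={pathway_prs.mean():+.3f}, sd={pathway_prs.std():.3f}")
```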
Four Twenty Seven promotes climate adaptation and resilient investment through the integration of climate science into business and policy decisions, particularly through its climate risk scores for listed securities, risk assessments for real estate, and intelligence services for scenario analysis. In our conversation, Mazzacurati also talks about the increasing number of carbon-neutrality declarations from the corporate and government sectors. For example, she walks us through how corporations in the manufacturing sector are applying scenario analysis to opportunities for breakthrough technologies, and how China and the EU can do the same for credit rating risk. Mazzacurati was named among the Top 100 People in Finance for 2019. Her full bio is included in the Attachments tab of this program.
About This Episode
Dr. Holly Pederson is the Director of Medical Breast Services in the Breast Center at the Cleveland Clinic. She is an Associate Professor at the Cleveland Clinic Lerner College of Medicine. She sits down with Dr. Slavin to discuss a newly emerging tool within hereditary cancer genetics: polygenic risk scores.
Most of the OT Detection and Asset Management solutions have developed 'integrations' with SIEMs, with Splunk and QRadar being the most common. I put integrations in quotes because they did little more than push alerts and events to the SIEMs with little context. This all changed with Splunk announcing their OT Security Add-On last month.

In this episode of the Unsolicited Response podcast I talk with Ed Albanese, the VP Internet of Things at Splunk, about the OT Security Add-On. This is a more detailed, technical episode as I try to dig into the features and benefits of the integration today and where it can be improved in the future. This includes:
- The additional OT fields in the Splunk Asset Framework
- The OT_Asset and OT_SW_Asset data models
- How the 29 OT search queries will work with integrations likely using different terms (such as different names for asset types), and the types of search queries currently supported
- The value of having standardizations for some OT alerts/events sent to Splunk, such as "modify control logic"; this support for standardized notables, as Splunk calls them, is not in the released Add-On but can be configured
- How Splunk is tracking vulnerability management (currently no OT integration)
- How Splunk is calculating the Risk Scores in the OT Security Posture tab

Links:
Splunk OT Security Add-On Announcement
Splunk OT Security Add-On Software Download Page
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.09.243287v1?rss=1
Authors: Trochet, H., Hussin, J.
Abstract: Genetic risk scores (GRS), also known as polygenic risk scores, are a tool to estimate individuals' liability to a disease or trait measurement based solely on genetic information. They have value in clinical applications as well as for assessing relationships between traits and discovering causal determinants of complex disease. However, it has been shown that these scores are not robust to differences across continental populations and may not be portable within them either. Even within a single population, they may have variable predictive ability across sexes and socioeconomic strata, raising questions about their potential biases. In this paper, we investigated the accuracy of two different GRS across population strata of the UK Biobank, separated along principal component (PC) axes, considering different approaches to account for social and environmental confounders. We found that these scores did not predict the real differences in phenotypes observed along the first principal component, with evidence of discrepancies on axes as high as PC45. These results demonstrate that the measures currently taken to correct for population structure are not sufficient, and that social and environmental confounders need to be factored into the creation of GRS.
Copyright belongs to the original authors. Visit the link for more info
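Here is a minimal sketch of the kind of check the abstract describes, with entirely synthetic data (it is not the authors' code): bin individuals along a principal-component axis and compare, per bin, the mean measured phenotype with the mean genetic score. If the score captured the real differences along the axis, the two profiles would track each other; in this toy they do not, by construction, because the phenotype gradient is driven by a non-genetic factor.

```python
# Synthetic illustration: a phenotype that varies along PC1 for environmental
# reasons, and a genetic score that does not track PC1 at all.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
pc1 = rng.normal(size=n)                       # first genetic PC
phenotype = 0.5 * pc1 + rng.normal(size=n)     # gradient along PC1
score = rng.normal(size=n)                     # GRS unrelated to PC1 here

edges = np.quantile(pc1, np.linspace(0, 1, 6)) # quintile boundaries
labels = np.digitize(pc1, edges[1:-1])         # quintile label 0..4
for q in range(5):
    mask = labels == q
    print(f"PC1 quintile {q}: mean phenotype = {phenotype[mask].mean():+.2f}, "
          f"mean score = {score[mask].mean():+.2f}")
```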
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.29.227439v1?rss=1
Authors: Deak, J. D., Clark, D. A., Liu, M., Durbin, C. E., Iacono, W. G., McGue, M., Vrieze, S. I., Hicks, B. M.
Abstract:
Importance: Molecular genetic studies of alcohol and nicotine use have identified hundreds of genome-wide risk loci. Few studies have examined the influence of aggregate genetic risk on substance use trajectories over time.
Objective: We examined the predictive utility of drinking and smoking polygenic risk scores (PRS) for alcohol and nicotine use from late childhood to early adulthood, substance-specific versus broader-liability effects of the respective PRS, and whether PRS performance varied between regular consumption and pathological use.
Design: Latent growth curve models with structured residuals were used to assess the predictive utility of drinks-per-week and regular-smoking PRS for measures of alcohol and nicotine consumption and problematic use from age 14 to 34.
Setting: PRS were generated from the largest discovery sample for alcohol and nicotine use to date (i.e., GSCAN), and examined for associations with alcohol and nicotine use outcomes in the Minnesota Twin Family Study (MTFS).
Participants: Participants were members of the MTFS (N=3225), a longitudinal study investigating the development of substance use disorders and related conditions.
Main Outcomes and Measures: Outcomes included alcohol and nicotine use disorder symptoms as defined by the Diagnostic and Statistical Manual of Mental Disorders, measures of alcohol and nicotine consumption (i.e., drinks per occasion, cigarettes per day), and composite variables for alcohol and nicotine use problems.
Results: The drinks-per-week PRS was a significant predictor of problematic alcohol use at age 14 and of increases in problematic use during young adulthood. The regular-smoking PRS was a significant predictor for all nicotine use outcomes. After adjusting for the effects of both PRS, the regular-smoking PRS demonstrated incremental predictive utility for most alcohol use outcomes and remained a significant predictor of nicotine use trajectories.
Conclusions and Relevance: Higher PRS for drinks per week and regular smoking were each associated with more problematic levels of substance use over time. Additionally, the regular-smoking PRS seems to capture both nicotine-specific and non-specific genetic liability for substance use problems, and may index genetic risk for externalizing behavior in general. Longitudinal PRS prediction approaches may inform personalized substance use intervention approaches.
Copyright belongs to the original authors. Visit the link for more info
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.22.215376v1?rss=1
Authors: Schmitz, J., Abbondanza, F., Paracchini, S.
Abstract: An efficient auditory system contributes to cognitive and psychosocial development. A right-ear advantage in hearing thresholds (HT) has been described in adults, and atypical patterns of left/right hearing threshold asymmetry (HTA) have been described for psychiatric and neurodevelopmental conditions. Previous genome-wide association studies (GWAS) on HT have mainly been conducted in elderly participants, whose hearing is more likely to be affected by environmental effects. We analysed HT and HTA in a population cohort of children (ALSPAC, n = 6,743, age 7.6 years). Better hearing was associated with more advanced cognitive skills and higher socioeconomic status (SES). Mean HTA was negative (-0.28 dB), suggesting a left-ear advantage in children, but this was mainly driven by females (-0.48 dB in females vs -0.09 dB in males). We performed the first GWAS on HT in children and the very first GWAS on HTA (n = 5,344). Single-marker trait association analysis did not yield significant hits. Polygenic risk score (PRS) analysis revealed associations of PRS for schizophrenia with HT, which remained significant after controlling for SES and cognitive skills, and of PRS for autism spectrum disorders (ASD) with HTA. Gene-based analysis for HTA reached genome-wide significance for MCM5, which is implicated in axon morphogenesis. This analysis also highlighted other genes associated with contralateral axon crossing. Some of these genes have previously been reported for ASD. These results further support the hypothesis that pathways distinguishing the left/right axis of the brain (i.e. commissural crossing) contribute both to different types of asymmetries (i.e. HTA) and to neurodevelopmental disorders.
Copyright belongs to the original authors. Visit the link for more info
Interview with Joe Habboushe, MD, CEO of MDCalc about new COVID-19 tools and his New York City experience. MDCalc's new COVID-19 resource center: https://www.mdcalc.com/covid-19 EBMedicine's COVID-19 article with recent updates: https://www.ebmedicine.net/topics/infectious-disease/COVID-19 Time Stamps: 00:00- Discussion of new tools for COVID-19: calculators, risk factors and odds ratios, labs, etc. 40:02- Discussion of the New York City COVID-19 crisis.
Editor's Summary by Howard Bauchner, MD, Editor in Chief of JAMA, the Journal of the American Medical Association, for the February 18, 2020 issue
[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

In the mid-19th century, a set of laws was created to address the menace that newly invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the 'Red Flag Act'. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

"while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…"

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs. I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.

1. What's changed?

Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data-collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the 'internet of everything' and with it the possibility of a perfect 'digital panopticon'. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex law suit by 'dumping' a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters.
In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency.

Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected:

(i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. image or voice recognition), and that may also be difficult, if not impossible, for humans to spot due to their complexity. To put it another way, AI allows us to understand data in new ways.

(ii) It enables the creation of new kinds of informational product - what I mean here is that AI systems don’t simply rebroadcast dispassionate and objective forms of the data we collect. They actively construct and reshape the data into artifacts that can be more or less useful to humans.

(iii) It enables new kinds of action and behaviour - what I mean here is that the informational products created by these AI systems are not simply inert artifacts that we observe with bemused detachment. They are prompts to change and alter human behaviour and decision-making.

On top of all this, these AI systems do these things with increasing autonomy (or, less controversially, automation). Although humans do assist the AI systems in understanding, constructing and acting on foot of the data being collected, advances in AI and robotics make it increasingly possible for machines to do things without direct human assistance or intervention.

It is these ways of using data, coupled with increasing automation, that I believe give rise to the new legal challenges. It is impossible for me to cover all of these challenges in this talk. So what I will do instead is to discuss three case studies that I think are indicative of the kinds of challenges that need to be addressed, and that correspond to the three things we can now do with the data that we are collecting.

2. Case Study: Facial Recognition Technology

The first case study has to do with facial recognition technology. This is an excellent example of how AI can understand data in new ways. Facial recognition technology is essentially like fingerprinting for the face. From a selection of images, an algorithm can construct a unique mathematical model of your facial features, which can then be used to track and trace your identity across numerous locations.

The potential conveniences of this technology are considerable: faster security clearance at airports; an easy way to record and confirm attendance in schools; an end to complex passwords when accessing and using your digital services; a way for security services to track and identify criminals; a tool for locating missing persons and finding old friends. Little surprise, then, that many of us have already welcomed the technology into our lives. It is now the default security setting on the current generation of smartphones. It is also being trialled at airports (including Dublin Airport),[2] train stations and public squares around the world. It is cheap and easily plugged into existing CCTV surveillance systems. It can also take advantage of the vast databases of facial images collected by governments and social media platforms.

Despite its advantages, facial recognition technology also poses a significant number of risks. It enables and normalises blanket surveillance of individuals across numerous environments.
This makes it the perfect tool for oppressive governments and manipulative corporations. Our faces are among our most unique and important features, central to our sense of who we are and how we relate to each other (think of the Beatles’ immortal line ‘Eleanor Rigby puts on the face that she keeps in the jar by the door’). Facial recognition technology captures this unique feature and turns it into a digital product that can be copied and traded, and used for marketing, intimidation and harassment.

Consider, for example, the unintended consequences of the FindFace app that was released in Russia in 2016. Intended by its creators to be a way of making new friends, the FindFace app matched images on your phone with images in social media databases, thus allowing you to identify people you may have met but whose names you cannot remember. Suppose you met someone at a party, took a picture together with them, but then didn’t get their name. FindFace allows you to use the photo to trace their real identity.[3] What a wonderful idea, right? Now you need never miss out on an opportunity for friendship because of oversight or poor memory. Well, as you might imagine, the app also has a dark side. It turns out to be the perfect technology for stalkers, harassers and doxxers (internet slang for those who want to out people’s real-world identities). Anyone who is trying to hide or obscure their identity can now be traced and tracked by anyone who happens to take a photograph of them.

What’s more, facial recognition technology is not perfect. It has been shown to be less reliable when dealing with non-white faces, and there are several documented cases in which it matches the wrong faces, wrongly branding innocent people as criminals. For example, many US drivers have had their licences cancelled because an algorithm found two faces on a licence database to be suspiciously similar and wrongly assumed the people in question were using a false identity. In another famous illustration of the problem, 28 members of the US Congress (most of them members of racial minorities) were falsely matched with criminal mugshots using facial recognition technology created by Amazon.[4] As some researchers have put it, the widespread and indiscriminate use of facial recognition means that we are all now part of a perpetual line-up that is both biased and error-prone.[5] The conveniences of facial recognition thus come at a price, one that often only becomes apparent when something goes wrong, and one that is more costly for some social groups than others.

What should be done about this from a legal perspective? The obvious answer is to carefully regulate the technology to manage its risks and opportunities. This is, in a sense, what is already being done under the GDPR. Article 9 of the GDPR stipulates that facial recognition is a kind of biometric data that is subject to special protections. The default position is that it should not be collected, but this is subject to a long list of qualifications and exceptions. It is, for example, permissible to collect it if the data has already been made public, if you get the explicit consent of the person, if it serves some legitimate public interest, if it is medically necessary or necessary for public health reasons, if it is necessary to protect other rights, and so on. Clearly the GDPR does restrict facial recognition in some ways.
A recent Swedish case fined a school for the indiscriminate use of facial recognition for attendance monitoring.[6] Nevertheless, the long list of exceptions makes the widespread use of facial recognition not just a possibility but a likelihood. This is something the EU is aware of, and in light of the Swedish case it has signalled an intention to introduce stricter regulation of facial recognition.

This is something we in Ireland should also be considering. The GDPR allows states to introduce stricter protections against certain kinds of data collection. And, according to some privacy scholars, we need the strictest possible protections to save us from the depredations of facial recognition. Woodrow Hartzog, one of the foremost privacy scholars in the US, and Evan Selinger, a philosopher specialising in the ethics of technology, have recently argued that facial recognition technology must be banned. As they put it (somewhat alarmingly):[7]

“The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

They caution against anyone who thinks that the technology can be procedurally regulated, arguing that governmental and commercial interests will always lobby for expansion of the technology beyond its initially prescribed remit. They also argue that attempts at informed consent will be (and already are) a ‘spectacular failure’ because people don’t understand what they are consenting to when they give away their facial fingerprint.

Some people might find this call for a categorical ban extreme, unnecessary and impractical. Why throw the baby out with the bathwater, and other clichés to that effect? But I would like to suggest that there is something worth taking seriously here, particularly since facial recognition technology is just the tip of the iceberg of data collection. People are already experimenting with emotion recognition technology, which uses facial images to predict future behaviour in real time, and there are many other kinds of sensitive data being collected, digitised and traded. Genetic data is perhaps the most obvious other example. Given that data is what fuels the fire of AI, we should consider whether some of the fuel supply ought to be cut off in its entirety.

3. Case Study: Deepfakes

Let me move on to my second case study. This one has to do with how AI is used to create new informational products from data. As an illustration of this I will focus on so-called ‘deepfake’ technology. This is a machine learning technique that allows you to construct realistic synthetic media from databases of images and audio files. The most prevalent use of deepfakes is, perhaps unsurprisingly, in the world of pornography, where the faces of famous actors have been repeatedly grafted onto porn videos. This is disturbing and makes deepfakes an ideal technology for ‘synthetic’ revenge porn.

Perhaps more socially significant than this, however, are the potential political uses of deepfake technology. In 2017, a team of researchers at the University of Washington created a series of deepfake videos of Barack Obama, which I will now play for you.[8] The images in these videos are artificial. They haven’t been edited together from different clips. They have been synthetically constructed by an algorithm from a database of audiovisual materials.
Obviously, the video isn’t entirely convincing. If you look and listen closely you can see that there is something stilted and artificial about it. In addition, it uses pre-recorded audio clips to sync to the synthetic video. Nevertheless, if you weren’t looking too closely, you might be convinced it was real. Furthermore, other teams are working on using the same basic technique to create synthetic audio too. So, as the technology improves, it could become very difficult for even the most discerning viewers to tell the difference between fiction and reality.

Now, there is nothing new about synthetic media. With the support of the New Zealand Law Foundation, Tom Barraclough and Curtis Barnes have published one of the most detailed investigations into the legal policy implications of deepfake technology.[9] In their report, they highlight the fact that an awful lot of existing audiovisual media is synthetic: it is all processed, manipulated and edited to some degree. There is also a long history of creating artistic and satirical synthetic representations of political and public figures. Think, for example, of the caricatures in Punch magazine or in the puppet show Spitting Image. Many people who use deepfake technology to create synthetic media will, no doubt, claim a legitimate purpose in doing so. They will say they are engaging in legitimate satire or critique, or producing works of artistic significance.

Nevertheless, there does seem to be something worrying about deepfake technology. The highly realistic nature of the audiovisual material being created makes it the ideal vehicle for harassment, manipulation, defamation, forgery and fraud. Furthermore, the realism of the resultant material also poses significant epistemic challenges for society. The philosopher Regina Rini captures this problem well. She argues that deepfake technology poses a threat to our society’s ‘epistemic backstop’. What she means is that, as a society, we are highly reliant on testimony from others to get by. We rely on it for news and information, we use it to form expectations about the world and build trust in others. But we know that testimony is not always reliable. Sometimes people will lie to us; sometimes they will forget what really happened. Audiovisual recordings provide an important check on potentially misleading forms of testimony. They encourage honesty and competence. As Rini puts it:[10]

“The availability of recordings undergirds the norms of testimonial practice…Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying on the words of others. Recordings do this in two distinctive ways: actively correcting errors in past testimony and passively regulating ongoing testimonial practices.”

The problem with deepfake technology is that it undermines this function. Audiovisual recordings can no longer provide the epistemic backstop that keeps us honest.

What does this mean for the law? I am not overly concerned about the impact of deepfake technology on legal evidence-gathering practices. The legal system, with its insistence on ‘chain of custody’ and testimonial verification of audiovisual materials, is perhaps better placed than most to deal with the threat of deepfakes (though there will be an increased need for forensic experts to identify deepfake recordings in court proceedings).
What I am more concerned about is how deepfake technologies will be weaponised to harm and intimidate others — particularly members of vulnerable populations. Can anything be done to provide legal redress for these problems? As Barraclough and Barnes point out in their report, it is exceptionally difficult to legislate in this area. How do you define the difference between real and synthetic media (if at all)? How do you balance free speech rights against the potential harms to others? Do we need specialised laws to do this, or are existing laws on defamation and fraud (say) up to the task? Furthermore, given that deepfakes can be created and distributed by unknown actors, against whom would the potential cause of action lie?

These are difficult questions to answer. The one concrete suggestion I would make is that any existing or proposed legislation on ‘revenge porn’ should be modified so that it explicitly covers the possibility of synthetic revenge porn. Ireland is currently in the midst of legislating against the nonconsensual sharing of ‘intimate images’ in the Harassment, Harmful Communications and Related Offences Bill. I note that the current wording of the offence in section 4 of the Bill covers images that have been ‘altered’, but someone might argue that synthetically constructed images are not, strictly speaking, altered. There may be plans to change this wording to cover this possibility — I know that consultations and amendments to the Bill are ongoing[11] — but if there aren’t, then I suggest that there should be.

To reiterate, I am using deepfake technology as an illustration of a more general problem. There are many other ways in which the combination of data and AI can be used to blur the distinction between fact and fiction. The algorithmic curation and promotion of fake news, for example, or the use of virtual and augmented reality to manipulate our perception of public and private spaces, both pose significant threats to property rights, privacy rights and political rights. We need to do something to legally manage this brave new (technologically constructed) world.

4. Case Study: Algorithmic Risk Prediction

Let me turn now to my final case study. This one has to do with how data can be used to prompt new actions and behaviours in the world. For this case study, I will look to the world of algorithmic risk prediction. This is where we take a collection of datapoints concerning an individual’s behaviour and lifestyle and feed it into an algorithm that can make predictions about their likely future behaviour. This is a long-standing practice in insurance, and it is now being used in credit decisions, tax auditing, child protection, and criminal justice (to name but a few examples). I’ll focus on its use in criminal justice for illustrative purposes.

Specifically, I will focus on the debate surrounding the COMPAS algorithm, which has been used in a number of US states. The COMPAS algorithm (created by a company called Northpointe, now called Equivant) uses datapoints to generate a recidivism risk score for criminal defendants. The datapoints include things like the person’s age at arrest, their prior arrest/conviction record, the number of family members who have been arrested/convicted, their address, their education and job, and so on. These are then weighted together using an algorithm to generate a risk score.
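To make this concrete, here is a minimal sketch of how a weighted risk score of this general kind can be computed. The feature names, weights and scoring scale below are invented for illustration; they are not COMPAS’s actual inputs or weights, which have never been published.

```python
import math

# Illustrative only: a generic weighted risk score of the kind described
# above. Feature names and weights are invented; this is NOT the COMPAS model.

def risk_score(features, weights):
    """Weight and sum the datapoints, then map the result to a 1-10 scale."""
    raw = sum(weights[name] * value for name, value in features.items())
    probability = 1 / (1 + math.exp(-raw))  # squash the raw score into (0, 1)
    return max(1, min(10, round(probability * 10)))

# A hypothetical defendant's datapoints.
defendant = {"age_at_arrest_under_25": 1, "prior_convictions": 2,
             "family_member_arrests": 1, "unemployed": 1}
weights = {"age_at_arrest_under_25": 0.8, "prior_convictions": 0.5,
           "family_member_arrests": 0.3, "unemployed": 0.4}

print(risk_score(defendant, weights))  # prints 9 with these invented weights
```

The point is not the arithmetic but the structure: a handful of life-history datapoints, multiplied by weights that someone chose, yields a single number that is then presented as a measure of risk.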
The exact weighting procedure is unclear, since the COMPAS algorithm is a proprietary technology, but the company that created it has released a considerable amount of information about the datapoints it uses into the public domain.

If you know anything about the COMPAS algorithm you will know that it has been controversial. The controversy stems from two features of how the algorithm works. First, the algorithm is relatively opaque. This is a problem because the fair administration of justice requires that legal decision-making be transparent and open to challenge. A defendant has a right to know how a tribunal or court arrived at its decision and to challenge or question its reasoning. If this information isn’t known — either because the algorithm is intrinsically opaque or because it has been intentionally rendered opaque for reasons of intellectual property — then this principle of fair administration is not being upheld. This was one of the grounds on which the use of the COMPAS algorithm was challenged in the US case of Loomis v Wisconsin.[12] In that case, the defendant, Loomis, challenged his sentencing decision on the basis that the trial court had relied on the COMPAS risk score in reaching its decision. His challenge was ultimately unsuccessful. The Wisconsin Supreme Court reasoned that the trial court had not relied solely on the COMPAS risk score in reaching its decision. The risk score was just one input into the court’s decision-making process, which was itself transparent and open to challenge. That said, the court did agree that courts should be wary when relying on such algorithms, and said that warnings should be attached to the scores to highlight their limitations.

The second controversy associated with the COMPAS algorithm has to do with its apparent racial bias. To understand this controversy I need to say a little bit more about how the algorithm works. Very roughly, the COMPAS algorithm is used to sort defendants into two outcome ‘buckets’: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket. A number of years back, a group of data journalists at ProPublica conducted an investigation into which kinds of defendants got sorted into those buckets. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. (The exact figures are given in ProPublica’s published analysis.) Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were, and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores.

Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race. If it said a black defendant was high risk, it was right about 60% of the time, and if it said that a white defendant was high risk, it was right about 60% of the time. This turns out to be true.
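It helps to see, numerically, how both claims can be true at once. The following toy calculation uses invented numbers (not ProPublica’s actual figures) to show how a score can be calibrated at 60% for two groups while producing very different false positive rates.

```python
# Invented numbers: calibration and false positive rates can diverge
# whenever two groups have different base rates of reoffending.

def analyse(n, base_rate, flagged_reoffenders, flagged_desisters):
    """flagged_* = how many of each true outcome were labelled 'high risk'."""
    reoffenders = n * base_rate
    desisters = n - reoffenders
    calibration = flagged_reoffenders / (flagged_reoffenders + flagged_desisters)
    false_positive_rate = flagged_desisters / desisters
    return calibration, false_positive_rate

# Group A: 60% reoffend. Group B: 30% reoffend.
print(analyse(1000, 0.6, flagged_reoffenders=300, flagged_desisters=200))
print(analyse(1000, 0.3, flagged_reoffenders=90, flagged_desisters=60))
# Both print a calibration of 0.6, yet the false positive rate is
# 0.50 for Group A and roughly 0.09 for Group B.
```

In both groups a ‘high risk’ label is right 60% of the time, yet people who will not reoffend in the higher-base-rate group are far more likely to be falsely flagged.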
The reason it doesn't immediately look equally accurate from the relevant figures is that there are many more black defendants than white defendants in the dataset, an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, something the algorithm has to work around.

So what is going on here? Is the algorithm fair or not? Here is where things get interesting. Several groups of mathematicians analysed this case and showed that the main problem is that the makers of COMPAS and the data journalists were working with different conceptions of fairness, and that these conceptions are fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.[13] To simplify their argument, there are two things you might want a fair decision algorithm to do: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of racial group); (ii) you might want it to achieve equal representation for all groups in the outcome buckets. They proved that, except in two unusual cases, it is impossible to satisfy both criteria. The two unusual cases are when the algorithm is a 'perfect predictor' (i.e. it always gets things right) or when the base rates for the relevant populations are the same (e.g. there are the same number of black defendants as there are white defendants). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this holds for all algorithmic risk predictions, not just those involving recidivism risk. If you would like to see a non-mathematical illustration of the problem, I highly recommend a recent article in the MIT Technology Review, which includes a game you can play using the COMPAS algorithm and which illustrates the hard tradeoff between different conceptions of fairness.[14]

What does all this mean for the law? Well, when it comes to the issue of transparency and challengeability, it is worth noting that the GDPR, in articles 13-15 and article 22, contains what some people refer to as a ‘right to explanation’. It states that, when automated decision procedures are used, people have a right to access meaningful information about the logic underlying the procedures. What this meaningful information looks like in practice is open to some interpretation, though there is now an increasing amount of guidance from national data protection authorities about what is expected.[15] But in some ways this misses the deeper point. Even if we make these procedures perfectly transparent and explainable, there remains the question of how we manage the hard tradeoff between different conceptions of fairness and non-discrimination. Our legal conceptions of fairness are multidimensional and require us to balance competing interests. When we rely on human decision-makers to determine what is fair, we accept that there will be some fudging and compromise involved. Right now, we let this fudging take place inside the minds of the human decision-makers, oftentimes without questioning it too much or making it too explicit.
The problem with algorithmic risk predictions is that they force us to make this fudging explicit and precise. We can no longer pretend that the decision has successfully balanced all the competing interests and demands. We have to pick and choose. Thus, in some ways, the real challenge with these systems is not that they are opaque and non-transparent but, rather, that when they are transparent they force us to make hard choices.

To some, this is the great advantage of algorithmic risk prediction. A paper by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass Sunstein entitled ‘Discrimination in the Age of the Algorithm’ makes this very case.[16] They argue that the real problem at the moment is that decision-making is discriminatory and that its discriminatory nature is often implicit and hidden from view. The widespread use of transparent algorithms will force it into the open, where it can be washed by the great disinfectant of sunlight. But I suspect others will be less sanguine about this new world of algorithmically mediated justice. They will argue that human-led decision-making, with its implicit fudging, is preferable, partly because it allows us to sustain the illusion of justice. Which world do we want to live in? The transparent and explicit world imagined by Kleinberg et al, or the murky and more implicit world of human decision-making? This is also a key legal challenge for the modern age.

5. Conclusion

It’s time for me to wrap up. One lingering question you might have is whether any of the challenges outlined above are genuinely new. This is a topic worth debating. In one sense, there is nothing completely new about the challenges I have just discussed. We have been dealing with variations of them for as long as humans have lived in complex, literate societies. Nevertheless, there are some differences from the past. There are differences of scope and scale: mass surveillance and AI enable the collection of data at an unprecedented scale and its use on millions of people at the same time. There are differences of speed and individuation: AI systems can update their operating parameters in real time and in highly individualised ways. And finally, there are crucial differences in the degree of autonomy with which these systems operate, which can lead to problems in how we assign legal responsibility and liability.

Endnotes

[1] I am indebted to Jacob Turner for drawing my attention to this story. He discusses it in his book Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018). This is probably the best currently available book about AI and law.

[2] See https://www.irishtimes.com/business/technology/airport-facial-scanning-dystopian-nightmare-rebranded-as-travel-perk-1.3986321; and https://www.dublinairport.com/latest-news/2019/05/31/dublin-airport-participates-in-biometrics-trial

[3] https://arstechnica.com/tech-policy/2016/04/facial-recognition-service-becomes-a-weapon-against-russian-porn-actresses/

[4] This was a stunt conducted by the ACLU. See the press release: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

[5] https://www.perpetuallineup.org/

[6] For the story, see https://www.bbc.com/news/technology-49489154

[7] Their original call for this can be found here: https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66

[8] The video can be found here: https://www.youtube.com/watch?v=UCwbJxW-ZRg. For more information on the research, see: https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/; https://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf

[9] The full report can be found here: https://static1.squarespace.com/static/5ca2c7abc2ff614d3d0f74b5/t/5ce26307ad4eec00016e423c/1558340402742/Perception+Inception+Report+EMBARGOED+TILL+21+May+2019.pdf

[10] The paper currently exists in draft form but can be found here: https://philpapers.org/rec/RINDAT

[11] https://www.dccae.gov.ie/en-ie/communications/consultations/Pages/Regulation-of-Harmful-Online-Content-and-the-Implementation-of-the-revised-Audiovisual-Media-Services-Directive.aspx

[12] For a summary of the judgment, see: https://harvardlawreview.org/2017/03/state-v-loomis/

[13] ‘Inherent Tradeoffs in the Fair Determination of Risk Scores’, available here: https://arxiv.org/abs/1609.05807

[14] The article can be found at this link: https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

[15] Casey et al, ‘Rethinking Explainable Machines’, available here: https://scholarship.law.berkeley.edu/btlj/vol34/iss1/4/

[16] An open access version of the paper can be downloaded here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3329669
This week, medical genetics and diversity https://pxlme.me/bmjzWq8R
HealthSource Radio at the University of Vermont Medical Center
Debra Leonard, MD, PhD, talks about the latest advances in genetic testing and what it means for patients and families. Dr. Leonard is chair of Pathology and Laboratory Medicine at the UVM Health Network and professor and chair of the department of Pathology and Laboratory Medicine at the Robert Larner, MD College of Medicine at UVM.
In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here). Show notes: 0:00 - Introduction 1:46 - What is algorithmic decision-making? 4:20 - Isn't all decision-making algorithmic? 6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate 12:02 - Limitations of the COMPAS debate 15:22 - Other examples of unfairness in algorithmic decision-making 17:00 - What is discrimination in decision-making? 19:45 - The mental state theory of discrimination 25:20 - Statistical discrimination and the problem of generalisation 29:10 - Defending algorithmic decision-making from the charge of statistical discrimination 34:40 - Algorithmic typecasting: Could we all end up like William Shatner? 39:02 - Egalitarianism and algorithmic decision-making 43:07 - The role that luck and desert play in our understanding of fairness 49:38 - Deontic justice and historical discrimination in algorithmic decision-making 53:36 - Fair distribution vs Fair recognition 59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making? Relevant links: Reuben's homepage; Reuben's institutional page; 'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns; 'Algorithmic Accountability and Public Reason' by Reuben Binns; 'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al; 'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm; 'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al - an impossibility proof showing that you cannot minimise false positive rates and equalise accuracy rates across two populations at the same time (except in the rare case that the base rate for both populations is the same).
This week The Rounds Table bridges the Atlantic. Kieran Quinn is joined by Freddy Frost from the University of Liverpool to cover talc administration by indwelling pleural catheter for pleurodesis and comparing cardiovascular risk scores in the Emergency Department. It's not often that we see an interventional randomized controlled trial performed in a population with ...
BARCELONA—Patients treated for BRCA-associated breast cancers could be given more accurate estimates of the risk for developing second primaries of the contralateral breast by combining polygenic risk scores (PRS) with standard risk factors, if study findings reported at the 2018 … Interview with Alexandra van den Broek.
Commentary by Dr. Valentin Fuster
Recapping pearls from our weekly conference. This week, we discussed pearls on chest pain. https://media.blubrry.com/coreem/content.blubrry.com/coreem/Core_EM_Podcast_Episode_8.m4a Tags: ACS, Chest Pain. Show Notes: How to Build a Great Talk; The Teaching Course Podcast: How to Build a Talk – Part I; The Teaching Course Podcast: How to Build a Talk – Part II; Chest Pain Workshop; Core EM: Chief Complaint – Chest Pain; REBEL EM: Is it time to start using the HEART pathway in the Emergency Department?; EMCast November 2014: Low Risk Chest Pain; Backus BE et al. Risk Scores for Patients with Chest Pain: Evaluation in the Emergency Department. Curr Card Rev 2011; 7: 2-8. PMC: 3131711; Mahler SA et al. The HEART Pathway Randomized Trial Identifying Emergency Department Patients With Acute Chest Pain for Early Discharge.
Barbra Backus joins Rick Body to discuss the origin, development and future of risk scores for ED patients with possible acute coronary syndromes. Two researchers at the top of their game, and authors of the HEART and MACS scores.
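For readers unfamiliar with it, the HEART score that Backus helped develop sums five components (History, ECG, Age, Risk factors, Troponin), each scored 0-2, for a total of 0-10. The sketch below is a simplified illustration of the published scoring bands, not a clinical tool; the example patient is invented.

```python
# Simplified HEART score sketch: each of the five components contributes
# 0-2 points. Illustrative only; not for clinical use.

def heart_score(history, ecg, age, risk_factor_count, troponin_ratio):
    points = history + ecg  # both pre-scored 0-2 by the treating clinician
    points += 2 if age >= 65 else (1 if age >= 45 else 0)
    points += 2 if risk_factor_count >= 3 else (1 if risk_factor_count >= 1 else 0)
    # troponin_ratio = measured troponin / upper limit of normal
    points += 2 if troponin_ratio > 3 else (1 if troponin_ratio > 1 else 0)
    return points

score = heart_score(history=1, ecg=0, age=52, risk_factor_count=2,
                    troponin_ratio=0.4)
band = "low" if score <= 3 else ("moderate" if score <= 6 else "high")
print(score, band)  # 3 low: the group the HEART Pathway trial sent home early
```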