About this episode: It's graduation time at the Bloomberg School! Doctoral candidate Jeff Marr joins the podcast to talk about how an economics major and an early internship at a health care system led to an interest in examining how health care markets and public policy work. Soon-to-be Dr. Marr discusses his dissertation looking at how predictive algorithms lead to decisions about care coverage.

Guest: Jeffrey Marr is a healthcare economist and doctoral candidate at the Johns Hopkins Bloomberg School of Public Health. In July 2025, he will join Brown University as an Assistant Professor of Health Services, Policy, and Practice.

Host: Dr. Josh Sharfstein is vice dean for public health practice and community engagement at the Johns Hopkins Bloomberg School of Public Health, a faculty member in health policy, a pediatrician, and former secretary of Maryland's Health Department.

Show links and related content: Algorithmic Decision-Making in Health Care: Evidence from Post-Acute Care in Medicare Advantage

Transcript information: Looking for episode transcripts? Open our podcast on the Apple Podcasts app (desktop or mobile) or the Spotify mobile app to access an auto-generated transcript of any episode. Closed captioning is also available for every episode on our YouTube channel.

Contact us: Have a question about something you heard? Looking for a transcript? Want to suggest a topic or guest? Contact us via email or visit our website.

Follow us: @PublicHealthPod on Bluesky, @JohnsHopkinsSPH on Instagram, @JohnsHopkinsSPH on Facebook, @PublicHealthOnCall on YouTube. Here's our RSS feed.

Note: These podcasts are a conversation between the participants, and do not represent the position of Johns Hopkins University.
Join us for a provocative episode on Brain in a Vat as we rejoin the infamous Stephen Kershnar, whose prior discussions have made headlines. This episode delves into affirmative action, demographic considerations in education and employment, and the ethics of statistical predictions informed by race. The discussion debates the legitimacy and consequences of using race, gender, and other demographic factors in decision-making processes across various fields, from medicine and law to parole decisions. The episode explores the balance between fairness and efficiency, and whether algorithms could replace human judgment in critical decisions. Don't miss this thought-provoking exploration of some of today's most contentious issues.

[00:00] Introduction and Guest Reintroduction
[00:25] Affirmative Action and Medical Care
[02:23] Market Preferences and Performance
[08:08] Challenges of Colorblind Policies
[17:44] Fair vs. Unfair Discrimination
[26:05] Statistical Predictors vs. Demographic Predictors
[27:45] Correlation vs. Causation in Performance Prediction
[31:31] IQ and Performance in Medicine
[33:27] The Ethics of Using Demographics in Decision Making
[41:59] Algorithmic Decision Making in Justice and Beyond

Check out FeedSpot's list of the 90 best philosophy podcasts, where Brain in a Vat is ranked at 15, here: https://podcast.feedspot.com/philosophy_podcasts/
What do academics have to offer that practitioners do not already have? They have the data academics want. They can analyse it by themselves, sometimes better than academics. They are also not reading our articles. So why would academics bother engaging with them? Why should we even bridge that perceived or existing gap between theory and practice? Because academics need to dip their toes into practice, and they need to mingle with industry to stay relevant. So says Jonny Holmström, director and co-founder of the Swedish Center for Digital Innovation. He has been at the forefront of doing academic research that blends theory and practice, rigor and relevance, and he knows a thing or two about how to do so successfully. His secret? Maximize the gap between academics and practitioners, don't close it.

References
Holmström, J., Magnusson, J., & Mähring, M. (2021). Orchestrating Digital Innovation: The Case of the Swedish Center for Digital Innovation. Communications of the Association for Information Systems, 48(31), 248-264.
Churchman, C. W. (1972). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. Basic Books.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press.
Holmström, J. (2022). From AI to Digital Transformation: The AI Readiness Framework. Business Horizons, 65(3), 329-339.
Recker, J., Bockelmann, T., & Barthel, F. (2024). Growing Online-to-Offline Platform Businesses: How Vytal Became the World-Leading Provider of Smart Reusable Food Packaging. Information Systems Journal, 34(1), 179-200.
Abbasi, A., Somanchi, S., & Kelley, K. (2025). The Critical Challenge of using Large-scale Digital Experiment Platforms for Scientific Discovery. MIS Quarterly, 49.
Sandberg, J., Holmström, J., & Lyytinen, K. (2020). Digitization and Phase Transitions in Platform Organizing Logics: Evidence from the Process Automation Industry. MIS Quarterly, 44(1), 129-153.
Werder, K., Seidel, S., Recker, J., Berente, N., Kundert-Gibbs, J., Abboud, N., & Benzeghadi, Y. (2020). Data-Driven, Data-Informed, Data-Augmented: How Ubisoft's Ghost Recon Wildlands Live Unit Uses Data for Continuous Product Innovation. California Management Review, 62(3), 86-102.
Sting, F. J., Tarakci, M., & Recker, J. (2024). Performance Implications of Digital Disruption in Strategic Competition. MIS Quarterly, 48(3), 1263-1278.
Tarakci, M., Sting, F. J., Recker, J., & Kane, G. C. (2024). Three Questions to Ask About Your Digital Strategy. MIT Sloan Management Review, July.
Davenport, T. H. (1993). Process Innovation: Reengineering Work Through Information Technology. Harvard Business School Press.
Davenport, T. H. (1998). Putting the Enterprise into the Enterprise System. Harvard Business Review, 76(4), 121-131.
Schecter, A., Wowak, K. D., Berente, N., Ye, H., & Mukherjee, U. (2021). A Behavioral Perspective on Service Center Routing: The Role of Inertia. Journal of Operations Management, 67(8), 964-988.
Sundberg, L., & Holmström, J. (2024). Innovating by Prompting: How to Facilitate Innovation in the Age of Generative AI. Business Horizons, 67(5), 561-570.
Kronblad, C., Essén, A., & Mähring, M. (2024). When Justice is Blind to Algorithms: Multilayered Blackboxing of Algorithmic Decision Making in the Public Sector. MIS Quarterly, 48(4), 1637-1662.
Navigating intricate philosophical and use-case questions, he emphasizes the importance of transparency, continuous result evaluation, and ensuring data purity from the outset. Join the conversation on the nuanced principles shaping ethical AI practices. Watch the full episode here
Welcome to today's episode of "AI Lawyer Talking Tech," your daily review of the latest legal technology news. In today's episode, we explore the exciting developments in the legal industry, including the integration of AI-driven solutions, the impact of generative AI on law firm profitability, the potential of AI-powered chatbots in corporate operations, and the use of AI in judicial analytics for composing powerful legal briefs. Stay tuned as we delve into these fascinating topics and discuss their implications for the future of the legal profession.

New developments at vLex (LexBlog, 03 Aug 2023)
The New Era: Redefining How Corporate In-House Legal Professionals Do Their Work (Legal.ThomsonReuters.com, 03 Aug 2023)
AI-Pocalypse: The Shocking Impact on Law Firm Profitability (3 Geeks and a Law Blog, 03 Aug 2023)
Chat GPT, Artificial Intelligence and the Lawyer (LexBlog, 03 Aug 2023)
Cooley Ranked on The American Lawyer's A-List (Cooley, 03 Aug 2023)
Bridge to Life Sells Certain Assets to TransMedics (Cooley, 03 Aug 2023)
Envoy Global Acquires Sesam Immigration, Expands UAE Services (CBS4Indy, 03 Aug 2023)
Case Law Analytics joins LexisNexis (LexisNexis, 03 Aug 2023)
Mastering Legal Briefs with the Power of AI Judicial Analytics (ReadWrite, 03 Aug 2023)
The Silent Witness: Understanding Event Data Recorders (Legal Reader, 03 Aug 2023)
Embracing Artificial Intelligence in the Legal Landscape: The Blueprint (beSpacific, 03 Aug 2023)
Ten Stories in the Crypto World You Need to Know Today (Hacker Noon, 03 Aug 2023)
French media giant AFP is suing Twitter over payments for news distribution; Elon Musk almost immediately called the lawsuit bizarre (AOL.com, 03 Aug 2023)
Flotek partners with legal case management software provider Hoowla in strategic move (Wales 247, 03 Aug 2023)
Don't Kill the Golden Goose – Survey on 30 Years of Legal Publishing Mergers (Dewey B Strategic, 03 Aug 2023)
Litify Appoints New CEO Following Record-Breaking Quarter (FintechNews, 03 Aug 2023)
Needing help with legal tech? (LexBlog, 02 Aug 2023)
Harnessing the future: Former Brooklyn Law Dean Nick Allard on navigating legal practice in the AI age (Brooklyn Daily Eagle, 02 Aug 2023)
ACI National Conference on AI Law, Ethics, and Compliance: "Level Setting"—the Fundamentals of AI, Algorithmic Decision-Making, Testing, and How They All Work (Epstein Becker & Green, 02 Aug 2023)
White & Case signs the Campaign for Greener Arbitrations Green Pledge (White & Case, 02 Aug 2023)
Will law firms fully embrace generative AI? The jury is out | The AI Beat (Inferse.com, 02 Aug 2023)
Adobe Releases "Infringement-Free" AI Image Generator (GenAI-Lexology, 02 Aug 2023)
Free Webinar Today: Demystifying AI for Legal Contract Review (LawSites, 03 Aug 2023)
AI: Tomorrow's Platform or Today's Ethical Quicksand? (GenAI-Lexology, 02 Aug 2023)
Trademark Docketing Software Company Alt Legal Acquires Customers Of Competitor TM Cloud (LawSites, 03 Aug 2023)
Bundledocs and Affinity Consulting Announce Strategic Partnership (Legal Technology News - Legal IT Professionals, 03 Aug 2023)
Citing 'Political Challenges,' ABA Innovation Center Cancels Op-Ed Advocating Regulatory Reform; In An Exclusive, We Have the Piece They Wouldn't Publish (LawSites, 03 Aug 2023)
Fliplet enables advanced AI features, finds the top 200 law firms are not harnessing the power of AI in their public apps (Legal Technology News - Legal IT Professionals, 03 Aug 2023)
How to Build a Low-Code Legal Tech Start Up (Chad Sakonchick – BetterLegal) (Technically Legal - A Legal Technology and Innovation Podcast, 03 Aug 2023)
Can a flawless dance be achieved between human rights and digital rights? (Legaltech on Medium, 03 Aug 2023)
AI: The Secret Weapon for Tomorrow's Law Firms (Legaltech on Medium, 02 Aug 2023)
Judges Guide Attorneys on AI Pitfalls With Standing Orders (GenAI-Lexology, 02 Aug 2023)
Directionally Correct, A People Analytics Podcast with Cole & Scott
Directionally Correct podcast is sponsored by Worklytics! https://www.worklytics.co/directionallycorrect/

Cole's Article - Elephant Hunting: Human vs Algorithmic Decision Making: https://open.substack.com/pub/directionallycorrectnews/p/elephant-hunting-weighing-human-vs?r=ybtwi&utm_campaign=post&utm_medium=web
Guru's NYC AI Law Article: https://fairnow.ai/understanding-nycs-local-law-144-to-regulate-ai-in-hiring/
Charles Handler's NYC AI Article on Uniform Guidelines: https://www.linkedin.com/pulse/does-new-york-citys-ll144-have-dirty-little-secret%3FtrackingId=2fpz3%252Bc2SsmoytcBjYccEw%253D%253D/?trackingId=2fpz3%2Bc2SsmoytcBjYccEw%3D%3D
Using Metadata on Shopping Mall Credit Card Transactions: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=P4nfoKYAAAAJ&cstart=20&pagesize=80&citation_for_view=P4nfoKYAAAAJ:3x-KLxxGyuUC
Detroit Lions Use of AI in Stadium: https://www.tiktok.com/@tyler.m.webb/video/7193126898858216750?_r=1&_t=8cUFKPhzMNP
What You Should Do When You Start a Data Function LI Post: https://www.linkedin.com/posts/ethanaaron_you-join-a-1000-person-company-as-the-head-activity-7085654074020216833-uzZ-/?utm_source=share&utm_medium=member_ios
Abrupt Future. The Future of Work Happened Faster Than We Thought.
Join us on this episode as we dive into the complex world of algorithmic fairness in HR with Manish Raghavan, Assistant Professor of Information Technology at the MIT Sloan School of Management. Discover the challenges and opportunities of using algorithms to make decisions about people, and learn about the importance of preventing algorithms from replicating discriminatory and unfair human decision-making. Get insights into the distinction between procedural fairness and outcome fairness, and understand why the deployment environment of a machine learning model is just as crucial as the technology itself. Gain a deeper understanding of the scoring mechanism behind algorithmic tools, and the potential dangers and consequences of their use. Learn how common signals in assessments can result in similar assessments across organizations and what it takes to achieve fairness in algorithmic decision-making in HR.

Manish's page at MIT
Follow Manish on LinkedIn
More content at abruptfuture.com
Connect with Benoit on LinkedIn
Follow our page on LinkedIn here
The Facebook Files and Algorithmic Decision-Making with Angela Müller, Ph.D. by Martens Centre
Happy New Year! This week we have another amazing guest joining us: Mark Durkee from the Centre for Data Ethics and Innovation (CDEI). In this episode Mark discusses the report that CDEI recently published, Review into Bias in Algorithmic Decision-Making. Mark leads a portfolio of work at CDEI that includes the review. Prior to joining CDEI in 2019, he worked in a variety of technology strategy, architecture and cyber security roles in the UK government, worked as a software engineer, and completed a PhD in theoretical physics. The CDEI is an independent expert committee, led by a board of specialists, set up and tasked by the UK government to investigate and advise on how to maximise the benefits of data-driven technologies. Its goal is to create the conditions in which ethical innovation can thrive: an environment in which the public are confident their values are reflected in the way data-driven technology is developed and deployed; where we can trust that decisions informed by algorithms are fair; and where risks posed by innovation are identified and addressed. More information about CDEI can be found at www.gov.uk/cdei.
Check out Chelsea's work on Twitter and Medium. Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the base for the futuristic concepts built in line with the studio's mission of solving urban, social and environmental problems through intelligent designs. Find out what today's guest and former guests are up to by following What's Wrong With on Instagram and on Twitter. Make sure to visit our website - podcast.whatswrongwith.xyz - and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review - or you can simply tell your friends about the show! Don't forget to join us next week for another episode. Thank you for listening!
For the third episode in a miniseries centered around a new Oxford University report on "Citizenship in a Networked Age," Adam White explores democratic and algorithmic decision-making. Can we draw a clear distinction between the two categories? How should we understand them in terms of efficiency, accuracy, dignity, and other values? He's joined in this […] The post Citizenship in a networked age (part 3): Democratic vs. algorithmic decision-making (https://www.aei.org/multimedia/citizenship-in-a-networked-age-part-3-democratic-vs-algorithmic-decisionmaking/) appeared first on American Enterprise Institute (AEI).
In this episode, Toby Wilde and Anna Clare Harper discuss how property investors can use data and algorithmic decision-making to identify better opportunities, safeguard against risk and drive efficiency. Toby is a multi-time Proptech founder whose experience includes scaling Sprift, currently the fastest-growing property data resource for Estate Agents, Landlords, Developers & Solicitors, as well as working in Estate Agency and in his family's property development business.

Highlights of this episode include:
Using data, one of the most valuable resources in property, to make better investment decisions
How identifying the early signs of insolvency and financial stress can give homeowners a quick and dignified exit
The evolution of #proptech - and Proptech Version 4.0
The de-skilling of the Estate Agency sector due to hybrid, low-fee agencies
Why property is a human-led industry that can't be ruled by data alone
Why the old adage, 'if you build it they will come', may no longer apply
Why taking a 10-year view in property may no longer be relevant

Resources:
https://oparo.co.uk/
toby@oparo.co.uk
https://welcome.sprift.com/
annaclareharper.com
https://www.linkedin.com/in/annaclareharper/
Prof. (Dr.) Steve Omohundro, President at Possibility Research, based in the United States, participates in Risk Roundup to discuss the rise of algorithms in decision-making.

The Rise of Algorithms in Decision-Making

In pursuit of automation-driven efficiencies, the rapidly evolving Artificial Intelligence (AI) techniques, such as neural networks, machine-learning systems, predictive analytics, speech recognition, natural language […] The post The Rise of Algorithms in Decision-Making appeared first on Risk Group.
In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper's and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance. You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).

Show Notes
0:00 - Introduction
1:39 - The future is unevenly distributed but not in the way you might think
7:05 - Virginia's personal encounter with the tools for automating inequality
12:33 - Automated helplessness?
14:11 - The history of poverty management: denial and moralisation
22:40 - Technology doesn't disrupt our ideology of poverty; it amplifies it
24:16 - The problem of poverty myths: it's not just something that happens to other people
28:23 - The Indiana Case Study: Automating the system for claiming benefits
33:15 - The problem of automated defaults in the Indiana Case
37:32 - What happened in the end?
41:38 - The L.A. Case Study: A "match.com" for the homeless
45:40 - The Allegheny County Case Study: Managing At-Risk Children
52:46 - Doing the right things but still getting it wrong?
58:44 - The need to design an automated system that addresses institutional bias
1:07:45 - The problem of technological solutions in search of a problem
1:10:46 - The key features of the digital poorhouse

Relevant Links
Virginia's Homepage
Virginia on Twitter
Automating Inequality
'A Child Abuse Prediction Model Fails Poor Families' by Virginia in Wired
The Allegheny County Family Screening Tool (official webpage - includes a critical response to Virginia's Wired article)
'Can an Algorithm Tell when Kids Are in Danger?' by Dan Hurley (generally positive story about the family screening tool in the New York Times)
'A Response to Allegheny County DHS' by Virginia (a response to Allegheny County's defence of the family screening tool)
Episode 41 with Reuben Binns on Fairness in Algorithmic Decision-Making
Episode 19 with Andrew Ferguson about Predictive Policing

Subscribe to the newsletter
In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates about algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism. You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes
0:00 - Introduction
1:46 - What is algorithmic decision-making?
4:20 - Isn't all decision-making algorithmic?
6:10 - Examples of unfairness in algorithmic decision-making: The COMPAS debate
12:02 - Limitations of the COMPAS debate
15:22 - Other examples of unfairness in algorithmic decision-making
17:00 - What is discrimination in decision-making?
19:45 - The mental state theory of discrimination
25:20 - Statistical discrimination and the problem of generalisation
29:10 - Defending algorithmic decision-making from the charge of statistical discrimination
34:40 - Algorithmic typecasting: Could we all end up like William Shatner?
39:02 - Egalitarianism and algorithmic decision-making
43:07 - The role that luck and desert play in our understanding of fairness
49:38 - Deontic justice and historical discrimination in algorithmic decision-making
53:36 - Fair distribution vs fair recognition
59:03 - Should we be enthusiastic about the fairness of future algorithmic decision-making?

Relevant Links
Reuben's homepage
Reuben's institutional page
'Fairness in Machine Learning: Lessons from Political Philosophy' by Reuben Binns
'Algorithmic Accountability and Public Reason' by Reuben Binns
'It's Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making' by Binns et al
'Machine Bias' - the ProPublica story on unfairness in the COMPAS recidivism algorithm
'Inherent Tradeoffs in the Fair Determination of Risk Scores' by Kleinberg et al - an impossibility proof showing that you cannot simultaneously achieve calibration and equal false positive and false negative rates across two populations (except in the rare cases where the base rates of the two populations are the same or prediction is perfect)

Subscribe to the newsletter
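The impossibility result in the Kleinberg et al. link above can be made concrete with a few lines of arithmetic. Here is a minimal sketch (not from the episode; the function name and numbers are illustrative, and it uses the closely related precision-based identity from the COMPAS debate rather than Kleinberg's calibration formulation): if two groups see equal precision (PPV) and equal miss rate (FNR) from a classifier but have different base rates, their false positive rates must differ.

```python
def fpr(base_rate, ppv, fnr):
    """False positive rate implied by a group's base rate, precision, and miss rate.

    Derived from confusion-matrix identities (rates as fractions of the population):
      TP = base_rate * (1 - FNR)          # true positives among the group
      PPV = TP / (TP + FP)  =>  FP = TP * (1 - PPV) / PPV
      FPR = FP / (1 - base_rate)          # false positives among true negatives
    """
    tp = base_rate * (1 - fnr)
    fp = tp * (1 - ppv) / ppv
    return fp / (1 - base_rate)

# Two groups judged by the same classifier with equal precision (0.7)
# and equal miss rate (0.2), but different base rates (0.3 vs 0.5):
fpr_a = fpr(0.3, ppv=0.7, fnr=0.2)
fpr_b = fpr(0.5, ppv=0.7, fnr=0.2)
print(round(fpr_a, 3), round(fpr_b, 3))  # prints: 0.147 0.343
```

Because equalising precision and miss rate pins down each group's false positive rate as a function of its base rate, both sides of the COMPAS debate could point to a real disparity: the fairness criteria they measured cannot all hold at once when base rates differ.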
00:58 – Aditya's Superpower: Parking Karma
03:18 – Algorithmic Decision Making in Machine Learning and Artificial Intelligence
09:06 – Recognizing the Effects of Bias
The Bias Blind Spot (https://en.wikipedia.org/wiki/Bias_blind_spot)
The Babadook (https://www.imdb.com/title/tt2321549/)
18:07 – Health and Technology: How can technology have a meaningful impact on care delivery?
23:54 – Why are people frightened of automation?
33:33 – Storytelling in Software and Engineering

Reflections:
Sam: Be a little bit more aware of how stories are told.
Aditya: Thinking of yourself as an editor for the story.
Jamey: The difference between "can we build this?" and "should we build this?"

This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode). To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Amazon links may be affiliate links, which means you're supporting the show when you purchase our recommendations. Thanks! Special Guest: Aditya Mukerjee.
Ethical principles for algorithmic decision-making; more women in the tech industry; inclusion in AI and design - these are all issues of increasing significance for the future.