Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Astronomical Cake, published by Richard Y Chappell on June 6, 2024 on The Effective Altruism Forum.

There's one respect in which philosophical training seems to make (many) philosophers worse at practical ethics. Too many are tempted to treat tidy thought experiments as a model for messy real-world ethical quandaries. We're used to thinking about scenarios where all the details and consequences are stipulated, so that we can better uncover our theoretical commitments about what matters in principle. I've previously flagged that this can be misleading: our intuitions about real-world situations may draw upon implicit knowledge of what those situations are like, and this implicit knowledge (when contrary to the explicit stipulations of the scenario) may distort our theoretical verdicts. But it's even worse when the error goes the other way, and verdicts that only make sense given theoretical stipulations get exported into real-life situations where the stipulations do not hold. This can badly distort our understanding of how people should actually behave.

Our undergraduate students often protest the silly stipulations we build into our scenarios: "Why can't we rescue everyone from the tracks without killing anyone?" It's a good instinct! Alas, to properly engage with thought experiments, we have to abide by the stipulations. We learn (and train our students) to take moral trade-offs at face value, ignore likely downstream effects, and not question the apparent pay-offs for acting in dastardly ways. This self-imposed simple-mindedness is a crucial skill for ethical theorizing. But it can be absolutely devastating to our practical judgment, if we fail to carefully distinguish ethical theory and practice.

Moral distortion from high stakes

A striking example of such philosophy-induced distortion comes from our theoretical understanding that sufficiently high stakes can justify overriding other values. This is a central implication of "moderate deontology": it's wrong to kill one as a means to save five, but obviously you should kill one innocent person if that's a necessary means to saving the entire world.

Now, crucially, in real life that is not actually a choice situation in which you could ever find yourself. The thought experiment comes with stipulated certainty; real life doesn't. So, much practical moral know-how comes down to having good judgment, including about how to manage your own biases so that you don't mistakenly take yourself to have fantastically strong reasons to do something that's actually disastrously counterproductive. This is why utilitarians talk a lot about respecting generally-reliable rules rather than naively taking expected value (EV) calculations at face value. Taking our fallibility seriously is crucial for actually doing good in the world.

Higher stakes make it all the more important to choose the consequentially better option. But they don't inherently make it more likely that a disreputable-seeming action is consequentially better. If "stealing to give" is a negative-EV strategy for ordinary charities, my default assumption is that it's negative-EV for longtermist causes too.[1] There are conceivable scenarios where that isn't so; but some positive argument is needed for thinking that any given real-life situation (like SBF's) takes this inverted form.
Raising the stakes doesn't automatically flip the valence. Many philosophers don't seem to understand this. Seth Lazar, for example, gave clear voice to (what we might call) academic philosophy's high-stakes distortion when he was interviewed on Hi-Phi Nation last year.[2] Lazar claimed that it's "intellectually inconsistent" to simultaneously hold that (i) there are astronomical stakes to longtermism and x-risk reduction, and yet (ii) it's also really important that you act with integrity....
Episode 124

You may think you're doing a priori reasoning, but actually you're just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:
* Why managing near-term and long-term risks isn't always zero-sum
* How to think through axioms and systems in political philosophy
* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
* (03:53) Attention allocation as an independent good (or bad)
* (08:22) Axioms in political philosophy
* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
* (15:05) AI safety / catastrophic risk concerns
* (22:10) Superintelligence arguments, reasoning about technology
* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
* (35:55) GPT-2, model weights, related debates
* (39:11) Power and economics — coordination problems, company incentives
* (50:42) Morality tales, relationship between safety and capabilities
* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
* (1:02:28) What is a feasibility horizon?
* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
* (1:14:25) Sociotechnical lenses, narrowly technical solutions
* (1:19:47) Experiments for responsibly integrating AI systems into society
* (1:26:53) Helpful/honest/harmless and antagonistic AI systems
* (1:33:35) Managing incentives conducive to developing technology in the public interest
* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
* (1:46:54) How we can help legitimize and support interdisciplinary work
* (1:50:07) Outro

Links:
* Seth's Linktree and Twitter
* Resources
* Attention, moral skill, and algorithmic recommendation
* Catastrophic AI Risk slides

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Recognize that AI is probably net harmful: Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe. https://betterwithout.ai/AI-is-harmful

Create a negative public image for AI: Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for AI would eliminate their incentive to fund it. https://betterwithout.ai/AI-is-public-relations

Seth Lazar's "Legitimacy, Authority, and the Political Value of Explanations": https://arxiv.org/ftp/arxiv/papers/2208/2208.08628.pdf

Kate Crawford's "Atlas of AI": https://www.amazon.com/dp/B08WKQ1MTM/?tag=meaningness-20

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks

If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold

Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
AI has already changed our lives and looks set to have a huge impact. How should we adapt our thinking about political philosophy in the light of this? The philosopher Seth Lazar explores this question in conversation with Nigel Warburton in this episode of the Philosophy Bites podcast.
Facial recognition technology is being rolled out across Australia with what some, including the Human Rights Commission, consider to be unseemly haste. Is mass societal face surveillance the next step, and will the current Covid crisis be an additional push factor? Rick Sarre, adjunct professor of law and criminal justice at the University of South Australia, and Seth Lazar, professor of philosophy at the Australian National University, take up the legal, ethical and privacy issues.
This year will mark the 18th anniversary of the war in Afghanistan, the forever war characterized by regime change, a surge, drawdowns, and then re-engagement across three presidential administrations. We take a retrospective of the entire war, from the forgotten events of the lead-up to its total financial and moral costs to date. Journalist Douglas Wissing and Professor Neta Crawford of the Cost of War project take us through the staggering amounts of money spent on prosecuting the war and the development of Afghanistan, and we investigate where the money went. Veterans who served at each stage of the conflict, from the Gen Xers of the early days to the millennials of the Obama surge, give us the changing, and unchanging, picture of the unending war. Finally, philosopher Seth Lazar and Barry talk about sunk costs and the role that thinking about past sacrifices plays in rationalizing the indefinite continuation of war. Special thanks to the veterans who gave their stories for this episode: Ian Fishback, Joshua Maxwell, Gaven Eier, Pat DeYoung, and Romario Ortiz. In the bonus content for Slate Plus members, Neta Crawford talks about the opportunity costs of the wars that can't be calculated, and Barry talks with Doug Wissing about the opium economy of Afghanistan. Get all bonus content and an ad-free version of this and every other Slate podcast at slate.com/hiphiplus. Learn more about your ad choices. Visit megaphone.fm/adchoices
How many innocent people should we be allowed to arrest and jail in order to prevent a single dangerous person from being free? The Supreme Court has refused to answer this question, but algorithms have, and many courts across the country are going with the algorithm. At different stages of the criminal justice system, computerized risk-assessment algorithms are slowly replacing bail hearings in determining who goes to jail and who goes free. This is widely seen as progressive reform, but may in fact be leading to more incarceration, not less. While many are warning that these algorithms are biased, racist, or based on bad data, the real problems are in fact much deeper, and even harder to solve. Guest voices include Megan Stevenson, John Raphling, Renee Bolinger, Georgi Gardiner, and Seth Lazar. Please help the show by taking a listener survey to give us feedback. slate.com/podcastsurvey To sign up for Slate Plus to get bonus content for this and every episode, and every episode ad-free, go to slate.com/hiphiplus Learn more about your ad choices. Visit megaphone.fm/adchoices
On today's episode, we have one major question for philosopher Seth Lazar: is it ever acceptable to kill civilians in war? As with all good questions in philosophy, it turned out to be a lot more complicated than we initially thought. The post Should Civilians Be Spared? with Seth Lazar appeared first on Prindle Institute.
Why is it morally wrong to target civilians in war? Can civilians be distinguished clearly from combatants? Seth Lazar discusses these issues in this episode of the Philosophy Bites podcast.
Nigel Warburton talks with Seth Lazar about the ethics and justification of killing in war.
Dr Seth Lazar gives a talk for the ELAC Hilary Term 2010 seminar series. This series is co-hosted by the ELAC and the University of Oxford Programme on the Changing Character of War (CCW). Dapo Akande is the discussant.