Podcasts about What We Owe The Future

  • 16 podcasts
  • 45 episodes
  • 36m average duration
  • 1 new episode per month
  • Latest: Apr 15, 2024

POPULARITY (trend chart, 2017–2024)


Best podcasts about What We Owe The Future

Latest podcast episodes about What We Owe The Future

Clearer Thinking with Spencer Greenberg
What should the Effective Altruism movement learn from the SBF / FTX scandal? (with Will MacAskill)

Apr 15, 2024 · 121:52


What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. He also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, which together have moved over $300 million to effective charities. He's the author of What We Owe The Future, Doing Good Better, and Moral Uncertainty.

Further reading:
• Episode 133: The FTX catastrophe (with Byrne Hobart, Vipul Naik, Maomao Hu, Marcus Abramovich, and Ozzie Gooen) — our previous podcast episode about what happened in the FTX disaster
• "Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? – three theories and a lot of evidence" — Spencer's essay about SBF's personality
• Why They Do It: Inside the Mind of the White-Collar Criminal by Eugene Soltes

Staff:
• Spencer Greenberg — Host / Director
• Josh Castle — Producer
• Ryan Kessler — Audio Engineer
• Uri Bram — Factotum
• WeAmplify — Transcriptionists
• Alexandria D. — Research and Special Projects Assistant

Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com

Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

Making Sense with Sam Harris - Subscriber Content
#361 - Sam Bankman-Fried & Effective Altruism

Apr 1, 2024 · 85:25


Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/361-sam-bankman-fried-effective-altruism

Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of "earning to give," the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.

William MacAskill is an associate professor of moral philosophy at Oxford University, and author of Doing Good Better, Moral Uncertainty, and What We Owe The Future. He cofounded the nonprofits 80,000 Hours, the Centre for Effective Altruism, and Giving What We Can, and helped to launch the effective altruism movement, which encourages people to use their time and money to support the projects that are most effectively making the world a better place. Website: www.williammacaskill.com Twitter: @willmacaskill

Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

The Nonlinear Library
EA - PhD on Moral Progress - Bibliography Review by Rafael Ruiz

Dec 10, 2023 · 70:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PhD on Moral Progress - Bibliography Review, published by Rafael Ruiz on December 10, 2023 on The Effective Altruism Forum.

Epistemic Status: I've researched this broad topic for a couple of years. I've read about 30+ books and 100+ articles on the topic so far (I'm not really keeping count). I've also read many other works in the related areas of normative moral philosophy, moral psychology, moral epistemology, moral methodology, and metaethics, since it's basically my area of specialization within philosophy. This project will be my PhD thesis. However, I still have 3 years of the PhD to go, so a substantial amount of my opinions on the matter are subject to change.

Disclaimer: I have received some funding as a Forethought Foundation Fellow in support of my PhD research. But all the opinions expressed here are my own.

Index:
Part I - Bibliography Review
Part II - Preliminary Takes and Opinions (I'm writing it, coming very soon!)
More parts to be published later on.

Introduction
Hi everyone, this is my first proper research-related post on the EA Forum, on a topic that I've been working on for several years, since even before my PhD, and now as part of my PhD in Philosophy at the London School of Economics. This post is the start of a series on my work on the topic of Moral Progress, which includes and intersects with Moral Circle Expansion (also called Inclusivism or Moral Inclusion), Social Progress, Social Movements, the mechanisms that drive progress and regress, the possibilities of measuring these phenomena, and policy or practical implications.

This first post is a bibliography review, which I hope will serve to orient future researchers who might want to tackle the same or similar topics. Hopefully it will help them save time by separating the wheat from the chaff, the good research articles and books from the minor contributions. Initially, I had my reservations about doing a bibliography review, since we now have GPT4, which is quite good at purely neutral descriptive summarizing, so I felt perhaps this work wasn't needed. However, having it as a good research assistant for pure facts also gives me the freedom to be more opinionated in my bibliography review. I'll try to tell you what I think is super worth reading, and what is "meh, skim it if you have free time", so you can sift through the literature in a more time-efficient way.

The eventual goal of the whole project would be to distil the main insights into a book on the topic of Moral Progress with serious contributions to the current literature within interdisciplinary moral philosophy, but that probably won't happen until I finish my PhD thesis manuscript around 2026. Then, after that, I'll have to rewrite that manuscript to turn it into a more accessible book, so it probably wouldn't be published until a later date. I'm also not sure yet whether it would be an academic book with a university press or something closer to What We Owe The Future, which aims to be accessible to a broader audience. So the finished work is quite a long way off. On the brighter side, I will publish some of the key findings and takeaways on the EA Forum, probably in summarized form rather than at the excruciatingly slow pace of writing in philosophy, which often takes 20 pages to make some minor points. Instead of that, I guess I'll post something closer to digestible bullet points with my views, attempting to foster online discussion, and then defend them in more detail over time and in the eventual book. Your feedback will of course be appreciated, particularly if it changes my mind on substantial issues, connects me with other researchers, etc. So let's put our collective brains together (this is a pun having to do with cultural evolution that you might not understand y...

The Antinatalist Advocacy Podcast
AAP Ep.5 - Responding to Will MacAskill on Antinatalism with @LawrenceAnton

Nov 15, 2023 · 79:21


In this episode, John and Lawrence respond to philosopher Will MacAskill, for many the leading figure of Effective Altruism, on the subject of antinatalism. Is antinatalism worth taking seriously? Would human extinction be bad? And are antinatalists welcome in the EA community? Listen to find out!

TIMESTAMPS
00:00 Intro
02:04 Purpose of this episode
05:29 Our thoughts on Will MacAskill
13:04 "Too nihilistic and divorced from humane values" comment
32:03 "Positively glad that the world exists" comment
38:09 Question 1: Does MacAskill take human extinction seriously enough?
56:01 Question 2: Are antinatalists welcome in the EA community?
1:10:36 A positive note to end on
1:18:05 Outro

ANTINATALIST ADVOCACY
Newsletter: https://antinatalistadvocacy.org/news...
Website: https://antinatalistadvocacy.org/
Twitter / X: https://twitter.com/AN_advocacy
Instagram: https://www.instagram.com/an_advocacy

Check out the links below!
- What We Owe The Future: https://amzn.eu/d/bO5Wqpo
- Doing Good Better: https://amzn.eu/d/cyPlhZY
- Reluctant Prophet of Effective Altruism: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
- The dismal dismissal of suffering-focused views: https://magnusvinding.com/2022/06/17/dismal-dismissal/
- Utilitarianism.net: https://www.utilitarianism.net/population-ethics/#person-affecting-views-and-the-procreative-asymmetry
- The Problem with Antinatalism: https://www.youtube.com/watch?v=zxuohL8Lx1o

The Nonlinear Library
EA - EA Germany's Mid-Year Update 2023 by Sarah Tegeler

Sep 19, 2023 · 9:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Mid-Year Update 2023, published by Sarah Tegeler on September 19, 2023 on The Effective Altruism Forum.

This post outlines our current status regarding the planned focus areas in our strategy for this year.

Background: EA Germany (EAD)
In the 2022 EA survey, 9.9% of all participants were from Germany, up from 7.4% in 2020, remaining the third-largest community behind the US and the UK. We are a membership association with 108 members, six board members, five employed team members (four FTEs), and ten contractors and volunteers (for EAGxBerlin 2023, the intro program, and the newsletter). In 2023, the association gained 35 new members and two new employees.

Local Groups
There are currently 27 active local groups in Germany, some of which are university-based, but most refer to themselves as local groups. Group sizes range from 5-50 active members. Overall, there are ~50-70 community builders and a total of >300 people regularly involved in the groups.

Impact Estimation
We have gathered data points about the outcomes of our programmes, which we will share in the following sections. Since, however, we are uncertain about how to interpret this data, we cannot be sure about the overall impact of our programs. In the future, we will focus on finding better evaluation criteria in order to estimate our impact.

Focus Areas: Foundational Programs
Foundational programs are either the continuation of existing programs or those that seem broadly beneficial to growing a sustainable community. We have established OKRs for each program and are reviewing them monthly.

Communications
We have been running and regularly updating effektiveraltruismus.de, the German website about effective altruism, since Effektiv Spenden handed it over to us in late 2022. They also handed us the newsletter, which enabled us to send our existing monthly German-language newsletter to more than 4,000 subscribers, compared to the 350 we had before. We are in regular exchange with Open Philanthropy grantees to coordinate the translation of articles from English into German. They have published some of their content on our website, and we have promoted their podcast with narrations of articles. Additionally, we have been helping to coordinate the publication of EA-relevant books in German, including the German launch of What We Owe The Future on 30th August 2023.

EAGx Conferences
We applied for and received funding to run EAGxBerlin on September 8-10, 2023 and hired a team of six people in order to do so. Additionally, we have organised meetups for German EAs at 4 EAG(x) conferences with ~5-50 attendees each.

Intro Program [formerly Intro Fellowship]
The Intro Program, which used to be called the "EA Intro Fellowship", was held in the winter of 22/23 and the summer of '23. During the last round, we received a peak of more than 100 applications. Around 60% of the participants completed the program successfully, and more than 90% of program participants described at least one relevant outcome, e.g. making an important professional connection, discovering a new opportunity, or getting a deeper understanding of a key idea.

Career 1-1s
We had ~160 calls and meetings related to career paths and decisions between January and June: ~60 at conferences, ~30 at retreats. The others were career calls or office hours. Recommendations came through our programs, 80,000 Hours, and the form on our website.

Community Health
We appointed our team member Milena Canzler as the German Community Health contact person, listing her contact details on our website while also including a German and English contact form. In several cases, we have already been able to provide support. Additionally, we provide materials and training for awareness teams at EA-related events in order to avoid negative experiences for and har...

Heja Framtiden
458. Will MacAskill: Securing humanity's long term future

Jun 23, 2023 · 31:29


Perhaps humanity is like a teenager - still young and reckless? In that case, we might have many long and happy years ahead of us. Or we could make a bold move and end things too soon. Will MacAskill is a moral philosopher at the University of Oxford. He is also one of the most influential people in the Effective Altruism movement, striving to create a better world using resources like our money and time. He co-founded the Centre for Effective Altruism, as well as Giving What We Can (allocating money) and 80,000 Hours (allocating time). Heja Framtiden met him in Stockholm while he was promoting his new book What We Owe The Future, which has been released in Swedish as Vad framtiden förtjänar. In the book, he makes the case for longtermism - that caring for future generations is a moral imperative for us here and now, and that preventing existential risks is one of the most important actions we can take to ensure that the teenager lives a long and flourishing life. // Podcast host: Christian von Essen // Recorded at Volante in Stockholm. // Listen to our other episodes in English here.

The Nonlinear Library
EA - FYI there is a German institute studying sociological aspects of existential risk by Max Görlitz

Feb 12, 2023 · 1:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum. The institute is called the Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. It started in 2021 and initially received €9 million in funding from the German government for the first four years. AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world. They have engaged with EA thinking, and I assume they will have an interesting outside perspective on some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skimmed through it so far), which mentions MIRI, FHI, and What We Owe The Future. I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre. Please comment if you have engaged with them before and know more than I do. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - Criticism of the main framework in AI alignment by Michele Campolo

Jan 31, 2023 · 12:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism of the main framework in AI alignment, published by Michele Campolo on January 31, 2023 on The AI Alignment Forum.

Originally posted on the EA Forum for the Criticism and Red Teaming Contest. Will be included in a sequence containing some previous posts and other posts I'll publish this year.

0. Summary
AI alignment research centred around the control problem works well for futures shaped by out-of-control misaligned AI, but not that well for futures shaped by bad actors using AI. Section 1 contains a step-by-step argument for that claim. In section 2 I propose an alternative which aims at moral progress instead of direct risk reduction, and I reply to some objections. I will give technical details about the alternative at some point in the future, in section 3. The appendix clarifies some minor ambiguities with terminology and links to other stuff.

1. Criticism of the main framework in AI alignment
1.1 What I mean by main framework
In short, it's the rationale behind most work in AI alignment: solving the control problem to reduce existential risk. I am not talking about AI governance, nor about AI safety that has nothing to do with existential risk (e.g. safety of self-driving cars). Here are the details, presented as a step-by-step argument.

1. At some point in the future, we'll be able to design AIs that are very good at achieving their goals. (Capabilities premise)
2. These AIs might have goals that are different from their designers' goals. (Misalignment premise)
3. Therefore, very bad futures caused by out-of-control misaligned AI are possible. (From 1 and 2)
4. AI alignment research that is motivated by the previous argument often aims at making misalignment between AI and designer, or loss of control, less likely to happen or less severe. (Alignment research premise) Common approaches are ensuring that the goals of the AI are well specified and aligned with what the designer originally wanted, or making the AI learn our values by observing our behaviour. In case you are new to these ideas, two accessible books on the subject are [1,2].
5. Therefore, AI alignment research improves the expected value of bad futures caused by out-of-control misaligned AI. (From 3 and 4)

By expected value I mean a measure of value that takes the likelihood of events into account and follows some intuitive rules such as "5% chance of extinction is worse than 1% chance of extinction". It need not be an explicit calculation, especially because it might be difficult to compare possible futures quantitatively, e.g. extinction vs dystopia. I don't claim that all AI alignment research follows this framework; just that this is what motivates a decent amount (I would guess more than half) of work in AI alignment.

1.2 Response
I call this a response, and not a strict objection, because none of the points or inferences in the previous argument is rejected. Rather, some extra information is taken into account.

6. Bad actors can use powerful controllable AI to bring about very bad futures and/or lock in their values. (Bad actors premise) For more information about value lock-in, see chapter 4 of What We Owe The Future [3].
7. Recall that alignment research motivated by the above points makes it easier to design AI that is controllable and whose goals are aligned with its designers' goals. As a consequence, bad actors might have an easier time using powerful controllable AI to achieve their goals. (From 4 and 6)
8. Thus, even though AI alignment research improves the expected value of futures caused by uncontrolled AI, it reduces the expected value of futures caused by bad human actors using controlled AI to achieve their ends. (From 5 and 7)

This conclusion will seem more, or less, relevant depending on the beliefs you have about its different components. An example: if you think t...
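To make the expected-value comparison in steps 5-8 concrete, here is a minimal Python sketch of the probability-weighted reasoning behind rules like "5% chance of extinction is worse than 1% chance of extinction". The scenarios and numbers below are invented for illustration; they are not from the post itself.

```python
# Toy probability-weighted comparison of two possible futures.
# All probabilities and values are invented for illustration only.

def expected_value(scenarios):
    """Sum of probability * value over mutually exclusive scenarios."""
    return sum(p * v for p, v in scenarios)

# Each future: [(probability, value on an arbitrary cardinal scale), ...]
world_a = [(0.05, -1000.0), (0.95, 100.0)]  # 5% extinction risk
world_b = [(0.01, -1000.0), (0.99, 100.0)]  # 1% extinction risk

# world_b comes out better: 0.01 * -1000 + 0.99 * 100 = 89,
# versus world_a's          0.05 * -1000 + 0.95 * 100 = 45.
assert expected_value(world_b) > expected_value(world_a)
```

As the post notes, nothing requires the comparison to be an explicit calculation like this; the sketch only makes the "intuitive rules" cited above checkable.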

The Nonlinear Library
EA - Good things that happened in EA this year by Shakeel Hashim

Dec 29, 2022 · 5:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good things that happened in EA this year, published by Shakeel Hashim on December 29, 2022 on The Effective Altruism Forum.

Crossposted from Twitter. As the year comes to an end, we want to highlight some of the incredible work done and supported by people in the effective altruism community — work that's helping people and animals all over the world.

1/ The team at Charity Entrepreneurship incubated five new charities this year, including the Center for Effective Aid Policy and Vida Plena — the first CE-incubated organisation to operate in Latin America.
2/ Over 1,400 new people signed the Giving What We Can Pledge, committing to giving away 10% or more of their annual income to effective charities. The total number of pledgers is now over 8,000!
3/ The work of The Humane League and other animal welfare activists led 161 new organisations to commit to using cage-free products, helping free millions of chickens from cruel battery cages.
4/ Open Philanthropy launched two new focus areas: South Asian Air Quality and Global Aid Policy. It's already made grants that aim to tackle pollution and increase the quality or quantity of foreign aid.
5/ Alvea, a new biotechnology company dedicated to fighting pandemics, launched and announced that it had already started animal studies for a shelf-stable COVID vaccine.
6/ Almost 80,000 connections were made at events hosted by @CentreforEA's Events team, prompting people to change jobs, start new projects and explore new ideas. EAGx conferences were held around the world — including in Berlin, Australia and Singapore.
7/ The EU Commission said it will "put forward a proposal to end the 'disturbing' systematic practice of killing male chicks across the EU" — another huge win for animal welfare campaigners.
8/ What We Owe The Future, a book by @willmacaskill arguing that we can — and should — help build a better world for future generations, became a bestseller in both the US and UK.
9/ New evidence prompted @GiveWell to re-evaluate its views on water quality interventions. It then made a grant of up to $64.7 million for @EvidenceAction's Dispensers for Safe Water water chlorination program, which operates in Kenya, Malawi and Uganda.
10/ Lots of members of the effective altruism community were featured on @voxdotcom's inaugural Future Perfect 50 list of the people building a better future.
11/ Fish welfare was discussed in the UK Parliament for the first time ever, featuring contributions from effective-altruism-backed charities.
12/ Researchers at @iGEM published a paper looking at how we might be able to better detect whether states are complying with the Biological Weapons Convention — work which could help improve biosecurity around the world.
13/ New research from the Lead Exposure Elimination Project showed the dangerous levels of lead in paint in Zimbabwe and Sierra Leone. In response, governments in both countries are working with LEEP to try to tackle the problem and reduce lead exposure.
14/ The EA Forum criticism contest sparked a bunch of interesting and technical debate. One entry prompted GiveWell to re-assess their estimates of the cost-effectiveness of deworming, and inspired a second contest of its own!
15/ The welfare of crabs, lobsters and prawns was recognised in UK legislation thanks to the new Animal Welfare (Sentience) Bill.
16/ Rethink Priorities, meanwhile, embarked on their ambitious Moral Weight Project to provide a better way to compare the interests of different species.
17/ At the @medialab, the Nucleic Acid Observatory project launched — working to develop systems that will help provide an early-warning system for new biological threats.
18/ Longview Philanthropy and @givingwhatwecan launched the Longtermism Fund, a new fund...

Intelligence Squared
The 12 Books of Christmas, Part 4 – Jonathan Freedland, Will MacAskill and Katherine Rundell

Dec 25, 2022 · 54:41


For our final look at some of the best books to have hit shelves in 2022, we dive back into standout discussions from the past 12 months, including Jonathan Freedland, whose book, The Escape Artist, tells the story of Auschwitz escapees Rudolf Vrba and Alfréd Wetzler. We also revisit our discussion with philosopher Will MacAskill, whose book, What We Owe The Future, claims that society needs to take a far longer-sighted view of how altruism can be effective. The book has also come under scrutiny in the latter half of 2022 due to its influence on the behaviour of billionaire philanthropists. We finish the 12 Books of Christmas with a sneak preview from our upcoming episode with author Katherine Rundell. Her latest book, The Golden Mole, celebrates the irreplaceable diversity found within the animal kingdom.

Did you know that Intelligence Squared offers way more than podcasts? We've just launched a new online streaming platform, Intelligence Squared+, and we'd love you to give it a go. It's packed with more than 20 years' worth of video debates and conversations on the world's hottest topics. Tune in to live events, ask your questions or watch back on-demand, totally ad-free, with hours of discussion to dive into for just £14.99 a month. Visit intelligencesquaredplus.com to start watching today.

The Nonlinear Library
EA - A Case for Voluntary Abortion Reduction by Ariel Simnegar

Dec 20, 2022 · 26:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Case for Voluntary Abortion Reduction, published by Ariel Simnegar on December 20, 2022 on The Effective Altruism Forum.

Trigger warning: Abortion is a delicate topic, especially for those of us who've had abortions or otherwise feel strongly on this issue. I've tried to make the following case with care and sensitivity, and if it makes anyone feel uncomfortable, I wholeheartedly apologize.

Disclaimer: This essay specifically concerns voluntary abortion reduction. Any discussion of involuntary intervention is outside of this post's scope. Thanks to Ives Parrhesia, Marcus Abramovitch, Ruth Grace Wong, and several anonymous helpers. Their help does not constitute an endorsement of this essay's conclusions.

Summary
Many EA principles point us towards supporting voluntary abortion reduction:
• Moral circle expansion. We're receptive to arguments that we should expand our moral circle to include animals and future people. We should be open to the possibility that fetuses—the future people closest to us—could be included in our moral circle too.
• Our concern for neglected and disenfranchised moral groups. If fetuses are moral patients, then they are relatively neglected and disenfranchised, with more abortions occurring each year than deaths by all causes combined.
• The metric of (adjusted) life years. We commonly use (adjusted) life years as a measure of the disvalue of problems and the value of interventions. This metric arguably doesn't distinguish between fetal deaths and infant deaths.
• Singerian duties to give to help those in need. We're typically sympathetic to arguments that we should proactively help those in need, even if it reduces our personal autonomy. We should consider whether we should help our children the same way.
• Longtermist philosophical views. Longtermists are typically receptive to total / low critical level views, non-person-affecting views, and pro-natalism. Just as these views seem to imply that we should care for people in the far future, they also seem to imply that we should care for fetuses, the future people closest to us.
• Moral uncertainty's implications for a potential problem of massive scale. Given abortion's massive scale, even a small chance that fetuses are moral patients could imply that we should do something about it.

In that regard, we should carry out the following interventions:
• Shift our family-focused interventions to spotlight mothers' physical and mental health, and support adoption as an option.
• Suspend our support for charities which reduce the amount of near-term future people until we can systematically review the effect of the above moral considerations on the morality of the charities' interventions.

In our personal lives, we should:
• Understand the situations of people we know who are considering abortion and do whatever we can to support them in having their babies the way they would like.
• Help each other to be loving parents and raise thriving children, whether or not some of us have abortions or choose to not have children.

Introduction: Moral Circle Expansion
"Future people count, but we rarely count on them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised. The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don't exist yet."
Will MacAskill, What We Owe The Future (2022), pp. 9-10.

As EAs, we're no strangers to expanding our moral circle. We're rooted in the idea that distance shou...

In Session with Dr. Farid Holakouee
December 12, 2022 Discussion on the book "What We Owe The Future," Role Play, Good Bad Feelings

Dec 16, 2022 · 44:42


December 12, 2022 Discussion on the book "What We Owe The Future," Role Play, Good Bad Feelings by Dr. Farid Holakouee

The Nonlinear Library
EA - [Link Post] If We Don't End Factory Farming Soon, It Might Be Here Forever. by BrianK

Dec 7, 2022 · 1:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] If We Don't End Factory Farming Soon, It Might Be Here Forever., published by BrianK on December 7, 2022 on The Effective Altruism Forum. “Do you know what the most popular book is? No, it's not Harry Potter. But it does talk about spells. It's the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies. As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day'—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India's unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims' abstinence from alcohol.” The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there's reason to believe we may be at an inflection point.” Read the rest on Forbes. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Martin Skadal podcast
Alex O' Connor: How to improve the world, vegan advocacy, philosophy++ | Martin Skadal podcast #02

Dec 4, 2022 · 97:44


Thank you for listening to my new podcast project!

Timestamps:
00:00 - Start
02:10 - Effective Ways to Improve the World
07:09 - Suits
11:13 - Tips on Vegan Advocacy
16:19 - Unpopular Opinion
26:09 - Cancel Culture
34:14 - Good Conversations/Communications
45:16 - Dostoevsky, Books, and Podcasts
52:28 - Animal Rights
59:10 - Taking Care of One's Self/Optimizing Life
01:04:45 - Music
01:08:23 - Patreon
01:12:06 - Difficult Choices and Dilemmas
01:17:13 - On Documentaries
01:19:52 - Protecting Species
01:22:18 - Mental Health and Philosophy
01:34:32 - Alex's Future Plans
01:36:05 - Outro

Specific links I referred to in this episode:
- Alex's CosmicSkeptic channel: @CosmicSkeptic
- Alex's Instagram: https://www.instagram.com/cosmicskeptic/
- "What We Owe The Future" by William MacAskill: https://whatweowethefuture.com/uk/
- Lex Fridman Podcast: @lexfridman
- Huberman Lab Podcast: @hubermanlab
- Modern Wisdom Podcast: @ChrisWillx
- Fyodor Dostoevsky books: i) https://en.wikipedia.org/wiki/Notes_from_Underground ii) https://en.wikipedia.org/wiki/Crime_and_Punishment iii) https://en.wikipedia.org/wiki/The_Brothers_Karamazov
- Jordan Peterson: @JordanBPeterson
- Joe Rogan Podcast: @joerogan
- Animal Liberation by Peter Singer: https://en.wikipedia.org/wiki/Animal_Liberation_(book)

As I want to run this podcast ad-free, the best way to support me is through Patreon: https://www.patreon.com/martinskadal

If you live in Norway, you can consider becoming a support member (støttemedlem) of the two organisations I run. It costs 50kr a year. The more members we have, the more influence we have, and the more funding we get as well. Right now we have around 500 members of World Saving Hustle (WSH) and 300 members of Altruism for Youth (AY).
• Become a support member of WSH: https://forms.gle/ogwYPF1c62a59TsRA
• Become a support member of AY: https://forms.gle/LSa4P1gyyyUmDsuP7

If you want to become a volunteer for World Saving Hustle or Altruism for Youth, send me an email and I'll forward it to our team. It might take some time before you get an answer as we're currently run by volunteers, but you'll get an answer eventually!

Do you have any feedback, questions, or suggestions for topics or guests? Let me know in the comment section. If you want to get in touch, the best way is through email: martin@worldsavinghustle.com

Thanks to everyone in World Saving Hustle backing up this project, and thanks to my creative partner Candace for editing this podcast! Thanks everyone and have an amazing day as always!!

• instagram: https://www.instagram.com/skadal/
• linkedin: https://www.linkedin.com/in/martinska...
• facebook: https://www.facebook.com/martinsskadal/
• twitter: https://twitter.com/martinskadal
• norwegian YT: https://www.youtube.com/@martinskadal353
• Patreon: https://www.patreon.com/martinskadal

The Nonlinear Library
EA - A Letter to the Bulletin of Atomic Scientists by John G. Halstead

Nov 23, 2022 · 3:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Letter to the Bulletin of Atomic Scientists, published by John G. Halstead on November 23, 2022 on The Effective Altruism Forum.

Tldr: This is a letter I wrote to the Climate Contributing Editor of the Bulletin of the Atomic Scientists, Dawn Stover, about Emile Torres' latest piece criticising EA. In short: In advance of the publication of the article, Ms Stover reached out to us to check on what Torres calls their most "disturbing" claim, viz. that Will MacAskill lied about getting advice from five climate experts. We showed them that this was false. The Bulletin published the claim anyway, and then tweeted it. In my opinion, this is outrageous, so I have asked them to issue a correction and an apology.

Dear Ms Stover,

I have long admired the work of the Bulletin of the Atomic Scientists. However, I am extremely disappointed by your publication of the latest piece by Emile Torres. I knew long ago that Torres would publish a piece critical of What We Owe the Future, and of me, following my report on climate change. However, I am surprised that the Bulletin has chosen to publish this particular piece in its current form. There are many things wrong with the piece, but the most important is that it accuses Will MacAskill and his research assistants of research misconduct. Specifically, Torres contends that five of the climate experts we listed in the acknowledgements for the book were not actually consulted.

Ms Stover: you contacted us about this claim in advance of the article's publication, and we informed you that it was not true. Overall, we consulted around 106 experts in the research process for What We Owe The Future. Torres suggests that five experts were never consulted at all, but this is not true — as Will stated in his earlier email to you, four of those five experts were consulted. I am happy to provide evidence for this. The article would have readers think that we made up the citations out of thin air. One of them was contacted but didn't have time to give feedback, and was incorrectly credited in the acknowledgements, which we will change in future editions: this was an honest mistake. The Bulletin also went on to tweet the false claim that multiple people hadn't been consulted at all.

The acknowledgements are also clear that we are not claiming that those listed checked and agreed with every claim in the book. Immediately after the acknowledgements of subject-matter experts, Will writes: "These advisers don't necessarily agree with the claims I make in the book, and all errors in the book are my responsibility alone."

To accuse someone of research misconduct is a very serious allegation. After you check it and find out that it is false, it is extremely poor form to let the claim go out anyway and then to tweet it. The Bulletin should issue a correction to the article, and to the false claim they put out in a tweet. I also have concerns about the nature of Torres' background work for the article — they seemingly sent every person that was acknowledged in the book a misleading email, telling them that we lied in the acknowledgements, and making some reviewers quite uncomfortable. To reiterate, I am very disappointed by the journalistic standards demonstrated in this article.
I will be publishing something separately about Torres' (as usual) misrepresented substantive claims, but the most serious allegation of research misconduct needs to be retracted and we need an apology. (Also, a more minor point: it's not true that I am Head of Applied Research at Founders Pledge. I left that role in 2019.) John Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Review: What We Owe The Future by Kelsey Piper

Nov 21, 2022 · 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum. For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:

What is the longtermist worldview? First — that humanity's potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible. Here there's little disagreement among effective altruists. The catch is the qualifier: "if possible." When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never "because future people aren't of moral importance"; it's usually "because I don't think we can predictably affect the lives of future people in the desired direction." As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.

The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical. I think we're in a dangerous world, one with perils ahead for which we're not at all prepared, one where we're likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.

If we grant MacAskill's premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Announcing the first issue of Asterisk by Clara Collier

Nov 21, 2022 · 1:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum.

Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTXcourse? Distract yourself with the inaugural issue of Asterisk Magazine, out now! Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting).

In this issue:
• Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises.
• Kevin Esvelt talks about how we can prevent the next pandemic.
• Jared Leibowich gives us a superforecaster's approach to modeling monkeypox.
• Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progress.
• Stuart Ritchie tries to find out if the replication crisis has really made science better.
• Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't.
• Scott Alexander asks: is wine fake?
• Karson Elmgren on the history and future of China's semiconductor industry.
• Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat.

A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Open call for EA stories by arushigupta

Oct 4, 2022 · 3:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open call for EA stories, published by arushigupta on October 4, 2022 on The Effective Altruism Forum.

TLDR: Tell us your stories! We are making an open call for EA stories that might be a good fit for a variety of media projects, including: a written profile series on EA websites, video interview series, podcast interviews, or coverage in newspapers and magazines. Submit your stories at:.

Every day, people in the effective altruism movement do an incredible amount of impactful work, directly helping to make the world a better place. As part of our communications strategy, we'd like to find those stories—and share them with the world. The goal is to have a wide variety of media that shows lots of examples of what effective altruism looks like in practice, rather than in theory. Especially with the recent launch of "What We Owe The Future", there has been a lot of mainstream media discussion of the theoretical and philosophical ideas behind effective altruism and longtermism. Our goal is to also encourage more coverage and discussion of the practical work that people in, or inspired by, the effective altruism community are doing in the real world, and the kinds of impact they are creating.

We'd like your suggestions of stories that we should try to share. We're interested in a wide range of stories, including personal career journeys, stories of failure, big successes that have been accomplished, and potential big successes that are in the works. These could be stories about individuals or organizations. Ideally, we would love stories across a wide range of cause areas. You can either share your own story, or nominate other people/stories that you think we should consider (but ideally these would be people and projects that are comfortable with being more widely known).

Some examples of what these stories could be used for include: a written profile series on EA websites, video interview series, interviews on EA and non-EA podcasts, or coverage in newspapers and magazines. We will reach out to some of the people/organizations whose stories seem like a good fit and help them move forward with sharing their stories more widely. Examples of how we will help include: connecting them with resources like media training, connecting them with journalists, and/or helping to develop a communications strategy.

The stories we're most likely to find exciting probably have some combination of the following qualities:
• A compelling story/protagonist (e.g. a clear conflict, or an interesting journey)
• Work that illustrates EA values or thinking styles
• Relatively easy for a wide audience to understand why it's important or exciting
• New, interesting data that is well-suited to visualizations

If you're unsure about your story, we encourage you to have a low bar for submission!

Team involved:
• Shakeel Hashim is the head of communications at CEA, and is focused on communicating EA ideas accurately outside EA. He will be leading this project.
• Mike Levine is a Principal at TSD, a strategic communications firm that has worked with Open Philanthropy for several years.
• Arushi Gupta was a recent EA Communications Fellow, and formerly the Co-Director of Effective Altruism NYC.

The project is still early right now, and we'd appreciate any feedback in the comments on how you think we can best approach this, or what you'd like to see us do with these stories. You can also reach out to shakeel@effectivealtruism.org with any feedback, questions, or story submissions. Go forth and send us your stories! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Munk Debates Podcast
Munk Dialogue: What do we owe the future?

Sep 28, 2022 · 39:54


Most societies commemorate and revere distant ancestors with portraits, statues, streets, buildings, and holidays. We are fascinated with the pyramids in Egypt, Stonehenge in England, and the earliest origins of our species in the savannas of Africa. Our interest in humankind's deep past has created a collective blind spot about the prospects of our distant descendants thousands of years into the future. For most of us, the deep future is a fantasy world, something you read about in science fiction novels. But a growing number of thinkers are pushing back against the attitude that the future is a hypothetical we can discount in favour of the here and now. Instead, they argue it's high time we start thinking seriously about the idea that humanity may only be in its infancy. That as a species we could potentially be around for thousands of years, with trillions of fellow humans to be born, each with vast potential to shape our future evolution, possibly even beyond Earth. In sum, humankind urgently needs a thousand-year plan or it risks losing millennia of human progress to the existential risks that stalk our all too dangerous present.

William MacAskill is a leading global thinker on how humanity could and should think about a common future for itself and the planet. He is an associate professor in philosophy at the University of Oxford and co-founder of Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, all philosophically inspired projects which together have raised hundreds of millions of dollars and hundreds of thousands of life years to support charities working to preserve humankind's potential for the millennia to come. He is the author of the international bestsellers Doing Good Better and What We Owe The Future.

QUOTE: "The future could be very big, indeed, at least if we don't cause humanity's untimely demise in the next few centuries. We could have a very large future ahead of us. And that means that if there is anything that would impact the well-being of, not just the present generation, but all generations to come, that would be of enormous moral importance."

The host of the Munk Debates is Rudyard Griffiths - @rudyardg. Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/. To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com.

To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership. Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, newsletter and ticketing privileges at our live events. This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/

Producer: Marissa Ramnanan
Editor: Adam Karch

The Nonlinear Library
EA - What We Owe The Future: a Flashcard-based Summary by Florence

Sep 12, 2022 · 3:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future: a Flashcard-based Summary, published by Florence on September 12, 2022 on The Effective Altruism Forum.

Disclaimer: I am building the learning and memory tool called Thought Saver, which allows you to embed flashcards on the forum!

TL;DR: You can revise the key takeaways from What We Owe the Future with our new flashcard deck on Thought Saver. Scroll to the end of this post to see the embedded flashcards.

Will MacAskill's new book What We Owe The Future has been out for a few weeks now. I am sure many of you finished it on the first day, some are reaching the last hurdles, or maybe you've been too busy and you need that extra push to get the book! We have a set of key facts and takeaways from the book so that you can remember the most important parts after you've read it! After all, how often do you read a book and forget all the key ideas? Remembering key facts from WWOTF can help you to set a better mental base rate, enabling you to evaluate how the magnitude of one problem compares to another.

Example of better base rates
Without a base rate, it's hard to understand the size of a problem. For example, it's very hard to comprehend how important the future lives of humanity are (and how much we should care about future humans) unless we know: A) how many humans there are on the earth today (8 billion) and B) how many humans have lived across the entirety of humanity (100 billion). However, knowing both of these facts helps us better understand the problem's order of magnitude.

The team at Thought Saver and I use flashcards to remember the most important things in life, things that we want to recall at the tip of our fingers. We use flashcards when we want to reprogram actions, remember base rates, use a mental model, or put learning into action.

Why spaced repetition for books?
Spaced repetition is one of the most effective ways to remember key ideas! When I read a book, I want to make sure I take away the most important information. We already spend ~6 hours reading a book, but if we don't revise the material again afterward, we often remember very little! Investing a little bit of extra time in spaced repetition can have high payoffs for you in the future. So instead of just reading WWOTF, we suggest you also revise flashcards about it using a spaced repetition system - you could say that you owe it to your future self. :P

Flashcards
Here is a sneak peek of our Thought Saver flashcards for each chapter of the book. Thought Saver ranks your flashcards with our algorithm based on an "importance" score, so if a flashcard isn't valuable to you, select "never," and you don't need to remember it forever!
Chapter 1: The Case for Longtermism
Chapter 2: You Can Shape the Course of History
Chapter 3: Moral Change
Chapter 4: Value Lock-In
Chapter 5: Extinction
Chapter 6: Collapse
Chapter 7: Stagnation
Chapter 8: Is it Good to Make Happy People?
Chapter 9: Will the Future Be Good or Bad?
Chapter 10: What to Do

Anki Flashcards
Prefer to use Anki? Get the same set of flashcards in the Anki format here. Note: this link won't work for 24 hours, as it takes 24 hours to become shareable on AnkiWeb!

Acknowledgment
Thanks to Will MacAskill's team for all their help in creating this deck!

Questions
Is this valuable to you? Would you like to see more key takeaways from books in flashcard format? I'd love to hear your thoughts in the comments!

Relevant Links
• Thought Saver
• Accelerated Learning Article
• People talking about Flashcards
• 300 Flashcards to tackle pressing world problems
• Andre's flashcard Articles
• "How I use Anki" Article
• Other EA Forum content on SRS

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
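For readers curious how spaced-repetition scheduling works under the hood: the post doesn't spell out Thought Saver's ranking algorithm, so the following is only a generic Leitner-style sketch in Python. The box intervals and card fields are invented for illustration; this is not Thought Saver's or Anki's actual scheme.

```python
from datetime import date, timedelta

# Generic Leitner-style spaced-repetition sketch (NOT Thought Saver's or
# Anki's real algorithm). Box intervals below are invented for illustration.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}  # box number -> days until review

def review(card, remembered):
    """Promote a remembered card one box (capping at box 5), demote a
    forgotten card back to box 1, then schedule its next review date."""
    card["box"] = min(card["box"] + 1, 5) if remembered else 1
    card["due"] = date.today() + timedelta(days=INTERVALS[card["box"]])
    return card

card = {"front": "How many humans have ever lived?", "back": "~100 billion",
        "box": 1, "due": date.today()}
review(card, remembered=True)  # card moves to box 2, due again in 3 days
```

The longer a card survives reviews, the rarer they get, which is where the post's "small extra time investment, high payoff" claim comes from.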

Clearer Thinking with Spencer Greenberg
Estimating the long-term impact of our actions today (with Will MacAskill)

Sep 7, 2022 · 66:27


Read the full transcript here. What is longtermism? Is the long-term future of humanity (or life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one in a million chance of stopping World War III, then should I devote my life to pursuing that plan?) If we want to know, say, whether protests are effective or not, we merely need to gather and analyze existing data; but how can we estimate whether interventions implemented in the present will be successful in the very far future?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator–backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe The Future.
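The one-in-a-million question in this blurb is, at bottom, a naive expected-value calculation. As a toy worked example (the payoff figure is an assumption for illustration, not from the episode), valuing the outcome at the world's roughly eight billion lives:

$$ \mathbb{E}[\text{lives saved}] = p \cdot V = 10^{-6} \times (8 \times 10^{9}) = 8{,}000 $$

On this naive account, even a one-in-a-million shot at stopping World War III is "worth" thousands of lives in expectation, which is precisely why the episode probes how much weight such tiny, hard-to-estimate probabilities can bear (cf. the optimizer's curse).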

Slate Star Codex Podcast
Book Review: What We Owe The Future

Slate Star Codex Podcast

Play Episode Listen Later Aug 23, 2022 53:26


https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future

I.

An academic once asked me if I was writing a book. I said no, I was able to communicate just fine by blogging. He looked at me like I was a moron, and explained that writing a book isn't about communicating ideas. Writing a book is an excuse to have a public relations campaign.

If you write a book, you can hire a publicist. They can pitch you to talk shows as So-And-So, Author Of An Upcoming Book. Or to journalists looking for news: “How about reporting on how this guy just published a book?” They can make your book's title trend on Twitter. Fancy people will start talking about you at parties. Ted will ask you to give one of his talks. Senators will invite you to testify before Congress. The book itself can be lorem ipsum text for all anybody cares. It is a ritual object used to power a media blitz that burns a paragraph or so of text into the collective consciousness.

If the point of publishing a book is to have a public relations campaign, Will MacAskill is the greatest English writer since Shakespeare. He and his book What We Owe The Future have recently been featured in the New Yorker, New York Times, Vox, NPR, BBC, The Atlantic, Wired, and Boston Review. He's been interviewed by Sam Harris, Ezra Klein, Tim Ferriss, Dwarkesh Patel, and Tyler Cowen. Tweeted about by Elon Musk, Andrew Yang, and Matt Yglesias. The publicity spike is no mystery: the effective altruist movement is well-funded and well-organized, they decided to burn “long-termism” into the collective consciousness, and they sure succeeded.

The Nonlinear Library
EA - Protest movements: How effective are they? by James Ozden

The Nonlinear Library

Play Episode Listen Later Aug 22, 2022 15:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protest movements: How effective are they?, published by James Ozden on August 22, 2022 on The Effective Altruism Forum.

Social Change Lab is an EA-aligned non-profit that conducts and disseminates social movement research. For the past six months, we've been researching the outcomes of protests and protest movements using a variety of methods, including literature reviews, polling (see our previous post on the EA Forum here, which goes into more detail), and interviewing experts and policymakers. Today, we're releasing an in-depth report on the work we've done in the last six months that relates to protest outcomes. We'll also be releasing another report soon on the factors that make social movements more or less likely to succeed. Specifically, we're looking at just how much of a difference protest movements can make, and the areas in which they seem to be particularly effective.

We think this is relevant to Effective Altruists for a few reasons. Firstly, protests and grassroots activities seem to be a tactic that social movements can use which has been fairly neglected by EAs. As Will MacAskill points out in What We Owe The Future, social movements such as Abolitionism have had a huge impact in the past, and we think that it's likely that they will do so again in the future. It seems extremely valuable to look at this in more detail: how impactful are protests and grassroots pressure? What are the mechanisms by which they can make a difference? Is it mostly by influencing public opinion, the behaviour of legislators, corporations, or something else?

Secondly, Effective Altruism is itself a social movement. Some interesting work has been done before (for instance, this post on why social movements sometimes fail by Nuño Sempere), but we think it seems valuable to consider in more detail both the impact that social movements can have and what makes them likely to succeed or fail (which we'll cover in a report that we intend to release soon). Research on how different social movements achieved outsized impacts seems like it would be useful in helping positively shape the future impact of Effective Altruism.

We hope that you enjoy reading the report, and we would be hugely appreciative of any feedback about what we've been doing so far (we're a fairly new organisation and there are definitely things we still have to learn). The rest of this post includes the summary of the report, as well as the introduction and methodology. The full results, examining protest movements' outcomes on public opinion, policy change, public discourse, voting behaviour and corporate behaviour, are best seen in the full report here.

Executive Summary

Social Change Lab has undertaken six months of research looking into the outcomes of protests and protest movements, focusing primarily on Western democracies, such as those in North America and Western Europe. In this report, we synthesise our research, which we conducted using various research methods. These methods include literature reviews, public opinion polling, expert interviews, policymaker interviews and a cost-effectiveness analysis. This report only examines the impacts and outcomes of protest movements. Specifically, we mostly focus on the outcomes of large protest movements with at least 1,000 actively involved participants.
This is because we want to understand the impact that protests can have, rather than the impact that most protests will have. Due to this, our research looks at unusually influential protest movements, rather than the median protest movement. We think this is a reasonable approach, as we generally believe protest movements are a hits-based strategy for doing good. In short, we think that most protest movements have little success in achieving their aims, or otherwise po...

The Nonlinear Library
EA - "Call off the EAs": Too Much Advertising? by tae

The Nonlinear Library

Play Episode Listen Later Aug 20, 2022 1:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Call off the EAs": Too Much Advertising?, published by tae on August 19, 2022 on The Effective Altruism Forum.

Today, my non-EA friend asked me, "Hey, could you call off the EAs?" He heard an ad for What We Owe The Future in a podcast yesterday. "Nice," he thought; "I've heard of Will MacAskill." Then he heard another ad in a different podcast. And then he watched three YouTube videos in a row, each from different creators, that again included ads. Now he is genuinely annoyed.

This exact pattern showed up in the Carrick Flynn campaign. So many EA funds went into advertising that people got frustrated by how many ads they received. There is such a thing as too much money in advertising! Oregon voters came to believe that Carrick was funded by bottomless money to be a crypto shill. And now smart people, who watch YouTube channels and listen to podcasts that are EA-adjacent, will rightfully get suspicious about who is putting this much money into promoting... a book about philosophy?

Hope this anecdotal experience informs how we publicize/market/advertise in the future!

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Daily Stoic
Will MacAskill on Creating Lasting Change

The Daily Stoic

Play Episode Listen Later Aug 17, 2022 70:32


Ryan talks to professor and writer Will MacAskill about his book What We Owe The Future, how to create effective change in the world, the importance of gaining a better perspective on the world, and more.

Will MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. His research focuses on the fundamentals of effective altruism - the use of evidence and reason to help others as much as possible with our time and money - with a particular concentration on how to act given moral uncertainty. He is the author of the upcoming book What We Owe The Future, available for purchase on August 16. Will also wrote Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference and co-authored Moral Uncertainty.

✉️ Sign up for the Daily Stoic email: https://dailystoic.com/dailyemail

The Nonlinear Library
EA - What We Owe The Future: A review and summary of what I learned by Michael Townsend

The Nonlinear Library

Play Episode Listen Later Aug 17, 2022 13:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future: A review and summary of what I learned, published by Michael Townsend on August 16, 2022 on The Effective Altruism Forum.

Will MacAskill's new book, What We Owe The Future has just been released in the US and will be available in the UK from September 1. You might already be turning to book reviews or podcasts to inform whether you should buy a copy. To help, I'm writing a quick summary of the book, sharing three new insights I gained, and three questions it left me asking. But it's worth being upfront to the reader about where I sit. MacAskill entwines rigorous arguments with compelling metaphors to promote a profoundly important idea: we can make the future go better, and we should. It's filled with rich, relevant and persuasive historical examples, grounding his philosophical arguments in the real world. It's a book for people who are curious to learn, but also motivated to act — I strongly recommend it.

Summary of What We Owe The Future

Main argument
The book makes a case for longtermism, the view that positively influencing the long-term future is a key moral priority of our time. The overarching argument is simple:

1. Future people matter.
2. The future could be enormously valuable (or terrible).
3. We can positively influence the long-term future.

1. Future people matter
The book argues for the first claim at the outset in straightforward and intuitive terms, but MacAskill also takes the reader through rigorous arguments grappling with population ethics, the area of philosophy that focuses on these sorts of questions.

2. The future could be enormously valuable (or terrible)
MacAskill's argument for the second claim is that there are far more people who could potentially live in the future than have ever lived in the past. On certain assumptions about the average future population and expected lifespan of the human species, the number of people who could live in the future dramatically outweighs the number of people who have ever lived. This kind of analysis may have inspired Our World In Data's visualisation of how vast the long-term future could be. I recommend Kurzgesagt's “The Last Human — A Glimpse Into The Far Future” which evocatively draws out the potential magnitude of our long-term future. A substantial amount is at stake: if the future goes well, it could be enormously valuable, but if it doesn't, it could be terrible.

3. We can positively influence the long-term future
The third claim is the central focus of the book. MacAskill aims not just to argue that we can in principle influence the long-term future (which is the standard of most philosophical arguments) but that we can and here's how (the standard for those who want to take action). MacAskill argues that one of the best ways to focus on the long-term future is to reduce our risk of extinction. Though he also argues that it's not just about whether we survive; it's also about how we survive. The case for focusing on the ways we can improve the quality of the long-term future is one of the key lessons I took from the book.

Things I learned from reading What We Owe The Future
Much of what I read was new to me, even as someone who's been highly engaged with these ideas. If I were to list all the historical examples that were new to me, I'd essentially be rewriting the book. Instead, here are the top three lessons I learned.
Lesson one: Today's values could have easily been different. One of the book's key ideas is that if we could re-run history again, it's unlikely we'd end up with the same values — instead, they're contingent. This is not something I believed before reading the book. If I can make a personal confession: I'm a (perhaps naive) supporter of a philosophical view called hedonistic-utilitarianism, which claims the best actions are those that increase the total amount o...

The Nonlinear Library
EA - What We Owe The Future is out today by William MacAskill

The Nonlinear Library

Play Episode Listen Later Aug 16, 2022 3:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future is out today, published by William MacAskill on August 16, 2022 on The Effective Altruism Forum.

So, as some of you might have noticed, there's been a little bit of media attention about effective altruism / longtermism / me recently. This was all in the run-up to my new book, What We Owe The Future, which is out today!

I think I've worked harder on this book than I've worked on any other single project in my life. I personally spent something like three and a half times as much work on it as on Doing Good Better, and I got enormous help from my team, who contributed more work in total than I did. At different times, that team included (in alphabetical order): Frankie Andersen-Wood, Leopold Aschenbrenner, Stephen Clare, Max Daniel, Eirin Evjen, John Halstead, Laura Pomarius, Luisa Rodriguez, and Aron Vallinder. Many more people helped immensely, such as Joao Fabiano with fact checking and the bibliography, Taylor Jones with graphic design, AJ Jacobs with storytelling, Joe Carlsmith with strategy and style, and Fin Moorhouse and Ketan Ramakrishnan with writing around launch. I also benefited from the in-depth advice of dozens of academic consultants and advisors, and dozens more expert reviewers. I want to give a particular thank-you and shout-out to Abie Rohrig, who joined after the book was written, to run the publicity campaign. I'm immensely grateful to everyone who contributed; the book would have been a total turd without them.

The book is not perfect — reading the audiobook made vivid to me how many things I'd already like to change — but I'm overall happy with how it turned out. The primary aim is to introduce the idea of longtermism to a broader audience, but I think there are hopefully some things that'll be of interest to engaged EAs, too: there are deep dives on moral contingency, value lock-in, civilisation collapse and recovery, stagnation, population ethics, and the value of the future. It also tries to bring a historical perspective to bear on these issues more often than is usual in the standard discussions.

The book is about longtermism (in its “weak” form) — the idea that we should be doing much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.) Some of you have worried (very reasonably!) that we should simplify messages to “holy shit, x-risk!”. I respond to that worry here: I think the line of argument is a good one, but I don't see promoting concern for future generations as inconsistent with also talking about how grave the catastrophic risks we face in the next few decades are.

In the comments, please AMA - questions don't just have to be about the book; they can be about EA, philosophy, fire raves, or whatever you like! (At worst, I'll choose not to reply.) Things are pretty busy at the moment, but I'll carve out a couple of hours next week to respond to as many questions as I can.

If you want to buy the book, here's the link I recommend: (I'm using different links in different media because bookseller diversity helps with bestseller lists.) If you'd like to help with the launch, please also consider leaving an honest review on Amazon or Goodreads!

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

80,000 Hours Podcast with Rob Wiblin
#136 – Will MacAskill on what we owe the future

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 15, 2022 174:36


1. People who exist in the future deserve some degree of moral consideration.
2. The future could be very big, very long, and/or very good.
3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

Links to learn more, summary and full transcript.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently. In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever.
In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:

• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

The Valmy
Will MacAskill - Longtermism, Altruism, History, & Technology

The Valmy

Play Episode Listen Later Aug 12, 2022 56:07


Podcast: Dwarkesh Podcast Episode: Will MacAskill - Longtermism, Altruism, History, & TechnologyRelease date: 2022-08-09Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future.We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.Episode website + Transcript here.Follow Will on Twitter. Follow me on Twitter for updates on future episodes.Subscribe to find out about future episodes!Timestamps(00:23) - Effective Altruism and Western values(07:47) - The contingency of technology(12:02) - Who changes history?(18:00) - Longtermist institutional reform(25:56) - Are companies longtermist?(28:57) - Living in an era of plasticity(34:52) - How good can the future be?(39:18) - Contra Tyler Cowen on what's most important(45:36) - AI and the centralization of power(51:34) - The problems with academiaPlease share if you enjoyed this episode! Helps out a ton!TranscriptDwarkesh Patel 0:06Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.Will MacAskill 0:20Thanks so much for having me on.Effective Altruism and Western valuesDwarkesh Patel 0:23My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?Will MacAskill 0:32Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.We can go back to ancient China, the Mohists defended an impartial view of morality, and took very strategic actions to help all people. In particular, providing defensive assistance to cities under siege. Then, there were early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until early 2010 after GiveWell and Giving What We Can launched.What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people which wasn't possible otherwise. There were some particularly lucky events like Alex meeting Holden and me meeting Toby that helped catalyze it at the particular time it did.Dwarkesh Patel 1:49If it's true, as you say, in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.Will MacAskill 2:09Absolutely. Taking history seriously and appreciating the contingency of values, appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. 
What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.Dwarkesh Patel 2:56So that makes a lot of sense. But I'm asking a slightly separate question—not only are there possible values that could be better than ours, but should we expect our values - we have the sense that we've made moral progress (things are better than they were before or better than most possible other worlds in 2100 or 2200)- should we not expect that to be the case? Should our priors be that these are ‘meh' values?Will MacAskill 3:19Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are other values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is to think that we're doing better than average.If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.Dwarkesh Patel 4:14If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing. Maybe you're risking regression to the mean if you just have 1,000 years of progress.Will MacAskill 4:30Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What's right, morally speaking? What's their best arguments support?” I think it's a weak force, unfortunately.The idea of lumbar flexion is getting society into a state that before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and goodhearted model inquiry to guide which values we end up with.Are we unwise?Dwarkesh Patel 5:05In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean Longtermists and reject these inside-view esoteric threats?Will MacAskill 5:32My view goes the opposite of the Burkean view. We are cultural creatures in our nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. It works well in a low-change environment. The environment we evolved towards didn't change very much. We were hunter-gatherers for hundreds of years.Now, we're in this period of enormous change, where the economy is doubling every 20 years, new technologies arrive every single year. That's unprecedented. 
It means that we should be trying to figure things out from first principles.Dwarkesh Patel 6:34But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?Will MacAskill 6:47Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subject if you want to do good in the world is philosophy of economics. But we've got that in abundance compared to there being very little historical knowledge in the EA community.Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.The contingency of technologyDwarkesh Patel 7:47In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history is not that large? And it implies a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter?Will MacAskill 8:17In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine, semaphore was only developed very late yet could have been invented thousands of years in the past.But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth are so strong, that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.Dwarkesh Patel 9:10It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?Will MacAskill 9:22In that particular case, there's a big debate about whether quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.It's still contingency on the order of centuries to thousands of years. Post industrial-revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.Dwarkesh Patel 9:57The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?”Will MacAskill 10:11Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? 
Would similar technologies that were vital to the industrial revolution developed? Yes, there are very strong incentives for doing so.If there's a culture that's into making textiles in an automated way as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.Dwarkesh Patel 11:06When people think of somebody like Norman Borlaug and the Green Revolution. It's like, “If you could have done something that, you'd be the greatest person in the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the green revolution would have happened anyways?Will MacAskill 11:22Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But, it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.Who changes history?Dwarkesh Patel 12:02What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?Will MacAskill 12:12Not quite moral philosophers, although there are some examples. Sticking on science technology, if you look at Einstein, theory of special relativity would have been developed shortly afterwards. However, theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But, we're still only talking about decades rather than millennia. Moral philosophers could make long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run difference. Moral activists as well.Dwarkesh Patel 13:04If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?Will MacAskill 13:20As things turned out, Marx will not be influential over the long term future. But that could have gone another way. It's not such a wildly different history. Rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better the ruling ideology is to cement its ideology and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.Let's say a world-government is based around those ideas, then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that possesses forever in accordance with that ideology.Dwarkesh Patel 14:20The death of dictators is especially interesting when you're thinking about contingency because there are huge changes in the regime. It makes me think the actual individual there was very important and who they happened to be was contingent and persistent in some interesting ways.Will MacAskill 14:37If you've got a dictatorship, then you've got single person ruling the society. 
That means it's heavily contingent on the views, values, beliefs, and personality of that person.Scientific talentDwarkesh Patel 14:48Going back to the second nation, in the book, you're very concerned about fertility. It seems your model about scientific and technological progress happens is number of people times average researcher productivity. If resource productivity is declining and the number of people isn't growing that fast, then that's concerning.Will MacAskill 15:07Yes, number of people times fraction of the population devoted to R&D.Dwarkesh Patel 15:11Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion dollar company—do these examples suggest that organization and congregation of researchers matter more than the total amount?Will MacAskill 15:36The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany was because all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at like Sparta versus Athens, what was the difference? They had different cultures and intellectual inquiry was more rewarded in Athens.Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.Dwarkesh Patel 16:58If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?Will MacAskill 17:14I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things from a certain environment with the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.Longtermist institutional reformDwarkesh Patel 18:00I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?Will MacAskill 18:23The thing I'll caveat with longtermist institutions is that I'm pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is you have a random selection from the population? 
How would you ensure that incentives are aligned?In 30-years time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30-years time, both their policies and their assessment of the previous 30-years previous assembly get assessed by another assembly, 30-years after that, and so on. Can you get that to work? Maybe in theory—I'm skeptical in practice, but I would love some country to try it and see what happens.There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes, and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.Dwarkesh Patel 20:30If you are on that board that is judging these people, is there a metric like GDP growth that would be good heuristics for assessing past policy decisions?Will MacAskill 20:48There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of super forecasters predicting the chance of a war between great powers occurring in the next ten years that gets aggregated into a war index.That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed into because you wouldn't want something only incentivizing economic growth at the expense of tail risks.Dwarkesh Patel 21:42Would that be your objection to a scheme like Robin Hanson's about maximizing the expected future GDP using prediction markets and making decisions that way?Will MacAskill 21:50Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs, if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It's an idea of speculative political philosophy about how a society could be extraordinarily different in structure that is incredibly neglected.Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn't been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You have laws about what things can be voted on or predicted in the prediction market, you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising and I'd love to see it tried out on a city-level or something.Dwarkesh Patel 23:13Let's take a scenario where the government starts taking the impact on the long-term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There're environmental review boards that will try to assess the environmental impact of new projects and repeal any proposals based on certain metrics.The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. 
With longtermism, it takes a long time to assess the actual impact of something, but policymakers are tasked with evaluating the long term impacts of something. Are you worried that it'd be a system that'd be easy to game by malicious actors? And they'd ask, “What do you think went wrong with the way that environmentalism was codified into law?”Will MacAskill 24:09It's potentially a devastating worry. You create something to represent future people, but they're not allowed to lobby themselves (it can just be co-opted). My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and elect jobs by people in 30-years time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term rather than things that are narrowly-focused.Regulation to have liability insurance for dangerous bio labs is not about trying to represent the interests of future generations. But, it's very good for the long-term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long-term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But, there are major problems with implementation for any of them.Dwarkesh Patel 25:35If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?Will MacAskill 25:46Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could've had some system that wouldn't have been co-opted in the long-term.Are companies longtermist?Dwarkesh Patel 25:56Theoretically, the incentives of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can't be around if there's an existential risk…Will MacAskill 26:18I don't think so. Different institutions have different rates of decay associated with them. So, a corporation that is in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas, if you look at universities Oxford and Cambridge are 800 years old. University of Bologna is even older. These are very long-lived institutions.For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, the legends can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.Dwarkesh Patel 27:16Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?Will MacAskill 27:24Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time? 
No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The future: you get moments of plasticity during the formation of a new institution.Whether that's the Christian church or the Constitution of the United States, you lock-in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months seems to have stood the test of time. Alternatively, lock-in norms could be extremely dangerous. There were horrible things in the U.S. Constitution like the legal right to slavery proposed as a constitutional amendment. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.Living in an era of plasticityDwarkesh Patel 28:57You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?Will MacAskill 29:04There are specific types of ‘moments of plasticity' for two reasons. One is a world completely unified in a way that's historically unusual. You can communicate with anyone instantaneously and there's a great diversity of moral views. We can have arguments, like people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views become the most popular ultimately.Secondly, we're at this period where things can really change. But, it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there was a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long-term? The key technology here is Artificial Intelligence. The point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that [ideological conformity] could persist.Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other kind of sources of value-change over time? I think they can be accounted for too.Dwarkesh Patel 30:46Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that bit of reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?Will MacAskill 31:01The “whether” you have is whether the control will happen before the point of space settlement. If we took to space one day, and there're many different settlements and different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But, I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?Dwarkesh Patel 31:53Hm, right. 
Going back to the long-term of the longtermism movement, there are many instructive foundations that were set up about a century ago like the Rockefeller Foundation, Carnegie Foundation. But, they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?Will MacAskill 32:18I don't have strong views about those particular examples, but I have two natural thoughts. For organizations that want to persist a long time and keep having an influence for a long time, they've historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston to pay out after 100 years and then 200 years for different fractions of the amount invested. But, he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would have had plausibly more impact.The second is a ‘regression to the mean' argument. You have some new foundation and it's doing an extraordinary amount of good as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you're changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.Dwarkesh Patel 33:40Going back to that hand problem: if you specify your mission too narrowly and it doesn't make sense in the future—is there a trade off? If you're too broad, you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?Will MacAskill 34:11It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit more broad than that. Secondly, I expect people in the future to be smarter and more capable. It's certainly the trend over time. In which case, if we're sharing similar broad goals, and they're implementing it in a different way, then they have it.How good can the future be?Dwarkesh Patel 34:52Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?Will MacAskill 35:11Yeah, I'm familiar with the argument. But, we should distinguish the idea that ems are at subsistence level from the idea that we would have bad lives. So subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. 
At subsistence, those ems could still have lives that are thousands of times better than ours.Dwarkesh Patel 36:02Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life every moment. But, 31% of Americans said that they would not want to relive their life at every moment? So, why are Indians seemingly much happier at less than a tenth of the GDP per capita?Will MacAskill 36:29I think the numbers are lower than that from memory, at least. From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, happier with their lives than people in the US were. Honestly, I don't want to generalize too far from that because we were sampling comparatively poor Americans to comparatively well-off Indians. Perhaps it's just a sample effect.There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusion from that. But, it is pretty striking as a piece of information, given that you find people's well-being in richer countries considerably happier than poorer countries, on average.Dwarkesh Patel 37:41I guess you do generalize in a sense that you use it as evidence that most lives today are living, right?Will MacAskill 37:50Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness and wouldn't want to be reborn and live the same life if they could.There's another scripture study that looks at people in United States/other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would reach you to the end of whatever activity you're engaging with. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.If you look at that, and then also asked people about the trade offs they would be willing to make as a measure of intensity of how much they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life that day as being surveyed worse than if they'd been unconscious the entire day.Contra Tyler Cowen on what's most importantDwarkesh Patel 39:18Jumping topics here a little bit, on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.Do you think this could be a general problem with longtermism? If you were trying to find the most important things that are important long-term, you might be missing things that wouldn't be obvious thinking this way?Will MacAskill 39:48Yeah, I think that's a risk. 
Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a third world war, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions. It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth.

I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true for Tyler Cowen, but people in Effective Altruism were realizing at an early stage that the coronavirus pandemic was going to be a big deal; they had been worrying about pandemics far in advance. There are some things that are actually quite predictable. For example, Moore's Law has held up for over 70 years. The ideas that AI systems are going to get much larger and that leading models are going to get more powerful are on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things and there are going to be tons of surprises, there are some things, especially fairly long-standing technological trends, about which we can make reasonable predictions, at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that we know which things are important now. But if something from a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
On the question of me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. But we might get suggestive evidence earlier. If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy. Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that that are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
What you were saying earlier about the contingency of technology implies that, even on their worldview of doing whatever has had the most impact in the past, if what's had the most impact in the past is changing values, then wouldn't changing values, rather than boosting economic growth or trying to change the rate of economic growth, be the most important thing?

Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which of the things they did made sense, and which didn't.
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was that Britain would run out of coal and, therefore, that future generations would be impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that, with continued innovation, we would be able to transition away from coal. Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground (given that Britain at the time was using much lower amounts of coal, so small that the climate-change effect is negligible at that level) probably would have been harmful.

But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage: he was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good; it seems to have stood the test of time. That's one historical data point, but potentially we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; but on the negative side, it might prevent things like human challenge trials, and cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
On the question of global integration, you're absolutely right: it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most of the possible value that way.

The solution is doing the good bits and not having the bad bits. For example, with a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, at the global level, you can have very strong regulation and treaties that deal only with certain global public goods, like mitigation of climate change or prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, but it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, they tend to gain more power over time. You can look at the U.S.; you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promotes moral diversity, moral change, and moral progress.
But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level, extremely rapid technological progress within the next 10-20 years. If so, you're right: value change is something that pays off slowly over time. I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it's not that long. That's not change that happens on the order of centuries. If you look at other moral movements, like the gay rights movement, there's been very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put very significant probability mass on the idea that we will not hit some end of history this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold potato diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold potato diet, and I was tempted to try it as a gimmick. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the boards of the organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start doing a lot of good for the long term. I've had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions. How contingent is history? How happy are people, generally? Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are not incentivized anywhere in the world. Companies don't incentivize work on them because they're too big-picture.
Some of these questions are: Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation? There's almost no work done on any of these topics. Companies aren't interested because the questions are too grand in scale.

Academia has developed a culture where you don't tackle such problems. Partly that's because they fall through the cracks between different disciplines; partly it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It wasn't always that way. If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology, and probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they currently exist. It's extremely hard to break in and create something very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if others don't consider them the most important questions? Big-picture thinking, but also looking at very specific questions and issues that come up. A super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and the first of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to try to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours, if you want to use your career to do good, is a place to go for advice on which careers have the biggest impact. They provide one-on-one coaching too. If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are places you can go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill
Thanks so much, I loved it.

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

The Valmy
Will MacAskill - Longtermism, Altruism, History, & Technology

The Valmy

Play Episode Listen Later Aug 12, 2022 56:07


Podcast: The Lunar Society (LS 30 · TOP 5%)
Episode: Will MacAskill - Longtermism, Altruism, History, & Technology
Release date: 2022-08-09

Will MacAskill is one of the founders of the Effective Altruism movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The Nonlinear Library
EA - Announcing the Longtermism Fund by Michael Townsend

The Nonlinear Library

Play Episode Listen Later Aug 11, 2022 8:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Longtermism Fund, published by Michael Townsend on August 11, 2022 on The Effective Altruism Forum.

Longview Philanthropy and Giving What We Can would like to announce a new fund for donors looking to support longtermist work: the Longtermism Fund. In this post, we outline the motivation behind the fund, reasons you may (or may not) choose to donate using it, and some questions we expect donors may have.

What work will the Longtermism Fund support?

The fund supports work that:
- Reduces existential and catastrophic risks, such as those coming from misaligned artificial intelligence, pandemics, and nuclear war.
- Promotes, improves, and implements key longtermist ideas.

The Longtermism Fund aims to be a strong donation option for a wide range of donors interested in longtermism. The fund focuses on organisations that:
- Have a compelling and transparent case in favour of their cost effectiveness that most donors interested in longtermism will understand; and/or
- May benefit from being funded by a large number of donors (rather than one specific organisation or donor) — for example, organisations promoting longtermist ideas to the broader public may be more effective if they have been democratically funded.

There are other funders supporting longtermist work in this space, such as Open Philanthropy and the FTX Future Fund. The Longtermism Fund's grantmaking is managed by Longview Philanthropy, which works closely with these other organisations, and is well positioned to coordinate with them to efficiently direct funding to the most cost-effective organisations. The fund will make grants approximately once each quarter.

To give donors a sense of the kind of work within the fund's scope, here are some examples of organisations the fund would likely give grants to if funds were disbursed today:
- The Johns Hopkins Center for Health Security (CHS) — CHS is an independent research organisation working to improve organisations, systems, and tools used to prevent and respond to public health crises, including pandemics.
- Council on Strategic Risks (CSR) — CSR analyses and addresses core systemic risks to security. They focus on how different risks intersect (for example, how nuclear and climate risks may exacerbate each other) and seek to address them by working with key decision-makers.
- Centre for Human-Compatible Artificial Intelligence (CHAI) — CHAI is a research organisation aiming to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.
- Centre for the Governance of AI (GovAI) — GovAI is a policy research organisation that aims to build “a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI.”

The vision behind the Longtermism Fund

We think that longtermism as an idea and movement is likely to become significantly more mainstream — especially with Will MacAskill's soon-to-be-released book, What We Owe The Future, and popular creators becoming more involved in promoting longtermist ideas. But what's the call to action? For many who want to contribute to longtermism, focusing on their careers (perhaps by pursuing one of 80,000 Hours' high-impact career paths) will be their best option. But for many others — and perhaps for most people — the most straightforward and accessible way to contribute is through donations.

Our aim is for the Longtermism Fund to make it easier for people to support highly effective organisations working to improve the long-term future. Not only do we think that the money this fund will move will have significant impact, we also think the fund will provide another avenue for the broader community to engage with and implement these...

The Lunar Society
36: Will MacAskill - Longtermism, Altruism, History, & Technology

The Lunar Society

Play Episode Listen Later Aug 9, 2022 56:07


Will MacAskill is one of the founders of the Effective Altruism movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what’s most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken up.

We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people. In particular, they provided defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn’t get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn’t possible otherwise. There were some particularly lucky events, like Alex meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously means appreciating the contingency of values: appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
So that makes a lot of sense. But I'm asking a slightly separate question: not only are there possible values that could be better than ours, but should we expect our values to be good? We have the sense that we've made moral progress (that things are better than they were before, or better than in most possible other worlds in 2100 or 2200). Should we not expect that to be the case? Should our priors be that these are ‘meh’ values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are the values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky draw, then you're possibly reversing it. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What’s right, morally speaking? What do the best arguments support?” I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view, esoteric threats?

Will MacAskill 5:32
My view goes in the opposite direction of the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented.
It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects, if you want to do good in the world, are philosophy and economics. But we've got those in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth? That even if bad things happen, you can rebound, and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?”

Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? Would similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If there's a culture that's into making textiles in an automated way, as England was in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, “If you could have done something like that, you'd be the greatest person of the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyways?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was a huge force for good in the world. But had Norman Borlaug not existed, I don’t think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural developments.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards anyway. However, the theory of general relativity was plausibly decades in advance. Sometimes you get surprising leaps, but we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent long-run differences. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, it was communism. And the better technology gets, the better able a ruling ideology is to cement itself and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century. Let’s say a world government is based around those ideas; then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society. That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, the number of people times the fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion-dollar company—do these examples suggest that the organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is that the political landscape changed in the 10th/11th century AD, such that what was incentivized was theological investigation rather than scientific investigation.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. Yet you have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we’re working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.

Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat with longtermist institutions is that I’m pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population.
How would you ensure that incentives are aligned? In 30 years' time, their performance would be assessed by a panel of people who look back and assess the policies’ effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30 years' time have both their policies and their assessment of the previous assembly assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I’m skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously just by telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30
If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48
There are some things you could use: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years, which gets aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in, because you wouldn't want something that only incentivizes economic growth at the expense of tail risks.

Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson’s, about maximizing the expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50
Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson’s idea of voting on values but betting on beliefs, people can vote on what collection of goods they want—GDP and unemployment might be good metrics—and beyond that, it's pure prediction markets. It's something I'd love to see tried. It’s a piece of speculative political philosophy about how a society could be extraordinarily different in structure, and it's incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed, or are simply not liquid enough. There hasn’t been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things: you could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising, and I'd love to see it tried out at a city level or something.

Dwarkesh Patel 23:13
Let’s take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There are environmental review boards that will try to assess the environmental impact of new projects and reject any proposals based on certain metrics. The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, yet policymakers would be tasked with evaluating the long-term impacts of things. Are you worried that it'd be a system that'd be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not able to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals, like having a representative body that assesses these things and is judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust over the long term rather than narrowly focused.

Regulation to require liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term, even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, do you have an idea of how environmentalism could have been codified better? Why was it not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter, or if you could’ve had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can’t be around if there’s an existential risk…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. A corporation that is in the top 200 biggest companies has a half-life of only ten years. It’s surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make, versus a university, versus a religious institution.

Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talk about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that’s the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things, like the legal right to slavery that was proposed as a constitutional amendment. If that had locked in, it would have been horrible. It's hard to answer in the abstract, because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
This is a specific type of ‘moment of plasticity’ for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments; people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there were a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it’s very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53
Hm, right.
Going back to the long term of the longtermism movement: there are many instructive foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation. But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. First, organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years and then 200 years for different fractions of the amount invested. But he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you’re in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would plausibly have had more impact.

The second is a ‘regression to the mean’ argument. You have some new foundation, and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you’re changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40
Going back to that dead hand problem: if you specify your mission too narrowly, it may not make sense in the future—so is there a trade-off? If you're too broad, do you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?

Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. It’s certainly the trend over time. In which case, if we’re sharing similar broad goals and they're implementing them in a different way, then that's fine.

How good can the future be?

Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson’s argument that we’ll end up being subsistence-level ems because there'll be a lot of competition, and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11
Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive every moment of their life. But 31% of Americans said that they would not want to relive every moment of their life? So, why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least. From memory, it’s something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn’t. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, and happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that, because we were comparing comparatively poor Americans to comparatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusion from that. But it is pretty striking as a piece of information, given that you find people in richer countries considerably happier than those in poorer countries, on average.

Dwarkesh Patel 37:41
I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness, and wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that, in a blink, you would reach the end of whatever activity you're engaging in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life, on the day they were surveyed, as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what’s most important

Dwarkesh Patel 39:18
Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect that scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important. Do you think this could be a general problem with longtermism? If you're trying to find the things that are most important for the long term, you might be missing things that wouldn't be obvious thinking this way?

Will MacAskill 39:48
Yeah, I think that's a risk. Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a Third World War, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions. It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth.

I dispute that. It's not something I can give a completely knock-down argument against, because we don’t know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in Effective Altruism were realizing at an early stage that the Coronavirus pandemic was going to be a big deal; they had been worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for over 50 years. The ideas that AI systems are going to get much larger and that leading models are going to get more powerful are on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn’t feel too controversial. It's hard to predict loads of things, and there are going to be tons of surprises. But there are some things, especially when it comes to fairly long-standing technological trends, where we can make reasonable predictions — at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that we know which things are important now. But if something from a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
On the question of me versus Patrick Collison and Tyler Cowen, who is correct: we will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier, though. If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy. Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that which are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
Doesn't what you were saying earlier about the contingency of technology imply that, even given their worldview of maximizing what has had the most impact in the past, if what's actually had the most impact in the past is changing values, then that might be the most important thing, rather than economic growth or trying to change the rate of economic growth?

Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: what things did they do that made sense, and what didn't?
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was that Britain would run out of coal and, therefore, that future generations would be impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation. Secondly, they had enormously wrong views about how much coal and fossil fuels there were in the world. So, that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time was burning much lower amounts of coal—so small that the climate-change effect is negligible at that level.

But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good; that seems to have stood the test of time. That's one historical data point. But potentially, we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; but on the negative side, it might prevent things like human challenge trials, or cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
On the question of global integration, you're absolutely right: it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on the most possible value that way.

The solution is getting the good bits without the bad bits. For example, in a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, in the world, you can have very strong regulation and treaties that only deal with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, they tend to gain more power over time. You can look at the U.S.; you can look at the European Union. But yeah, that seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promoting of moral diversity, moral change, and moral progress. But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress, and an extremely rapid rate of technological progress, within the next 10-20 years. If so, you’re right: value changes are something that pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it's not that long before it's very large. That's not growth or change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put very significant probability mass on the idea that we will not hit the end of history this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold potato diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold potato diet, and I was tempted, as a gimmick, to try it. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on mashed potatoes with butter if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the board of those organizations I've helped to set up. More recently, that's meant working closely with the Future Fund, SBF’s new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long term. I’ve had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions—How contingent is history? How happy are people generally?—Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are just not incentivized anywhere in the world. Companies don't incentivize work on them because they’re too big-picture. Some of these questions are: “Is the future good, rather than bad? If there were a global civilizational collapse, would we recover? How likely is a long stagnation?” There’s almost no work done on any of these topics. Companies aren't interested; the questions are too grand in scale.

Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly, it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always use to be that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology. Probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they are currently done. It's extremely hard to break in and create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if those aren't the questions everyone else considers most important? Big-picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and September 1 in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to try to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact. They provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, and you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39
Thanks so much, I loved it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The Nonlinear Library
EA - [link post] The Case for Longtermism in The New York Times by abierohrig

The Nonlinear Library

Play Episode Listen Later Aug 5, 2022 0:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [link post] The Case for Longtermism in The New York Times, published by abierohrig on August 5, 2022 on The Effective Altruism Forum. An adapted excerpt from What We Owe The Future by Will MacAskill is now live in The New York Times and will run as the cover story for their Sunday Opinion this weekend. I think the piece makes for a great concise introduction to longtermism, so please consider sharing the piece on social media to boost its reach! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Some updates in EA communications by abierohrig

The Nonlinear Library

Play Episode Listen Later Aug 2, 2022 4:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some updates in EA communications, published by abierohrig on August 2, 2022 on The Effective Altruism Forum. In May, Julia Wise posted about how EA will likely get more attention soon and the steps that EA organizations are taking to prepare. This is a brief follow-up post that includes some specific actions that EA organizations are taking in light of this increase in attention. We wanted to provide this update in particular due to the upcoming launch of What We Owe The Future by Will MacAskill, which is releasing on August 16th in the US and September 1st in the UK. Will is doing an extensive podcast tour to promote the book, as well as several interviews and articles in high-profile newspapers, many of which are releasing in the two weeks before the book launch. We are hoping that the media around the launch will help fill a gap in accurate, public-facing descriptions of longtermism. You shouldn't be surprised if there's a significant uptick in public discourse about both effective altruism and longtermism in August! Below are some updates about EA communications activity in the last couple of months.

New Head of Communications at CEA

The Centre for Effective Altruism recently hired Shakeel Hashim as Head of Communications, to focus on communicating EA ideas accurately outside EA. Shakeel is currently a news editor at The Economist, and will be starting at CEA in early September. As several recent forum posts have noted, there has thus far been somewhat of a vacuum for a proactive EA and longtermism press strategy, and we hope that Shakeel will help to fill that gap. This will include seeking coverage of EA and EA-related issues in credible media sources, building strong relationships with spokespeople in EA and advising them on opportunities to talk to journalists, and generally helping to represent effective altruism accurately. Importantly, Shakeel will not focus on leading communications for CEA as an organization, but rather for the overall EA movement, in coordination with many partner organizations.

Greater coordination between organizations on EA communications

As Julia mentioned in her original post, staff at CEA, Forethought Foundation, Open Philanthropy, and TSD (a strategic communications firm that has worked with Open Philanthropy for several years) have been meeting regularly to discuss both immediate communications issues and our long-term strategy. Some immediate focuses have been on responding to incoming news inquiries, preparing spokespeople for interviews, and preparing possible responses to articles about EA and longtermism. We've also drafted a longer-term strategy for how we can communicate about EA and related ideas.

New Intro to EA Essay

CEA has also just posted a new Introduction to Effective Altruism article. The article may go through some further iterations in the coming weeks, but we think it functions as a far better and more up-to-date description of effective altruism than the previous essay. We believe this new essay should serve as a main illustration of communications best practices in action: it uses examples of good EA-inspired projects to illustrate core values like prioritization, impartiality, truthseeking, and collaborative spirit; it recognizes the importance of both research to identify problems and the practical effort to ameliorate them; and it foregrounds both global health and wellbeing and longtermist causes. We hope this essay, beyond being a soft landing spot for people curious about EA, can serve as talking points for EA communicators. We welcome comments and feedback on the essay, but please don't share it too widely (e.g. on Twitter) yet: we want to improve this further and then advertise it.

Resources

If you'd like to flag any concerns or questions you have about media pieces, please email media@centreforeffe...

EARadio
Fireside chat | Will MacAskill | EA Global: London 2021

EARadio

Play Episode Listen Later Jun 5, 2022 58:45


William MacAskill is an Associate Professor in Philosophy at Oxford University and a senior research fellow at the Global Priorities Institute. He is also the director of the Forethought Foundation for Global Priorities Research and co-founder and President of the Centre for Effective Altruism. He is the author of Doing Good Better and Moral Uncertainty, and has an upcoming book on longtermism called What We Owe The Future. This talk was taken from EA Global: London 2021; a video version of the talk is also available.

80,000 Hours Podcast with Rob Wiblin
#130 - Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later May 23, 2022 136:40


Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office. But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause - but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP. You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have. This is roughly the situation faced by today's guest Will MacAskill - University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement. Links to learn more, summary and full transcript. Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing. While surely a huge success, it brings with it risks that he's never had to consider before:

* Will and his colleagues might try to spend a lot of money trying to get more things done more quickly - but actually just waste it.
* Being seen as profligate could strike onlookers as selfish and disreputable.
* Folks might start pretending to agree with their agenda just to get grants.
* People working on nearby issues that are less flush with funding may end up resentful.
* People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
* Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

But all these 'risks of commission' have to be weighed against 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious. People looking askance at you for paying high salaries to attract the staff you want is unpleasant. But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant - it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside. Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today's episode, Rob and Will discuss the above as well as:

* Will humanity likely converge on good values as we get more educated and invest more in moral philosophy - or are the things we care about actually quite arbitrary and contingent?
* Why are so many nonfiction books full of factual errors?
* How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
* What does Will disagree with his colleagues on?
* Should we focus on existential risks more or less the same

The Nonlinear Library
EA - "Big tent" effective altruism is very important (particularly right now) by Luke Freeman

The Nonlinear Library

Play Episode Listen Later May 20, 2022 7:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Big tent" effective altruism is very important (particularly right now), published by Luke Freeman on May 20, 2022 on The Effective Altruism Forum. This August, when Will MacAskill launches What We Owe The Future, we will see a spike of interest in longtermism and effective altruism more broadly. People will form their first impressions – these will be hard to shake. After hearing of these ideas for the first time, they will be wondering things like:

Who are these people? (Can I trust them? Are they like me? Do they have an ulterior agenda?)
What can I do (literally right now, and also how might it shape my decisions over time)?
What does this all mean for me and my life?

If we're lucky, they'll investigate these questions. The answers they get matter (and so does their experience finding those answers). I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or we can become a movement of people that appear united by a shared commitment to using reason and evidence to do the most good we can. In the former case, I expect we'd become a much smaller group: easier to coordinate, but also more easily dismissed. People might see us as a bunch of nerds who have read too many philosophy papers and who are out of touch with the real world. In the latter case, I'd expect us to become a much bigger group. I'll admit that it's also a group that's harder to organise (people are coming at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I'd bet on the latter option. I don't believe we can – nor should – simply tinker on the margins forever or try to act as a "shadowy cabal". As we grow, we will start pushing for bigger and more significant changes, and people will notice. We've already seen this with the increased media coverage of things like political campaigns and prominent people that are seen to be EA-adjacent. A lot of these first impressions we won't be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our "shop fronts" (e.g. prominent organisations, local and university groups, conferences etc.). I recently had a pretty disheartening exchange where I heard from a new GWWC member who'd started to help run a local group and felt "discouraged and embarrassed" at an EAGx conference. They left feeling like they weren't earning enough to be "earning to give" and that they didn't belong in the community if they weren't doing direct work (or didn't have an immediate plan to drop everything and change). They said this "poisoned" their interest in EA. Experiences like this aren't always easy to prevent, but it's worth trying. We are aware that we are one of the "shop fronts" at Giving What We Can. So we're currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples:

We're focusing on providing easily legible options (e.g. larger organisations with an understandable mission and strong track record, instead of more speculative small grants that foundations are better placed to make) and easier decisions (e.g. "I want to help people now" or "I want to help future generations").
We're also cautious about how we talk about The Giving What We Can Pledge, to ensure that it's framed as an invitation for those who want it and not an admonition of those for whom it's not the right fit.
We're working to ensure that people who first come across EA via effective giving can find their way to the actions that best fit them (e.g. by introducing them to the broader EA community).
We often cros...

The Nonlinear Library
EA - "Long-Termism" vs. "Existential Risk" by Scott Alexander

The Nonlinear Library

Play Episode Listen Later Apr 6, 2022 5:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Long-Termism" vs. "Existential Risk", published by Scott Alexander on April 6, 2022 on The Effective Altruism Forum. The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism"). Will MacAskill describes long-termism as the view that positively influencing the long-term future is a key moral priority of our time. I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.

In The Very Short Run, We're All Dead

AI alignment is a central example of a supposedly long-termist cause. But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (e.g. Eliezer Yudkowsky) think it might happen even sooner. Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one. But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?" Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know. The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100. The average biosecurity project being funded by the Long-Term Future Fund or FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice. Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries. "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too. In practice I rarely see long-termists working on these except when they have shorter-term effects.
I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years. Long...

The Nonlinear Library
EA - Announcing What The Future Owes Us by peterbarnett

The Nonlinear Library

Play Episode Listen Later Apr 1, 2022 1:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing What The Future Owes Us, published by peterbarnett on April 1, 2022 on The Effective Altruism Forum. I am excited to announce the sequel to the recently announced book What We Owe The Future. This new book will be called What The Future Owes Us, and will focus primarily on the common criticism "why should I do anything for the future if they won't do anything for me?" The first half of the book dives into ways in which the future can do things for us. Sometimes this is in straightforward ways, such as carrying forward our values and continuing our legacies. But it will be argued that there are much more impactful and direct ways the future can do things for us via acausal trade and via the future's acausal influence over the past (our present). Due to the vast number of potential people in the future, we expect that they will be able to exert a large amount of acausal control over the past (the specifics of this argument will be fleshed out by the future author). The second half of the book looks at ways we can ensure the future acts in the best interest of the past, just as we have acted in their best interests. We will investigate commitment devices, and novel metaphysics which may allow this to be possible. Ultimately it will be argued that we in the present can control the future, which can in turn have a positive impact on us today. Hopefully, this book will bring more moral egoists to longtermism, because the best way to help oneself may be to help the future. The book is not up for preorder yet, but we do have cover designs. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - What are great marketing ideas to encourage pre-orders of What We Owe The Future? by abierohrig

The Nonlinear Library

Play Episode Listen Later Mar 31, 2022 1:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are great marketing ideas to encourage pre-orders of What We Owe The Future?, published by abierohrig on March 30, 2022 on The Effective Altruism Forum. What We Owe The Future, Will MacAskill's upcoming book on longtermism, is available for pre-order! Pre-orders are especially helpful for the book's eventual reach since (a) early pre-orders cause booksellers to order more copies of the book and market it more prominently, and (b) pre-orders combine with first-week sales, making the book more likely to reach bestseller lists upon launch. In mid-June, 8 weeks before the launch of the book on August 16, we'll begin a pre-order campaign to the general public. I'd love to hear ideas on (a) how to encourage pre-orders within the EA community in the next couple of months, and (b) how to encourage pre-orders from the general public, especially from young people who might switch to high-impact careers. Note that once the book is launched, we'll have more leeway to do large book giveaways, akin to current giveaways of The Precipice. Right now, "organic" sales (orders of 20 or fewer copies coming from individual accounts rather than giveaways or reimbursement schemes) are critical. I'm interested in all takes, but here are some specific questions that might be helpful:

How should we promote the book at upcoming EAGs? (Beyond emails to attendees, we're considering giving away free bookmarks with QR codes to pre-order, and are open to creative marketing ideas that are not overbearing to attendees!)
Are there specific influential people that you wish EA was engaging more that we should share the book with?
Are there specific shows or websites that you watch regularly that you think we should advertise with or pitch content to?
Which endorsements would be most impactful for the book? (See our top endorsements so far here)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Announcing What We Owe The Future by William MacAskill

The Nonlinear Library

Play Episode Listen Later Mar 30, 2022 7:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing What We Owe The Future, published by William MacAskill on March 30, 2022 on The Effective Altruism Forum.

Summary

My new book on longtermism, What We Owe The Future, will come out this summer.
You can now pre-order a copy (US link, UK link).
If you want to help with the book launch, I think pre-ordering is probably the highest-impact thing you can do right now.

Longer Summary

I've written a book!! It's called What We Owe The Future. It makes the case for longtermism — the view that positively affecting the long-run future is a key moral priority of our time — and explores what follows from that view. As well as the familiar topics of AI and extinction risk, it also discusses value lock-in, civilisational collapse and recovery, technological stagnation, population ethics, and the expected value of the future. I see it as a sequel to Doing Good Better, and a complement to The Precipice. I think I've probably worked harder on this project than on any other in my life, and I'm excited that the launch date is finally now in sight: Aug 16th in the US and Sep 1st in the UK. I'm looking forward to being able to share it and discuss it with you all!

I'm now focused on trying to make the book launch go well. I'd really like to use the launch as a springboard for wider promotion of longtermism, trying to get journalists and other influencers talking about the idea. In order to achieve that, a huge win would be to hit The New York Times Bestseller list. And in order to achieve that, getting as many pre-orders as possible is crucial. In particular, there's a risk that the book is perceived by booksellers (like Amazon and Barnes and Noble) as being "just another philosophy book", which means they buy very few copies to sell to consumers. This means that the booksellers don't put any effort into promoting the book, and they can even literally run out of copies (as happened with Superintelligence after the Elon tweet). Preorders are the clearest way for us to demonstrate that WWOTF (or, as I prefer, "WTF?") is in a different reference class. For these reasons, I think that pre-ordering WWOTF is probably the highest value thing you can do to help with the book launch right now. The US link to pre-order is here, the UK link is here, and for all other countries you can use your country's Amazon link.

[Cover images: US version, UK version]

About the book

My hope for WWOTF is that it will be like an Animal Liberation for future generations: impacting broadly how society thinks about the interests of future people, and inspiring people to take action to safeguard the long term. If the launch goes well, then significantly more people — including people who are deciding which careers to pursue, philanthropists, and political decision-makers and policy-makers — will be exposed to the core ideas. The book aims to be both readable for a general audience and informative for EA researchers or interested academics. (Though I'm not sure if I've succeeded at this!) So there's a wide breadth of content: everything from stories of historical instances of long-run impact to discussion of impossibility theorems in population ethics. And, following Toby Ord's lead, there is an ungodly number of endnotes. The table of contents can give you the gist:

Introduction
Part I. The Long View
Chapter 1: The Case for Longtermism
Chapter 2: You Can Shape the Course of History
Part II. Trajectory Changes
Chapter 3: Moral Change
Chapter 4: Value Lock-In
Part III. Safeguarding Civilization
Chapter 5: Extinction
Chapter 6: Collapse
Chapter 7: Stagnation
Part IV. Assessing the End of the World
Chapter 8: Is It Good to Make Happy People?
Chapter 9: Will the Future Be Good or Bad?
Part V. Taking Action
Chapter 10: What to Do

In the course of writing the book, I've also changed my mind on a number of issues. I hope to share my ...

The Nonlinear Library: EA Forum Top Posts
Ask Me Anything! by William_MacAskill

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 5:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ask Me Anything!, published by William_MacAskill on The Effective Altruism Forum. Thanks for all the questions, all - I'm going to wrap up here! Maybe I'll do this again in the future, hopefully others will too! Hi, I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I'll lead by example. (If it goes well, hopefully others will try it out too.) Below I've written out what I'm currently working on. Please ask any questions you like, about anything: I'll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I'm hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I'm not able to respond to. If you don't want to post your question publicly or non-anonymously (e.g. you're asking "Why are you such a jerk?" sort of thing), or if you don't have a Forum account, you can use this Google form.

What I'm up to

Book

My main project is a general-audience book on longtermism. It's coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I'm currently using is What We Owe The Future. It'll hopefully complement Toby Ord's forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view without relying heavily on them. In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare. Roughly, I'm dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I've given the publishers a deadline of March 2021 for submission; if I meet that, it would come out in late 2021 or early 2022. I'm planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book. My academic book, Moral Uncertainty (co-authored with Toby Ord and Krister Bykvist), should come out early next year: it's been submitted, but OUP have been exceptionally slow in processing it. It's not radically different from my dissertation.

Global Priorities Institute

I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:

The case for longtermism, with Hilary Greaves. It's making the core case for strong longtermism, arguing that it's entailed by a wide variety of moral and decision-theoretic views.
The Evidentialist's Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
A paper, with Tyler John, exploring the political philosophy of age-weighted voting.

I have various other draft papers, but have put them on the back burner for the time being while I work on the book.

Forethought Foundation

Forethought is a sister organisation to GPI, which I take responsibility for: it's legally part of CEA and independent from the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.

Utilitarianism.net

Darius Meissner and I (w...

The Nonlinear Library: EA Forum Top Posts
'Longtermism' by William_MacAskill

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 15:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Longtermism', published by William_MacAskill on The Effective Altruism Forum. This post discusses the introduction and definition of the term 'longtermism'. Thanks to Toby Ord, Matthew van der Merwe and Hilary Greaves for discussion.

[Edit, Nov 2021: After many discussions, I've settled on the following informal definition: Longtermism is the view that positively influencing the longterm future is a key moral priority of our time. This is what I'm going with for What We Owe The Future. With this in hand, I call strong longtermism the view that positively influencing the longterm future is the key moral priority of our time. It turns out to be surprisingly difficult to define this precisely, but Hilary and I give it our best shot in our paper.]

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like 'people interested in x-risk reduction'. There are a few reasons why this terminology isn't ideal:

It's cumbersome and somewhat jargony.
It's a double negative, whereas focusing on the positive ('ensuring the long-run future goes well') is more inspiring and captures more accurately what we ultimately care about.
People tend to understand 'existential risk' as referring only to extinction risk, which is a strictly narrower concept.
You could care a lot about reducing existential risk even though you don't care particularly about the long term: if, for example, you think that extinction risk is high this century and there's a lot we can do to reduce it, then reducing it is very effective even by the lights of the present generation's interests. Similarly, you can care a lot about the long-run future without focusing on existential risk reduction, because existential risk is just about drastic reductions in the value of the future. ('Existential risk' is defined as a risk where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.) But, conceptually at least (and I think in practice, too), smaller improvements in the expected value of the long-run future could be among the things we want to focus on, such as changing people's values, or changing political institutions (like the design of a world government) before some lock-in event occurs. You might also think (as Tyler Cowen does) that speeding up economic and technological progress is one of the best ways of improving the long-run future.

For these reasons, and with Toby Ord's in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term 'longtermism', with the following definition: "Longtermism =df the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future." Since then, the term 'longtermism' seems to have taken off organically. I think it's here to stay.

Unlike 'existential risk reduction', the idea behind 'longtermism' is that it is compatible with any empirical view about the best way of improving the long-run future and, I hope, helps immediately convey the sentiment behind the philosophical position, in the same way that 'environmentalism' or 'liberalism' or 'cosmopolitanism' does. But getting a good definition of the term is important. As Ben Kuhn notes, the term could currently be understood to refer to a mishmash of different views. I think that's not good, and we should try to develop some standardisation before the term is locked in to something suboptimal. I think that there are three natural concepts in this area, which we should distinguish. My proposal is that ...