Podcasts about MacAskill

  • 168 podcasts
  • 230 episodes
  • 45m average episode duration
  • 5 new episodes per week
  • Latest episode: Sep 23, 2022

[Popularity trend chart, 2015–2022]


Latest podcast episodes about MacAskill

The Nonlinear Library
EA - Responses to the Rival AI Deployment Problem: the importance of a pre-deployment agreement by HaydnBelfield

Sep 23, 2022 · 22:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Responses to the Rival AI Deployment Problem: the importance of a pre-deployment agreement, published by HaydnBelfield on September 23, 2022 on The Effective Altruism Forum.

Introduction: The rival AI deployment problem
Imagine an actor is faced with highly convincing evidence that, with high probability (over 75%), a rival actor will be capable within two years of deploying advanced AI. Assume that they are concerned that such deployment might threaten their values or interests. What could the first actor do? Let us call this the 'rival AI deployment problem'. Three responses present themselves: acquiescence, an agreement before deployment, and the threat of coercive action. Acquiescence is inaction, and acceptance that the rival actor will deploy. It does not risk conflict, but does risk unilateral deployment, and therefore suboptimal safety precautions, misuse or value lock-in. An agreement before deployment (such as a treaty between states) would be an agreement on when and how advanced AI could be developed and deployed: for example, requirements on alignment and safety tests, and restrictions on uses/goals. We can think of this as a 'Short Reflection' - a negotiation on what uses/goals major states can agree to give advanced AI. This avoids unilateral deployment and conflict, but it may be difficult for rival actors to agree, and any agreement faces the credible commitment problem of sufficiently reassuring the actors that the agreement is being followed. Threat of coercive action involves threatening the rival actor with setbacks (such as state sanctions or cyberattacks) to delay or deter the development program. It is unilaterally achievable, but risks unintended escalation and conflict. All three responses have positives and negatives. However, I will suggest a pre-deployment agreement may be the least-bad option. The rival AI deployment problem can be thought of as the flipside of (or an addendum to) what Karnofsky and Muehlhauser call the 'AI deployment problem': "How do we hope an AI lab - or government - would handle various hypothetical situations in which they are nearing the development of transformative AI?". Similarly, OpenAI made a commitment in its Charter to "stop competing with and start assisting" any project that "comes close to building" advanced AI, for example one with "a better-than-even chance of success in the next two years". The Short Reflection can be thought of as an addendum to the Long Reflection, as suggested by MacAskill and Ord.

Four assumptions
I make four assumptions. First, I roughly assume a 'classic' scenario of discontinuous deployment of a singular AGI system, of the type discussed in Life 3.0, Superintelligence and Yudkowsky's writings. Personally, a more continuous Christiano-style take-off seems more plausible to me, and a more distributed Drexler-style Comprehensive AI Services scenario seems preferable to me. But the discontinuous, singular scenario makes the tensions sharper and clearer, so that is what I will use. Second, I roughly assume that states are the key players, as opposed to sustained academic or corporate control over an advanced AI development and/or deployment project. Personally, state control of this strategically important technology/project seems more plausible to me. In any case, state control again makes the tensions sharper and clearer. 
I distinguish between development and deployment. By ‘deployment' I mean something like ‘use in a way that affects the world' materially, economically, or politically. This includes both ‘starting a training run that will likely result in advanced AI' and ‘releasing some system from a closed-off environment or implementing its recommendations'. Third, I assume that some states may be concerned about deployment by a rival state. They might not necessarily be concerned. Almo...

Deconstructor of Fun
TWiG #201 GTA Leaks, Nintendo Brings the Heat and Kim MacAskill vs Rocksteady

Sep 23, 2022 · 39:51


Laura gives off strong American Psycho vibes with her choice of recording venue. Eric makes us all jealous with his recap of the BITKRAFT conference. Ethan makes mood boards. But more seriously, we talk the latest Call of Duty reveal, Sims 4 going F2P, and the big Grand Theft Auto 6 leak and how it will have no negative impact on the game's eventual sales dominance. We go deep on the Nintendo Direct, and Laura amazes us with her commitment to completing lengthy JRPGs. Ethan tells the story of ex-Rocksteady dev Kim MacAskill, who sets an amazing example of living her values: she removed herself from consideration for a Women In Games Lifetime Achievement Award because Rocksteady, her former employer and a company she stood up against due to the misogyny she experienced there, was a sponsor of the event. This week's episode is hosted by Eric Kress, Laura Taranto and Ethan Levy.

Send in a voice message: https://anchor.fm/deconstructoroffun/message
Support this podcast: https://anchor.fm/deconstructoroffun/support

The Nonlinear Library
EA - Summarizing the comments on William MacAskill's NYT opinion piece on longtermism by West

Sep 21, 2022 · 3:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summarizing the comments on William MacAskill's NYT opinion piece on longtermism, published by West on September 21, 2022 on The Effective Altruism Forum.

We are a small community, but our ideas have the potential to spread far if communicated effectively. Refining our communication means being well calibrated as to how people outside the EA community react to our worldviews. So when MacAskill's article about longtermism was published last month in the NYT, I was pretty interested to see the comment section. I started to count various reactions, got carried away, and ended up going through 300 or so. Below is a numerical summary.

Caveats
Selection bias is present. I would guess NYT commenters skew older and liberal. It's possible the comments don't reflect the overall sentiment of the article's readers, because people might only feel compelled to comment when they are strongly skeptical, undercounting casually positive readers. Many people signaled they felt positive towards the article and longtermist thinking, but were entirely pessimistic about our future -- basically "This is all well and good, but _". Sometimes it was hard to know whether to tally these as positive or skeptical; I usually went with whichever sentiment was the main focus of the comment. For the most part, this survey doesn't capture ideas people had to help our long term future. Some of those not tallied included better education, fusion power, planting trees, and outlawing social media.

Tallies
60 - Skeptical -- either of longtermism, or our future
20 - Our broken culture prevents us from focusing on the long-term
16 - We're completely doomed, there's no point
7 - We are hard-wired as animals to think short term
7 - Predicting the future is hard; made up numbers
5 - We don't know what future generations will want
5 - We don't even value current lives
3 - I value potential people far less than current people
3 - It's easy to do horrific things in the name of longtermism
2 - This is ivory tower BS
42 - Generally positive
17 - This is nothing new (most of these comments were either about climate activism or seven generation sustainability)
7 - This planet is not ours / humans don't deserve to survive
7 - We should lower the population
6 - This is all about environmental sustainability
6 - Animals matter too
5 - Republicans are terrible
4 - Reincarnation might be true
3 - We should ease up on technology
2 - Technology will save us
1 - Time travel might be true
1 - Society using carbon is a good thing
1 - This idea is un-American
1 - This is all the fault of boomers
1 - Stop blaming boomers

Takeaways
Overall, I found the responses to be more negative than anticipated. The most common sentiment I saw was utter pessimism, which I worry is a self-fulfilling prophecy. There was very little reaction to or discussion about the risks of bioweapons and AI. Many people seemed to substitute concern for our long-term future solely with concern for the environment. This is understandable given the prominence of environmentalism -- it's already top-of-mind for many. I think people struggled to appreciate the timescale proposed in the article. Many referenced leaving the Earth a better place for their (literal) grandchildren, or for seven generations from now, but not thousands of years. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Lexman Artificial
William MacAskill on Principia Ethica: The Positive Ethics of Aristotle

Sep 13, 2022 · 4:36


The artificial intelligence Lexman welcomes William MacAskill, a philosopher and author of "Principia Ethica: The Positive Ethics of Aristotle". Lexman and MacAskill discuss Aristotle's view on particulars and principalship, and how they can help us to better understand ethics.

The Nonlinear Library
EA - Puzzles for Everyone by Richard Y Chappell

Sep 10, 2022 · 9:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Puzzles for Everyone, published by Richard Y Chappell on September 10, 2022 on The Effective Altruism Forum. Some of the deepest puzzles in ethics concern how to coherently extend ordinary beneficence and decision theory to extreme cases. The notorious puzzles of population ethics, for example, ask us how to trade off quantity and quality of life, and how we should value future generations. Beckstead & Thomas discuss a paradox for tiny probabilities and enormous values, asking how we should take risk and uncertainty into account. Infinite ethics raises problems for both axiology and decision theory: it may be unclear how to rank different infinite outcomes, and it's hard to avoid the “fanatical” result that the tiniest chance of infinite value swamps all finite considerations (unless one embraces alternative commitments that may be even more counterintuitive). Puzzles galore! But these puzzles share a strange feature, namely, that people often mistakenly believe them to be problems specifically for utilitarianism. [Image caption: "Fear not: there's enough for everyone!"] Their error, of course, is that beneficence and decision theory are essential components of any complete moral theory. (As even Rawls acknowledged, “All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.” Rossian pluralism explicitly acknowledges a prima facie duty of beneficence that must be weighed against our other—more distinctively deontological—prima facie duties, and will determine what ought to be done if those others are not applicable to the situation at hand. And obviously any account relevant to fallible human beings needs to address how we should respond to uncertainty about our empirical circumstances and future prospects.) Why, then, would anyone ever think that these puzzles were limited to utilitarianism? One hypothesis is that only utilitarianism is sufficiently clear and systematic to actually attempt an answer to these questions. Other theories too often remain silent and non-committal. Being incomplete in this way is surely not an advantage of those theories, unless there's reason to think that a better answer will eventually be fleshed out. But what makes these questions such deep puzzles is precisely that we know that no wholly satisfying answer is possible. It's a “pick your poison” situation. And there's nothing clever about mocking utilitarians for endorsing a poisonous implication when it's provably the case that every possibility remaining amongst the non-utilitarian options is similarly poisonous! When all views have costs, you cannot refute a view just by pointing to one of its costs. You need to actually gesture towards a better alternative, and do the difficult work of determining which view is the least bad. Below I'll briefly step through some basic considerations that bring out how difficult this task can be. Population Ethics In ‘The New Moral Mathematics' (reviewing WWOTF), Kieran Setiya sets up a false choice between total utilitarianism and “the intuition of neutrality” which denies positive value to creating happy lives. (Note that MacAskill's longtermism is in fact much weaker than total utilitarianism.) He swiftly dismisses the total view for implying the repugnant conclusion. 
But he doesn't mention any costs to neutralism, which may give some readers the misleading impression that this is a cost-free, common-sense solution. It isn't. Far from it. Neutrality implies that utopia is (in prospect) no better than a barren, lifeless rock. It implies that the total extinction of all future value-bearers could be more than compensated for by throwing a good enough party for those who already exist. These implications strike me as far more repugnant than the repugnant conclus...

The Nonlinear Library
EA - [Link post] Optimistic “Longtermism” Is Terrible For Animals by BrianK

Sep 7, 2022 · 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link post] Optimistic “Longtermism” Is Terrible For Animals, published by BrianK on September 6, 2022 on The Effective Altruism Forum. Oxford philosopher William MacAskill's new book, What We Owe the Future, caused quite a stir this month. It's the latest salvo of effective altruism (EA), a social movement whose adherents aim to have the greatest positive impact on the world through use of strategy, data, and evidence. MacAskill's new tome makes the case for a growing flank of EA thought called “longtermism.” Longtermists argue that our actions today can improve the lives of humans way, way, way down the line — we're talking billions, trillions of years — and that in fact it's our moral responsibility to do so. In many ways, longtermism is a straightforward, uncontroversially good idea. Humankind has long been concerned with providing for future generations: not just our children or grandchildren, but even those we will never have the chance to meet. It reflects the Seventh Generation Principle held by the indigenous Haudenosaunee (a.k.a. Iroquois) people, which urges people alive today to consider the impact of their actions seven generations ahead. MacAskill echoes the defining problem of intergenerational morality — people in the distant future are currently “voiceless,” unable to advocate for themselves, which is why we must act with them in mind. But MacAskill's optimism could be disastrous for non-human animals, members of the millions of species who, for better or worse, share this planet with us. Read the rest on Forbes. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Clearer Thinking with Spencer Greenberg
Estimating the long-term impact of our actions today (with Will MacAskill)

Sep 7, 2022 · 66:27


What is longtermism? Is the long-term future of humanity (or life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one in a million chance of stopping World War III, then should I devote my life to pursuing that plan?) If we want to know, say, whether protests are effective or not, we merely need to gather and analyze existing data; but how can we estimate whether interventions implemented in the present will be successful in the very far future?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe The Future.
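The one-in-a-million question above is, at bottom, an expected-value comparison. Here is a minimal sketch in Python; every probability and payoff below is an invented placeholder for illustration, not a figure from the episode:

```python
# Toy expected-value comparison (illustrative numbers only):
# a near-certain modest win vs. a one-in-a-million shot at a huge one.
p_longshot = 1e-6                # assumed chance of stopping World War III
value_longshot = 8_000_000_000   # assumed lives at stake
p_sure = 0.95                    # assumed chance a conventional career helps
value_sure = 1_000               # assumed lives saved by that career

ev_longshot = p_longshot * value_longshot  # 8,000 lives in expectation
ev_sure = p_sure * value_sure              # 950 lives in expectation

print(f"Longshot EV:   {ev_longshot:,.0f} lives")
print(f"Sure-thing EV: {ev_sure:,.0f} lives")
```

On these made-up numbers the longshot dominates in expectation, which is precisely why the episode asks whether naive expected-value reasoning should govern such choices.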

EconTalk
Will MacAskill on Longtermism and What We Owe the Future

Sep 5, 2022 · 76:22


Philosopher William MacAskill of the University of Oxford and a founder of the effective altruism movement talks about his book What We Owe the Future with EconTalk host Russ Roberts. MacAskill advocates "longtermism," giving great attention to the billions of people who will live on into the future long after we are gone. Topics discussed include the importance of moral entrepreneurs, why it's moral to have children, and the importance of trying to steer the future for better outcomes.

Intelligence Squared
How to Improve the World for the Generations to Come, with Will MacAskill

Sep 5, 2022 · 59:33


Sign up for Intelligence Squared Premium here: https://iq2premium.supercast.com/ for ad-free listening, bonus content, early access and much more. See below for details.

Will MacAskill is the philosopher thinking a million years into the future who is also having a bit of a moment in the present. As Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute at the University of Oxford, he is co-founder of the effective altruism movement, which uses evidence and reason to work out how we can best use our resources to improve the world. MacAskill's writing has found fans ranging from Elon Musk to Stephen Fry, and his new book is What We Owe the Future: A Million-Year View. Our host on the show is Max Roser, Director of the Oxford Martin Programme on Global Development and founder and editor of Our World in Data.

We are incredibly grateful for your support. To become an Intelligence Squared Premium subscriber, follow the link: https://iq2premium.supercast.com/
Here's a reminder of the benefits you'll receive as a subscriber:
• Ad-free listening, because we know some of you would prefer to listen without interruption
• One early episode per week
• Two bonus episodes per month
• A 25% discount on IQ2+, our exciting streaming service, where you can watch and take part in events live at home and enjoy watching past events on demand and without ads
• A 15% discount and priority access to live, in-person events in London, so you won't miss out on tickets
• Our premium monthly newsletter
• Intelligence Squared merch

The Nonlinear Library
EA - The Base Rate of Longtermism Is Bad by ColdButtonIssues

Sep 5, 2022 · 11:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Base Rate of Longtermism Is Bad, published by ColdButtonIssues on September 5, 2022 on The Effective Altruism Forum. Cross-posted from Cold Button Issues.

Sometimes philosophers make bold, sweeping claims for other philosophers and modest, palatable claims for the general public. Consider Peter Singer's philosophical writing, which includes endorsing situational infanticide, versus his more popular writing, where he makes hard-to-dispute claims like "[l]iving a minimally acceptable ethical life involves using a substantial part of our spare resources to make the world a better place." Will MacAskill and Hilary Greaves wrote a paper arguing for strong longtermism, "the view that impact on the far future is the most important feature of our actions today." Then Will MacAskill wrote a New York Times best-selling book that argued that caring about the future is somewhat morally important. MacAskill didn't need to water down his claims to convince me. In theory, I'm fully on board with longtermism. There are probably tons of future people who matter just as much as we do, so let's prioritize them, hurray! Despite being willing to endorse the philosophy of longtermism, I think building a movement around longtermism or taking actions for the sake of the longterm future are likely to backfire. Some friends of mine in the effective altruism movement have said they would be excited about the shift to longtermism if there were successful past examples of longtermist movements. But I think past examples of longtermism are easy to find - it's just hard to find successful examples. When GiveWell was relatively young and not as influential as it is today, it commissioned work on the history of philanthropy, to answer questions like when did ambitious philanthropists succeed, when did they fail, and what effective altruists could learn from the past. I think repeating such a process for longtermism - taking even a quick look at past efforts to prioritize the longterm future - casts doubt on longtermist efforts.

Benjamin Franklin, Failed Longtermist
If George Washington was the Captain America of the Founding Fathers, Benjamin Franklin was Iron Man. The fun one, the cool one, the guy who invented the lightning rod. He's probably the closest thing America has to a Leonardo da Vinci. He signed the Declaration of Independence, ran the post office, was the ambassador to France, and kept on inventing things. He also tried to be a longtermist. When he died, he left a bequest to the cities of Boston and Philadelphia that was to accrue interest for the next 200 years before the cities could access the whole principal. As Will MacAskill recounts, the amounts grew to $5 million and $2 million respectively. The money mostly went to fund a private college. Because of this he's sometimes favorably cited as a successful example of how people can intentionally try to help the longterm future and succeed. The problem is, of all the things that Franklin did that shaped the future, his intentional future-oriented bequest was basically a rounding error. No disrespect to the Benjamin Franklin Institute of Technology, which benefited from his generosity, but that's not what made Benjamin Franklin important to the world. What else could he have spent this money on? 
Taking better care of his health so he lived longer, supporting relatives to start families, yielding hundreds of additional Franklin descendants over the years, running a few more experiments... He could have thrown another party! This might sound like I'm joking, but a big part of an ambassador's job is to be a charming bon vivant and a great party host. An even stronger friendship between the United States and France would surely have been more consequential than founding this private college. There's no contradiction between spending money now...
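As context for how a bequest like Franklin's compounds over two centuries, here is a minimal sketch; the starting amount and interest rate are illustrative assumptions, not the historical terms of the bequest:

```python
# Compound growth behind a 200-year, Franklin-style bequest.
# The principal and rate are assumed for illustration, not historical figures.
principal = 4_500   # assumed initial value of one city's bequest, in dollars
rate = 0.035        # assumed average annual return
years = 200

value = principal * (1 + rate) ** years
print(f"${principal:,} compounding at {rate:.1%} for {years} years "
      f"grows to about ${value:,.0f}")
```

On these assumptions the bequest grows to a few million dollars, the same order of magnitude as the sums MacAskill recounts.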

Conversations With Coleman
Humanity in a Thousand Years with Will MacAskill (S3 Ep.29)

Sep 4, 2022 · 68:08


My guest today is Will MacAskill. Will is an associate professor of philosophy at Oxford University. He is the co-founder and president of the Centre for Effective Altruism. Will is also the director of the Forethought Foundation for Global Priorities Research. In this episode, we discuss his new book "What We Owe the Future". We talk about whether we have a moral obligation to the billions of humans that will be born in the next several thousand years, and how to weigh those obligations against those of living humans. We discuss population ethics in general, and Derek Parfit's Repugnant Conclusion thought experiment. We discuss the role of economic growth in humanity's long-term future and how to weigh that against present-day wealth inequality. We talk about the ethics of abortion, and the notion of moral progress. We also discuss the possible AI futures that lie ahead of us and much more.

The Nonlinear Library
EA - My take on What We Owe the Future by elifland

Sep 1, 2022 · 42:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My take on What We Owe the Future, published by elifland on September 1, 2022 on The Effective Altruism Forum. Cross-posted from Foxy Scout.

Overview
What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn't clear enough about MacAskill's views on longtermist priorities, and to the extent that it is, it presents a mistaken view of the most promising longtermist interventions. I argue that MacAskill:
• Underestimates risk of misaligned AI takeover (more)
• Overestimates risk from stagnation (more)
• Isn't clear enough about longtermist priorities (more)
I highlight and expand on these disagreements in part to contribute to the debate on these topics, but also to make a practical recommendation. While I like some aspects of the book, I think The Precipice is a substantially better introduction for potential longtermist direct workers, e.g. as a book given away to talented university students. For instance, I'm worried people will feel bait-and-switched if they get into EA via WWOTF, then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin. (more)

What I disagree with[1]

Underestimating risk of misaligned AI takeover

Overall probability of takeover
In endnote 2.22 (p. 274), MacAskill writes [emphasis mine]: "I put that possibility [of misaligned AI takeover] at around 3 percent this century. I think most of the risk we face comes from scenarios where there is a hot or cold war between great powers." I think a 3% chance of misaligned AI takeover this century is too low, with 90% confidence.[2] Most of the risk coming from scenarios with hot or cold great power wars may be technically true if one thinks a war between the US and China is >50% likely soon, which might be reasonable with a loose definition of cold war. That being said, I strongly think MacAskill's claim about great power war gives the wrong impression of the most probable AI takeover threat models. My credence on misaligned AI takeover is 40% this century, of which not much depends on a great power war scenario. Below I'll explain why my best-guess credence is 40%: the biggest input is a report on power-seeking AI, but I'll also list some other inputs, then aggregate the inputs.

Power-seeking AI report
The best analysis estimating the chance of existential risk (x-risk) from misaligned AI takeover that I'm aware of is Is Power-Seeking AI an Existential Risk? by Joseph Carlsmith.[3] Carlsmith decomposes a possible existential catastrophe from AI into 6 steps, each conditional on the previous ones:
1. Timelines: By 2070, it will be possible and financially feasible to build APS-AI: systems with advanced capabilities (outperform humans at tasks important for gaining power), agentic planning (make plans then act on them), and strategic awareness (its plans are based on models of the world good enough to overpower humans).
2. Incentives: There will be strong incentives to build and deploy APS-AI.
3. Alignment difficulty: It will be much harder to build APS-AI systems that don't seek power in unintended ways than ones that would seek power but are superficially attractive to deploy.
4. High-impact failures: Some deployed APS-AI systems will seek power in unintended and high-impact ways, collectively causing >$1 trillion in damage.
5. Disempowerment: Some of the power-seeking will in aggregate permanently disempower all of humanity.
6. Catastrophe: The disempowerment will constitute an existential catastrophe.
I'll first discuss my component probabilities for a catastrophe by 2100 rather than 2070[4], then discuss the implications of Carlsmith's own assessment as well as reviewers of hi...
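To make the structure of this decomposition concrete, here is a minimal sketch of how six conditional steps multiply into an overall estimate. The probabilities below are placeholders chosen for illustration, not Carlsmith's or elifland's actual numbers:

```python
# Carlsmith-style decomposition: each step's probability is conditional on
# all previous steps, so the overall estimate is simply their product.
# All numbers are illustrative placeholders, not estimates from the report.
steps = [
    ("Timelines",            0.65),
    ("Incentives",           0.80),
    ("Alignment difficulty", 0.40),
    ("High-impact failures", 0.65),
    ("Disempowerment",       0.40),
    ("Catastrophe",          0.95),
]

p = 1.0
for name, conditional_p in steps:
    p *= conditional_p
    print(f"P(reaching '{name}') = {p:.3f}")

print(f"Overall P(existential catastrophe) = {p:.1%}")
```

One consequence of the multiplicative structure, visible when you run the sketch, is that even moderately high per-step probabilities can compound to a small overall number, which is why disagreements about individual steps drive such large disagreements about the headline figure.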

The Nonlinear Library
EA - Will MacAskill Media for WWOTF - Full List by James Aitchison

Aug 31, 2022 · 6:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will MacAskill Media for WWOTF - Full List, published by James Aitchison on August 30, 2022 on The Effective Altruism Forum.

Never before have we had the chance to enjoy so much Will MacAskill. He seems to have been everywhere. What a superb and exhaustive job he and his team have done to promote 'What We Owe The Future.' So far I have counted 16 podcasts, 18 articles and 6 other bits and pieces. I have listed all these below with links and brief comments.

Podcast Appearances
The 80,000 Hours Podcast with Rob Wiblin. A warm and comprehensive three-hour discussion.
Making Sense Podcast with Sam Harris. Harris is strongly supportive; MacAskill is particularly inspiring on the sweep of history.
Mindscape Podcast with Sean Carroll. Carroll asks questions about utilitarianism, metaethics and population ethics, which MacAskill handles well.
The Ezra Klein Show Podcast. A fine conversation on long-termism. Klein structures the discussion around 'Three simple sentences: Future people count. There could be a lot of them. And we can make their lives better.' Good discussions about history - the contingent nature of the abolition of slavery and that certain times have plasticity.
Tim Ferriss Podcast. A lively discussion with much humour and several gems from MacAskill. Includes a recommendation of Joseph Henrich's 'The Secret of Our Success.'
Deep Dive with Ali Abdaal Podcast. A relaxed, friendly and wide-ranging three-hour conversation. Covers a lot of ground, including EA psychology and MacAskill's work methods. This is a high-quality YouTube production as well as a podcast and is my favourite among the appearances.
Conversations with Tyler Podcast. Tyler Cowen's questioning focuses on the limits of utilitarianism.
The Lunar Society Podcast with Dwarkesh Patel. Mainly on the contingency of moral progress.
Global Dispatch Podcast with Mark Goldberg. Discussion on longtermism and the United Nations. Goldberg is enthusiastic about the UN adopting some longtermist thinking.
Modern Wisdom Podcast with Chris Williamson. An accessible discussion of longtermism.
Conversations with Coleman with Coleman Hughes. Includes population ethics, economic growth and moral change.
Daily Stoic Podcast with Ryan Holiday. Mainly on altruism and moral change.
KERA Think with Krys Boyd. A 30-minute conversation.
Freakonomics Podcast with Steve Levitt. Discussion mainly on the economic themes in WWOTF, which MacAskill handles very well.
1a Podcast on NPR. David Gurn leads a discussion on EA as a life-changing philosophy. Includes comments from Sofya Lebedeva and Spencer Goldberg.
Ten Percent Happier Podcast with Dan Harris. A warm discussion on donations, EA and longtermism.
There are transcripts for the podcasts by Wiblin, Carroll, Klein, Cowen, Patel, Goldberg and Levitt.

Articles and Book Reviews
The New Yorker: The Reluctant Prophet of Effective Altruism by Gideon Lewis-Kraus. A fine 10,000-word article profiling MacAskill and setting out the history of the EA movement. The author spent several days with his subject and covers MacAskill as an individual and the ideas and dynamics of the movement. MacAskill comments on the article in this Twitter thread.
Time: Want to Do More Good? This Movement Might Have the Answer by Naina Bajekal. A beautifully written and inspiring profile of MacAskill and the EA movement.
Vox: How Effective Altruism Went from a Niche Movement to a Billion-Dollar Force by Dylan Matthews. A well-informed and thoughtful article on EA's evolution by an EA insider.
Wired: The Future Could Be Blissful - If Humans Don't Go Extinct First. A shorter interview with Will MacAskill by Matt Reynolds.
New York Times: The Case for Longtermism by Will MacAskill. A guest essay adapted from the book.
BBC Futures: What is Longtermism and Why Does it Matter? by Will MacAskill. Another essay based on the book.
Foreig...

The Nonlinear Library: LessWrong
LW - EA and LW Forums Weekly Summary (21 Aug - 27 Aug 22') by Zoe Williams

Aug 30, 2022 · 20:41


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22'), published by Zoe Williams on August 30, 2022 on LessWrong. This is also posted on the EA forum: see here. Supported by Rethink Priorities.

Sunday August 21st - Saturday August 27th
The amount of content on the EA and LW forums has been accelerating. This is awesome, but makes it tricky to keep up with! The below hopes to help by summarizing popular (>40 karma) posts each week. It also includes announcements and ideas from Twitter that this audience might find interesting. This will be a regular series published weekly - let me know in the comments if you have any feedback on what could make it more useful! If you'd like to receive these summaries via email, you can subscribe here.

Methodology
This series originated from a task I did as Peter Wildeford's executive research assistant at Rethink Priorities, to summarize his weekly readings. If your post is in the 'Didn't Summarize' list, please don't take that as a judgment on its quality - it's likely just a topic less relevant to his work. I've also left out technical AI posts because I don't have the background knowledge to do them justice. My methodology has been to use this and this link to find the posts with >40 karma in a week for the EA forum and LW forum respectively, read / skim each, and summarize those that seem relevant to Peter. Those that meet the karma threshold as of Sunday each week are considered (sometimes I might summarize a very popular later-in-the-week post in the following week's summary, if it doesn't meet the bar until then). For Twitter, I skim through the following lists: AI, EA, Forecasting, National Security (mainly nuclear), Science (mainly biosec). I'm going through a large volume of posts so it's totally possible I'll get stuff wrong. If I've misrepresented your post, or you'd like a summary edited, please let me know (via comment or DM).

EA Forum

Philosophy and Methodologies

Critique of MacAskill's 'Is it Good to Make Happy People?'
Discusses population asymmetry, the viewpoint that a new life of suffering is bad, but a new life of happiness is neutral or only weakly positive. The post is mainly focused on what these viewpoints are and that they have many proponents, vs. specific arguments for them. Mentions that they weren't well covered in Will's book and could affect the conclusions there. Presents evidence that people's intuitions tend towards needing significantly more happy people than an equivalent level of suffering people for a tradeoff to be 'worth it' (3:1 to 100:1 depending on question specifics), and that therefore a big future (which would likely have more absolute suffering, even if not proportionally) could be bad.

EAs Underestimate Uncertainty in Cause Prioritization
Argues that EAs work across too narrow a distribution of causes given our uncertainty in which are best, and that standard prioritizations are interpreted as more robust than they really are. As an example, they mention that 80K states "some of their scores could easily be wrong by a couple of points" and this scale of uncertainty could put factory farming on par with AI.

The Repugnant Conclusion Isn't
The repugnant conclusion (Parfit, 1984) is the argument that enough lives 'barely worth living' are better than a much smaller set of super duper awesome lives. In one description of it, Parfit said the barely-worth-it lives had 'nothing bad in them' (but not much good either). The post argues that actually makes those lives pretty awesome and non-repugnant, because nothing bad is a high bar.

A Critical Review of GiveWell's 2022 Cost-effectiveness Model
NB: longer article - only skimmed it so I may have missed some pieces. Suggestions for cost-effectiveness modeling in EA by a health economist, with GiveWell as a case study. The author believes the overall approach ...

The Nonlinear Library
EA - EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22') by Zoe Williams

Aug 30, 2022 · 20:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22'), published by Zoe Williams on August 30, 2022 on The Effective Altruism Forum. Supported by Rethink Priorities.

Sunday August 21st - Saturday August 27th
The amount of content on the EA and LW forums has been accelerating. This is awesome, but makes it tricky to keep up with! The below hopes to help by summarizing popular (>40 karma) posts each week. It also includes announcements and ideas from Twitter that this audience might find interesting. This will be a regular series published weekly - let me know in the comments if you have any feedback on what could make it more useful! If you'd like to receive these summaries via email, you can subscribe here.

Methodology
This series originated from a task I did as Peter Wildeford's executive research assistant at Rethink Priorities, to summarize his weekly readings. If your post is in the 'Didn't Summarize' list, please don't take that as a judgment on its quality - it's likely just a topic less relevant to his work. I've also left out technical AI posts because I don't have the background knowledge to do them justice. My methodology has been to use this and this link to find the posts with >40 karma in a week for the EA forum and LW forum respectively, read / skim each, and summarize those that seem relevant to Peter. Those that meet the karma threshold as of Sunday each week are considered (sometimes I might summarize a very popular later-in-the-week post in the following week's summary, if it doesn't meet the bar until then). For Twitter, I skim through the following lists: AI, EA, Forecasting, National Security (mainly nuclear), Science (mainly biosec). I'm going through a large volume of posts so it's totally possible I'll get stuff wrong. If I've misrepresented your post, or you'd like a summary edited, please let me know (via comment or DM).

EA Forum

Philosophy and Methodologies

Critique of MacAskill's 'Is it Good to Make Happy People?'
Discusses population asymmetry, the viewpoint that a new life of suffering is bad, but a new life of happiness is neutral or only weakly positive. The post is mainly focused on what these viewpoints are and that they have many proponents, vs. specific arguments for them. Mentions that they weren't well covered in Will's book and could affect the conclusions there. Presents evidence that people's intuitions tend towards needing significantly more happy people than an equivalent level of suffering people for a tradeoff to be 'worth it' (3:1 to 100:1 depending on question specifics), and that therefore a big future (which would likely have more absolute suffering, even if not proportionally) could be bad.

EAs Underestimate Uncertainty in Cause Prioritization
Argues that EAs work across too narrow a distribution of causes given our uncertainty in which are best, and that standard prioritizations are interpreted as more robust than they really are. As an example, they mention that 80K states "some of their scores could easily be wrong by a couple of points" and this scale of uncertainty could put factory farming on par with AI.

The Repugnant Conclusion Isn't
The repugnant conclusion (Parfit, 1984) is the argument that enough lives 'barely worth living' are better than a much smaller set of super duper awesome lives. In one description of it, Parfit said the barely-worth-it lives had 'nothing bad in them' (but not much good either). The post argues that actually makes those lives pretty awesome and non-repugnant, because nothing bad is a high bar.

A Critical Review of GiveWell's 2022 Cost-effectiveness Model
NB: longer article - only skimmed it so I may have missed some pieces. Suggestions for cost-effectiveness modeling in EA by a health economist, with GiveWell as a case study. The author believes the overall approach to be good, with the follow...

The Nonlinear Library
LW - EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22') by Zoe Williams

Aug 30, 2022 · 20:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22'), published by Zoe Williams on August 30, 2022 on LessWrong. This is also posted on the EA forum: see here. Supported by Rethink Priorities.

Sunday August 21st - Saturday August 27th
The amount of content on the EA and LW forums has been accelerating. This is awesome, but makes it tricky to keep up with! The below hopes to help by summarizing popular (>40 karma) posts each week. It also includes announcements and ideas from Twitter that this audience might find interesting. This will be a regular series published weekly - let me know in the comments if you have any feedback on what could make it more useful! If you'd like to receive these summaries via email, you can subscribe here.

Methodology
This series originated from a task I did as Peter Wildeford's executive research assistant at Rethink Priorities, to summarize his weekly readings. If your post is in the 'Didn't Summarize' list, please don't take that as a judgment on its quality - it's likely just a topic less relevant to his work. I've also left out technical AI posts because I don't have the background knowledge to do them justice. My methodology has been to use this and this link to find the posts with >40 karma in a week for the EA forum and LW forum respectively, read / skim each, and summarize those that seem relevant to Peter. Those that meet the karma threshold as of Sunday each week are considered (sometimes I might summarize a very popular later-in-the-week post in the following week's summary, if it doesn't meet the bar until then). For Twitter, I skim through the following lists: AI, EA, Forecasting, National Security (mainly nuclear), Science (mainly biosec). I'm going through a large volume of posts so it's totally possible I'll get stuff wrong. If I've misrepresented your post, or you'd like a summary edited, please let me know (via comment or DM).

EA Forum

Philosophy and Methodologies

Critique of MacAskill's 'Is it Good to Make Happy People?'
Discusses population asymmetry, the viewpoint that a new life of suffering is bad, but a new life of happiness is neutral or only weakly positive. The post is mainly focused on what these viewpoints are and that they have many proponents, vs. specific arguments for them. Mentions that they weren't well covered in Will's book and could affect the conclusions there. Presents evidence that people's intuitions tend towards needing significantly more happy people than an equivalent level of suffering people for a tradeoff to be 'worth it' (3:1 to 100:1 depending on question specifics), and that therefore a big future (which would likely have more absolute suffering, even if not proportionally) could be bad.

EAs Underestimate Uncertainty in Cause Prioritization
Argues that EAs work across too narrow a distribution of causes given our uncertainty in which are best, and that standard prioritizations are interpreted as more robust than they really are. As an example, they mention that 80K states "some of their scores could easily be wrong by a couple of points" and this scale of uncertainty could put factory farming on par with AI.

The Repugnant Conclusion Isn't
The repugnant conclusion (Parfit, 1984) is the argument that enough lives 'barely worth living' are better than a much smaller set of super duper awesome lives. In one description of it, Parfit said the barely-worth-it lives had 'nothing bad in them' (but not much good either). The post argues that actually makes those lives pretty awesome and non-repugnant, because nothing bad is a high bar.

A Critical Review of GiveWell's 2022 Cost-effectiveness Model
NB: longer article - only skimmed it so I may have missed some pieces. Suggestions for cost-effectiveness modeling in EA by a health economist, with GiveWell as a case study. The author believes the overall approach ...

10% Happier with Dan Harris
491: A New Way to Think About Your Money | William MacAskill

Aug 29, 2022 · 64:13


Most of us worry about money sometimes, but what if we changed the way we thought about our relationship to finances? Today's guest, William MacAskill, offers a framework in which to do just that. He calls it effective altruism. One of the core arguments of effective altruism is that we all ought to consider giving away a significant chunk of our income because we know, to a mathematical near certainty, that several thousand dollars could save a life. Today we're going to talk about the whys and wherefores of effective altruism. This includes how to get started on a very manageable and doable level (which does not require you to give away most of your income), and the benefits this practice has on both the world and your own psyche. MacAskill is an associate professor of philosophy at Oxford University and one of the founders of the effective altruism movement. He has a new book out called What We Owe the Future, where he makes a case for longtermism, a term used to describe developing the mental habit of thinking about the welfare of future generations.

In this episode we talk about:
• Effective altruism
• Whether humans are really wired to consider future generations
• Practical tips for thinking and acting on longtermism
• His argument for having children
• And his somewhat surprising take on how good our future could be if we play our cards right

Podcast listeners can get 50% off What We Owe the Future using the code WWOTF50 at Bookshop.org. Full show notes: https://www.tenpercent.com/podcast-episode/william-macaskill-491
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
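To put rough numbers on the claim that several thousand dollars can save a life, here is a minimal sketch; the salary, giving rate, and cost-per-life figures are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope giving arithmetic (all numbers are assumptions).
annual_income = 60_000   # assumed salary
giving_rate = 0.10       # e.g. a Giving What We Can-style 10% pledge
cost_per_life = 5_000    # assumed cost to save one life via a top charity

donation = annual_income * giving_rate
lives_per_year = donation / cost_per_life
print(f"${donation:,.0f}/year is roughly {lives_per_year:.1f} lives saved per year")
```

On these assumptions, a 10% pledge on a $60,000 salary corresponds to roughly one life saved per year, which is the "manageable and doable level" the episode describes.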

The Nonlinear Library
EA - What We Owe the Future is an NYT bestseller by Anonymous EA

Aug 25, 2022 · 0:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe the Future is an NYT bestseller, published by Anonymous EA on August 25, 2022 on The Effective Altruism Forum. Number 7 on the hardcover non-fiction list, to be precise. Congratulations to MacAskill and to all involved in the remarkable promotion effort :) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Examineradio - The Halifax Examiner podcast
Episode 92: Annick MacAskill

Aug 24, 2022 · 27:00


The Halifax-based poet and university professor Annick MacAskill has crafted a beautiful (and beautiful to touch thanks to Gaspereau Press) ode to a common but still stigmatized subject matter: pregnancy loss. Shadow Blight considers the pain of pregnancy loss through the classical myth of Niobe, whose grief for her dead children was so monumental she turned to stone. MacAskill speaks to the process of crafting and presenting such intimate, personal thoughts and the lack of popular culture on the subject, among other things. Plus a song from the new Klarka Weinwurm album.

The Nonlinear Library
EA - Critique of MacAskill's “Is It Good to Make Happy People?” by Magnus Vinding

Aug 23, 2022 · 21:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critique of MacAskill's “Is It Good to Make Happy People?”, published by Magnus Vinding on August 23, 2022 on The Effective Altruism Forum. In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities. The Asymmetry in population ethics A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles). The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172): If we think it's bad to bring into existence a life of suffering, why should we not think that it's good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second. This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. Indeed, many arguments support the former while positively denying the latter. One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure). A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good — and perhaps even better than — states of intense pleasure. These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do. Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172): if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes. However, there is a potential ambiguity in this example. 
The term “existence” may here be understood to either mean “de novo existence” or “continued existence”, and interpreting it as the latter is made more tempting by the fact that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”. This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing ...

Embrace The Void
EV - 249 What We Owe The Future with Will MacAskill

Aug 19, 2022 · 65:20


My guest this week is Will MacAskill, an associate professor in philosophy and research fellow at the Global Priorities Institute at the University of Oxford. Will is one of the founders of effective altruism and 80,000 Hours, as well as a proponent of longtermism. His new book is "What We Owe the Future". We discuss developments in longtermism and challenges like being co-opted or leading to undervaluing of present persons.

Convocation: MacAskill
What We Owe the Future: https://www.amazon.com/What-Owe-Future-William-MacAskill-ebook/dp/B09N3D7QSQ/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=
Music by GW Rodriguez
Sibling Pods:
Philosophers in Space: https://0gphilosophy.libsyn.com/
Filmed Live Musicals Pod: https://www.filmedlivemusicals.com/thepodcast.html
Support us at Patreon.com/EmbraceTheVoid
If you enjoy the show, please Like and Review us on your pod app, especially iTunes. It really helps!

Recent appearances: I've had several recent appearances you should check out!
Dentith had me on their show to discuss the Better Way antivaxxer conference: https://conspiracism.podbean.com/e/circling-the-void-with-aaron-rabinowitz/
Other discussions of that conference:
I Doubt It pod (discussing luck): https://dollemore.com/2022/06/02/801-aaron-rabinowitz-from-embrace-the-void-and-philosophers-in-space-podcasts/
Skeptics with a K: http://www.merseysideskeptics.org.uk/2022/06/skeptics-with-a-k-episode-330/

Next Episode: Raised by Nazis with Brittany Page

The Nonlinear Library
EA - What We Owe The Future: A review and summary of what I learned by Michael Townsend

Aug 17, 2022 · 13:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future: A review and summary of what I learned, published by Michael Townsend on August 16, 2022 on The Effective Altruism Forum.

Will MacAskill's new book, What We Owe The Future, has just been released in the US and will be available in the UK from September 1. You might already be turning to book reviews or podcasts to inform whether you should buy a copy. To help, I'm writing a quick summary of the book, sharing three new insights I gained, and three questions it left me asking. But it's worth being upfront with the reader about where I sit. MacAskill entwines rigorous arguments with compelling metaphors to promote a profoundly important idea: we can make the future go better, and we should. It's filled with rich, relevant and persuasive historical examples, grounding his philosophical arguments in the real world. It's a book for people who are curious to learn, but also motivated to act - I strongly recommend it.

Summary of What We Owe The Future

Main argument
The book makes a case for longtermism, the view that positively influencing the long-term future is a key moral priority of our time. The overarching argument is simple:
1. Future people matter.
2. The future could be enormously valuable (or terrible).
3. We can positively influence the long-term future.

1. Future people matter
The book argues for the first claim at the outset in straightforward and intuitive terms, but MacAskill also takes the reader through rigorous arguments grappling with population ethics, the area of philosophy that focuses on these sorts of questions.

2. The future could be enormously valuable (or terrible)
MacAskill's argument for the second claim is that there are far more people who could potentially live in the future than have ever lived in the past. On certain assumptions about the average future population and expected lifespan of the human species, the number of people who could live in the future dramatically outweighs the number of people who have ever lived. This kind of analysis may have inspired Our World In Data's visualisation of how vast the long-term future could be. I recommend Kurzgesagt's "The Last Human — A Glimpse Into The Far Future", which evocatively draws out the potential magnitude of our long-term future. A substantial amount is at stake: if the future goes well, it could be enormously valuable, but if it doesn't, it could be terrible.

3. We can positively influence the long-term future
The third claim is the central focus of the book. MacAskill aims not just to argue that we can in principle influence the long-term future (which is the standard of most philosophical arguments) but that we can and here's how (the standard for those who want to take action). MacAskill argues that one of the best ways to focus on the long-term future is to reduce our risk of extinction. Though he also argues that it's not just about whether we survive; it's also about how we survive. The case for focusing on the ways we can improve the quality of the long-term future is one of the key lessons I took from the book.

Things I learned from reading What We Owe The Future
Much of what I read was new to me, even as someone who's been highly engaged with these ideas. If I were to list all the historical examples that were new to me, I'd essentially be rewriting the book. Instead, here are the top three lessons I learned. 
Lesson one: Today's values could have easily been different. One of the book's key ideas is that if we could re-run history, it's unlikely we'd end up with the same values — instead, they're contingent. This is not something I believed before reading the book. If I can make a personal confession: I'm a (perhaps naive) supporter of a philosophical view called hedonistic utilitarianism, which claims the best actions are those that increase the total amount o...

The Daily Stoic
Will MacAskill on Creating Lasting Change

The Daily Stoic

Play Episode Listen Later Aug 17, 2022 70:32


Ryan talks to professor and writer Will MacAskill about his book What We Owe The Future, how to create effective change in the world, the importance of gaining a better perspective on the world, and more.

Will MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. His research focuses on the fundamentals of effective altruism - the use of evidence and reason to help others as much as possible with our time and money - with a particular concentration on how to act given moral uncertainty. He is the author of the book What We Owe The Future, available for purchase from August 16. Will also wrote Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference and co-authored Moral Uncertainty.

✉️ Sign up for the Daily Stoic email: https://dailystoic.com/dailyemail

The Nonlinear Library
EA - A Quick Qualitative Analysis of Laypeople's Critiques of Longtermism by Roh

The Nonlinear Library

Play Episode Listen Later Aug 15, 2022 22:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Quick Qualitative Analysis of Laypeople's Critiques of Longtermism, published by Roh on August 15, 2022 on The Effective Altruism Forum.

While preparing this post, How to Talk to Lefties in Your Intro Fellowship was posted, and it received some interesting pushback in the comments. The focus of this post is not reframing EA marketing in ways that may weaken its epistemic status, but exploring the misconceptions of EA longtermism that its marketing may produce. See footnote for more details.

Summary

I coded 104 comments on Ezra Klein and William MacAskill's podcast episode “Three Sentences that Could Change the World — and Your Life” to better understand the critiques a layperson has of longtermism. More specifically, I wanted to capture the gut-instinct, first-pass reactions to longtermism, not true philosophical arguments against longtermism. Because of this particular sample, this analysis is especially relevant to left-leaning, EA-inclined but EA-unfamiliar individuals. One reason examining people's first takes in detail might be informative is that it helps us identify the ways in which longtermist messaging can be misinterpreted when there's a large inferential gap between EAs and well-meaning people. Finally, following MacAskill's What We Owe the Future book release, I anticipate a surge of discussion on longtermism generally.

Summarized Takeaways (see Interpretation for more detail & context)

In discussions about longtermism with left-leaning / progressive people completely new to the movement, here are things to keep in mind. Note that other groups may not generate such misconceptions from traditional EA rhetoric.

Prepare for concerns about longtermism being anti-climate-change:
- Explain how the future world's well-being is also affected by longtermist causes (i.e. elephants can also be turned into paperclips)
- Make explicit the ways animal welfare is being considered in EA efforts
- Discuss how overpopulation may be overestimated as an issue, and not a huge contributor to climate change, when discussing the long-term future

Prepare for despair about the future:
- Challenge underlying assumptions that big change is made only through political venues by pointing out effective change pathways outside of politics (academic research, non-profit work, non-partisan policy work) and, generally, EA focuses that have high tractability
- Clarify longtermist approaches to assessing and overcoming risks
- Challenge underlying assumptions about how soon and how likely the end of the world is

Legitimizing the movement:
- Emphasize EA as a Question, not a set of cause areas
- Explain longtermism's role as a practical movement (and not just a thought experiment)
- Reference the Iroquois tribe's efforts towards sustainability as a historical precedent for longtermism & acknowledge non-white philosophical predecessors
- Highlight non-white-male EA work in longtermist discussions
- Discuss neartermist work, animal welfare work, and donation work to legitimize EA as a whole
- Emphasize EA's efforts towards moral circle expansion

Things I think would be useful going forward:
- A comprehensive list of big EA achievements in longtermism, for encouraging optimism and legitimizing the movement's efforts
- A historical track record of the important work EA has done, for legitimizing the movement both in and out of the longtermism space (previous people have said this before)
- An EA survey response to the statement: we should not destroy the earth in the effort to sustain human life

Methodology

I read the online comments responding to the podcast, and summarized eight themes that the critiques fell into. Then, I went back and coded each comment according to which themes it fit. I excluded comments that replied to other comments (either in a reply chain or that @'ed someone else's comment). This was done in a vaguely scientific manne...

80,000 Hours Podcast with Rob Wiblin
#136 – Will MacAskill on what we owe the future

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Aug 15, 2022 174:36


1. People who exist in the future deserve some degree of moral consideration.
2. The future could be very big, very long, and/or very good.
3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

Links to learn more, summary and full transcript.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever.
In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:

• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Global Dispatches -- World News That Matters
How "Longtermism" is Shaping Foreign Policy | Will MacAskill

Global Dispatches -- World News That Matters

Play Episode Listen Later Aug 15, 2022 34:11


Longtermism is a moral philosophy that is increasingly gaining traction around the United Nations and in foreign policy circles. Put simply, longtermism holds the key premise that positively influencing the long-term future is a key moral priority of our time. The foreign policy community in general, and the United Nations in particular, are beginning to embrace longtermism. Next year, at the opening of the UN General Assembly in September 2023, the Secretary-General is hosting what he is calling a Summit of the Future to bring these ideas to the center of debate at the United Nations.

Will MacAskill is Associate Professor of Philosophy at the University of Oxford. He is the author of the new book "What We Owe the Future", which explains the premise and implications of longtermism, including for the foreign policy community, particularly as it relates to mitigating catastrophic risks to humanity.

Barvas Free Church - Sermons
Guest Preacher Rev. George Macaskill

Barvas Free Church - Sermons

Play Episode Listen Later Aug 14, 2022 40:58


Guest Preacher Rev. George Macaskill
Series: Guest Preacher
Preacher: Rev. George Macaskill
Lord's Day Morning
Date: 14th August 2022
Passages: Psalm 130:1-8; Titus 2:11-14

The Nonlinear Library
EA - Will MacAskill: The Beginning of History by Zach Stein-Perlman

The Nonlinear Library

Play Episode Listen Later Aug 14, 2022 0:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will MacAskill: The Beginning of History, published by Zach Stein-Perlman on August 13, 2022 on The Effective Altruism Forum. Will has published some other pieces in advance of What We Owe the Future, but The Beginning of History: Surviving the Era of Catastrophic Risk is the deepest. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Modern Wisdom
#512 - Will MacAskill - How Long Could Humanity Continue For?

Modern Wisdom

Play Episode Listen Later Aug 13, 2022 99:33


Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement.

Humans understand that long-term thinking is a good idea, that we need to provide a good place for future generations to live. We try to leave the world better than when we arrived for this very reason. But what about the world in one hundred thousand years? Or 8 billion years? If there are trillions of human lives still to come, how should that change the way we act right now?

Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence and much more...

Sponsors:
Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied)
Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20)
Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied)

Extra Stuff:
Buy What We Owe The Future - https://amzn.to/3PDqghm
Check out Effective Altruism - https://www.effectivealtruism.org/
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

Get in touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact/

The Valmy
Will MacAskill - Longtermism, Altruism, History, & Technology

The Valmy

Play Episode Listen Later Aug 12, 2022 56:07


Podcast: The Lunar Society (LS 30 · TOP 5%)
Episode: Will MacAskill - Longtermism, Altruism, History, & Technology
Release date: 2022-08-09

Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people — in particular, providing defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until early 2010, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events, like Alex meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously means appreciating the contingency of values — appreciating that if the Nazis had won the World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore.
What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
So that makes a lot of sense. But I'm asking a slightly separate question — not only are there possible values that could be better than ours, but should we expect our values — we have the sense that we've made moral progress (things are better than they were before, or better than most possible other worlds in 2100 or 2200) — should we not expect that to be the case? Should our priors be that these are ‘meh' values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are the values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming — which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy — these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What's right, morally speaking? What do the best arguments support?” I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32
My view goes the opposite way from the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved for didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented.
It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subject if you want to do good in the world is philosophy or economics. But we've got that in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death — they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that the particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?”

Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point?
Would similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If there's a culture that's into making textiles in an automated way, as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, “If you could have done something like that, you'd be the greatest person in the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyway?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives — and that's a lot of lives for a person to save. But it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run differences. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history: rather than liberalism emerging dominant in the 20th century, it could have been communism. The better technology gets, the better able the ruling ideology is to cement itself and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.

Let's say a world government is based around those ideas. Then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society.
That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, number of people times fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion-dollar company — do these examples suggest that organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is that the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things from a certain environment by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.
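A minimal Python sketch of the toy model in this exchange, progress as population times fraction doing R&D times productivity. The numbers are invented for illustration only; they come from neither the episode nor the book.

    # Toy version of the idea-production model discussed above:
    # annual progress ~ population x fraction doing R&D x productivity.
    # All inputs are made-up illustrative numbers.

    def annual_progress(population, rd_fraction, productivity=1.0):
        """Ideas produced per year under the simple linear model."""
        return population * rd_fraction * productivity

    # A big society with few researchers vs. a small, researcher-dense hub:
    print(annual_progress(1_000_000, 0.001))  # 1000.0
    print(annual_progress(50_000, 0.02))      # 1000.0 -- same total output

    # A Bell-Labs-style environment can punch above its weight via the
    # productivity multiplier, but population still dominates at scale:
    print(annual_progress(500, 0.5, productivity=10.0))  # 2500.0

On this simple model, concentrated talent only matters through the productivity multiplier, which is the point being made about Bell Labs.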
Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat with longtermist institutions is that I'm pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population. How would you ensure that incentives are aligned?

In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30 years' time — both their policies and their assessment of the previous assembly — get assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory — I'm skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30
If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48
There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years that gets aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in, because you wouldn't want something only incentivizing economic growth at the expense of tail risks.

Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson's about maximizing the expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50
Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs: if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It's an idea of speculative political philosophy, about how a society could be extraordinarily different in structure, that is incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn't been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising and I'd love to see it tried out on a city level or something.

Dwarkesh Patel 23:13
Let's take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There are environmental review boards that will try to assess the environmental impact of new projects and reject any proposals based on certain metrics.

The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment.
With longtermism, it takes a long time to assess the actual impact of something, but policymakers are tasked with evaluating the long-term impacts of something. Are you worried that it'd be a system that'd be easy to game by malicious actors? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not allowed to lobby for themselves (it can just be co-opted). My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself — it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and gets judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term rather than things that are narrowly focused.

Regulation to have liability insurance for dangerous bio labs is not about trying to represent the interests of future generations. But it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could've had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company — which implies that the company can't be around if there's an existential risk…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. So, a corporation that is in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.
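A rough way to see what a ten-year half-life implies, using simple exponential-decay arithmetic. Only the ten-year half-life comes from the conversation; the 400-year comparison figure is invented purely for contrast.

    # P(institution still exists after t years) = 0.5 ** (t / half_life)

    def survival_probability(years, half_life):
        return 0.5 ** (years / half_life)

    print(survival_probability(30, 10))   # 0.125 -- roughly 1 in 8 top-200 firms
    print(survival_probability(30, 400))  # ~0.95 -- a hypothetical 400-year half-life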
Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time — if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talk about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things, like a legal right to slavery, proposed as constitutional amendments. If that had been locked in, it would have been horrible. It's hard to answer in the abstract, because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
There are specific types of 'moments of plasticity', for two reasons. One is a world completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments; people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end, and the moral change that we're used to could end in the coming decades. If there were a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) when the rulers of the world are digital rather than biological, that [ideological conformity] could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you — at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53
Going back to the long-term of the longtermism movement, there are many instructive foundations that were set up about a century ago like the Rockefeller Foundation, Carnegie Foundation. But, they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?Will MacAskill 32:18I don't have strong views about those particular examples, but I have two natural thoughts. For organizations that want to persist a long time and keep having an influence for a long time, they've historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston to pay out after 100 years and then 200 years for different fractions of the amount invested. But, he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would have had plausibly more impact.The second is a ‘regression to the mean' argument. You have some new foundation and it's doing an extraordinary amount of good as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you're changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.Dwarkesh Patel 33:40Going back to that hand problem: if you specify your mission too narrowly and it doesn't make sense in the future—is there a trade off? If you're too broad, you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?Will MacAskill 34:11It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit more broad than that. Secondly, I expect people in the future to be smarter and more capable. It's certainly the trend over time. In which case, if we're sharing similar broad goals, and they're implementing it in a different way, then they have it.How good can the future be?Dwarkesh Patel 34:52Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?Will MacAskill 35:11Yeah, I'm familiar with the argument. But, we should distinguish the idea that ems are at subsistence level from the idea that we would have bad lives. So subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. 
At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment. But 31% of Americans said that they would not want to relive their life at every moment. So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least. From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that, because we were sampling comparatively poor Americans and comparatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusion from that. But it is pretty striking as a piece of information, given that you find people's well-being in richer countries considerably higher than in poorer countries, on average.

Dwarkesh Patel 37:41
I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would take you to the end of whatever activity you're engaging with. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they would be willing to make, as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18
Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.

Do you think this could be a general problem with longtermism? If you were trying to find the most important things long-term, you might be missing things that wouldn't be obvious thinking this way?

Will MacAskill 39:48
Yeah, I think that's a risk.
Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, well-govern the rise of AI, reduce worst-case pandemics that can kill us all, prevent a Third World War, ensure that good values are promoted, and avoid value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in Effective Altruism were realizing at an early stage that the coronavirus pandemic was going to be a big deal; they had been worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for over 70 years. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things, and there are going to be tons of surprises, there are some things — especially fairly long-standing technological trends — about which we can make reasonable predictions, at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that the things we know are important now. But if something didn't turn out, a thousand years ago, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
What I was saying is: of me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier. If I and others engaging in longtermism are making specific, measurable forecasts about what is going to happen with AI, or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, and then things pop out of that that are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
You were saying earlier that the contingency in technology implies that, given their worldview, even if you're trying to maximize what has had the most impact in the past — if what's had the most impact in the past is changing values — then maybe changing values, rather than economic growth, might be the most important thing? Or trying to change the rate of economic growth?

Will MacAskill 43:57
I really do take seriously the argument of how people have acted in the past, especially people trying to make a long-lasting impact — what things they did that made sense and what didn't.
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and therefore future generations being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So, that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground — given that Britain at the time had amounts of coal so small that the climate change effect is negligible at that level — probably would have been harmful.

But we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good. That seems to have stood the test of time. That's one historical data point. But potentially, we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off, but on the negative side, it might prevent things like human challenge trials and cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
On the question of global integration, you're absolutely right: it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most possible value that way.

The solution is doing the good bits and not having the bad bits. For example, in a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, in the world, you can have very strong regulation and treaties that only deal with certain global public goods — like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction — without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, but it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, over time they tend to gain more power. You can look at the U.S., you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promoting of moral diversity and moral change and moral progress.
But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea reaches common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress, and an extremely rapid rate of technological progress, within the next 10-20 years. If so, you're right: value changes are something that pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it's not that long. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put a very significant probability mass on the idea that we will not hit some 'end of history' this century. In those worlds, promoting better values could pay off very well.
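As a quick check on the compounding claim (illustrative arithmetic only; the 30% figure is the one quoted above, and the 1,000x target is an arbitrary benchmark):

    import math

    # Years for a movement growing 30% per year to reach 1,000x its size:
    growth_rate = 0.30
    years = math.log(1000) / math.log(1 + growth_rate)
    print(round(years, 1))  # ~26.3 -- decades, not centuries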
Some of these questions are, “Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation?” There's almost no work done on any of these topics. Companies aren't interested too grand in scale.Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always used to be that way.If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology. Probably the natural sciences too and economics. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.Dwarkesh Patel 53:20Will I be able to send my kids to MacAskill University? What's the status on that project?Will MacAskill 53:25I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than it currently exists. It's extremely hard to break in or creating something that's very prestigious because the leading universities are hundreds of years old. But maybe it's possible. I think it would could generate enormous amounts of value if we were able to pull it off.Dwarkesh Patel 54:10Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important even if they're not the most important questions? Big picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.Will MacAskill 54:34Great. Well, thank you so much!Dwarkesh Patel 54:38Anywhere else they can find you? Or any other information they might need to know?Will MacAskill 54:39Yeah, sure. What We Owe The Future is out on August 16 in the US and first of September in the United Kingdom. If you want to follow me on Twitter, I'm @WillMcCaskill. If you want to try and use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of the income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact at all. They provide one-on-one coaching too.If you're feeling inspired and want to do good in the world, you care about future people and I want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can, and 80,000 hours are the sources you can go to and get involved.Dwarkesh Patel 55:33Awesome, thanks so much for coming on the podcast! It was a lot of fun.Will MacAskill 54:39Thanks so much, I loved it. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

Deep Dive with Ali Abdaal
Moral Philosopher Will MacAskill on What We Owe The Future

Deep Dive with Ali Abdaal

Play Episode Listen Later Aug 11, 2022 172:44


How can we do the most good with our careers, money and lives? And what are the things that we can do right now to positively impact future generations to come? This is the mission of the Effective Altruism (EA) movement co-founded by Will MacAskill, Associate Professor in Philosophy at the University of Oxford and co-founder of the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours. In this conversation, Will and I talk about the fundamentals of EA, his brand new book 'What We Owe The Future', the idea of 'longtermism', the most pressing existential threats humanity is facing and what we can do about them, why giving away your income will make you happier, why your career choice is the biggest choice you'll make in your life, and much more. 

The Lunar Society
36: Will MacAskill - Longtermism, Altruism, History, & Technology

The Lunar Society

Play Episode Listen Later Aug 9, 2022 56:07


Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, "this would never have happened," but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China: the Mohists defended an impartial view of morality, and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events, like Alex meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously and appreciating the contingency of values means appreciating that if the Nazis had won the Second World War, we would all be thinking, "wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!" That's a terrifying thought. 
I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, "Oh, we should lock in the Western values we have." Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
So that makes a lot of sense. But I'm asking a slightly separate question—not only are there possible values that could be better than ours, but should we expect our values - we have the sense that we've made moral progress (things are better than they were before, or better than most possible other worlds in 2100 or 2200) - should we not expect that to be the case? Should our priors be that these are 'meh' values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, "Are the values of today going particularly well?" There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, "What's right, morally speaking? What do the best arguments support?" I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32
My view goes the opposite of the Burkean view. We are cultural creatures in our nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. It works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. 
It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects if you want to do good in the world are philosophy and economics. But we've got those in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth? That even if bad things happen, you can rebound, and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, "These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways"?

Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? 
Would similar technologies that were vital to the industrial revolution have been developed? Yes, there are very strong incentives for doing so.

If there's some culture other than 18th-century England that's into making textiles in an automated way, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, "If you could have done something like that, you'd be the greatest person of the 20th century." Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyways?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But, it's not as many as simply saying, "Oh, this tech was used by a billion people who would have otherwise been at risk of starvation." In fact, not long afterwards, there were similar kinds of agricultural development.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But, we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run differences. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history. Rather than liberalism emerging dominant in the 20th century, it could have been communism. The better technology gets, the better able a ruling ideology is to cement itself and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.

Let's say a world-government is based around those ideas. Then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society. 
That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, number of people times the fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion-dollar company—do these examples suggest that organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany was because all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.
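The model being referenced throughout this exchange is simple enough to state directly: research output per year is the number of people, times the fraction of them doing R&D, times average researcher productivity. A minimal Python sketch of that model follows; all of the numbers in it are purely illustrative assumptions, not estimates from the book or the conversation.

```python
# Minimal sketch of the two-factor research model discussed above:
# research output = population * fraction doing R&D * productivity per researcher.
# All numbers are illustrative assumptions, not estimates from the book.

def research_output(population: float, rd_fraction: float, productivity: float) -> float:
    """Research output per year under the simple multiplicative model."""
    return population * rd_fraction * productivity

pop, frac, prod = 8e9, 0.001, 1.0  # assumed starting values
for decade in range(6):
    print(f"year {decade * 10:>2}: output = {research_output(pop, frac, prod):,.0f}")
    pop *= 1.005 ** 10   # assumed slow population growth: 0.5%/yr
    prod *= 0.99 ** 10   # assumed researcher-productivity decline: 1%/yr
```

On these assumed numbers, output drifts slowly downward even though population keeps growing, because the productivity decline outpaces population growth; that interaction is exactly the concern raised in the question.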
Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat with longtermist institutions is that I'm pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is you have a random selection from the population. How would you ensure that incentives are aligned?

In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, those people in 30 years' time, both their policies and their assessment of the previous assembly, get assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I'm skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30
If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48
There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years that gets aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in because you wouldn't want something only incentivizing economic growth at the expense of tail risks.

Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson's about maximizing the expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50
Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs, if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It's an idea from speculative political philosophy about how a society could be extraordinarily different in structure, and it's incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn't been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising and I'd love to see it tried out at a city level or something.

Dwarkesh Patel 23:13
Let's take a scenario where the government starts taking the impact on the long-term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There're environmental review boards that will try to assess the environmental impact of new projects and reject any proposals based on certain metrics.

The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. 
With longtermism, it takes a long time to assess the actual impact of something, but policymakers are tasked with evaluating the long-term impacts of something. Are you worried that it'd be a system that'd be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not around to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and gets judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term rather than things that are narrowly-focused.

Regulation to have liability insurance for dangerous bio labs is not about trying to represent the interests of future generations. But, it's very good for the long-term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long-term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But, there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could've had some system that wouldn't have been co-opted in the long-term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can't be around if there's an existential risk…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. So, a corporation that is in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.
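The half-life framing here can be made concrete. Assuming a constant 10-year half-life (the figure cited above; treating it as constant exponential decay is my assumption for illustration), the fraction of top-200 firms expected to survive falls off very quickly:

```python
# Survival under a constant 10-year corporate half-life (figure from the
# conversation; the constant-decay model itself is an illustrative assumption).

HALF_LIFE_YEARS = 10

def surviving_fraction(years: float) -> float:
    """Fraction of firms expected to remain after `years`, given the half-life."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (10, 50, 100, 800):
    print(f"{years:>3} years: surviving fraction ~ {surviving_fraction(years):.3g}")
# 10 years: 0.5; 50 years: ~0.031; 100 years: ~0.001; 800 years: ~8e-25
```

On this model essentially no corporation lasts 800 years, while Oxford and Cambridge already have, which is the contrast being drawn.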
Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like a constitutional amendment protecting the legal right to slavery. If that had been locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
This is a specific type of 'moment of plasticity' for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments, like people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views ultimately becomes the most popular.

Secondly, we're at this period where things can really change. But, it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there was a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long-term. The key technology here is Artificial Intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that [ideological conformity] could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other kinds of sources of value-change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the control will happen before the point of space settlement. If we took to space one day, and there're many different settlements and different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But, I'm worried that the control will happen earlier. I'm worried it might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53
Hm, right. 
Going back to the long-term of the longtermism movement: there are many instructive examples of foundations that were set up about a century ago, like the Rockefeller Foundation and Carnegie Foundation. But, they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. For organizations that want to persist a long time and keep having an influence for a long time, they've historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years and then 200 years for different fractions of the amount invested. But, he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would have had plausibly more impact.

The second is a 'regression to the mean' argument. You have some new foundation and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you're changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40
Going back to that dead hand problem: if you specify your mission too narrowly and it doesn't make sense in the future—is there a trade-off? If you're too broad, you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?

Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. It's certainly the trend over time. In which case, if we're sharing similar broad goals, and they're implementing them in a different way, then have at it.

How good can the future be?

Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems because there'll be a lot of competition, and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11
Yeah, I'm familiar with the argument. But, we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. So subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. 
At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life every moment. But, 31% of Americans said that they would not want to relive their life at every moment? So, why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least. From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that, because we were comparing relatively poor Americans to relatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusion from that. But, it is pretty striking as a piece of information, given that, on average, people in richer countries report considerably higher well-being than people in poorer countries.

Dwarkesh Patel 37:41
I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. So, I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness and wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would take you to the end of whatever activity you're engaging with. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18
Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking, but might be more important.

Do you think this could be a general problem with longtermism? If you're trying to find the things that are most important for the long-term, you might be missing things that wouldn't be obvious when thinking this way?

Will MacAskill 39:48
Yeah, I think that's a risk. 
Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things like trying to make AI safe, govern the rise of AI well, reduce worst-case pandemics that can kill us all, prevent a Third World War, ensure that good values are promoted, and avoid value lock-in. But, some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument to, because we don't know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This also was true of Tyler Cowen, but people in Effective Altruism were realizing early that the Coronavirus pandemic was going to be a big deal. At an early stage, they were worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for over 50 years. The idea that AI systems are gonna get much larger and leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things and there are going to be tons of surprises, there are some things, especially when it comes to fairly long-standing technological trends, about which we can make reasonable predictions — at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that the things we know are important now. But, if something a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
As for me versus Patrick Collison and Tyler Cowen: who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long-term. We might get suggestive evidence earlier. If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, and then things pop out of that that are good for the long-term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
What you were saying earlier about the contingency of technology implies that, given their worldview, even if you're trying to maximize what has had the most impact in the past, if what's had the most impact in the past is changing values, then economic growth might not be the most important thing? Or trying to change the rate of economic growth?

Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: what things they did made sense, and what didn't. 
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and therefore future generations being impoverished. It's pretty striking because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuels there were in the world. So, that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time was burning much lower amounts of coal—so small that the climate change effect is negligible at that level.

But, we could look at other things that John Stuart Mill could have done, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good. That seems to have stood the test of time. That's one historical data point. But potentially, we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps prevent some dangerous tech from taking off, but on the negative side, it might prevent things like human challenge trials and cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
On the question of global integration, you're absolutely right, it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on the most possible value that way.

The solution is doing the good bits and not having the bad bits. For example, in a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, in the world, you can have very strong regulation and treaties that only deal with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But, that could change.

Dwarkesh Patel 47:34
Yeah, but it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, over time they tend to gain more power. You can look at the U.S., you can look at the European Union. But yeah, that seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be if the culture itself is liberal and promoting of moral diversity and moral change and moral progress. 
But, that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress, and an extremely rapid rate of technological progress, within the next 10-20 years. If so, you're right. Value changes are something that pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it's not that long before it's very large. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But, we should put very significant probability mass on the idea that we will not hit some 'end of history' this century. In those worlds, promoting better values could pay off very well.
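The compound-returns arithmetic here is easy to check. A minimal Python sketch follows, using the 30% annual growth rate mentioned above and an assumed, illustrative starting size of 10,000 people:

```python
# Compound growth of a movement at 30% per year.
# The 30% rate is from the conversation; the starting size of 10,000
# members is an assumed, illustrative figure.

size = 10_000.0
rate = 0.30

for year in range(1, 31):
    size *= 1 + rate
    if year % 10 == 0:
        print(f"after {year} years: ~{size:,.0f} members")
# Output (approximate): 10 years ~ 138,000; 20 years ~ 1.9 million; 30 years ~ 26 million
```

At that rate, a small movement becomes one of tens of millions within three decades, which is the sense in which the change is "not that long" by historical standards.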
Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold potato diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold potato diet, and I was tempted, as a gimmick, to try it. As I'm sure you know, potato is close to a superfood, and you could survive indefinitely on butter mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. Question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the board of those organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, and helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long-term. I've had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions—How contingent is history? How happy are people generally?—Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are just not incentivized anywhere in the world. Companies don't incentivize work on them because they're too big picture. Some of these questions are: "Is the future good, rather than bad? If there was a global civilizational collapse, would we recover? How likely is a long stagnation?" There's almost no work done on any of these topics. Companies aren't interested; the questions are too grand in scale.

Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always use to be that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology—probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they currently exist. It's extremely hard to break in or create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the most popular questions? Big-picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and the first of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to try to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact. They provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39
Thanks so much, I loved it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The Tim Ferriss Show
#612: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change

The Tim Ferriss Show

Play Episode Listen Later Aug 2, 2022 104:35


Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change | Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below.

William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will. His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, "No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book." Please enjoy!

*This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns.

*This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199.

*This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.

*For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books.

Follow Tim: Twitter: twitter.com/tferriss; Instagram: instagram.com/timferriss; YouTube: youtube.com/timferriss; Facebook: facebook.com/timferriss; LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. 
Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, and many more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Reformed Deacon
Disabilities and the Church

The Reformed Deacon

Play Episode Listen Later Aug 1, 2022 58:30


In this episode of The Reformed Deacon, hosts David Nakhla and John Voss talk with Rev. Stephen Tracey, pastor of Lakeview OPC in Rockport, ME, on a topic that might seem overwhelming to churches, and as such may be overlooked: "Disabilities and the Church". This is a powerful and very helpful episode and should be required listening for every Christian—certainly for elders and deacons. It will open your eyes to some of the challenges the disabled and their families face in relationship to the church—some you may never have thought of before listening. In his three simple steps to minister to, and with, those with disabilities, Rev. Tracey suggests: "Be a welcoming church, be a welcoming church to anyone, and do what you can." Referenced in this episode: Bulletin insert: Irresistible Church Training for Disability Ministry: Disability Etiquette PDF; Resources from the PCA; Resources from the CRC. Lakeview OPC in Rockport, ME has been sending families and volunteers to Joni and Friends New England Family Retreats. In 2020 Joni and Friends made a short movie about Lakeview's involvement to encourage other churches in the New England area. Mark Vannoy is a ruling elder and we are very grateful to Mark and Esther for their willingness to open their home and their hearts. Many of the people in this movie are members and friends of Lakeview. MTIOPC; Deacon Check-In. Pastor Stephen Tracey's recommended reading on disability and the church: Beates, Michael S., Disability and the Gospel: How God Uses our Brokenness to Display His Grace, Wheaton, IL, Crossway, 2012. Hammond, George C., It Has Not Yet Appeared What We Shall Be: A Reconsideration of the Imago Dei in Light of Those with Severe Disabilities, Phillipsburg, NJ, Presbyterian and Reformed Publishing, 2017. Hubach, Stephanie O., Same Lake, Different Boat: Coming Alongside People Touched by Disability, Phillipsburg, NJ, Presbyterian and Reformed Publishing, 2006; revised and updated edition, 2020. Macaskill, Grant, Autism and the Church: Bible, Theology, and Community, Waco, TX, Baylor University Press, 2019. Some very useful booklets are available free from Joni and Friends, providing a broadly evangelical approach to many practical aspects of disability. Many helpful suggestions can be adapted for use in different local situations. https://www.joniandfriends.org/ministries/church-training-resources/irresistible-church-training-series/ Pastor Tracey found the following two books to be helpful: Brueck, Kate, Start with Hello: Introducing your Church to Special Needs Ministry, Agoura Hills, CA, The Irresistible Church Series, Joni and Friends, 2015. Lillo, Debbie, Doing Life Together: Building Community for Families Affected by Disability, Agoura Hills, CA, The Irresistible Church Series, Joni and Friends, 2017. Pastor Tracey adds: "I am very happy to discuss disability and the church. I can be contacted at tracey.1@opc.org."

The Reload with Sean Hansen
112: Jon Macaskill - Mindfulness, the Ultimate Performance Enhancer

The Reload with Sean Hansen

Play Episode Listen Later Jul 28, 2022 97:24


In today's discussion, performance mindset coach Sean Hansen gets an education on overcoming trauma through mindfulness from Jon Macaskill, the 'Mindful Frogman.' Jon Macaskill is a retired Navy SEAL Commander turned leadership and mindfulness coach. During his 24-year Navy career, he served in multiple highly dynamic leadership positions, from the battlefield to the operations center and the board room. His style of teaching leadership is unconventional yet highly effective. He is passionate about helping people and organizations become the best versions of themselves through mindfulness coaching, keynote speaking, and grit and resilience training. Are you an executive, entrepreneur, or combat veteran looking to overcome subconscious blind spots and limiting messaging to unlock your highest performance? Feel free to reach out to Sean at Reload Coaching and Consulting. Resources: The Men Talking Mindfulness retreat in Durango, CO; listen to Jon on the Men Talking Mindfulness podcast; connect with Jon on LinkedIn; bring Jon in to help your organization.

Lexman Artificial
Guest: William MacAskill on Nurseries, Succulency, Stances, Consequents, and Hymnody

Lexman Artificial

Play Episode Listen Later Jul 22, 2022 4:20


Lexman interviews William MacAskill, a professor of political science at the University of Edinburgh, about issues surrounding nurseries, succulency, stances, consequents, and hymnody. In this episode, MacAskill discusses how hymnody can be used to help mitigate famines and examines different stances that countries could take to prevent them from happening.

Page Fright: A Literary Podcast
70. Myth and Memory w/ Annick MacAskill

Page Fright: A Literary Podcast

Play Episode Listen Later Jul 14, 2022 60:09


Annick MacAskill stops by the virtual studio to talk about her new book, Shadow Blight. Andrew asks Annick about applying myth to the personal. It's a great chat! ----- Listen to more episodes of Page Fright here. Follow the podcast on Twitter here. Follow the podcast on Instagram here. ----- Annick MacAskill is the author of the poetry collections No Meeting Without Body (Gaspereau Press, 2018), a finalist for the Gerald Lampert Memorial Award and the J.M. Abraham Award, and Murmurations (Gaspereau Press, 2020). Her third book, Shadow Blight, was published by Gaspereau Press this spring. Her poems have appeared in journals and anthologies across Canada and abroad, and she is currently serving as Arc Poetry Magazine's Poet-in-Residence. She lives in K'jipuktuk (Halifax) on the traditional and unceded territory of the Mi'kmaq. annickmacaskill.com. ----- Andrew French is an author from North Vancouver, British Columbia. He has published two chapbooks, Poems for Different Yous (Rose Garden Press, 2021) and Do Not Discard Ashes (845 Press, 2020). Andrew has a BA in English from Huron University College at Western University and an MA in English from UBC. He writes poems, book reviews, and hosts this very podcast.

The Mojo Sessions
EP 313: Jon Macaskill, Navy SEAL - From Kicking Doors to Finding Inner Peace

The Mojo Sessions

Play Episode Listen Later Jul 11, 2022 72:02


Jon Macaskill is a retired Navy SEAL Commander turned leadership and mindfulness coach who deals with his own personal trauma, including trauma associated with Operation Red Wings. During his 24-year Navy career, he served in multiple highly dynamic leadership positions, from the battlefield to the operations center and the board room. His style of teaching leadership is unconventional yet highly effective. Jon learnt firsthand how meditation can help you cope with stress, transform relationships, and get better results, even in the most extreme circumstances.   LINKS   Jon Macaskill website https://macaskillconsulting.com   Jon Macaskill on LinkedIn https://www.linkedin.com/in/jonmacaskill   The Mojo Sessions website https://www.themojosessions.com   The Mojo Sessions on Patreon https://www.patreon.com/TheMojoSessions Full transcripts of the show (plus time codes) are available on Patreon.   The Mojo Sessions on Facebook https://www.facebook.com/TheMojoSessions   Gary on LinkedIn https://www.linkedin.com/in/gary-bertwistle   Gary on Twitter https://twitter.com/GaryBertwistle   The Mojo Sessions on Instagram https://www.instagram.com/themojosessions   If you like what you hear, we'd be grateful for a review on Apple Podcasts or Spotify. Happy listening!   © 2022 Gary Bertwistle.  All Rights Reserved.

The Nonlinear Library
EA - Wheeling and dealing: An internal bargaining approach to moral uncertainty by MichaelPlant

The Nonlinear Library

Play Episode Listen Later Jul 10, 2022 37:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wheeling and dealing: An internal bargaining approach to moral uncertainty, published by MichaelPlant on July 10, 2022 on The Effective Altruism Forum. In this post, I explore and evaluate an internal bargaining (IB) approach to moral uncertainty. On this account, the appropriate decision under moral uncertainty is the one that would be reached as the result of negotiations between agents representing the interests of each moral theory, who are awarded your resources in proportion to your credence in that theory. This has only been discussed so far by Greaves and Cotton-Barratt (2019), who give a technical account of the approach and tentatively conclude that the view is inferior to the leading alternative approach to moral uncertainty, maximise expected choiceworthiness (MEC). I provide a more intuitive sketch of how internal bargaining works, and do so in a wide range of cases. On the basis of the cases, as well as considering some challenges for the view and its theoretical features, I tentatively conclude it is superior to MEC. I close by noting one implication relevant to effective altruists: while MEC seems to push us towards a (fanatical) adherence to longtermism, internal bargaining would provide a justification for something like worldview diversification. Notes to reader: (1) I'm deliberately writing this in a fairly rough-and-ready way rather than as a piece of polished philosophy. If I had to write it as the latter, I don't think it would get written for perhaps another year or two. I'll shortly begin working on this topic with Harry Lloyd, an HLI Summer Research Fellow, and I wanted to organise and share my thoughts before doing that. In the spirit of Blaise Pascal, I should say that if I had had more time, I would have written something shorter. (2) This can be considered a ‘red-team' of current EA thinking. 1. Introduction When philosophers introduce the idea of moral uncertainty - uncertainty about what we ought, morally, to do - they often quickly point out that we are used to making decisions in the face of empirical uncertainty all the time. Here's the standard case of empirical uncertainty: it might rain tomorrow, but it might not. Should you pack an umbrella? The standard response to this is to apply expected utility theory: you need to think about the chance of it raining, the cost of carrying an umbrella, and the cost of getting wet if you don't carry an umbrella. Or, more formally, you need to assign credences (strengths of belief) and utilities (numbers representing value) to the various outcomes. Hence, when it's pointed out that we're also often uncertain about what we ought to do - should we, for example, be consequentialists or deontologists? - the standard thought is that our account of moral uncertainty should probably work much like our account of empirical uncertainty. The analogous account for moral uncertainty is called maximise expected choiceworthiness (MEC) (MacAskill, Ord, Bykvist, 2020). The basic idea is that we need to assign credences to the various theories as well as a numerical value on how choiceworthy the relevant options are on those theories. The standard case to illuminate this is: Meat or Salad: You are choosing whether to eat meat from a factory farm or have a salad instead.
You have a 40% credence in View A, a deontological theory on which eating meat is seriously morally wrong, and a 60% credence in View B, a consequentialist theory on which both choices are permissible. You'd prefer to eat meat. Intuitively, you ought to eat the salad. Why? Even though you have less credence in A than B, when we consider the relative stakes for each view, we notice that View A cares much more about avoiding the meat. Hence, go for the salad, as that maximises choiceworthiness. MEC is subject to various objections (see MacAskill, O...
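The MEC arithmetic in the Meat or Salad case, and the credence-proportional split that internal bargaining starts from, can be made concrete with a short sketch. The Python below is a minimal illustration, not anything from the post itself: the 40%/60% credences are taken from the example, but the choiceworthiness numbers and the 100-unit budget are assumptions chosen only to match the stated stakes ("seriously morally wrong" versus "both choices are permissible").

# Minimal sketch of maximise expected choiceworthiness (MEC) and of the
# credence-proportional allocation that internal bargaining (IB) begins with.
# The 0.4 / 0.6 credences come from the Meat or Salad example; all
# choiceworthiness values below are illustrative assumptions.
credences = {"A (deontology)": 0.4, "B (consequentialism)": 0.6}

# Assumed choiceworthiness of each option on each theory: View A treats
# eating meat as seriously wrong; View B is nearly indifferent, with a
# mild preference for meat (matching "You'd prefer to eat meat").
choiceworthiness = {
    "meat": {"A (deontology)": -100.0, "B (consequentialism)": 1.0},
    "salad": {"A (deontology)": 0.0, "B (consequentialism)": 0.0},
}

def expected_choiceworthiness(option):
    # Credence-weighted sum of the option's choiceworthiness across theories.
    return sum(credences[t] * cw for t, cw in choiceworthiness[option].items())

def proportional_allocation(budget):
    # IB's starting point: each theory's "agent" is awarded resources in
    # proportion to your credence in that theory.
    return {theory: budget * c for theory, c in credences.items()}

for option in ("meat", "salad"):
    print(option, expected_choiceworthiness(option))  # meat -39.4, salad 0.0
print(proportional_allocation(100.0))  # A's agent gets 40.0, B's gets 60.0

On these assumed numbers, MEC favours the salad (expected choiceworthiness 0.0 versus -39.4 for meat), matching the intuition above, while the bargaining picture instead hands View A's agent 40 units and View B's agent 60 units to negotiate with.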

Speakola
A rat race is for rats! — Kenny MacAskill MP on Jimmy Reid's Inaugural Address as Rector of Glasgow University, 1972

Speakola

Play Episode Listen Later Jun 30, 2022 69:19


A speech described by The New York Times in the days after delivery as 'the greatest speech since Abraham Lincoln's Gettysburg Address'. Jimmy Reid's speech was reprinted in full and praised around the world.  In this episode, Jimmy Reid biographer and Westminster MP Kenny MacAskill (member for East Lothian in Scotland since 2019) talks about this famous speech, as well as the 'no bevvying' shipyard address, and Reid's life and achievements as a worker advocate, politician, union leader, and media commentator.  There are snippets of speeches from Jimmy Reid's funeral, which included eulogies from Billy Connolly and Sir Alex Ferguson. Kenny MacAskill reads a full version of the Glasgow University speech as speech of the week, because no full audio version exists.  MacAskill's biography, 'Jimmy Reid: A Scottish Political Journey', is excellent.  Speakola is supported by listeners. There is a Patreon page which you can join if you want to offer Tony regular support for as little as $3/mth. We also welcome credit card donations, which can be monthly or one-off. Subscribe to our newsletter if you want a fortnightly email setting out great speeches by theme. Subscribe to Tony Wilson's 'Good one, Wilson' substack to receive a weekly taste of his writing.  Tony's signed books are for sale at his website.  Spread the speakola word! @byTonyWilson @speakola_ on Twitter and Instagram. Email comments or ideas to tony@speakola.com See omnystudio.com/listener for privacy information.

Being LGBTQ
Episode 246: Rosana Cade & Ivor MacAskill 'The Making Of Pinocchio'

Being LGBTQ

Play Episode Listen Later Jun 19, 2022 49:20


Rosana Cade & Ivor MacAskill are renowned queer live artists based in Glasgow, creating and performing their unique works nationally and internationally in a wide range of contexts. In this interview they chat to Sam about their latest project which takes inspiration from the story of Pinocchio from a queer trans perspective. It shifts between fantasy and authenticity in response to Ivor's gender transition. The performance is part of LIFT (London International Festival of Theatre) 2022 and premieres on June 29th at the Battersea Arts Centre. Find out more about LIFT - http://www.liftfestival.com 

EARadio
Fireside chat | Will MacAskill | EA Global: London 2021

EARadio

Play Episode Listen Later Jun 5, 2022 58:45


William MacAskill is an Associate Professor in Philosophy at Oxford University and a senior research fellow at the Global Priorities Institute. He is also the director of the Forethought Foundation for Global Priorities Research and co-founder and President of the Centre for Effective Altruism. He is the author of Doing Good Better and Moral Uncertainty, and has an upcoming book on longtermism called What We Owe The Future. This talk was taken from EA Global: London 2021. Click here to watch the talk with the video.

80,000 Hours Podcast with Rob Wiblin
#130 - Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later May 23, 2022 136:40


Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office. But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause - but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP. You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have. This is roughly the situation faced by today's guest Will MacAskill - University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement. Links to learn more, summary and full transcript. Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing. While surely a huge success, it brings with it risks that he's never had to consider before:
* Will and his colleagues might try to spend a lot of money trying to get more things done more quickly - but actually just waste it.
* Being seen as profligate could strike onlookers as selfish and disreputable.
* Folks might start pretending to agree with their agenda just to get grants.
* People working on nearby issues that are less flush with funding may end up resentful.
* People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
* Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.
But all these 'risks of commission' have to be weighed against the 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious. Having people look askance at you for paying high salaries to attract the staff you want is unpleasant. But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant - it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside. Will aims to strike a sensible balance between these competing errors, which he has taken to calling 'judicious ambition'. In today's episode, Rob and Will discuss the above as well as:
* Will humanity likely converge on good values as we get more educated and invest more in moral philosophy - or are the things we care about actually quite arbitrary and contingent?
* Why are so many nonfiction books full of factual errors?
* How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
* What does Will disagree with his colleagues on?
* Should we focus on existential risks more or less the same