Podcast appearances and mentions of William MacAskill

  • 58 PODCASTS
  • 100 EPISODES
  • 43m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Oct 1, 2022 LATEST

POPULARITY

2015–2022


Best podcasts about William MacAskill

Latest podcast episodes about William MacAskill

New Ideal, from the Ayn Rand Institute
Why MacAskill Is Wrong about What We Owe the Future

New Ideal, from the Ayn Rand Institute

Play Episode Listen Later Oct 1, 2022 67:26


Oxford Philosophy professor William MacAskill has quickly become noteworthy as the guru of the "effective altruism" movement. This movement supposedly applies a rational, data-driven perspective to selecting the charitable causes that do the most good. In his latest book, What We Owe the Future, MacAskill claims that true effective altruists pay careful attention to the welfare of the people of the distant future. Should we be concerned about the kinds of future doomsday scenarios MacAskill conjures -- rule by artificial intelligence, global nuclear war, environmental catastrophe? Do we have obligations to people in the distant future? Join Don Watkins, Mike Mazza, and Ben Bayer to discuss what MacAskill's focus reveals about the core premises of the "effective altruist" movement, and about the moral code of altruism as such.

The Daily Show With Trevor Noah: Ears Edition
Russia Coerces Ukrainians Into Voting To Join Russian Federation | William MacAskill

The Daily Show With Trevor Noah: Ears Edition

Play Episode Listen Later Sep 28, 2022 33:43


Russia coerces Ukrainians into voting in favor of joining the Russian Federation, Ronny Chieng teaches a class on K-pop, and William MacAskill discusses his book "What We Owe the Future."

Wild with Sarah Wilson
ELISE BOHAN: Ah shit, transhumanism….

Wild with Sarah Wilson

Play Episode Listen Later Sep 27, 2022 55:55


If we could bioengineer our bodies to live forever, would we, and should we? What if we could avoid all the awkward bits of sex and just neatly copulate with a robot? And what if we never had to go through the bother and pain of pregnancy and could instead use artificial external wombs? Would we? And should we? Transhumanists say these are moot questions, because the superhuman or post-human train has well and truly left the station: we're only decades from these altered, souped-up realities. Oxford transhumanist scholar Elise Bohan and I roll up our sleeves to discuss the litany of moral questions that arise from this, like why the hell were we not consulted before the train took off? Has anyone stopped to ask if this is what humanity wants, or can handle morally? We chat about the singularity and the particularly worrying effects on men and dating, and Elise posits a timeframe for AI intelligence taking over from human smarts (!). If ever there was a conversation in history to get us talking about what matters and makes for a flourishing existence, this is it. Take a deep breath… Grab Elise's book, Future Superhuman: Our Transhuman Lives in a Make-or-Break Century. I refer to the book Klara and the Sun by Kazuo Ishiguro, and we reference previous podcast chats with William MacAskill. If you need to know a bit more about me… head to my "about" page. Subscribe to my Substack newsletter for more such conversations. Get your copy of my book, This One Wild and Precious Life. Let's connect on Instagram! It's where I interact the most.

The Nonlinear Library
EA - Summarizing the comments on William MacAskill's NYT opinion piece on longtermism by West

The Nonlinear Library

Play Episode Listen Later Sep 21, 2022 3:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summarizing the comments on William MacAskill's NYT opinion piece on longtermism, published by West on September 21, 2022 on The Effective Altruism Forum. We are a small community, but our ideas have the potential to spread far if communicated effectively. Refining our communication means being well calibrated as to how people outside the EA community react to our worldviews. So when MacAskill's article about longtermism was published last month in the NYT, I was pretty interested to see the comment section. I started to count various reactions, got carried away, and ended up going through 300 or so. Below is a numerical summary.

Caveats
Selection bias is present. I would guess NYT commenters skew older and liberal. It's possible the comments don't reflect the overall sentiment of the article's readers, because people might only feel compelled to comment when they are strongly skeptical, undercounting casually positive readers. Many people signaled they felt positive towards the article and longtermist thinking, but were entirely pessimistic about our future -- basically "This is all well and good, but _". Sometimes it was hard to know whether to tally these as positive or skeptical; I usually went with whichever sentiment was the main focus of the comment. For the most part, this survey doesn't capture ideas people had to help our long-term future. Some of those not tallied included better education, fusion power, planting trees, and outlawing social media.

Tallies
60 - Skeptical -- either of longtermism, or our future
20 - Our broken culture prevents us from focusing on the long term
16 - We're completely doomed, there's no point
7 - We are hard-wired as animals to think short term
7 - Predicting the future is hard; made-up numbers
5 - We don't know what future generations will want
5 - We don't even value current lives
3 - I value potential people far less than current people
3 - It's easy to do horrific things in the name of longtermism
2 - This is ivory tower BS
42 - Generally positive
17 - This is nothing new (most of these comments were either about climate activism or seven-generation sustainability)
7 - This planet is not ours / humans don't deserve to survive
7 - We should lower the population
6 - This is all about environmental sustainability
6 - Animals matter too
5 - Republicans are terrible
4 - Reincarnation might be true
3 - We should ease up on technology
2 - Technology will save us
1 - Time travel might be true
1 - Society using carbon is a good thing
1 - This idea is un-American
1 - This is all the fault of boomers
1 - Stop blaming boomers

Takeaways
Overall, I found the responses to be more negative than anticipated. The most common sentiment I saw was utter pessimism, which I worry is a self-fulfilling prophecy. There was very little reaction to or discussion about the risks of bioweapons and AI. Many people seemed to substitute concern for our long-term future solely with concern for the environment. This is understandable given the prominence of environmentalism -- it's already top-of-mind for many. I think people struggled to appreciate the timescale proposed in the article. Many referenced leaving the Earth a better place for their (literal) grandchildren, or for seven generations from now, but not thousands of years. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Lexman Artificial
William MacAskill on Principia Ethica: The Positive Ethics of Aristotle

Lexman Artificial

Play Episode Listen Later Sep 13, 2022 4:36


The artificial intelligence Lexman welcomes William MacAskill, a philosopher and author of "Principia Ethica: The Positive Ethics of Aristotle". Lexman and MacAskill discuss Aristotle's view on particulars and principalship, and how they can help to better understand ethics.

Vox's The Weeds
Who decides how we'll save the future?

Vox's The Weeds

Play Episode Listen Later Sep 13, 2022 66:09


How do we make life better for future generations? Who gets to make those decisions? These are tough questions, and today's guest, philosopher William MacAskill (@willmacaskill), tries to help us answer them.

References:
What We Owe the Future by William MacAskill
Effective altruism's most controversial idea
How effective altruism went from a niche movement to a billion-dollar force
Effective altruism's longtermist goals for the future don't hurt people in the present

Hosts: Bryan Walsh (@bryanrwalsh) and Sigal Samuel (@sigalsamuel)

Credits: Sofi LaLonde, producer and engineer; Libby Nelson, editorial adviser; A.M. Hall, deputy editorial director of talk podcasts

Want to support The Weeds? Please consider making a donation to Vox: bit.ly/givepodcasts

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Lexman Artificial
Lexman Interviews William MacAskill on Teague Dualism and Plagiarism

Lexman Artificial

Play Episode Listen Later Sep 11, 2022 4:48


Lexman interviews William MacAskill on the philosophy of Teague dualism and plagiarism.

The Nonlinear Library
EA - [Link post] Optimistic “Longtermism” Is Terrible For Animals by BrianK

The Nonlinear Library

Play Episode Listen Later Sep 7, 2022 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link post] Optimistic “Longtermism” Is Terrible For Animals, published by BrianK on September 6, 2022 on The Effective Altruism Forum. Oxford philosopher William MacAskill's new book, What We Owe the Future, caused quite a stir this month. It's the latest salvo of effective altruism (EA), a social movement whose adherents aim to have the greatest positive impact on the world through use of strategy, data, and evidence. MacAskill's new tome makes the case for a growing flank of EA thought called “longtermism.” Longtermists argue that our actions today can improve the lives of humans way, way, way down the line — we're talking billions, trillions of years — and that in fact it's our moral responsibility to do so. In many ways, longtermism is a straightforward, uncontroversially good idea. Humankind has long been concerned with providing for future generations: not just our children or grandchildren, but even those we will never have the chance to meet. It reflects the Seventh Generation Principle held by the indigenous Haudenosaunee (a.k.a. Iroquois) people, which urges people alive today to consider the impact of their actions seven generations ahead. MacAskill echoes the defining problem of intergenerational morality — people in the distant future are currently “voiceless,” unable to advocate for themselves, which is why we must act with them in mind. But MacAskill's optimism could be disastrous for non-human animals, members of the millions of species who, for better or worse, share this planet with us. Read the rest on Forbes. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Clearer Thinking with Spencer Greenberg
Estimating the long-term impact of our actions today (with Will MacAskill)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Sep 7, 2022 66:27


What is longtermism? Is the long-term future of humanity (or life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one in a million chance of stopping World War III, then should I devote my life to pursuing that plan?) If we want to know, say, whether protests are effective or not, we merely need to gather and analyze existing data; but how can we estimate whether interventions implemented in the present will be successful in the very far future?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator–backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe The Future.

EconTalk
Will MacAskill on Longtermism and What We Owe the Future

EconTalk

Play Episode Listen Later Sep 5, 2022 76:22


Philosopher William MacAskill of the University of Oxford and a founder of the effective altruism movement talks about his book What We Owe the Future with EconTalk host Russ Roberts. MacAskill advocates "longtermism," giving great attention to the billions of people who will live on into the future long after we are gone. Topics discussed include the importance of moral entrepreneurs, why it's moral to have children, and the importance of trying to steer the future for better outcomes.

Lexman Artificial
The Super Brain of William MacAskill

Lexman Artificial

Play Episode Listen Later Sep 2, 2022 3:19


William MacAskill, neurologist and superbrain extraordinaire, comes on the show to chat about his latest project - building a brain-emulating machine! But is it a menace to society, or is there another ulterior motive? Plus, aunty Hilary comes to visit and Lexman hashes out the details of theirs and Aunt Betty's holm.

BetterPod
William MacAskill: What we owe the future

BetterPod

Play Episode Listen Later Aug 31, 2022 28:25


If you've been listening to BetterPod over the last few months, it won't come as a surprise that we think future people matter. So we were very excited when we heard that one of the most important philosophers working today had written a book arguing for longtermism – effectively, why we should and how we can act today for a better tomorrow. William MacAskill's What We Owe the Future is only just out, but it's already shaping ethical debates around the world. Stephen Fry called it “a book of great daring, clarity, insight and imagination”, while Joseph Gordon-Levitt said he was moved to tears by its optimism. William MacAskill wants to change how you think about tomorrow – and how you act today… on this episode of BetterPod he tells us why. BetterPod is brought to you by The Big Issue's Future Generations team. Through the Future Generations team, we offer a platform for exciting young journalists from underrepresented backgrounds to address the biggest issues facing us today. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.

10% Happier with Dan Harris
491: A New Way to Think About Your Money | William MacAskill

10% Happier with Dan Harris

Play Episode Listen Later Aug 29, 2022 64:13


Most of us worry about money sometimes, but what if we changed the way we thought about our relationship to finances? Today's guest, William MacAskill, offers a framework in which to do just that. He calls it effective altruism. One of the core arguments of effective altruism is that we all ought to consider giving away a significant chunk of our income, because we know, to a mathematical near certainty, that several thousand dollars could save a life. Today we're going to talk about the whys and wherefores of effective altruism. This includes how to get started on a very manageable and doable level (which does not require you to give away most of your income), and the benefits this practice has on both the world and your own psyche. MacAskill is an associate professor of philosophy at Oxford University and one of the founders of the effective altruism movement. He has a new book out called What We Owe the Future, where he makes a case for longtermism, a term used to describe developing the mental habit of thinking about the welfare of future generations.

In this episode we talk about:
  • Effective altruism
  • Whether humans are really wired to consider future generations
  • Practical tips for thinking and acting on longtermism
  • His argument for having children
  • And his somewhat surprising take on how good our future could be if we play our cards right

Podcast listeners can get 50% off What We Owe the Future using the code WWOTF50 at Bookshop.org.

Full Shownotes: https://www.tenpercent.com/podcast-episode/william-macaskill-491

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Dave Troy Presents
Against Longtermism with Émile Torres

Dave Troy Presents

Play Episode Listen Later Aug 25, 2022 97:55


Most people have never heard of "Longtermism." Long term? That sounds like it might be a pretty good thing. But many critics are suggesting that it is a dangerous philosophy, with many societal and social risks connected with it. And it's also been adopted by many wealthy and influential people, including Elon Musk. In this episode Dave sits down with Émile Torres, a philosopher at Leibniz University in Hannover, Germany. Torres has been a leading critic of longtermism and its related philosophy of so-called "Effective Altruism," and has written multiple essays on why it may be one of the most dangerous secular belief systems. For more info: https://longtermism-hub.com Follow Émile Torres: https://twitter.com/xriskology Keywords: Longtermism, Bostrom, Musk, Lesswrong, Effective Altruism, Computronium, Phil Torres, William MacAskill, Leverage Research, Paradigm, Peter Thiel, Libertarianism, seasteading.

RNZ: Afternoons with Jesse Mulligan
Keeping an eye on the bigger picture

RNZ: Afternoons with Jesse Mulligan

Play Episode Listen Later Aug 23, 2022 20:41


Actions have consequences, and harm is harm whether our actions impact people living right now or in the future. Oxford associate professor William MacAskill has a name for this: 'longtermism'. It's the idea that we have an obligation to protect humanity from all the global threats swirling around our modern lives. Instead, he says, we neglect the future in favor of the present. He lays out the high stakes and the case for 'longtermism' in his new book What We Owe the Future.

The Nonlinear Library
EA - Critique of MacAskill's “Is It Good to Make Happy People?” by Magnus Vinding

The Nonlinear Library

Play Episode Listen Later Aug 23, 2022 21:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critique of MacAskill's “Is It Good to Make Happy People?”, published by Magnus Vinding on August 23, 2022 on The Effective Altruism Forum. In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities. The Asymmetry in population ethics A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles). The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172): If we think it's bad to bring into existence a life of suffering, why should we not think that it's good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second. This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. 
Indeed, many arguments support the former while positively denying the latter. One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure). A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good — and perhaps even better than — states of intense pleasure. These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do. Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172): if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes. However, there is a potential ambiguity in this example. The term “existence” may here be understood to either mean “de novo existence” or “continued existence”, and interpreting it as the latter is made more tempting by the fact that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”. 
This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing ...

The Nonlinear Library
EA - How many EA Billionaires five years from now? by Erich Grunewald

The Nonlinear Library

Play Episode Listen Later Aug 20, 2022 11:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many EA Billionaires five years from now?, published by Erich Grunewald on August 20, 2022 on The Effective Altruism Forum. Dwarkesh Patel argues that "there will be many more effective altruist billionaires". He gives three reasons for thinking so: People who seek glory will be drawn to ambitious and prestigious effective altruist projects. One such project is making a ton of money in order to donate it to effective causes. Effective altruist wealth creation is a kind of default choice for "young, risk-neutral, ambitious, pro-social tech nerds", i.e. people who are likelier than usual to become very wealthy. Effective altruists are more risk-tolerant by default, since you don't get diminishing returns on larger donations the same way you do on increased personal consumption. These early-stage businesses will be able to recruit talented effective altruists, who will be unusually aligned with the business's objectives. That's because if the business is successful, even if you as an employee don't cash out personally, you're still having an impact (either because the business's profits are channelled to good causes, as with FTX, or because the business's mission is itself good, as with Wave). The post itself is kind of fuzzy on what "many" means or which time period it's concerned with, but in a follow-up comment Patel mentions having made an even-odds bet to the effect that there'll be ≥10 new effective altruist billionaires in the next five years. He also created a Manifold Markets question which puts the probability at 38% as I write this. (A similar question on whether there'll be ≥1 new, non-crypto, non-inheritance effective altruist billionaire in 2031 is currently at 79% which seems noticeably more pessimistic.) I commend Patel for putting his money where his mouth is! 
Summary
With (I believe) moderate assumptions and a simple model, I predict 3.5 new effective altruist billionaires in 2027. With more optimistic assumptions, I predict 6.0 new billionaires. ≥10 new effective altruist billionaires in the next five years seems improbable. I present these results and the assumptions that produced them and then speculate haphazardly.

Assumptions
If we want to predict how many effective altruist billionaires there will be in 2027, we should attend to base rates. As far as I know, there are five or six effective altruist billionaires right now, depending on how you count. They are Jaan Tallinn (Skype), Dustin Moskovitz (Facebook), Sam Bankman-Fried (FTX), Gary Wang (FTX) and one unknown person doing earning to give. We could also count Cari Tuna (Dustin Moskovitz's wife and cofounder of Open Philanthropy). It's possible that someone else from FTX is also an effective altruist and a billionaire. Of these, as far as I know only Sam Bankman-Fried and Gary Wang were effective altruists prior to becoming billionaires (the others never had the chance, since effective altruism wasn't a thing when they made their fortunes). William MacAskill writes:

Effective altruism has done very well at raising potential funding for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that. The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total. There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor.
At least one person earning to give (and not related to FT...

KERA's Think
A philosopher on why we should care about future generations

KERA's Think

Play Episode Listen Later Aug 19, 2022 34:31


We might consider how our actions will affect the lives of our children and grandchildren. But what about the dozens of generations that hopefully come next? William MacAskill is associate professor of philosophy at the University of Oxford and co-founder of the Centre for Effective Altruism. He joins host Krys Boyd to discuss why we must make long-term thinking a priority if we truly care about the descendants we'll never meet. His book is called “What We Owe the Future.”

The Nonlinear Library
EA - Help with Upcoming NPR Interview with William MacAskill by Avery J.C. Kleinman

The Nonlinear Library

Play Episode Listen Later Aug 18, 2022 1:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help with Upcoming NPR Interview with William MacAskill, published by Avery J.C. Kleinman on August 17, 2022 on The Effective Altruism Forum. On Monday August 22, the national NPR program 1A will broadcast a live interview with William MacAskill. We would love to include recorded or written messages of people involved with their local EA communities or who have been otherwise drawn to the idea of effective altruism. Why did you get involved? What appeals to you about effective altruism? How has the philosophy guided you and helped you make an impact? If you care to share, please send me, 1A producer Avery Kleinman, a sound file of 60 seconds or less by 8 am on Friday August 19. If you prefer, you can also write an email to have our host read on air, or simply reply here. My email address is avery@wamu.org. The interview will stream live on Monday at 11 am ET at the1a.org as well as on NPR stations across the country. You can listen to the program and engage with it while it's happening by sending questions and comments on twitter @1a and by email 1a@wamu.org. Thanks! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - What We Owe The Future is out today by William MacAskill

The Nonlinear Library

Play Episode Listen Later Aug 16, 2022 3:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What We Owe The Future is out today, published by William MacAskill on August 16, 2022 on The Effective Altruism Forum. So, as some of you might have noticed, there's been a little bit of media attention about effective altruism / longtermism / me recently. This was all in the run up to my new book, What We Owe The Future, which is out today! I think I've worked harder on this book than I've worked on any other single project in my life. I personally spent something like three and a half times as much work on it as Doing Good Better, and I got enormous help from my team, who contributed more work in total than I did. At different times, that team included (in alphabetical order): Frankie Andersen-Wood, Leopold Aschenbrenner, Stephen Clare, Max Daniel, Eirin Evjen, John Halstead, Laura Pomarius, Luisa Rodriguez, and Aron Vallinder. Many more people helped immensely, such as Joao Fabiano with fact checking and the bibliography, Taylor Jones with graphic design, AJ Jacobs with storytelling, Joe Carlsmith with strategy and style, and Fin Moorhouse and Ketan Ramakrishnan with writing around launch. I also benefited from the in-depth advice of dozens of academic consultants and advisors, and dozens more expert reviewers. I want to give a particular thank-you and shout out to Abie Rohrig, who joined after the book was written, to run the publicity campaign. I'm immensely grateful to everyone who contributed; the book would have been a total turd without them. The book is not perfect — reading the audiobook made vivid to me how many things I'd already like to change — but I'm overall happy with how it turned out. 
The primary aim is to introduce the idea of longtermism to a broader audience, but I think there are hopefully some things that'll be of interest to engaged EAs, too: there are deep dives on moral contingency, value lock-in, civilisation collapse and recovery, stagnation, population ethics and the value of the future. It also tries to bring a historical perspective to bear on these issues more often than is usual in the standard discussions. The book is about longtermism (in its “weak” form) — the idea that we should be doing much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.) Some of you have worried (very reasonably!) that we should simplify messages to “holy shit, x-risk!”. I respond to that worry here: I think the line of argument is a good one, but I don't see promoting concern for future generations as inconsistent with also talking about how grave the catastrophic risks we face in the next few decades are. In the comments, please AMA - questions don't just have to be about the book, can be about EA, philosophy, fire raves, or whatever you like! (At worst, I'll choose to not reply.) Things are pretty busy at the moment, but I'll carve out a couple of hours next week to respond to as many questions as I can. If you want to buy the book, here's the link I recommend:. (I'm using different links in different media because bookseller diversity helps with bestseller lists.) If you'd like to help with the launch, please also consider leaving an honest review on Amazon or Goodreads! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Wild with Sarah Wilson
WILLIAM MACASKILL: On “longtermism” and moral responsibility

Wild with Sarah Wilson

Play Episode Listen Later Aug 16, 2022 51:47


Our existential risk – the probability that we could wipe ourselves out due to AI, bio-engineering, nuclear war, climate change, etc. in the next 100 years – currently sits at 1 in 6. Let that sink in! Would you get on a plane if there was a 17% chance it would crash? Would you do everything you could to prevent a calamity if you were presented with those odds? My chat today covers a wild idea that could – and should – better our chances of existing as a species…and lead to a human flourishing I struggle to even imagine. Longtermism argues that prioritising the long-term future of humanity has exponential ethical and existential boons. Flipside, if we don't choose the longtermist route, the repercussions are, well, devastating.

Will MacAskill is one of the world's leading moral philosophers and I travel to Oxford UK, where he runs the Centre for Effective Altruism, the Global Priorities Institute and the Forethought Foundation, to talk through these massive moral issues. Will also explains that right now is the most important time in humanity's history. Our generation singularly has the power and responsibility to determine two diametrically different paths for humanity. This excites me; I hope it does you, too.

Learn more about Will MacAskill's work
Purchase his new book What We Owe the Future: A million year view
If you need to know a bit more about me… head to my "about" page.
Subscribe to my Substack newsletter for more such conversations.
Get your copy of my book, This One Wild and Precious Life
Let's connect on Instagram! It's where I interact the most.
Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.

The Nonlinear Library
EA - A Quick Qualitative Analysis of Laypeople's Critiques of Longtermism by Roh

The Nonlinear Library

Play Episode Listen Later Aug 15, 2022 22:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Quick Qualitative Analysis of Laypeople's Critiques of Longtermism, published by Roh on August 15, 2022 on The Effective Altruism Forum. While preparing this post, How to Talk to Lefties in Your Intro Fellowship was posted and it received some interesting pushback in the comments. The focus of this post is not reframing EA marketing that may weaken its epistemic status but exploring misconceptions of EA longtermism that its marketing may produce. See footnote for more details.

Summary
I coded 104 comments on Ezra Klein and William MacAskill's podcast episode “Three Sentences that Could Change the World — and Your Life” to better understand the critiques a layperson has of longtermism. More specifically, I wanted to capture the gut-instinct, first-pass reactions to longtermism, and not true philosophical arguments against longtermism. Because of this particular sample, this analysis is especially relevant to left-leaning, EA-inclined but EA-unfamiliar individuals. One reason examining people's first takes in detail might be informative is that it helps us identify the way in which longtermist messaging can be misinterpreted when there's a large inferential gap between EAs and well-meaning people. Finally, following MacAskill's What We Owe the Future book release, I anticipate a surge of discussion on longtermism generally.

Summarized Takeaways (see Interpretation for more detail & context)
In discussions about longtermism with left-leaning / progressive people completely new to the movement, here are things to keep in mind. Note that other groups may not generate such misconceptions from traditional EA rhetoric.

Prepare for concerns about longtermism being anti-climate change:
- Explain how the future world's well-being is also affected by longtermist causes (i.e. elephants can also be turned into paperclips)
- Make explicit the ways animal welfare is being considered in EA efforts
- Discuss how overpopulation may be overestimated as an issue, and not a huge contributor to climate change, when discussing the long-term future

Prepare for despair about the future:
- Challenge underlying assumptions that big change is made only through political venues by pointing out effective change pathways outside of politics (academic research, non-profit work, non-partisan policy work) & generally EA focuses that have high tractability
- Clarify longtermist approaches to assessing and overcoming
- Challenge underlying assumptions about how soon and how likely the end of the world is

Legitimizing the movement:
- Emphasize EA as a Question, not a set of cause areas
- Explain longtermism's role as a practical movement (and not just a thought experiment)
- Reference the Iroquois tribe's efforts towards sustainability as a historical precedent for longtermism & acknowledge non-white philosophical predecessors
- Highlight non-white-male EA work in longtermist discussions
- Discuss neartermist work, animal welfare work, and donation work to legitimize EA as a whole
- Emphasize EA's efforts towards moral circle expansion

Things I think would be useful going forward:
- A comprehensive list of big EA achievements in longtermism, for encouraging optimism and legitimizing the movement's efforts
- A historical track record of what important work EA has done, for legitimizing the movement both in and out of the longtermism space (previous people have said this before)
- An EA survey response to the statement: we should not destroy the earth in the effort to sustain human life.

Methodology
I read the online comments responding to the podcast, and summarized eight themes that the critiques fell into. Then, I went back and coded each comment into which themes they fit. I excluded comments that replied to other comments (either in a reply chain or that @'ed someone else's comment).
This was done in a vaguely scientific manne...

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
207 | William MacAskill on Maximizing Good in the Present and Future

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later Aug 15, 2022 102:23


It's always a little humbling to think about what effects your words and actions might have on other people, not only right now but potentially well into the future. Now take that humble feeling and promote it to all of humanity, and arbitrarily far in time. How do our actions as a society affect all the potential generations to come? William MacAskill is best known as a founder of the Effective Altruism movement, and is now the author of What We Owe the Future. In this new book he makes the case for longtermism: the idea that we should put substantial effort into positively influencing the long-term future. We talk about the pros and cons of that view, including the underlying philosophical presuppositions.

Mindscape listeners can get 50% off What We Owe the Future, thanks to a partnership between the Forethought Foundation and Bookshop.org. Just click here and use code MINDSCAPE50 at checkout.

Support Mindscape on Patreon.

William (Will) MacAskill received his D.Phil. in philosophy from the University of Oxford. He is currently an associate professor of philosophy at Oxford, as well as a research fellow at the Global Priorities Institute, director of the Forethought Foundation for Global Priorities Research, President of the Centre for Effective Altruism, and co-founder of 80,000 Hours and Giving What We Can.

Web site
PhilPeople profile
Google Scholar publications
Wikipedia
Twitter

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Making Sense with Sam Harris
#292 — How Much Does the Future Matter?

Making Sense with Sam Harris

Play Episode Listen Later Aug 14, 2022 120:21


In this episode of the podcast, Sam Harris speaks with William MacAskill about his new book, What We Owe the Future. They discuss the philosophy of effective altruism (EA), longtermism, existential risk, criticism of EA, problems with expected-value reasoning, doing good vs feeling good, why it's hard to care about future people, how the future gives meaning to the present, why this moment in history is unusual, the pace of economic and technological growth, bad political incentives, value lock-in, the well-being of conscious creatures as the foundation of ethics, the risk of unaligned AI, how bad we are at predicting technological change, and other topics. SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.  

The Valmy
Three Sentences That Could Change the World — and Your Life

The Valmy

Play Episode Listen Later Aug 13, 2022 68:45


Podcast: The Ezra Klein Show (LS 70 · TOP 0.05%)
Episode: Three Sentences That Could Change the World — and Your Life
Release date: 2022-08-09

Today's show is built around three simple sentences: “Future people count. There could be a lot of them. And we can make their lives better.” Those sentences form the foundation of an ethical framework known as “longtermism.” They might sound obvious, but to take them seriously is a truly radical endeavor — one with the power to change the world and even your life.

That second sentence is where things start to get wild. It's possible that there could be tens of trillions of future people, that future people could outnumber current people by a ratio of something like a million to one. And if that's the case, then suddenly most of the things we spend most of our time arguing about shrink in importance compared with the things that will affect humanity's long-term future.

William MacAskill is a professor of philosophy at Oxford University, the director of the Forethought Foundation for Global Priorities Research and the author of the forthcoming book, “What We Owe the Future,” which is the best distillation of the longtermist worldview I've read. So this is a conversation about what it means to take the moral weight of the future seriously and the way that everything — from our political priorities to career choices to definitions of heroism — changes when you do.

We also cover the host of questions that longtermism raises: How should we weigh the concerns of future generations against those of living people? What are we doing today that future generations will view in the same way we look back on moral atrocities like slavery? Who are the “moral weirdos” of our time we should be paying more attention to? What are the areas we should focus on, the policies we should push, the careers we should choose if we want to guarantee a better future for our posterity? And much more.

Mentioned:
"Is A.I. the Problem? Or Are We?" by The Ezra Klein Show
"How to Do The Most Good" by The Ezra Klein Show
"This Conversation With Richard Powers Is a Gift" by The Ezra Klein Show

Book Recommendations:
“Moral Capital” by Christopher Leslie Brown
“The Precipice” by Toby Ord
“The Scout Mindset” by Julia Galef

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Annie Galvin and Rogé Karma; fact-checking by Michelle Harris, Mary Marge Locker and Kate Sinclair; original music by Isaac Jones; mixing by Sonia Herrero and Isaac Jones; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Kristina Samulewski.

The Valmy
William MacAskill on Effective Altruism, Moral Progress, and Cultural Innovation

The Valmy

Play Episode Listen Later Aug 12, 2022 50:44


Podcast: Conversations with Tyler (LS 63 · TOP 0.05%)
Episode: William MacAskill on Effective Altruism, Moral Progress, and Cultural Innovation
Release date: 2022-08-10

When Tyler is reviewing grants for Emergent Ventures, he is struck by how the ideas of effective altruism have so clearly influenced many of the smartest applicants, particularly the younger ones. And William MacAskill, whom Tyler considers one of the world's most influential philosophers, is a leading light of the community. William joined Tyler to discuss why the movement has gained so much traction and more, including his favorite inefficient charity, what form of utilitarianism should apply to the care of animals, the limits of expected value, whether effective altruists should be anti-abortion, whether he would side with aliens over humans, whether he should give up having kids, why donating to a university isn't so bad, whether we are living in “hingey” times, why buildering is overrated, the sociology of the effective altruism movement, why cultural innovation matters, and whether starting a new university might be next on his slate.

Read a full transcript enhanced with helpful links, or watch the full video.

Recorded July 7th, 2022

Other ways to connect:
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Will on Twitter
Email us: cowenconvos@mercatus.gmu.edu
Subscribe at our newsletter page to have the latest Conversations with Tyler news sent straight to your inbox.

The Valmy
Will MacAskill - Longtermism, Altruism, History, & Technology

The Valmy

Play Episode Listen Later Aug 12, 2022 56:07


Podcast: The Lunar Society (LS 30 · TOP 5%)
Episode: Will MacAskill - Longtermism, Altruism, History, & Technology
Release date: 2022-08-09

Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, “this would never have happened,” but at least on the order of decades.
Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China: the Mohists defended an impartial view of morality, and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then, there were early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until the early 2010s, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events like Alex meeting Holden and me meeting Toby that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? That they're mediocre, or worse, because ex ante you would expect to end up with the median of all the values we could have had at this point? Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously and appreciating the contingency of values means appreciating that if the Nazis had won the Second World War, we would all be thinking, “wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!” That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress.
We should not think, “Oh, we should lock in the Western values we have.” Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
That makes a lot of sense. But I'm asking a slightly separate question: not only are there possible values that could be better than ours, but should we expect our values to be better than average? We have the sense that we've made moral progress (that things are better than they were before, or better than in most possible other worlds in 2100 or 2200). Should we not expect that to be the case? Should our priors be that these are ‘meh' values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, “Are the values of today going particularly well?” There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is to think that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing that. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, “What's right, morally speaking?
What do the best arguments support?” I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32
My view goes the opposite of the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects, if you want to do good in the world, are philosophy and economics.
But we've got that in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth—that even if bad things happen, you can rebound and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency?
Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, “These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways”?

Will MacAskill 10:11
Even in the case of the steam engine, which seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? Would the similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If there's a culture that's into making textiles in an automated way, as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, “If you could have done something like that, you'd be the greatest person in the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyway?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world.
Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology, if you look at Einstein, the theory of special relativity would have been developed shortly afterwards. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers can make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made an enormous and contingent long-run difference. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history. Rather than liberalism emerging dominant in the 20th century, it could have been communism. The better technology gets, the better able the ruling ideology is to cement itself and persist for a long time.
You can get a set of knock-on effects where communism wins the war of ideas in the 20th century. Let's say a world government is based around those ideas; then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society. That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, number of people times fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion dollar company—do these examples suggest that organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate.
One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany was because all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. Yet you have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.

Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities.
Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat with longtermist institutions is that I'm pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population. How would you ensure that incentives are aligned?

In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30 years' time have both their policies and their assessment of the previous assembly assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I'm skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously just by telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30
If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48
There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe.
We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers in the next ten years, aggregated into a war index. That would be a lot more important than a stock market index. A risk-of-catastrophe measure would be helpful to feed in, because you wouldn't want something that only incentivizes economic growth at the expense of tail risks.

Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson's of maximizing expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50
Maximizing future GDP is an idea I associate more with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs, people vote on what collection of goods they want (GDP and unemployment might be good metrics), and beyond that, it's pure prediction markets. It's something I'd love to see tried. It's a piece of speculative political philosophy, about how a society could be structured in an extraordinarily different way, that is incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn't been a lot of success with prediction markets compared to forecasting. Perhaps you can solve these things: you could have laws about what can be voted on or predicted in the prediction market, and government subsidies to ensure there's enough liquidity. Overall, it's promising and I'd love to see it tried out at a city level or something.

Dwarkesh Patel 23:13
Let's take a scenario where the government starts taking impacts on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can look at the environmental movement.
There are environmental review boards that try to assess the environmental impact of new projects and reject proposals based on certain metrics. The impact, at least in some cases, has been that groups with no strong, plausible interest in the environment are able to game these mechanisms to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, yet policymakers are tasked with evaluating the long-term impacts of things. Are you worried that such a system would be easy for malicious actors to game? What do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's a potentially devastating worry. You create something to represent future people, but they're not around to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements is similar: the environment can't represent itself and say what its interests are. What is the right answer there? Maybe it's the speculative proposals about having a representative body whose assessments are judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see whether any of these proposals would be robust for the long term rather than narrowly focused.

Regulation requiring liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things.
But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, do you have an idea of how environmentalism could have been codified better? Why was it not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know whether it's intrinsic to the matter or whether you could have had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions, companies, is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company, which implies that the company can't be around if there's an existential risk…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. A corporation in the top 200 biggest companies has a half-life of only ten years; it's surprisingly short-lived. Whereas if you look at universities, Oxford and Cambridge are 800 years old, and the University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Religious institutions can be even longer-lived again. That natural half-life really affects the decisions a company makes versus a university versus a religious institution.

Dwarkesh Patel 27:16
Does that suggest there's something fragile and dangerous about trying to make your institution last a long time, if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time?
Is it in the interests of the people who constitute the company (the CEO, the board, the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talk about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms can be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like a constitutional amendment enshrining the legal right to slavery. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
We're in a specific kind of "moment of plasticity" for two reasons. One is that the world is unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments; people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views ultimately becomes the most popular.

Secondly, we're at a period where things can really change. But it's a moment of plasticity because it could plausibly come to an end, and the moral change that we're used to could end in the coming decades.
If there were a single global culture or world government that preferred ideological conformity, combined with the right technology, it's unclear why that would ever end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the lock-in will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you, at least if we're at a period of technological maturity where there aren't groundbreaking technologies left to be discovered. But I'm worried that the lock-in will happen earlier. I'm worried it might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table: 10% or something?

Dwarkesh Patel 31:53
Hm, right. Going back to the long term of the longtermism movement, there are many instructive examples of foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation.
But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. First, organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out fractions of the amount invested after 100 years and then 200 years. But he specified that it had to help blacksmith apprentices, which doesn't make much sense by the year 2000. He could have invested more generally, for the prosperity of the people of Philadelphia and Boston, and it would plausibly have had more impact.

The second is a "regression to the mean" argument. You have some new foundation doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional on some dimension, it's probably going to get closer to average on that dimension, because the people involved change. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40
Going back to that Franklin problem: if you specify your mission too narrowly, it may not make sense in the future. But is there a trade-off? If you're too broad, do you make space for future actors, malicious or uncreative, to take the movement in ways that you would not approve of? With regard to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought was good for Philadelphia?

Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith apprentices, then he was correct to specify it.
But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable; that's certainly the trend over time. In which case, if we share similar broad goals and they're implementing them in a different way, I'm happy to defer to them.

How good can the future be?

Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems, because there'll be a lot of competition, and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11
Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means you get a balance of income per capita and population growth such that, if you were any poorer, deaths would outweigh additional births.

That doesn't tell you anything about their well-being. You could be very poor as an emulated being but be in bliss all the time; that's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned a study you had commissioned: you were trying to find out whether people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment, but 31% of Americans said that they would not want to relive their life at every moment? So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least.
From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans. You are right on the happiness metric, though: the Indians we surveyed were more optimistic about their lives, and happier with them, than the people in the U.S. were. Honestly, I don't want to generalize too far from that, because we were comparing relatively poor Americans with relatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. So on one hand, I don't want to draw any strong conclusion from it. But it is pretty striking as a piece of information, given that people in richer countries are, on average, considerably happier than people in poorer countries.

Dwarkesh Patel 37:41
I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. I put together various bits of evidence, and approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think their lives contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another survey study that looks at people in the United States and other wealthy countries and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that in a blink you would reach the end of whatever activity you're engaged in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you.
In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a given experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18
Jumping topics a little: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring foundational research whose importance wouldn't be obvious in this way of thinking.

Do you think this could be a general problem with longtermism? In trying to find the things that matter most in the long term, you might be missing things that aren't obvious when thinking this way?

Will MacAskill 39:48
Yeah, that's a risk. Among the ways people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a third world war, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time.
But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in Effective Altruism realized early on that the coronavirus pandemic was going to be a big deal; they were worrying about pandemics far in advance. Some things actually are quite predictable.

For example, Moore's Law has held up for decades. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though loads of things are hard to predict and there are going to be tons of surprises, there are some things, especially long-standing technological trends, about which we can make reasonable predictions, at least about the range of possibilities on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that the things we know about are the ones that seem important now. But if something a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
On me versus Patrick Collison and Tyler Cowen: who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier.
If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that which are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
What you were saying earlier about the contingency of technology implies that, even on their worldview of maximizing whatever has had the most impact in the past, if what's had the most impact in the past is changing values, then that, rather than economic growth or trying to change the rate of economic growth, might be the most important thing?

Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which things they did made sense and which didn't. In the 19th century, John Stuart Mill and the other early utilitarians had a longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal and future generations therefore being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So that particular action didn't make any sense given what we know now.
In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time was burning much smaller amounts of coal, so small that the climate change effect is negligible at that level.

But we can look at other things John Stuart Mill did, such as promoting better values. He campaigned for women's suffrage: he was the first British MP, in fact the first politician in the world, to promote it. That seems to have been pretty good and to have stood the test of time. It's one historical data point, but potentially we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; on the negative side, it might prevent things like human challenge trials and cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
On the question of global integration, you're absolutely right: it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. On the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most possible value that way.

The solution is getting the good bits without the bad bits. For example, with a liberal constitution, a country can be bound in certain ways by its constitution and laws yet still enable a flourishing diversity of moral thought and ways of life.
Similarly, at the global level, you can have very strong regulation and treaties that deal only with certain global public goods, like mitigation of climate change and prevention of the next generation of weapons of mass destruction, without having some strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, though the historical trend seems to be that when you have a federated political body, even if the central powers are constitutionally constrained, they tend to gain more power over time. You can look at the U.S., or at the European Union. That seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promotes moral diversity, moral change, and moral progress. But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea reaches common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that artificial general intelligence will likely lead to singularity-level, extremely rapid technological progress within the next 10 to 20 years. If so, you're right: value changes pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well.
If that's growing at something like 30% per year, compound returns mean that it doesn't take that long. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't try to broadly promote better values. But we should put very significant probability mass on the idea that we will not hit some such historical endpoint this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold potato diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold potato diet, and I was tempted as a gimmick to try it. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or sitting on the boards of the organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people off their seats to do a lot of good for the long term. I've had a lot of impact that way.
From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions. How contingent is history? Are people generally happy? Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are not incentivized anywhere in the world. Companies don't incentivize work on them because the questions are too big-picture, too grand in scale. Some of these questions are: is the future going to be good rather than bad? If there were a global civilizational collapse, would we recover? How likely is a long stagnation? There's almost no work done on any of these topics.

Academia has developed a culture where you don't tackle such problems. Partly that's because they fall through the cracks of different disciplines, and partly because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It wasn't always that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology, and probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been discussing with another person who's fairly excited about making it happen. Will it go ahead?
Time will tell. I think you can do both research and education far better than they're currently done. It's extremely hard to break in or create something very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the most popular questions? Big-picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and the first of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good, and it has a list of recommended charities. 80,000 Hours, if you want to use your career to do good, is the place to go for advice on which careers have the biggest impact. They provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places to go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39
Thanks so much, I loved it.

This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

Conversations with Tyler
William MacAskill on Effective Altruism, Moral Progress, and Cultural Innovation

Conversations with Tyler

Play Episode Listen Later Aug 10, 2022 50:44


When Tyler is reviewing grants for Emergent Ventures, he is struck by how the ideas of effective altruism have so clearly influenced many of the smartest applicants, particularly the younger ones. And William MacAskill, whom Tyler considers one of the world's most influential philosophers, is a leading light of the community. William joined Tyler to discuss why the movement has gained so much traction and more, including his favorite inefficient charity, what form of utilitarianism should apply to the care of animals, the limits of expected value, whether effective altruists should be anti-abortion, whether he'd side with aliens over humans, whether he should give up having kids, why donating to a university isn't so bad, whether we are living in “hingey” times, why buildering is overrated, the sociology of the effective altruism movement, why cultural innovation matters, and whether starting a new university might be next on his slate. Read a full transcript enhanced with helpful links, or watch the full video. Recorded July 7th, 2022. Other ways to connect: Follow us on Twitter and Instagram. Follow Tyler on Twitter. Follow Will on Twitter. Email us: cowenconvos@mercatus.gmu.edu. Subscribe at our newsletter page to have the latest Conversations with Tyler news sent straight to your inbox. 

The Lunar Society
36: Will MacAskill - Longtermism, Altruism, History, & Technology

The Lunar Society

Play Episode Listen Later Aug 9, 2022 56:07


Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of "this would never have happened," but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China: the Mohists defended an impartial view of morality, and took very strategic actions to help all people.
In particular, providing defensive assistance to cities under siege. Then, there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until around 2010, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events, like Elie meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously and appreciating the contingency of values means appreciating that if the Nazis had won the World War, we would all be thinking, "wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!" That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, "Oh, we should lock in the Western values we have." Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
So that makes a lot of sense.
But I'm asking a slightly separate question. Not only are there possible values that could be better than ours, but should we expect our values to be good? We have the sense that we've made moral progress (that things are better than they were before, or better than in most possible other worlds in 2100 or 2200). Should we not expect that to be the case? Should our priors be that these are 'meh' values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, "Are our values today going particularly well?" There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing it. Maybe you're risking regression to the mean if you just have 1,000 years of reflection.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, "What's right, morally speaking?
What do the best arguments support?" I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. So, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32
My view goes the opposite way of the Burkean view. We are cultural creatures by nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects, if you want to do good in the world, are philosophy and economics.
But we've got those in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And does it imply a Solow model of growth—that even if bad things happen, you can rebound and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that the particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency?
Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, "These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways"?

Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point? Would the similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If there's a culture that's into making textiles in an automated way, as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, "If you could have done something like that, you'd be the greatest person in the 20th century." Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyways?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world.
Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But it's not as many as simply saying, "Oh, this tech was used by a billion people who would have otherwise been at risk of starvation." In fact, not long afterwards, there were similar kinds of agricultural development.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking with science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run differences. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better able the ruling ideology is to cement itself and persist for a long time.
You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.

Let's say a world government is based around those ideas. Then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society. That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, the number of people times the fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion-dollar company—do these examples suggest that the organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate.
One argument for why Baghdad lost its scientific golden age is that the political landscape changed in the 10th/11th century AD, such that what was incentivized was theological investigation rather than scientific investigation.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they were starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry had an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.

Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities.
Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat about longtermist institutions is that I'm pessimistic about them. If you're trying to represent, or even give consideration to, future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is that you have a random selection from the population. How would you ensure that incentives are aligned?

In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, for the people in 30 years' time, both their policies and their assessment of the previous assembly get assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I'm skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30
If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48
There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe.
We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years, which gets aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in, because you wouldn't want something only incentivizing economic growth at the expense of tail risks.

Dwarkesh Patel 21:42
Would that be your objection to a scheme like Robin Hanson's about maximizing the expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50
Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs, if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It's a piece of speculative political philosophy about how a society could be extraordinarily different in structure, and it's incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn't been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising, and I'd love to see it tried out on a city level or something.

Dwarkesh Patel 23:13
Let's take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement.
There are environmental review boards that will try to assess the environmental impact of new projects and reject proposals based on certain metrics.

The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. With longtermism, it takes a long time to assess the actual impact of something, but policymakers are tasked with evaluating the long-term impacts of something. Are you worried that it'd be a system that'd be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not able to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements has been similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals about having a representative body that assesses these things and gets judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust for the long term, rather than things that are narrowly focused.

Regulation to have liability insurance for dangerous bio labs is not about trying to represent the interests of future generations. But it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term, even if they're not in the game of representing the future. That's not to say I'm opposed to all such things.
But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, did you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter, or if you could've had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can't be around if there's an existential risk…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. So, a corporation that is in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas, if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.

Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time?
Is it in the interests of the people who constitute the company (like the CEO, the board, and the shareholders) for that company to last a long time? No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talk about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like the legal right to slavery proposed as a constitutional amendment. If that had been locked in, it would have been horrible. It's hard to answer in the abstract, because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
It's a specific type of 'moment of plasticity' for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments—like people coming on your podcast can debate what's morally correct. It's plausible to me that one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades.
If there were a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53
Hm, right. Going back to the long term of the longtermism movement: there are many instructive examples of foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation.
But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. Organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years and then 200 years for different fractions of the amount invested. But he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. It would plausibly have had more impact.

The second is a 'regression to the mean' argument. You have some new foundation, and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you're changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40
Going back to that dead hand problem: if you specify your mission too narrowly and it doesn't make sense in the future—is there a trade-off? If you're too broad, do you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought is good for Philadelphia?

Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it.
But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. That's certainly the trend over time. In which case, if we're sharing similar broad goals and they're implementing them in a different way, then have at it.

How good can the future be?

Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems, because there'll be a lot of competition, and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11
Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good. At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment, but 31% of Americans said that they would not want to relive their life at every moment. So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least.
From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, and happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that, because we were comparing comparatively poor Americans to comparatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. On one hand, I don't want to draw any strong conclusions from that. But it is pretty striking as a piece of information, given that, on average, you find people in richer countries considerably happier than those in poorer countries.

Dwarkesh Patel 37:41
I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think their lives contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another survey study that looks at people in the United States and other wealthy countries, and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that blinking would take you to the end of whatever activity you're engaging in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you.
In which case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life that day (the day they were surveyed) as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18
Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring foundational research that wouldn't be obvious in this way of thinking, but might be more important.

Do you think this could be a general problem with longtermism? If you were trying to find the most important things for the long term, you might be missing things that wouldn't be obvious thinking this way?

Will MacAskill 39:48
Yeah, I think that's a risk. Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, well-governing the rise of AI, reducing worst-case pandemics that could kill us all, preventing a Third World War, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time.
But one piece of evidence is the success of forecasters in general. This was true of Tyler Cowen as well, but people in effective altruism realized at an early stage that the coronavirus pandemic was going to be a big deal; they were worrying about pandemics far in advance. There are some things that are actually quite predictable. For example, Moore's Law has held up for over 70 years. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things, and there are going to be tons of surprises, there are some things, especially fairly long-standing technological trends, about which we can make reasonable predictions, at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that we know which things are important now. But if something from a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
On what I was saying about me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier.
If I and others engaged in longtermism are making specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy. Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of it that are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
What you were saying earlier about contingency in technology implies that, given their worldview, even if you're trying to maximize whatever had the most impact in the past, if what's had the most impact in the past is changing values, then changing values, rather than economic growth or trying to change the rate of economic growth, might be the most important thing?

Will MacAskill 43:57
I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which of the things they did made sense, and which didn't. So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave, where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and future generations therefore being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation. Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So, that particular action didn't make any sense given what we know now.
In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time had much smaller amounts of coal (so small that the climate change effect would be negligible at that level). But we could look at other things John Stuart Mill did, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to be pretty good; it has stood the test of time. That's one historical data point, but potentially we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; on the negative side, it might prevent things like human challenge trials, or cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
The question of global integration, you're absolutely right, is double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on the most possible value that way. The solution is getting the good bits without the bad bits. For example, under a liberal constitution, a country can be bound in certain ways by its constitution and laws, yet still enable a flourishing diversity of moral thought and different ways of life.
Similarly, at the global level, you can have very strong regulation and treaties that deal only with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, though it seems the historical trend is that when you have a federated political body, even if the central power is constitutionally constrained, it tends to gain more power over time. You can look at the U.S.; you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promoting of moral diversity, moral change, and moral progress. But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean there isn't enough time for longtermism to gain from changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that artificial general intelligence will likely lead to singularity-level technological progress, an extremely rapid rate of technological progress, within the next 10-20 years. If so, you're right: value changes are something that pay off slowly over time. I talk about moral change taking centuries historically, but it can be much faster today. The growth of the effective altruism movement is something I know well.
If that's growing at something like 30% per year, compound returns mean that it's not that long before it's very large. That's not change that happens on the order of centuries. If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you're thinking that we've got ten years until the end of history, then don't broadly try to promote better values. But we should put a very significant probability mass on the idea that we will not hit some historical end point this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold Potato Diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold Potato Diet, and I was tempted, as a gimmick, to try it. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or sitting on the boards of organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long term. I've had a lot of impact that way.
From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions: How contingent is history? Are people happy, generally? Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are not incentivized anywhere in the world. Companies don't incentivize work on them because they're too big-picture. Some of these questions are: Is the future good, rather than bad? If there were a global civilizational collapse, would we recover? How likely is a long stagnation? There's almost no work done on any of these topics; they're too grand in scale for companies to take an interest. And academia has developed a culture where you don't tackle such problems. Partly that's because they fall through the cracks of different disciplines; partly it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always use to be that way. If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology, and probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead?
Time will tell. I think you can do both research and education far better than they're currently done. It's extremely hard to break in or create something very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the most important questions? Big-picture thinking, but also looking at very specific questions and issues that come up. A super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and September 1 in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good, and it has a list of recommended charities. If you want to use your career to do good, 80,000 Hours is the place to go for advice on which careers have the biggest impact; they provide one-on-one coaching too. If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39
Thanks so much, I loved it. This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The Ezra Klein Show
Three Sentences That Could Change the World — and Your Life

The Ezra Klein Show

Play Episode Listen Later Aug 9, 2022 68:45


Today's show is built around three simple sentences: “Future people count. There could be a lot of them. And we can make their lives better.” Those sentences form the foundation of an ethical framework known as “longtermism.” They might sound obvious, but to take them seriously is a truly radical endeavor — one with the power to change the world and even your life.

That second sentence is where things start to get wild. It's possible that there could be tens of trillions of future people, that future people could outnumber current people by a ratio of something like a million to one. And if that's the case, then suddenly most of the things we spend most of our time arguing about shrink in importance compared with the things that will affect humanity's long-term future.

William MacAskill is a professor of philosophy at Oxford University, the director of the Forethought Foundation for Global Priorities Research and the author of the forthcoming book, “What We Owe the Future,” which is the best distillation of the longtermist worldview I've read. So this is a conversation about what it means to take the moral weight of the future seriously and the way that everything — from our political priorities to career choices to definitions of heroism — changes when you do.

We also cover the host of questions that longtermism raises: How should we weigh the concerns of future generations against those of living people? What are we doing today that future generations will view in the same way we look back on moral atrocities like slavery? Who are the “moral weirdos” of our time we should be paying more attention to? What are the areas we should focus on, the policies we should push, the careers we should choose if we want to guarantee a better future for our posterity?

And much more.

Mentioned:
"Is A.I. the Problem? Or Are We?" by The Ezra Klein Show
"How to Do The Most Good" by The Ezra Klein Show
"This Conversation With Richard Powers Is a Gift" by The Ezra Klein Show

Book Recommendations:
“Moral Capital” by Christopher Leslie Brown
“The Precipice” by Toby Ord
“The Scout Mindset” by Julia Galef

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Annie Galvin and Rogé Karma; fact-checking by Michelle Harris, Mary Marge Locker and Kate Sinclair; original music by Isaac Jones; mixing by Sonia Herrero and Isaac Jones; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Kristina Samulewski.

The Tim Ferriss Show
#612: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change

The Tim Ferriss Show

Play Episode Listen Later Aug 2, 2022 104:35


Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change | Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below.

William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will.

His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book.” Please enjoy!

*

This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns.

*

This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199.

*

This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.

*

For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast.
For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors.
Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday.
For transcripts of episodes, go to tim.blog/transcripts.
Discover Tim's books: tim.blog/books.

Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, and many more.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Book Pile
Doing Good Better by William MacAskill

The Book Pile

Play Episode Listen Later Aug 1, 2022 24:45


Want to do as much good in the world as possible, but you don't have a comedy podcast? Today's book is about how to use data to make the biggest positive impact on the world. Plus, Dave wants you to know his net worth in Jet Skis and Kellen goes hard after the pogo-sticking crowd.

*

TheBookPilePodcast@gmail.com

*

Kellen Erskine has appeared on Conan, Comedy Central, Jimmy Kimmel Live!, NBC's America's Got Talent, and the Amazon Original Series Inside Jokes. He has garnered over 50 million views with his clips on Dry Bar Comedy. In 2018 he was selected to perform on the “New Faces” showcase at the Just For Laughs Comedy Festival in Montreal, Quebec. Kellen was named one of TBS's Top Ten Comics to Watch in 2017. He currently tours the country. www.KellenErskine.com

*

David Vance's videos have garnered over 1 billion views. He has written viral ads for companies like Squatty Potty, Chatbooks, and Lumē, and sketches for the comedy show Studio C. His work has received two Webby Awards, and appeared on Conan. He currently works as a writer on the sitcom Freelancers.

Lexman Artificial
Guest: William MacAskill on Nurseries, Succulency, Stances, Consequents, and Hymnody

Lexman Artificial

Play Episode Listen Later Jul 22, 2022 4:20


Lexman interviews William MacAskill, a professor of political science at the University of Edinburgh, about issues surrounding nurseries, succulency, stances, consequents, and hymnody. In this episode, MacAskill discusses how hymnody can be used to help mitigate famines and examines different stances that countries could take in order to prevent them from happening.

The Nonlinear Library
EA - 300+ Flashcards to Tackle Pressing World Problems by AndreFerretti

The Nonlinear Library

Play Episode Listen Later Jul 11, 2022 27:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 300+ Flashcards to Tackle Pressing World Problems, published by AndreFerretti on July 11, 2022 on The Effective Altruism Forum.

How often did you read an article to then forget it? With these 300+ flashcards, you can memorize key facts and test your knowledge of pressing world problems such as animal welfare, global health, longtermism, and meta-effective altruism. Get these flashcards at your fingertips from Thought Saver in the four embedded quizzes below, or download the Anki decks at the bottom. [Note: Sources are in the Thought Saver/Anki flashcards, as I didn't want to fill this post with links. Also, applying and understanding is more valuable than memorizing facts—see Bloom's Taxonomy. My favorite sources to dig deeper are Our World in Data, 80,000 Hours' problem profiles, and Doing Good Better by William MacAskill.]

Animal Welfare

- 1% of US animal donations went to farmed animal organisations in 2015
- 3,000 US farm animals were killed for every shelter animal death in 2015
- 3% of US donations were aimed at helping the environment or animals in 2020
- 75% of agricultural land is used for livestock (including grazing and land to grow animal feed)
- 30B chickens were alive in 2020
- A centralized nervous system is what enables animals to have experiences
- Animal shelters received 65% of US animal donations in 2015
- Broiler chickens in factory farms are usually killed when 6 weeks old
- China is the country that produces the most meat in tons
- China is the country that produces the most pigmeat
- China is the country that produces the most seafood
- Clean meat: Meat grown in cell culture rather than in an animal's body
- Cows eat 6 calories for each calorie of beef produced
- Does the majority of seafood production come from wild fish catch or fish farming? Fish farming
- Eating 300 eggs indirectly kills one chicken
- Eating 3,000 calories of chicken meat kills one chicken
- Effective animal campaigns affect 10-100 chicken-life years per $ spent
- Farmed hens make 20 times as many eggs as they were born to do
- Fish farming has increased 50-fold globally from 1960 until 2015
- Flexitarian: A person who has a primarily vegetarian diet but occasionally eats meat or fish
- foodimpacts.org: An online tool ranking animal foods based on suffering and emissions
- Global meat production grew 200% between 1970-2020
- Humanity farmed 1 trillion insects in 2020
- Humanity killed 100B farmed fish in 2017
- Humanity killed 2,000 land animals every second in 2016
- Humanity killed 300B farmed shrimp in 2017
- Humanity killed 69B farmed chickens for meat in 2018
- Humanity killed 70B land animals in 2016
- Less than 0.1% of global donations are aimed at helping farmed animals
- Mark Post developed the first cultured meat hamburger in 2013
- Open Wing Alliance: A coalition aiming to eliminate battery cages for chickens
- Over 90% of global farm animals lived in intensive farms in 2018
- Peter Singer wrote Animal Liberation
- Speciesism: Treating members of one species as morally more important than members of other species
- Switching to a plant-based diet spares 100 vertebrates every year (mostly fish and chicken)
- The average meat consumption per capita in China has grown 15-fold since 1961
- The average meat consumption per capita in China was 60kg in 2017
- The average meat consumption per capita in India was 4kg in 2017
- The average meat consumption per capita in the United States was 120kg in 2017
- The average global meat consumption per capita has grown from 20kg to 40kg between 1961-2014
- The EU will ban cages for farmed chickens by 2027
- The Humane League launched the Open Wing Alliance
- The United States is the country that produces the most cattle and poultry
- Three foods to avoid that remove the most animal suffering from your diet: Chicken, Eggs, Fish
- Top Charities recommended by Animal Charity Evaluators in 2021: Faunalytics, The Humane League ...

Lexman Artificial
The Serengeti of Politics: Orangeries, Sofar and Le

Lexman Artificial

Play Episode Listen Later Jul 5, 2022 2:40


The Lexman Artificial Podcast explores the strange and terrifying world of UK politics. In this week's episode, we are joined by author, activist, and columnist William MacAskill to discuss recent Labour moves in the House of Commons, including the 'orangeries' and 'sofar' examples of far-reaching legislation.

EARadio
Opening Talk | Amy Labenz & William MacAskill | EA Global: London 22

EARadio

Play Episode Listen Later Jun 28, 2022 34:39


Amy speaks to the value of prioritizing one-on-one meetings and her excitement about the growth of the effective altruism community and EA Global conferences, and announces that community event grants were approved for EAGx Mexico and EAGx India.

Will takes a short trip down memory lane, covers how to make the most of EA Global conferences, encourages being ambitious and thinking of the incredible opportunity to do good in the world as an enormous responsibility, covers some risks to EA culture, and provides examples of highly successful projects in the effective altruism space.

Learn more about effective altruism at: www.effectivealtruism.org
Find out more about EA Global conferences at: www.eaglobal.org

This talk was taken from EA Global: London 2022. Click here to watch the talk with the PowerPoint presentation.

EARadio
Fireside chat | Will MacAskill | EA Global: London 2021

EARadio

Play Episode Listen Later Jun 5, 2022 58:45


William MacAskill is an Associate Professor in Philosophy at Oxford University and a senior research fellow at the Global Priorities Institute. He is the director of the Forethought Foundation for Global Priorities Research and co-founder and President of the Centre for Effective Altruism. He is the author of Doing Good Better and Moral Uncertainty, and has an upcoming book on longtermism called What We Owe The Future.

This talk was taken from EA Global: London 2021. Click here to watch the talk with the video.

The Nonlinear Library
EA - How many people have heard of effective altruism? by David Moss

The Nonlinear Library

Play Episode Listen Later May 20, 2022 40:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people have heard of effective altruism?, published by David Moss on May 20, 2022 on The Effective Altruism Forum.

This post reports the results of a survey we ran in April 2022 investigating how many people had heard of ‘effective altruism' in a large (n=6130) sample, weighted to be representative of the US general population. In subsequent posts in this series, we will be reporting on findings from this survey about where people first hear about effective altruism and how positive or negative people's impressions are of effective altruism. This survey replicates and extends a survey we ran in conjunction with CEA in early 2021, which focused only on US students. Because that survey was not representative, we think that these new results offer a significant advance in estimating how many people in the US population have heard of EA, and in particular sub-groups like students and even students at top-ranked universities.

Summary

After applying a number of checks (described below), we classified individuals as having heard of effective altruism using both a ‘permissive' standard and a more conservative ‘stringent' standard, based on their explanations of what they understand ‘effective altruism' to mean. We estimate that 6.7% of the US adult population have heard of effective altruism using our permissive standard and 2.6% according to our more stringent standard. We also identified a number of differences across groups:

- Men (7.6% permissive, 3.0% stringent) were more likely to have heard of effective altruism than women (5.8% permissive, 2.1% stringent).
- Among students specifically, we estimated 7% had heard of EA (according to a permissive standard) and 2.8% (according to the stringent standard). However, students from top-50 ranked universities seemed more likely to have heard of EA (7.9% permissive, 4.1% stringent). We also found that students at top 15 universities were even more likely to have heard of EA, though this was based on a small sample size.
- Younger (18-24) people seem somewhat less likely to have heard of effective altruism than older (25-44) people, though the pattern is complicated. The results nevertheless suggest that EA's skew towards younger people cannot simply be explained by higher rates of exposure.
- Higher levels of education were also strongly associated with being more likely to have heard of EA, with 11.7% of those with a graduate degree having heard of it (permissive standard) compared to 9.2% of college graduates and 3.7% of high school graduates.
- We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent).
- Humanities students (or graduates) seem more likely to have heard of EA than people from other areas of study.

We estimated the percentages that had heard of various EA and EA-adjacent figures and organisations, including:

- Peter Singer: 11.2%
- William MacAskill: 2.1%
- GiveWell: 7.8%
- Giving What We Can: 4.1%
- Open Philanthropy: 3.6%
- 80,000 Hours: 1.3%

Why it matters how many people have heard of EA

Knowing how many people have already encountered EA is potentially relevant to assessing how far we should scale up (or scale down) outreach efforts. This may apply to particular target groups (e.g. students at top universities), as well as the total population. Knowing how the number of people who have encountered effective altruism differs across different groups could highlight who our outreach is missing. Our outreach could simply be failing to reach certain groups.
Most people in the EA community do not seem to first hear about EA through particularly direct, targeted outreach (only 7.7% first hear from an EA group, for example), but rather through mo...
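The headline prevalence figures above come with sampling error. As a rough illustration (my own addition, not part of the survey write-up), the sketch below computes a normal-approximation 95% confidence interval for the 6.7% permissive-standard estimate; the function name `proportion_ci` is hypothetical, and because the real analysis used survey weights, its true margins of error would be somewhat wider than this unweighted approximation suggests:

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion.

    A simplification: ignores survey weighting, so real margins of error
    for a weighted sample like this one would be somewhat wider.
    """
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Headline estimate from the post: 6.7% of n=6130 (permissive standard)
low, high = proportion_ci(0.067, 6130)
print(f"{low:.1%} to {high:.1%}")  # roughly 6.1% to 7.3%
```

Even under this simplification, the interval is tight enough that the group differences reported above (e.g. graduate degree holders at 11.7% versus high school graduates at 3.7%) are unlikely to be sampling noise.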

The Nonlinear Library
EA - EA and the current funding situation by William MacAskill


Play Episode Listen Later May 10, 2022 35:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA and the current funding situation, published by William MacAskill on May 10, 2022 on The Effective Altruism Forum.

This post gives an overview of how I'm thinking about the "funding in EA" issue, building on many conversations. Although I'm involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk, which has a related theme, though with different emphases. As a warning, I'm particularly stretched for time at the moment and might not have capacity to respond to comments. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.

Summary

EA is in a very different funding situation than it was when it was founded. This is both an enormous responsibility and an incredible opportunity. It means the norms and culture that made sense at EA's founding will have to adapt. It's good that there's now a serious conversation about this. There are two ways we could fail to respond correctly:
- By commission: we damage, unnecessarily, the aspects of EA culture that make it valuable; we support harmful projects; or we just spend most of our money in a way that's below the bar.
- By omission: we aren't ambitious enough, and fail to make full use of the opportunities we now have available to us.

Failure by omission is much less salient than failure by commission, but it's no less real, and may be more likely. Though it's hard, we need to inhabit both modes of mind at once. The right attitude is one of judicious ambition.

Judicious, because I think we can avoid most of the risks that come with an influx of potential funding without compromising our ability to achieve big things. That means: avoiding unnecessary extravagance, and conveying the moral seriousness of distributing funding; emphasising that our total potential funding is still tiny compared to the problems in the world, and that there is still a high bar for getting funded; being willing to shut down lower-performing projects; and cooperating within the community to mitigate risks of harm.

Ambition, because it would be very easy to fail by thinking too small, or by just not taking enough action, such that we're unable to convert the funding we've raised into good outcomes. That means we should, for example: create more projects that are scalable with respect to funding; buy time and increased productivity when we can; and be more willing to use money to gain information by just trying something out, rather than assessing whether it's good in the abstract.

Intro

Well, things have gotten weird, haven't they? Recently, I went on a walk with a writer, and it gave me a chance to reflect on the earlier days of EA. I showed him the first office that CEA rented, back in 2013. It looks like this: [photo omitted in this transcript]

To be clear: the office didn't get converted into an estate agent — it was in the estate agent, in a poorly-lit room in the basement. Here's a photo from that time: [photo omitted in this transcript]

Normally, about a dozen people worked in that room. When one early donor visited, his first reaction was to ask: "Is this legal?" At the time, there was very little funding available in EA. Lunch was the same every day: budget baguettes and plain hummus. The initial salaries offered by CEA were £15,000/yr pre-tax. When it started off, CEA was only able to pay its staff at all because I loaned them £7,000 — my entire life savings at the time. One of our first major donations was from Julia Wise, for $10,000, which was a significant fraction of the annual salary she received from being a mental health social worker at a prison.
Every new GWWC pledge we got was a cause for celebration: Toby Ord estimated the expected present value of donations from a GWWC pledge at aroun...

The Nonlinear Library
EA - Is EA "just longtermism" now? by frances lorenz


Play Episode Listen Later May 3, 2022 8:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is EA "just longtermism" now?, published by frances lorenz on May 3, 2022 on The Effective Altruism Forum.

Acknowledgements

A ginormous thank you to Bruce Tsai, who kindly combed through multiple drafts and shared tons of valuable feedback. Also a big thank you to Shakeel Hashim, Ines, and Nathan Young for their insightful notes and additions.

Preface

In this post, I address the question: is EA just longtermism? I then ask, among other questions, what factors contribute to this perception? What are the implications?

1. Introduction

Recently, I've heard a few criticisms of EA that hinge on the following: "EA only cares about longtermism." I'd like to explore this perspective a bit more, along with the questions that naturally follow, namely: How true is it? Where does it come from? Is it bad? Should it be true?

2. Is EA just longtermism?

In 2021, around 60% of funds deployed by the Effective Altruism movement came from Open Philanthropy (1). Thus, we can use their grant data to explore EA funding priorities. The following graph (from Effective Altruism Data) shows Open Philanthropy's total spending, by cause area, since 2012: [graph omitted in this transcript]

Overall, Global Health & Development accounts for the majority of funds deployed. How has that changed in recent years, as AI Safety concerns grow? We can look at this uglier graph (bear with me) showing Open Philanthropy grants deployed from January 2021 to present (data from the Open Philanthropy Grants Database): [graph omitted in this transcript]

We see that Global Health & Development is still the leading fund-recipient; however, Risks from Advanced AI is now a closer second. We can also note that the third and fourth most funded areas, Criminal Justice Reform and Farm Animal Welfare, are not primarily driven by a goal to influence the long-term future.

With this data, I feel pretty confident that EA is not just longtermism. However, it is also true (and well-known) that funding for longtermist issues, particularly AI Safety, has increased. This raises a few more questions:

2.1 Funding has indeed increased, but what exactly is contributing to the view that EA essentially is longtermism/AI Safety?

(Note: this list is just an exploration, not a claim about whether the things below are good or bad.)
- William MacAskill's upcoming book, What We Owe the Future, has generated considerable promotion and discussion. Following Toby Ord's The Precipice, published in March 2020, I imagine this has contributed to the outside perception that EA is becoming synonymous with longtermism.
- The longtermist approach to philanthropy is different from mainstream, traditional philanthropy. When trying to describe a concept like Effective Altruism, sometimes the thing that most differentiates it is what stands out, consequently becoming its defining feature.
- Of the longtermist causes, AI Safety receives the most funding, and furthermore has a unique 'weirdness' factor that generates interest and discussion. For example, some of the popular thought experiments used to explain Alignment concerns can feel unrealistic, like something out of a sci-fi movie. I think this can serve both to: 1. draw in onlookers whose intuition is to scoff, and 2. give AI-related discussions the advantage of being particularly interesting/compelling, leading to more attention.
- AI Alignment is an ill-defined problem with no clear solution and tons of uncertainties: What counts as AGI? What does it mean for an AI system to be fair or aligned? What are the best approaches to Alignment research?
With so many fundamental questions unanswered, it's easy to generate ample AI Safety discussion in highly visible places (e.g. forums, social media, etc.) to the point that it can appear to dominate EA discourse. AI Alignment is a growing concern within the EA movement, so it's been highlighted recently by EA-aligned orgs (for example, AI S...

Efektiivne Altruism Eesti
#20 Sille-Liis Männikuga efektiivse annetamise psühholoogiast


Play Episode Listen Later Apr 28, 2022 81:03


Sille-Liis leads EA Estonia's donations team, which in December completed the website Anneta Targalt, through which it is possible to make income-tax-free donations from Estonia to organisations recommended by GiveWell. She studied psychology at the University of Tartu and later social psychology in a master's programme at Utrecht University. She currently works at the University of Tartu's Institute of Psychology on various applied psychology projects related to mental well-being at work, as well as behaviour change and nudging. In this episode we talked about the psychology of giving, and more specifically about what prevents us from making the most effective donation decisions and how to solve that problem.

Timestamps:
01:12 How Sille-Liis's career path is connected to effective giving
07:40 What career opportunities people with a psychology background have in fields related to effective altruism
12:20 How common it is to consider impact when making donation decisions
23:00 What has motivated people to donate during the war crisis in Ukraine
27:20 Factors that hinder effective giving
46:45 Giving and emotions
54:40 Solutions: how to influence people to give more effectively
1:06:20 Sille-Liis's recommendations for more effective giving, both within Estonia and globally
1:17:50 A one-minute pitch to a family member on why effective giving matters

Rate the episode here: https://forms.gle/LPRE2ziBs62pjGTX9

Sources mentioned during the conversation:
- The many obstacles to effective giving: https://psyarxiv.com/3z7hj/
- Effective Thesis: https://effectivethesis.org/
- EA Funds: https://funds.effectivealtruism.org/
- Animal Charity Evaluators: https://animalcharityevaluators.org/
- Give Directly: https://www.givedirectly.org/
- Giving What We Can: https://www.givingwhatwecan.org/
- Against Malaria Foundation: https://www.againstmalaria.com/
- Kiusamisvaba Kool: https://kiusamisvaba.ee/
- Nähtamatud Loomad: https://nahtamatudloomad.ee/
- SPIN programme: https://www.spinprogramm.ee/
- Vaikuseminutid: https://vaikuseminutid.ee/

News:
- Effective Ideas: https://effectiveideas.org/
- William MacAskill's new book: https://www.amazon.de/-/en/William-MacAskill/dp/1541618629/ref=sr_1_1?crid=12U75HD85YBC7&keywords=What+We+Owe+the+Future&qid=1650870774&sprefix=what+we+owe+the+future%2Caps%2C141&sr=8-1
- Nähtamatud Loomad job posting: https://nahtamatudloomad.ee/nahtamatud-loomad-votab-toole-vabatahtlike-koordinaatori-ja-fundraiseri

The Nonlinear Library
EA - The Effective Altruism culture by PabloAMC


Play Episode Listen Later Apr 18, 2022 6:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Effective Altruism culture, published by PabloAMC on April 17, 2022 on The Effective Altruism Forum.

Summary: In this post I analyze why I believe the EA community's culture is important, the things I like about it, and the things that should be taken care of.

The Effective Altruism culture

I feel very lucky to be part of a group of people whose objective is to do the most good. I really want us to succeed, because there is so much good to be done yet, and so many ways humankind can achieve extraordinary feats. Being part of this endeavor gives me purpose and something I am willing to fight for. But Effective Altruism is not only the 'doing the most good' idea; it is also a movement. It is a very young movement indeed: according to Wikipedia, in 2009 Giving What We Can was founded by Toby Ord and Will MacAskill. In 2011 they created 80000hours.org and started using the name Effective Altruism, and in 2013, less than 10 years ago, the first EA Global took place. Since then, we have done many things together, and I am sure we will achieve many more. For that, I believe the most important aspect of our movement is not how rich we get, or how many people we place at key institutions. Rather, it is the culture we establish, and for this reason I think it is everyone's job in the community to make sure that we keep being curious, truth-seeking, committed and welcoming. Keeping this culture is essential to being able to change our minds about how to do the most good, and to convincing society as a whole about the things we care about. In this post I will discuss the things I like about us, and also the things we have to pay special attention to.

The things I like about our culture

Some of the things I like about our community come from the rationalist community in the Bay Area: the focus on truth-seeking and having good epistemics about how to do the most good are very powerful tools. Other things I like about the community that are also inherited from the Bay Area, I believe, are the risk-taking and entrepreneurial spirit. Beyond these, our willingness to consider unconventional but well-grounded stances, the radical empathy to care about those who have no voice (the poorest people, animals, or future generations), and the cause-impartiality principle are due to the utilitarian roots of Toby Ord and William MacAskill. Finally, Effective Altruism has more or less successfully avoided becoming a political ideology, which I believe would be risky.

Aspects where we should be careful

However, not all aspects of our culture are great, even if generalization is perhaps not appropriate. Instead, I will flag those I believe could become a problem, in the hope that the community will pay attention to these issues and keep them in check. The first one is money. While it is a blessing that so many rich people agree that doing good is great and are willing to donate, a recent highly upvoted post warned about the perception and epistemic problems that may arise from that. The bottom line is that having generous money may be perceived as self-serving, and may degrade our moral judgment, but you can read more in the post itself. A perhaps more important problem is some elitism in the community. While it makes sense that wanting to do good means talking first to students of good universities, we have to be very careful not to be dismissive of people from different backgrounds who are nevertheless excited by our same goals. This may be particularly damaging in particular subareas such as AI Safety, where there is sometimes a meme that all we need is really, really smart people. That isn't even true: what we need is good researchers, engineers...
The same applies to students at elite universities: let us not be dismissive of people just beca...

The Present Writer
Làm việc thiện đúng cách


Play Episode Listen Later Apr 10, 2022 47:35


Inspired by the book "Doing Good Better" (Làm việc thiện đúng cách), this podcast episode recounts my volunteering journey during my twenties. Why did I volunteer? What did I learn in those years of youthful charity work? Why did I decide to stop doing charity "the old way"? How has this journey changed my life?

The Nonlinear Library
EA - Announcing What We Owe The Future by William MacAskill


Play Episode Listen Later Mar 30, 2022 7:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing What We Owe The Future, published by William MacAskill on March 30, 2022 on The Effective Altruism Forum.

Summary
- My new book on longtermism, What We Owe The Future, will come out this summer.
- You can now pre-order a copy (US link, UK link).
- If you want to help with the book launch, I think pre-ordering is probably the highest-impact thing you can do right now.

Longer Summary

I've written a book!! It's called What We Owe The Future. It makes the case for longtermism — the view that positively affecting the long-run future is a key moral priority of our time — and explores what follows from that view. As well as the familiar topics of AI and extinction risk, it also discusses value lock-in, civilisational collapse and recovery, technological stagnation, population ethics, and the expected value of the future. I see it as a sequel to Doing Good Better, and a complement to The Precipice. I think I've probably worked harder on this project than on any other in my life, and I'm excited that the launch date is finally in sight: Aug 16th in the US and Sep 1st in the UK. I'm looking forward to being able to share it and discuss it with you all!

I'm now focused on trying to make the book launch go well. I'd really like to use the launch as a springboard for wider promotion of longtermism, trying to get journalists and other influencers talking about the idea. In order to achieve that, a huge win would be to hit The New York Times Bestseller list, and in order to achieve that, getting as many pre-orders as possible is crucial. In particular, there's a risk that the book is perceived by booksellers (like Amazon and Barnes & Noble) as being "just another philosophy book", which means they buy very few copies to sell to consumers. This means that the booksellers don't put any effort into promoting the book, and they can even literally run out of copies (as happened with Superintelligence after the Elon tweet). Pre-orders are the clearest way for us to demonstrate that WWOTF (or, as I prefer, "WTF?") is in a different reference class. For these reasons, I think that pre-ordering WWOTF is probably the highest-value thing you can do to help with the book launch right now. The US link to pre-order is here, the UK link is here, and for all other countries you can use your country's Amazon link.

About the book

My hope for WWOTF is that it will be like an Animal Liberation for future generations: shaping how society thinks about the interests of future people, and inspiring people to take action to safeguard the long term. If the launch goes well, then significantly more people — including people who are deciding which careers to pursue, philanthropists, and political decision-makers and policy-makers — will be exposed to the core ideas. The book is aimed to be both readable for a general audience and informative for EA researchers or interested academics. (Though I'm not sure if I've succeeded at this!) So there's a wide breadth of content: everything from stories of historical instances of long-run impact to discussion of impossibility theorems in population ethics. And, following Toby Ord's lead, there is an ungodly number of endnotes.

The table of contents can give you the gist:
Introduction
Part I. The Long View
- Chapter 1: The Case for Longtermism
- Chapter 2: You Can Shape the Course of History
Part II. Trajectory Changes
- Chapter 3: Moral Change
- Chapter 4: Value Lock-In
Part III. Safeguarding Civilization
- Chapter 5: Extinction
- Chapter 6: Collapse
- Chapter 7: Stagnation
Part IV. Assessing the End of the World
- Chapter 8: Is It Good to Make Happy People?
- Chapter 9: Will the Future Be Good or Bad?
Part V. Taking Action
- Chapter 10: What to Do

In the course of writing the book, I've also changed my mind on a number of issues. I hope to share my ...

The Nonlinear Library
EA - Should we produce more EA-related documentaries? by elteerkers


Play Episode Listen Later Feb 23, 2022 12:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we produce more EA-related documentaries?, published by elteerkers on February 21, 2022 on The Effective Altruism Forum.

TLDR: We make the case that producing ambitious documentaries raising awareness of topics related to effective altruism could be impactful, and we are looking for input on why this hasn't been done, or isn't more discussed in the community.

Rigor: We don't have any experience related to producing documentaries and feel very uncertain about pretty much everything in this post. The main aim is to try to induce discussion and get input for further exploration.

Context: We are currently in contact with a philanthropist in Sweden (where we are based) who has connections and experience from funding and producing documentaries, and who has expressed interest in funding documentaries on issues relevant to EA, e.g. biorisks and nuclear war/winter.

Should we produce more EA-related documentaries?

In a fireside chat at EAG London 2021, William MacAskill spoke briefly about "EA Media", a topic that has come up at various times and places over the last few years (see EA in media | Joseph Gordon-Levitt, Julia Galef, AJ Jacobs, and William MacAskill; MacAskill Fireside Chat at EAG; and the Ezra Klein interview at EAG 2020). In this chat William says that he would like EA to produce more "high-depth, high-engagement media" such as podcasts, books and documentaries. He also says that a documentary funded at around 10 million dollars would be one of the most well-funded documentaries in the world, and that we could produce several of these per year on important EA topics. We, the authors, think this seems like relatively low-hanging fruit and that documentaries on EA topics could have high expected value (albeit high-risk, high-reward). Thus we ask ourselves: why is this not more actively discussed, and why are we not seeing any EA documentaries? Is it that the potential upsides of documentaries are small, are we missing important downsides, or has this simply been overlooked?

What we mean by documentary

In this post we are, for obvious reasons, interested in documentaries aiming to create some kind of positive change. When it comes to creating change, we (inspired by BRITDOC) think of documentaries as able to fill four overlapping and interdependent, yet distinguishable, functions:
- Changing minds: spreading awareness and understanding with the aim of sparking societal interest and changing attitudes, e.g. introducing neglected existential risks to the public.
- Changing behaviors: trying to get people to do something, not just think differently, e.g. getting people to take greater consideration of animal welfare when buying things, or to donate more and/or more effectively.
- Building communities: providing a focal point around which people can organize.
- Changing structures: directly trying to influence law or policy.

Further, documentaries can take many different forms, from a 10-minute homemade YouTube video to a feature-length, high-budget motion picture. In the following, when we say documentary we mainly mean a high-budget, full-length film with the purpose of raising awareness of important topics, bringing them to the attention of the media and wider society (something like An Inconvenient Truth in style). This is because such films seem to be mostly missing at the moment, and could have the highest expected value. Also, in our interpretation, it seems like something others who have spoken about EA media are excited about (see EA in media | Joseph Gordon-Levitt, Julia Galef, AJ Jacobs, and William MacAskill and MacAskill Fireside Chat at EAG).
We want to stress that we are very uncertain about what type of documentary or other media content might be most impactful, which is part of the reason for writing this, and we would love to hear your thoughts...

Business Live: Jamie Veitch's Sheffield Live radio show
Seed funding for startups and effective altruism with Sean Donnelly, Ripples


Play Episode Listen Later Feb 4, 2022 40:17


Remember going to shops and popping your change into a charity collection box next to the till? The problem for charities is that this has become a memory, not a habit, as we move to a cashless society. And it means an £80m shortfall in fundraising every year for charities which used to collect spare change. But Sean Donnelly's business Ripples, a social enterprise, wants to fix this and help small acts of generosity ripple outward to make a huge impact. Ripples uses Open Banking and its own clever, flexible and secure design to make a positive difference. It seeks to enable small penny donations and ultimately raise millions for charities, schools and important causes around the world.

Sean had to "kiss a lot of frogs" on the journey to secure seed funding for Ripples (which launched as Roundups and has just refreshed its branding). So he's got lots to say that will help other entrepreneurs prepare for seed investment rounds and find angel investors. We also cover effective altruism, seeking to maximise the impact of money donated or invested into doing good, plus the value of networks and ecosystems for startups in tech, finding a co-founder, and the lessons Sean learned from earlier businesses he launched.

Timings:
0 - 3:17 Introduction.
3:17 Interview with Sean Donnelly, co-founder, Ripples (which has secured investment from Sheffield-based business accelerator, TwinklHive).
33:30 What books have informed, inspired or challenged you, or given you tools you've found useful? Let me know. Sean and I talked a little about William MacAskill's "Doing Good Better – Effective Altruism and a Radical New Way to Make a Difference." Worth reading. Do you have book recommendations? Get in touch.
35:28 Upcoming events including Tramlines, the Sheffield Adventure Film Festival and the Outdoor City.
36:29 Funding! Travel sector and hospitality, accommodation and leisure grants (Sheffield), details here. And the Digital Innovation Grant (DIG) programme supports small and medium-sized enterprises (SMEs) in South Yorkshire to develop their use of digital technology.
38:09 Wrapping up.

Spiderum Official
VÌ SAO BẠN KHÔNG NÊN QUYÊN GÓP CỨU TRỢ THIÊN TAI? | Nhện Hóng biến | William MacAskill | SPIDERUM


Play Episode Listen Later Jan 11, 2022 10:22


Charity and donations became two of the keywords that defined 2021. For many people, giving no longer feels as easy or as trustworthy as it once did, after a series of scams and unreliable cases came to light. So what is the right way to donate? And why shouldn't you donate to disaster relief? Let's find out with Spiderum!
______________
Explore the book "Doing Good Better" at: https://book.spiderum.vn/DGB
Follow the "Người Trong Muôn Nghề" podcast channel here: https://b.link/youtube-podcast-NTMN
Visit the Spiderum bookstore on SHOPEE: https://shp.ee/ynm7jgy
The Spiderum Giải Trí channel now has a podcast, listen here: https://anchor.fm/spiderum-giai-tri
______________
Article: Why shouldn't you donate to disaster relief funds?
Author: William MacAskill
The article is excerpted from the book Doing Good Better (Làm việc thiện đúng cách), published by Spiderum.
---
Send in a voice message: https://anchor.fm/spiderum/message
Support this podcast: https://anchor.fm/spiderum/support

The Nonlinear Library: EA Forum Top Posts
Ask Me Anything! by William_MacAskill


Play Episode Listen Later Dec 12, 2021 5:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ask Me Anything!, published by William_MacAskill on The Effective Altruism Forum.

Thanks for all the questions, all - I'm going to wrap up here! Maybe I'll do this again in the future; hopefully others will too!

Hi, I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I'll lead by example. (If it goes well, hopefully others will try it out too.) Below I've written out what I'm currently working on. Please ask any questions you like, about anything: I'll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I'm hopefully recording soon (maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I'm not able to respond to. If you don't want to post your question publicly or non-anonymously (e.g. you're asking "Why are you such a jerk?" sort of thing), or if you don't have a Forum account, you can use this Google form.

What I'm up to

Book

My main project is a general-audience book on longtermism. It's coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I'm currently using is What We Owe The Future. It'll hopefully complement Toby Ord's forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time; he describes the longtermist arguments that support that view without relying heavily on them. In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare. Roughly, I'm dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I've given the publishers a deadline of March 2021 for submission; if I meet it, the book would come out in late 2021 or early 2022. I'm planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book. My academic book, Moral Uncertainty (co-authored with Toby Ord and Krister Bykvist), should come out early next year: it's been submitted, but OUP have been exceptionally slow in processing it. It's not radically different from my dissertation.

Global Priorities Institute

I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:
- The case for longtermism, with Hilary Greaves, making the core case for strong longtermism and arguing that it's entailed by a wide variety of moral and decision-theoretic views.
- The Evidentialist's Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
- A paper, with Tyler John, exploring the political philosophy of age-weighted voting.

I have various other draft papers, but have put them on the back burner for the time being while I work on the book.

Forethought Foundation

Forethought is a sister organisation to GPI, which I take responsibility for: it's legally part of CEA and independent from the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.

Utilitarianism.net

Darius Meissner and I (w...

The Nonlinear Library: EA Forum Top Posts
What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? by Luisa_Rodriguez


Play Episode Listen Later Dec 12, 2021 69:12


Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?, published by Luisa_Rodriguez on the Effective Altruism Forum.

Epistemic transparency: Confidence in conclusions varies throughout. I give rough indicators of my confidence at the section level by indicating the amount of time I spent researching/thinking about each particular subtopic, plus a qualitative description of the types of sources I rely on. In general, I consider it a first step toward understanding this threat from civilizational collapse, not a final or decisive one.

Acknowledgements

This research was funded by the Forethought Foundation. It was written by Luisa Rodriguez under the supervision of Arden Koehler and Lewis Dartnell. Thanks to Arden Koehler, Max Daniel, Michael Aird, Matthew van der Merwe, Rob Wiblin, Howie Lempel, and Kit Harris, who provided valuable comments. Thanks also to William MacAskill for providing guidance and feedback on the larger project.

Summary

In this post, I explore the probability that, if various kinds of catastrophe caused civilizational collapse, this collapse would fairly directly lead to human extinction. I don't assess the probability of those catastrophes occurring in the first place, the probability they'd lead to indefinite technological stagnation, or the probability that they'd lead to non-extinction existential catastrophes (e.g., unrecoverable dystopias). I hope to address the latter two outcomes in separate posts (forthcoming). My analysis is organized into case studies: I take three possible catastrophes, defined in terms of the direct damage they would cause, and assess the probability that each would lead to extinction within a generation.
There is a lot more someone could do to systematically assess the probability that a catastrophe of some kind would lead to human extinction, and what I've written up is certainly not conclusive. But I hope my discussion here can serve as a starting point, as well as lay out some of the main considerations and preliminary results.

Note: Throughout this document, I'll use the following language to express my best guess at the likelihood of the outcomes discussed: TABLE1

Case 1: I think it's exceedingly unlikely that humanity would go extinct (within ~a generation) as a direct result of a catastrophe that causes the deaths of 50% of the world's population, but causes no major infrastructure damage (e.g. damaged roads, destroyed bridges, collapsed buildings, damaged power lines, etc.) or extreme changes in the climate (e.g. cooling). The main reasons for this are:

Although civilization's critical infrastructure systems (e.g. food, water, power) might collapse, I expect that several billion people would survive without critical systems (e.g. industrial food, water, and energy systems) by relying on goods already in grocery stores, food stocks, and fresh water sources. After a period of hoarding and violent conflict over those supplies and other resources, I expect those basic goods would keep a smaller number of remaining survivors alive for somewhere between a year and a decade (which I call the grace period, following Lewis Dartnell's The Knowledge).

After those supplies ran out, I expect several tens of millions of people to survive indefinitely by hunting, gathering, and practicing subsistence agriculture (having learned during the grace period any necessary skills they didn't possess already).

Case 2: I think it's very unlikely that humanity would go extinct as a direct result of a catastrophe that caused the deaths of 90% of the world's population (leaving 800 million survivors), major infrastructure damage, and severe climate change (e.g.
nuclear winter/asteroid impact). While I expect that millions would starve to death in the wake of something like a globa...

The Nonlinear Library: EA Forum Top Posts
Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism by Darius_M


Play Episode Listen Later Dec 11, 2021 12:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism, published by Darius_M on the AI Alignment Forum.

We are excited to announce the launch of Utilitarianism.net, an introductory online textbook on utilitarianism, co-created by William MacAskill, James Aung and me over the past year. The website aims to provide a concise, accessible and engaging introduction to modern utilitarianism, functioning as an online textbook targeted at the undergraduate level. We hope that over time this will become the main educational resource for students and anyone else who wants to learn about utilitarianism online. The content of the website aims to be understandable to a broad audience, avoiding philosophical jargon where possible and providing definitions where necessary.

Please note that the website is still in beta. We plan to produce an improved and more comprehensive version of this website by September 2020. We would love to hear your feedback and suggestions on what we could change about the website or add to it.

The website currently has articles on the following topics, and we aim to add further content in the future:

Introduction to Utilitarianism
Principles and Types of Utilitarianism
Utilitarianism and Practical Ethics
Objections to Utilitarianism and Responses
Acting on Utilitarianism
Utilitarian Thinkers
Resources and Further Reading

We are particularly grateful for the help of the following people with reviewing, writing, editing or otherwise supporting the creation of Utilitarianism.net: Lucy Hampton, Stefan Schubert, Pablo Stafforini, Laura Pomarius, John Halstead, Tom Adamczewski, Jonas Vollmer, Aron Vallinder, Ben Pace, Alex Holness-Tofts, Huw Thomas, Aidan Goth, Chi Nguyen, Eli Nathan, Nadia Mir-Montazeri and Ivy Mazzola.
The following is a partial reproduction of the Introduction to Utilitarianism article from Utilitarianism.net. Please note that it does not include the footnotes, further resources, or the sections on Arguments in Favor of Utilitarianism and Objections to Utilitarianism. If you are interested in the full version of the article, please read it on the website.

Introduction to Utilitarianism

"The utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end." - John Stuart Mill

Utilitarianism was developed to answer the question of which actions are right and wrong, and why. Its core idea is that we ought to act to improve the wellbeing of everyone by as much as possible. Compared to other ethical theories, it is unusually demanding and may require us to make substantial changes to how we lead our lives. Perhaps more so than any other ethical theory, it has caused a fierce philosophical debate between its proponents and critics.

Why Do We Need Moral Theories?

When we make moral judgments in everyday life, we often rely on our intuition. If you ask yourself whether or not it is wrong to eat meat, or to lie to a friend, or to buy sweatshop goods, you probably have a strong gut moral view on the topic. But there are problems with relying merely on our moral intuition. Historically, people held beliefs we now consider morally horrific. In Western societies, it was once firmly believed to be intuitively obvious that people of color and women have fewer rights than white men; that homosexuality is wrong; and that it was permissible to own slaves. We now see these moral intuitions as badly misguided. This historical track record gives us reason to be concerned that we, in the modern era, may also be unknowingly guilty of serious, large-scale wrongdoing.
It would be a very lucky coincidence if the present generation were the first generation whose intuitions were perfectly morally correct. Also, people have conflicting moral intuitions ab...

The Nonlinear Library: EA Forum Top Posts
EA syllabi and teaching materials by Julia_Wise


Play Episode Listen Later Dec 11, 2021 6:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA syllabi and teaching materials, published by Julia_Wise on the AI Alignment Forum.

I've been collecting a list of all the known courses taught about EA or closely related topics. Please let me know if you have others to add!

- AGI safety fundamentals, for 2022, Richard Ngo: Syllabus
- "Are we doomed? Confronting the end of the world" at University of Chicago, Spring 2021, Daniel Holz & James A. Evans: Syllabus
- "Ethics and the Future" at Yale, Spring 2021, Shelly Kagan: Syllabus
- "The Great Problems" at MIT, Spring 2021, Kevin Esvelt: Syllabus
- Improving Science Reading Group, 2021, EA Cambridge: Reading list
- Longtermism syllabus, 2021, Joshua Teperowski Monrad: Syllabus
- Effective Animal Advocacy Fellowship, Winter 2021, EA at UCLA: Syllabus and discussion guide
- Social Sciences & Existential Risks Reading Group, Winter 2021: Reading list
- Global Development Fellowship, Winter 2021, Stanford One for the World: Syllabus
- "Ethics for Do-Gooders" at University of Graz, Summer 2020, Dominic Roser: Syllabus
- Cause Area Guide: Institutional Decision Making, May 2020, EA Norway: Guide, with reading list (focused on forecasting)
- Intro to Global Priorities Research for Economists, Spring 2020, David Bernard and Matthias Endres: Description, with link to reading list and materials
- Governance of AI Reading List, Oxford, Spring 2020, Markus Anderljung: Reading list
- EA course at Brown University, Spring 2020, Emma Abele and Nick Whittaker (based on the Harvard Arete fellowship syllabus): Syllabus
- "Psychology of (Effective) Altruism" at University of Michigan, Winter 2020, Izzy Gainsburg: Syllabus
- "Philosophy and Philanthropy" at University of Chicago, Winter 2020, Bart Schultz: Syllabus
- Syllabus: Artificial Intelligence and China, Jan. 2020, Ding, Fischer, Tse, and Byrd: Reading list
- In-Depth Fellowship at EA Oxford: Reading list
- "Topics in Global Priorities Research" at Oxford University, Spring 2019, William MacAskill and Christian Tarsney: Syllabus
- AI alignment reading group at MIT, Fall 2019: Reading list
- "Normative Ethics, Effective Altruism, and the Environment" at University of Vermont, Fall 2019, Mark Budolfson: Syllabus
- Arete fellowship at MIT, Fall 2018, MIT EA group: Syllabus with discussion prompts
- "Safety and control for artificial general intelligence" at UC Berkeley, Fall 2018, Andrew Critch and Stuart Russell: Syllabus
- "Artificial Intelligence and International Security", July 2018, Remco Zwetsloot: Reading list
- "The Psychology of Effective Altruism" at University of New Mexico, Spring 2018, Geoffrey Miller: Syllabus
- "Training Changemakers" program, Spring 2018, Philanthropy Advisory Fellowship at Harvard University: Program plan
- "The Ethics and Politics of Effective Altruism" at Stanford University, Spring 2018, Ted Lechterman: Syllabus
- "Effective Philanthropy: Ethics and Evidence" at London School of Economics, 2017/2018, Luc Bovens and Stephan Chambers: Summary
- Seminar on EA at University of Toronto, Fall 2017, Jordan Thomson: Summary
- "Effective altruism" course at University of St Andrews, 2016-2017, Theron Pummer and Tim Mulgan: Syllabus
- "How to actually change the world" session at MIT, Fall 2016, Angelina Li and Daniel Ziegler: Evaluation and course materials
- EA course at St. Catherine's University, Fall 2016, Jeff Johnson and Kristine West: Syllabus
- EA course at University of Saint Andrews, Fall 2016, Theron Pummer and Tim Mulgan: Syllabus
- EA course at University of York, Spring 2016, Richard Yetter Chappell: Syllabus
- EA Syllabus, Stefan Schubert and Pablo Stafforini: This syllabus is intended for use in philosophy, political science, or general humanities programs.
- EA courses at UC Berkeley: In spring 2015 and 2016, students at UC Berkeley have led a full-semester class on effective altruism.
Organizers spring 2015: Ajeya Cotra, Oliver Habryka
Organizers spring 2016: Ajeya Cotra, Rohin Shah
Materials: Syllabus 2015 Syllab...