Podcasts about Giving What We Can

English effective altruism organization

  • 41 podcasts
  • 110 episodes
  • 32m avg duration
  • 5 weekly new episodes
  • Latest: Sep 28, 2022
Giving What We Can

POPULARITY (trend chart, 2015-2022)



Latest podcast episodes about Giving What We Can

The Munk Debates Podcast
Munk Dialogue: What do we owe the future?

The Munk Debates Podcast

Sep 28, 2022 · 39:54


Most societies commemorate and revere distant ancestors, with portraits, statues, streets, buildings, and holidays. We are fascinated with the pyramids in Egypt, Stonehenge in England, and the earliest origins of our species in the savannas of Africa. Our interest in humankind's deep past has created a collective blind spot about the prospects of our distant descendants thousands of years into the future. For most of us, the deep future is a fantasy world, something you read about in science fiction novels. But a growing number of thinkers are pushing back against the attitude that the future is a hypothetical we can discount in favour of the here and now. Instead, they argue it's high time we start thinking seriously about the idea that humanity may only be in its infancy. That as a species we could potentially be around for thousands of years, with trillions of fellow humans to be born, each with vast potential to shape our future evolution, possibly even beyond Earth. In sum, humankind urgently needs a thousand-year plan or it risks losing millennia of human progress to the existential risks that stalk our all-too-dangerous present.

William MacAskill is a leading global thinker on how humanity could and should think about a common future for itself and the planet. He is an associate professor in philosophy at the University of Oxford and co-founder of Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, all philosophically inspired projects which together have raised hundreds of millions of dollars and saved hundreds of thousands of life years to support charities working to preserve humankind's potential for the millennia to come. He is the author of the international bestseller Doing Good Better and of What We Owe The Future.

QUOTE: "The future could be very big, indeed, at least if we don't cause humanity's untimely demise in the next few centuries. We could have a very large future ahead of us. And that means that if there is anything that would impact the well-being of, not just the present generation, but all generations to come, that would be of enormous moral importance."

The host of the Munk Debates is Rudyard Griffiths - @rudyardg. Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/

To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com.

To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership. Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, a newsletter, and ticketing privileges at our live events.

This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/

Producer: Marissa Ramnanan
Editor: Adam Karch

The Giving What We Can Podcast
Lesson: Don't make these 10 mistakes when trying to improve the world (from Dr. Michael Noetel)

The Giving What We Can Podcast

Sep 21, 2022 · 17:24


Dr. Michael Noetel shares 10 mistakes he made when trying to improve the world, so you don't have to make the same ones!  Michael is a Giving What We Can member, effective altruist and academic. We were grateful to have him share what he's learnt with us. To learn more about him, visit: https://noetel.com.au/

The Nonlinear Library
EA - Author Rutger Bregman about effective altruism and philanthropy by Sebastian Schwiecker

The Nonlinear Library

Sep 20, 2022 · 2:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Author Rutger Bregman about effective altruism and philanthropy, published by Sebastian Schwiecker on September 20, 2022 on The Effective Altruism Forum.

Rutger Bregman, historian and author (of Utopia for Realists and Humankind: A Hopeful History, among others), describes his personal view on philanthropy in a conversation with Effektiv Spenden here (German here). In the Effektiv Spenden post-donation survey he was mentioned more than any other person (e.g. more than Will MacAskill and more than Peter Singer). Our explanation is that he is particularly good at reaching people from outside the existing EA community. Some quotes:

On effective altruism: "Sometimes people can get the impression that 'Oh, so you know what all the effective causes out there are and you are very dogmatic about that?' That's not the case at all. Effective altruism is a question. It's not an answer. It's all about continuously asking yourself the question, is this the best use of my time, resources, and money? That's what it's really about. And I think intellectual humility is a really important value, and I think that's also quite present in the movement."

On systemic change vs. individual change: "There's now this discussion going on amongst progressives and people on the left like: 'Oh, we shouldn't talk about individual change because that's neo-liberal. We should all talk about system change', but obviously we need to do both. If you look at the most impressive reformers and prophets and campaigners and activists throughout history, they all did both. I'm now reading a book about Anthony Benezet, who was one of the most important abolitionists; he's called the father of abolitionism. He led the fight against the slave trade and slavery in the 18th century. If you had said to him: 'Oh, it's all about the system. It's not about the individual', he would have said: 'You're a hypocrite.' Of course, it's also about the individual, because he knew that he would be much more convincing if he actually did what he preached."

On why he signed the Giving What We Can Pledge: "Because human behavior is contagious. We're not individuals, we're not lone atoms, but we influence each other all the time by our behavior. It's just contagious. Giving can be like that as well. That's why I think it's important to be public about your giving, not to show off, you need to be a little bit careful there, but that's also why I signed the Giving What We Can pledge: to say, look, people, if you like my work, this is what I find really important and it has made a big difference in my life to donate at least 10% of my income to highly effective causes. I think that actually, as a bestselling author, you can go a little bit higher than 10%, but 10% is a good place to start."

Since the interview is quite long, feel free to share the video below with everyone who might be interested but can't be bothered to watch for more than one minute.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Astral Hustle with Cory Allen
Sean Carroll - The Biggest Ideas In The Universe

The Astral Hustle with Cory Allen

Sep 19, 2022 · 45:02


Sean Carroll is a quantum physicist, author, and podcast host. In this episode, we talk about the wonder of the universe, reducing anxiety by increasing your perspective, and misunderstandings of the multiverse.

This episode is sponsored by Giving What We Can. Click to learn more.

Coaching with Cory: I'm now offering One-to-One coaching to help you build a path to the next level.
Please support the show by joining our Patreon Community.
Sign up for my newsletter to receive new writing on Friday morning.
My new meditation course Coming Home is now available.
Now Is the Way is out now in paperback!
Use Astral for 15% off Binaural Beats, Guided Meditations, and my Meditation Course.
Please rate The Astral Hustle on iTunes. ★★★★★

Connect with Cory:
Home: http://www.cory-allen.com
IG: https://www.instagram.com/heycoryallen
Twitter: https://twitter.com/HeyCoryAllen
Facebook: https://www.facebook.com/HeyCoryAllen

© CORY ALLEN 2022

The Nonlinear Library
EA - We're still (extremely) funding constrained (but don't let fear of getting funding stop you trying). by Luke Freeman

The Nonlinear Library

Sep 16, 2022 · 6:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're still (extremely) funding constrained (but don't let fear of getting funding stop you trying)., published by Luke Freeman on September 16, 2022 on The Effective Altruism Forum.

Lots has been written about this, so I wrote a poem instead. I lead the team at Giving What We Can; views are my own.

Poem

Years ago we were struck by big problems:
they were so extremely funding constrained.
One-by-one we saw a big impact:
by making them less extremely funding constrained.

We didn't wait for permission,
we gave from our own pockets first.
It became our mission
to put others first.

Our spendthrift community dug into the data.
We made money go further, we made things go better.
Each dollar paid dividends,
each DALY gained a good end.

Progress felt slow, but was needed, we know.
Constraints were consistent, opportunity cost felt:
"Should I pledge further? Should I become a researcher?"

A driven community with compassion so big:
we found more neglected problems, solvable, and big.
We said "more research needed", traded money for time:
found researchers, founders and then funders aligned.

Some problems found funders more quickly than founders,
yet others found money pits so desperate to fill.
Give trillions in cash or keep coal in the ground?
What about the backlash if our decisions aren't sound?

As we made progress we hit the mainstream.
Among the first questions: "Why's my cause unseen?"
We're resource constrained, I wish it weren't such:
"Yes, we want to help everyone, but we only have so much!"

Our work's still so small in the scheme of the world.
Still, let's be more ambitious: let's build a dreamworld.
We need many folks to be stoked by our mission.
We need many funders, founders, and passion.

Experimentation is something we now know we can try:
don't let fear of funding be why you don't apply.
But for the foreseeable future your dollars still count:
for every life that you help we mustn't discount.

Our mission ain't over, we're at the start of our road.
We need your help: let's make some inroads.
So give what you can and get others involved.
Let's keep working together to get these problems solved.
Postscript

It can be quite difficult to 'feel' the fact that all of these things are true at the same time:

  • We have increased available funding by an order of magnitude over the past decade and increased the rate at which that funding is being deployed
  • We don't want lack of funds to be the reason that people don't do important and ambitious things; and yet
  • In most cases we are still extremely funding constrained

I find it painful (and counter-productive) to see these messages floating around:

  • EA has "too much money"
  • EA has "more money than it knows what to do with"
  • There is "such a glut of money in EA right now"
  • EA has a "funding overhang"
  • Donations aren't needed, or they don't count
  • Pursuing a high-impact career is mutually exclusive with donating (although there are tradeoffs)
  • If you don't get funded it means your project is not worthwhile (in some cases it could be just below a bar)
  • If you miss out on a job when "there's plenty of money" it means there's nothing for you (some organisations are still very funding constrained and may have happily hired you if they were less funding constrained)

Whereas I think the better (more truthful and constructive) narratives are:

  • We have a more decent shot at having a significant impact
  • We have more resources, which helps us:
    • Double down on things we have good evidence for
    • Justifiably spend more on research and experimentation
    • Become more diverse (e.g. it doesn't require someone to have enough personal resources to take big risks; we can fund people to attend a conference/retreat they couldn't otherwise afford, etc.) and therefore find more excellent people to participate in this grand project.
  • The situation is nuanced: The funding...

The Nonlinear Library
EA - CEA Ops is now EV Ops by EV Ops

The Nonlinear Library

Sep 13, 2022 · 2:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA Ops is now EV Ops, published by EV Ops on September 13, 2022 on The Effective Altruism Forum.

Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism, but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name. EV Operations (EV Ops) provides operational support and infrastructure that allows effective organisations to thrive.

Summary

EV Ops is a passionate and driven group of operations specialists who want to use our skills to do the most good in the world. You can read more about us at.

What does EV Ops look like?

EV Ops began as a two-person operations team at CEA. We soon began providing operational support for 80,000 Hours, EA Funds, the Forethought Foundation, and Giving What We Can. And eventually, we started supporting newer, smaller projects alongside these, too. As the team expanded and the scope of these efforts increased, it made less sense to remain a part of CEA. So at the end of last year, we spun out as a relatively independent organisation, known variously as "Ops", "the Operations Team", and "the CEA Operations team". For the last nine months or so, we've been focused on expanding our capacity so that we can support even more high-impact organisations, including GovAI, Longview Philanthropy, Asterisk, and Non-trivial. We now think that we have a comparative advantage in supporting and growing high-impact projects — and are happy that this new name, "Effective Ventures Operations" or "EV Ops", accords with this. EV Ops is arranged into the following six teams:

The organisations EV Ops supports

We now support and fiscally sponsor several organisations (learn more on our website). Alongside these we support a handful of Special Projects: smaller, 1-2 person, early-stage projects which may grow into independent organisations of their own. We're keen to support a wide range of projects looking to do good in the world, although we're close to current capacity. To see if we could help your project grow and develop, visit or complete the expression of interest form.

Get involved

We're currently hiring for the following positions:

  • Project Manager for Oxford EA hub
  • Senior Bookkeeper / Accountant
  • Operations Associate
  • Executive Assistant for the Property team
  • Operations Associate - Salesforce Admin
  • Finance Associate

If you're interested in joining our team, visit. If you have any questions about EV or EV Ops, just drop a comment below. Thanks for reading!

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Giving What We Can Podcast
Member Story: Zachary Brown

The Giving What We Can Podcast

Sep 11, 2022 · 6:06


Member Zachary Brown shares how the neighborhood where he grew up cast a spotlight on suffering and inequality, motivating him to effect change from an early age. He also discusses why he's passionate about helping future generations, how the Giving What We Can pledge ties into his Jewish background, and how effective giving has become a part of his identity.

We are so pleased that we've been able to interview some of our members about their experiences with effective giving. Big thank you to Zachary and our other community members who make Giving What We Can such a special organisation.

CHAPTERS:
00:00 - How did you learn about Giving What We Can?
00:55 - Why were you motivated to take the pledge?
02:45 - What is the role of effective giving in your life?
03:58 - Where are you giving now?
05:22 - Advice for others considering the pledge: "Just start!"

OUR RESOURCES:
✍️ Take a giving pledge: https://givingwhatwecan.org/pledge

The Nonlinear Library
EA - Marketing Messages Trial for GWWC Giving Guide Campaign by Erin Morrissey

The Nonlinear Library

Sep 8, 2022 · 24:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Marketing Messages Trial for GWWC Giving Guide Campaign, published by Erin Morrissey on September 8, 2022 on The Effective Altruism Forum.

The trial was run in conjunction with Josh Lewis (NYU). Thanks to David Moss and others for feedback on this post, and to Jamie Elsey for support with the Bayesian analysis.

TL;DR

Giving What We Can, together with the EA Market Testing Team (EAMT), tested marketing and messaging themes on Facebook in their Effective Giving Guide Facebook Lead campaigns, which ran from late November 2021 to January 2022. GWWC's Giving Guide answers key questions about effective giving and includes the latest effective giving recommendations to teach donors how to do the most good with their donations. These were exploratory trials to identify promising strategies to recruit people for GWWC and engage people with EA more broadly. We report the most interesting patterns from these trials to provide insight into which hypotheses might be worth exploring more rigorously in future ('confirmatory analysis') work. Across four trials we compared the effectiveness of different types of (1) messages, (2) videos, and (3) targeted audiences. The key outcomes were (i) email addresses per dollar (when a Facebook user provides an email lead) and (ii) link clicks per dollar. Based on our analysis of 682,577 unique Facebook 'impressions', we found:

  • The cost of an email address was as low as $8.00 across campaigns, but it seemed to vary substantially across audiences, videos, and messages.
  • The message "Only 3% of donors give based on charity effectiveness, yet the best charities can be 100x more impactful" generated more link clicks and email addresses per dollar than other messages. In contrast, the message "Giving What We Can has helped 6,000+ people make a bigger impact on the causes they care about most" was less cost-effective than the other messages.
  • A 'short video with facts about effective giving' generated more email addresses per dollar than either (1) a long video with facts about effective giving or (2) a long video that explained how GWWC can help maximize charitable impact, the GWWC 'brand video'.
  • On a per-dollar basis, 'Animal' audiences that were given animal-related cause videos performed among the best, both overall and in the most comparable trials.
  • 'Lookalike' audiences (those with a similar profile to current people engaging with GWWC) performed best overall, for both cause and non-cause videos. However, 'Climate' and 'Global Poverty' audiences basically underperformed the 'Philanthropy' audience when presented videos 'for their own causes'. The Animal-related cause video performed particularly poorly on the 'Philanthropy' audience.
  • Demographics were mostly not predictive of email addresses per dollar or link clicks per dollar.

See our Quarto dynamic document linked here for more details and ongoing analyses.

Purpose and Interpretation of this Report

One of the primary goals of the EAMT is to identify the most effective, scalable strategies for marketing EA. Our main approach is to test marketing and messaging themes in naturally-occurring settings (such as advertising campaigns on Facebook, YouTube, etc.), targeting large audiences, to determine which specific strategies work best in the most relevant contexts.
In this report, we share key patterns and insights about the effectiveness of different marketing and messaging approaches used in GWWC's Effective Giving Guide Facebook Lead campaigns. The patterns we share here serve as a starting point to consider themes and hypotheses to test more rigorously in our ongoing research project. We are hoping for feedback and suggestions from the EA community on these trials and their implementation and analysis. We continue to conduct detailed analyses of this data. We'd like to get ideas from the community ...
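The two per-dollar outcomes described above are simple ratios of leads and clicks to ad spend. As a minimal sketch of how they could be computed (this is not the EAMT's actual analysis code, and the variant names and figures below are hypothetical, for illustration only):

```python
# Minimal sketch of the two per-dollar outcome metrics described above.
# NOTE: variant names and figures are hypothetical; this is not the
# trial's actual analysis code.

campaigns = {
    # variant: (ad spend in USD, email leads collected, link clicks)
    "message_3pct_stat": (500.0, 62, 410),
    "message_6000_members": (500.0, 31, 240),
    "short_facts_video": (500.0, 55, 380),
}

for variant, (spend, emails, clicks) in campaigns.items():
    emails_per_dollar = emails / spend          # outcome (i)
    clicks_per_dollar = clicks / spend          # outcome (ii)
    cost_per_email = spend / emails if emails else float("inf")
    print(f"{variant}: {emails_per_dollar:.3f} emails/$, "
          f"{clicks_per_dollar:.3f} clicks/$, "
          f"cost per email ${cost_per_email:.2f}")
```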

不合时宜
Today, revisiting the arrogance and self-reflection of elites (今天，重新谈论精英的傲慢与反思)

不合时宜

Sep 8, 2022 · 99:34


[Host's note] This episode is a special program recorded for the 99 Giving Day (九九公益日). In it, I invited two friends to discuss how they reflect on, and act on, their own elite status. The label "elite" has in some sense become a target of public criticism: people do not want to be branded as elites, while the general public is hostile toward them. And that is precisely why we think it is necessary, today, to talk again about elites and the responsibilities of elites. In a sense, reflecting on the privileged resources and good luck one has received, and redistributing that luck, should be an obligation of elites, rather than something that is good to do but not immoral to leave undone. In the torrent of the times, perhaps you are reflecting on the same questions, and thinking about how to find a sense of certainty and meaning in the present. In this episode we also offer an approach that can help you serve the world better; sometimes a method can be simple yet highly efficient: "Effective Giving". The team at 公益盒子 has spent two years and several thousand hours of research selecting charitable programs with solid evidence and outstanding results, so that your donations create the greatest possible value. If this episode resonates with you, you are welcome to donate to the effective charity programs 公益盒子 recommends this year. Link

80,000 Hours Podcast with Rob Wiblin
#137 – Andreas Mogensen on whether effective altruism is just for consequentialists

80,000 Hours Podcast with Rob Wiblin

Sep 8, 2022 · 141:33


Effective altruism, in a slogan, aims to 'do the most good.' Utilitarianism, in a slogan, says we should act to 'produce the greatest good for the greatest number.' It's clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism? Today's guest, Andreas Mogensen — moral philosopher at Oxford University's All Souls College — rejects utilitarianism, but as he explains, this does little to dampen his enthusiasm for the project of effective altruism. Links to learn more, summary and full transcript. Andreas leans towards 'deontological' or rule-based theories of ethics, rather than 'consequentialist' theories like utilitarianism which look exclusively at the effects of a person's actions. Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially. However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger's wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do. In a world as full of preventable suffering as our own, this simple 'principle of beneficence' is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance. As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one's income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they'll get the biggest 'bang for buck'. For someone living in a world as unequal as our own, this pledge at a very minimum gives an upper-middle class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves. What arguments could a non-utilitarian moral theory mount against such giving? Many approaches to morality will say it's permissible not to give away 10% of your income to help others as effectively as is possible. But if they will almost all regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless. In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover: • Should we treat thought experiments that feature very large numbers with great suspicion? • If we had to allow someone to die to avoid preventing the World Cup final from being broadcast to the world, is that permissible? • What might a virtue ethicist regard as 'doing the most good'? • If a deontological theory of morality parted ways with common effective altruist practices, how would that likely be? • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view? 
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Clearer Thinking with Spencer Greenberg
Estimating the long-term impact of our actions today (with Will MacAskill)

Clearer Thinking with Spencer Greenberg

Sep 7, 2022 · 66:27


What is longtermism? Is the long-term future of humanity (or life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one in a million chance of stopping World War III, then should I devote my life to pursuing that plan?) If we want to know, say, whether protests are effective or not, we merely need to gather and analyze existing data; but how can we estimate whether interventions implemented in the present will be successful in the very far future?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator–backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe The Future.

The Nonlinear Library
EA - Founding the Against Malaria Foundation: Rob Mather's story by Giving What We Can

The Nonlinear Library

Aug 30, 2022 · 0:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Founding the Against Malaria Foundation: Rob Mather's story, published by Giving What We Can on August 30, 2022 on The Effective Altruism Forum.

Giving What We Can interviewed Rob Mather, the founder and CEO of AMF, and we've created a short video about how AMF came to be and about their amazing work. If you have a few spare minutes to watch, like, comment on, and share the video, it really helps us get more traction to share a great story! We'll also be releasing the full interview with Rob in the coming weeks.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Effective altruism's billionaires aren't taxed enough. But they're trying. by Luke Freeman

The Nonlinear Library

Aug 24, 2022 · 1:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism's billionaires aren't taxed enough. But they're trying., published by Luke Freeman on August 24, 2022 on The Effective Altruism Forum.

Dylan Matthews just posted a Vox article, "If you're such an effective altruist, how come you're so rich?", which addresses critics of effective altruism's billionaires.

My TL;DR:

  • A lot of recent criticism of EA seems to come from the fact that it now has a couple of billionaires as supporters.
  • These billionaires, however, are some of the biggest donors to US candidates who would increase taxes on them.
  • They openly support raising taxes; e.g. Moskovitz tweeted the other day: "I'm for raising taxes and help elect Dems to do it".
  • The broader EA community skews heavily left-of-center (typically supportive of higher taxes and social welfare).
  • Effective altruism was founded explicitly on voluntary redistribution of income from people in high-income countries to low-income countries (e.g. Giving What We Can), and most of the community's founders give a significant portion of their incomes.
  • Given that the billionaires do exist, what else would you rather they spend money on?

That's just my TL;DR – feel free to put in your own summaries, comments, and critiques below.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Giving What We Can Podcast
The story of Rob Mather & The Against Malaria Foundation: the journey to founding one of the world's most effective charities

The Giving What We Can Podcast

Aug 24, 2022 · 11:56


Hear the powerful, personal story behind AMF, the organisation responsible for protecting 400 million people from malaria (roughly 40% of the entire population of sub-Saharan Africa!) and saving tens of thousands of lives. It started with one man's fundraiser for a little girl who had suffered 90% burns in a house fire at her home in Suffolk, England, and turned into one of the most efficient and effective charities in the world. Rob Mather's journey shows us the power of combining the head and the heart to make a tremendous difference in the lives of others.

Watch this story on our YouTube channel: https://youtu.be/Ex7hgpXfw0U

The Nonlinear Library
EA - The EA community might be neglecting the value of influencing people by JulianHazell

The Nonlinear Library

Aug 22, 2022 · 15:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community might be neglecting the value of influencing people, published by JulianHazell on August 22, 2022 on The Effective Altruism Forum.

TL;DR

  • Most community-building effort currently goes towards creating new, highly-engaged EAs, yet the vast majority of people who can do things to help further EA goals will not wish to be highly engaged.
  • You don't need to actually be an "EA" to do effectively altruistic things, which is why influencing people with EA ideas can be very useful.
  • While we already do some influencing, we might want to do more on the margin, especially if we feel urgency about problems like AI alignment.
  • EA is special; we should try sharing the ideas/mental models/insights we have with those who can do good things with them.
  • It would be useful to have an idea of how cost-effective it is to try influencing others relative to creating highly-engaged EAs.

Epistemic status

Quite uncertain. I'm more confident in the sign of these arguments than the magnitude. This post was mostly informed by my impressions from being highly involved within the EA community over the last year or so, as well as the time I spent working at Giving What We Can and my current work at GovAI. All views articulated here are my own and do not reflect the stances of either of these organisations, or any other institutions I'm affiliated with. Finally, I'm hoping to spark a conversation rather than to make any sweeping declarations about what the community's strategy ought to be. I was inspired to write this post after reading Abraham Rowe's list of EA critiques he would like to read.

Introduction

I'm writing this post because I think that gearing the overwhelming majority of EA community-building effort towards converting young people into highly-engaged EAs might neglect the value of influence, to the detriment of our ability to get enough people working on difficult problems like AI alignment. Providing altruistically-inclined young people with opportunities to pursue highly-impactful career paths is great. I'm happy this opportunity was provided to me, and I think this work is incredibly valuable insofar as it attracts more smart, ambitious, and well-intentioned people into the EA community for comparatively little cost. But my impression is that I (alongside other highly-engaged EAs) am somewhat unusual with respect to my willingness to severely constrain the list of potential career paths I might go down, and how much I weigh my social impact relative to other factors like salary and personal enjoyment. Most people will be unwilling to do this, or might be initially turned off by the seemingly large commitment that comes with being a highly-engaged EA. Does that mean they can't contribute to the world's most pressing problems, or can only do so via donating? I don't think so — working directly on pressing problems doesn't necessarily have to be all or nothing. But my outside impression is that the current outreach approach might neglect the value of influencing the thinking of those who might never wish to become highly involved in the community, and/or those who already have influence over the state of affairs. You know, the people who (by and large) control/will control the levers of power in government, business, academia, thought leadership, journalism, and policy.
To be clear, I don't think we should deliberately soft sell EA when communicating externally, but we should be aware that some ideas that sound perfectly reasonable to us might sound off-putting to others. Moreover, we should also be aware that packaging these ideas together as part of the “EA worldview” might risk even just one weird part of the package putting someone off the other ideas (that they might otherwise endorse!) entirely. In this case, it could be better to just strategically push the core thing ...

The Nonlinear Library
EA - The 3rd wave of EA is coming - what does it mean for you? by Jakob

The Nonlinear Library

Aug 20, 2022 · 8:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 3rd wave of EA is coming - what does it mean for you?, published by Jakob on August 19, 2022 on The Effective Altruism Forum.

In this post I'll first argue that we may look back at 2021-2022 as a time when EA entered a new phase (a "third wave" or "adulthood" could be appropriate terms). To be more specific, this phase shift would entail a sustained increase in money moved by the EA movement (say, by 2-5x or so? I haven't dug into the data enough to make a proper forecast), as well as in the attention that people who don't identify as effective altruists pay to the movement. We are already seeing some of this acceleration (with e.g., Open Philanthropy and GiveWell scaling up significantly in 2021, good growth for CEA in 2021, the FTX Future Fund launching in 2022 and attracting significant attention for donations to congressional candidate Carrick Flynn, record-breaking numbers of EA Global participants in 2022, and the recent publication of Will MacAskill's book What We Owe the Future, with associated media attention), so I don't expect this to be a controversial claim. Then, I'll list some implications for 1) people who identify as (aspiring) effective altruists, 2) leaders in EA-aligned organizations, and 3) people with influence over strategy and cultural norms at the movement level ("movement leaders"). In short, many effective altruists can take this moment of positive momentum to raise their ambitions; however, it is also a precarious moment where the risk of values drift is elevated, so movement leaders should invest more than usual in steering the EA movement in the right direction. Many of these thoughts have already been expressed more eloquently elsewhere; please consider this post my 5 cents chiming in - and for some, perhaps it can serve as a convenient summary.

A short history of EA

First, one may (somewhat simplified) consider the history of EA in roughly two waves: one may trace the origins of the movement to the first wave ("the infancy" of the movement), which I'll limit to pre-2011, and the second wave (the "youth phase" of the movement), which I'll consider to have lasted around a decade. I'll give some more details below, but since I only learned of EA in 2014 myself, I'm not the best person to give a detailed account of the early days. The following paragraphs are a summary and an interpretation of the events described by CEA, Wikipedia, and various Forum articles on the history of EA, and so may be skipped by some readers (though they form some of the context for the argument further down).

During the first wave, organizations such as GiveWell (2007), Giving What We Can (2009), and 80,000 Hours (2011) were launched. Peter Singer published his book The Life You Can Save (2009), and the rationalist community grew up around the Overcoming Bias/LessWrong blogs (2006). Individuals within these various communities and organizations sometimes interacted with each other, but there was no explicit unifying umbrella. 2011 marked a pivotal year, with the founding of the Centre for Effective Altruism (and the invention of the term "effective altruist"), and the launch of GiveWell Labs, which later became Open Philanthropy. This is why I've used this as the pivotal year from wave 1 to wave 2.
During the following decade, the amount of funding in the EA movement increased substantially, and many new organizations and projects were started. Some projects took names that explicitly showed a link to the EA movement, such as EA Funds, EA Global (which started as the EA Summit in 2013), and the EA Foundation, and local effective altruism groups all over the world. Others, like Longview Philanthropy, Effective Giving, and Charity Entrepreneurship, endorsed many of the same values and ideas, and were started by people affiliated with the movement, but did not put the link expl...

The Giving What We Can Podcast
#9 - Joey Savoie: Making it easier for great charities to exist

The Giving What We Can Podcast

Aug 16, 2022 · 33:14


In this interview, we chat to Joey Savoie, CEO and co-founder of Charity Entrepreneurship, an incubator that helps launch high-impact nonprofits by connecting entrepreneurs with effective ideas, training, and funding. Joey provides an overview of Charity Entrepreneurship and the problems they're working to solve; he also shares his thoughts on the charity sector, the concept of efficacy, and the role altruism plays in his life. Since its founding in 2018, Charity Entrepreneurship has helped launch 18 impactful nonprofits, which have reached over 5 million humans and improved the lives of 1.2 million animals. You can donate to Charity Entrepreneurship via Giving What We Can: https://donate.givingwhatwecan.org/partners/charity-entrepreneurship WANT TO LEARN MORE ABOUT CHARITY ENTREPRENEURSHIP?

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
207 | William MacAskill on Maximizing Good in the Present and Future

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Aug 15, 2022 · 102:23


It's always a little humbling to think about what effects your words and actions might have on other people, not only right now but potentially well into the future. Now take that humble feeling and promote it to all of humanity, and arbitrarily far in time. How do our actions as a society affect all the potential generations to come? William MacAskill is best known as a founder of the Effective Altruism movement, and is now the author of What We Owe the Future. In this new book he makes the case for longtermism: the idea that we should put substantial effort into positively influencing the long-term future. We talk about the pros and cons of that view, including the underlying philosophical presuppositions.

Mindscape listeners can get 50% off What We Owe the Future, thanks to a partnership between the Forethought Foundation and Bookshop.org. Just click here and use code MINDSCAPE50 at checkout.

Support Mindscape on Patreon.

William (Will) MacAskill received his D.Phil. in philosophy from the University of Oxford. He is currently an associate professor of philosophy at Oxford, as well as a research fellow at the Global Priorities Institute, director of the Forethought Foundation for Global Priorities Research, President of the Centre for Effective Altruism, and co-founder of 80,000 Hours and Giving What We Can.

Web site
PhilPeople profile
Google Scholar publications
Wikipedia
Twitter

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Valmy
Will MacAskill - Longtermism, Altruism, History, & Technology

The Valmy

Aug 12, 2022 · 56:07


Podcast: The Lunar Society (LS 30 · TOP 5%)
Episode: Will MacAskill - Longtermism, Altruism, History, & Technology
Release date: 2022-08-09

Will MacAskill is one of the founders of the Effective Altruist movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Episode website + Transcript here. Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps
(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06
Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20
Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23
My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32
Yeah, I think it is contingent. Maybe not on the order of, "this would never have happened," but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history, and not taken on.

We can go back to ancient China: the Mohists defended an impartial view of morality, and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then, there were early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until early 2010 after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events, like Alex meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49
If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante, you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09
Absolutely. Taking history seriously and appreciating the contingency of values, appreciating that if the Nazis had won the World War, we would all be thinking, "wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore.
What huge moral progress we had then!" That's a terrifying thought. I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, "Oh, we should lock in the Western values we have." Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56
So that makes a lot of sense. But I'm asking a slightly separate question - not only are there possible values that could be better than ours, but should we expect our values - we have the sense that we've made moral progress (things are better than they were before, or better than most possible other worlds in 2100 or 2200) - should we not expect that to be the case? Should our priors be that these are 'meh' values?

Will MacAskill 3:19
Our priors should be that our values are as good as expected on average. Then you can make an assessment like, "Are the values of today going particularly well?" There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India, rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and are huge gains.

Dwarkesh Patel 4:14
If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing. Maybe you're risking regression to the mean if you just have 1,000 years of progress.

Will MacAskill 4:30
Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is, "What's right, morally speaking? What do the best arguments support?" I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05
In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. And so, do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean Longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32
My view goes the opposite of the Burkean view. We are cultural creatures in our nature, and are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. It works well in a low-change environment. The environment we evolved towards didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented.
It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34
But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47
Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subject, if you want to do good in the world, is philosophy of economics. But we've got that in abundance compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the Coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47
In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter?

Will MacAskill 8:17
In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet could have been invented thousands of years in the past.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there're thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10
It seems that particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22
In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-industrial-revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57
The model here is, "These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyways?"

Will MacAskill 10:11
Even in the case of the steam engine that seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point?
Would similar technologies that were vital to the Industrial Revolution have been developed? Yes, there are very strong incentives for doing so.

If there's a culture that's into making textiles in an automated way, as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.

Dwarkesh Patel 11:06
When people think of somebody like Norman Borlaug and the Green Revolution, it's like, "If you could have done something like that, you'd be the greatest person in the 20th century." Obviously, he's still a very good man, but would that not be our view? Do you think the Green Revolution would have happened anyways?

Will MacAskill 11:22
Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don't think a billion people would have died. Rather, similar developments would have happened shortly afterwards.

Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But it's not as many as simply saying, "Oh, this tech was used by a billion people who would have otherwise been at risk of starvation." In fact, not long afterwards, there were similar kinds of agricultural development.

Who changes history?

Dwarkesh Patel 12:02
What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?

Will MacAskill 12:12
Not quite moral philosophers, although there are some examples. Sticking on science and technology: if you look at Einstein, the theory of special relativity would have been developed shortly afterwards. However, the theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But we're still only talking about decades rather than millennia. Moral philosophers could make a long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run differences. Moral activists as well.

Dwarkesh Patel 13:04
If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?

Will MacAskill 13:20
As things turned out, Marx will not be influential over the long-term future. But that could have gone another way. It's not such a wildly different history in which, rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better able the ruling ideology is to cement itself and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.

Let's say a world government is based around those ideas; then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that persists forever in accordance with that ideology.

Dwarkesh Patel 14:20
The death of dictators is especially interesting when you're thinking about contingency, because there are huge changes in the regime. It makes me think the actual individual there was very important, and who they happened to be was contingent and persistent in some interesting ways.

Will MacAskill 14:37
If you've got a dictatorship, then you've got a single person ruling the society.
That means it's heavily contingent on the views, values, beliefs, and personality of that person.

Scientific talent

Dwarkesh Patel 14:48
Going back to stagnation: in the book, you're very concerned about fertility. It seems your model of how scientific and technological progress happens is the number of people times average researcher productivity. If researcher productivity is declining and the number of people isn't growing that fast, then that's concerning.

Will MacAskill 15:07
Yes, the number of people times the fraction of the population devoted to R&D.

Dwarkesh Patel 15:11
Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion-dollar company—do these examples suggest that the organization and congregation of researchers matter more than the total amount?

Will MacAskill 15:36
The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its scientific golden age is that the political landscape changed such that what was incentivized in the 10th/11th century AD was theological investigation rather than scientific investigation.

Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany is that all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at Sparta versus Athens, what was the difference? They had different cultures, and intellectual inquiry was more rewarded in Athens.

Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.

Dwarkesh Patel 16:58
If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?

Will MacAskill 17:14
I wouldn't say that at all. The model we're working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things by taking the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.

However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.

Longtermist institutional reform

Dwarkesh Patel 18:00
I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?

Will MacAskill 18:23
The thing I'll caveat with longtermist institutions is that I'm pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is you have a random selection from the population.
How would you ensure that incentives are aligned? In 30 years' time, their performance will be assessed by a panel of people who look back and assess the policies' effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, for the people in 30 years' time, both their policies and their assessment of the previous assembly get assessed by another assembly 30 years after that, and so on. Can you get that to work? Maybe in theory—I'm skeptical in practice, but I would love some country to try it and see what happens.

There is some evidence that you can get people to take the interests of future generations more seriously just by telling them their role. There was one study that got people to put on ceremonial robes and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.

Dwarkesh Patel 20:30

If you are on that board that is judging these people, is there a metric like GDP growth that would be a good heuristic for assessing past policy decisions?

Will MacAskill 20:48

There are some things you could use: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of superforecasters predicting the chance of a war between great powers occurring in the next ten years, aggregated into a war index.

That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed in, because you wouldn't want something only incentivizing economic growth at the expense of tail risks.

Dwarkesh Patel 21:42

Would that be your objection to a scheme like Robin Hanson's, about maximizing expected future GDP using prediction markets and making decisions that way?

Will MacAskill 21:50

Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson's idea of voting on values but betting on beliefs: people vote on what collection of goods they want, where GDP and unemployment might be good metrics, and beyond that, it's pure prediction markets. It's something I'd love to see tried. It's a piece of speculative political philosophy about how a society could be structured in an extraordinarily different way, and it's incredibly neglected.

Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed, or are simply not liquid enough. There hasn't been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things: you could have laws about what things can be voted on or predicted in the prediction market, and you could have government subsidies to ensure there's enough liquidity. Overall, it's plausibly promising, and I'd love to see it tried out at a city level or something.

Dwarkesh Patel 23:13

Let's take a scenario where the government starts taking the impact on the long term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There are environmental review boards that try to assess the environmental impact of new projects and reject proposals based on certain metrics.

The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment.
With longtermism, it takes a long time to assess the actual impact of something, yet policymakers would be tasked with evaluating those long-term impacts. Are you worried that it'd be a system that'd be easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09

It's potentially a devastating worry. You create something to represent future people, but they're not around to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements has been similar. The environment can't represent itself—it can't say what its interests are. What is the right answer there? Maybe there are speculative proposals, like having a representative body that assesses these things and is judged by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see whether any of these proposals would be robust over the long term, rather than narrowly focused.

Regulation requiring liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things. But there are major problems with implementation for any of them.

Dwarkesh Patel 25:35

If we don't know how we would do it correctly, do you have an idea of how environmentalism could have been codified better? Why was that not a success in some cases?

Will MacAskill 25:46

Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could've had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56

Theoretically, the incentives of our most long-term U.S. institutions are to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies that the company can't be around if there's an existential risk…

Will MacAskill 26:18

I don't think so. Different institutions have different rates of decay associated with them. A corporation that is in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas if you look at universities, Oxford and Cambridge are 800 years old. The University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.

Dwarkesh Patel 27:16

Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24

Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO and the board and the shareholders) for that company to last a long time?
No, they don't particularly care. Some of them do, but most don't. Whereas other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms could be extremely dangerous. There were horrible things proposed for the U.S. Constitution, like a constitutional amendment protecting the legal right to slavery. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57

You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04

We're at a specific type of 'moment of plasticity' for two reasons. One is that the world is completely unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments, like people coming on your podcast debating what's morally correct. It's plausible to me that one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at this period where things can really change. But it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there was a single global culture or world government that preferred ideological conformity, combined with technology, it becomes unclear why that would end over the long term. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) where the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46

Isn't the fact that we are in a time of interconnectedness that won't last if we settle space — isn't that a bit of a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01

The question is whether the control will happen before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies left to be discovered. But I'm worried that the control will happen earlier. I'm worried the control might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53

Hm, right.
Going back to the long term of the longtermist movement: there are many foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation. But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18

I don't have strong views about those particular examples, but I have two natural thoughts. First, organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out after 100 years and then 200 years in different fractions of the amount invested. But he specified that it was to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally: for the prosperity of people in Philadelphia and Boston. That would plausibly have had more impact.

The second is a 'regression to the mean' argument. You have some new foundation and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because you're changing the people involved. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40

Going back to that dead-hand problem: if you specify your mission too narrowly and it doesn't make sense in the future, is there a trade-off? If you're too broad, do you make space for future actors—malicious or uncreative—to take the movement in ways that you would not approve of? With regards to doing good for Philadelphia, what if it turns into something that Ben Franklin would not have thought was good for Philadelphia?

Will MacAskill 34:11

It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith's apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable. It's certainly the trend over time. In which case, if we're sharing similar broad goals and they're implementing them in a different way, then so be it.

How good can the future be?

Dwarkesh Patel 34:52

Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems, because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11

Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being any poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good.
At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02

Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive their life at every moment, but 31% of Americans said that they would not want to relive their life at every moment. So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29

I think the numbers are lower than that, from memory at least. From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, and happier with their lives, than people in the US were. Honestly, I don't want to generalize too far from that, because we were comparing relatively poor Americans to relatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. So on one hand, I don't want to draw any strong conclusions from that. But it is pretty striking as a piece of information, given that you find people in richer countries considerably happier, on average, than people in poorer countries.

Dwarkesh Patel 37:41

I guess you do generalize, in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50

Exactly. I put together various bits of evidence showing that approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think they contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that you'd blink and reach the end of whatever activity you're engaged in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In that case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and also ask people about the trade-offs they would be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18

Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that you expect scientists who are explicitly trying to maximize their impact might have an adverse impact, because they might be ignoring the foundational research that wouldn't be obvious in this way of thinking but might be more important.

Do you think this could be a general problem with longtermism? If you're trying to find the things that are most important for the long term, might you be missing things that wouldn't be obvious thinking this way?

Will MacAskill 39:48

Yeah, I think that's a risk.
Among the ways that people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a Third World War, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

It's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past, and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in Effective Altruism were realizing early on that the coronavirus pandemic was going to be a big deal. At an early stage, they were worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for decades. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things and there are going to be tons of surprises, there are some things, especially when it comes to fairly long-standing technological trends, about which we can make reasonable predictions — at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19

It sounds like you're saying that we know which things are important now. But if something a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31

As for me versus Patrick Collison and Tyler Cowen, and who is correct: we will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. But we might get suggestive evidence earlier. If I and others engaged in longtermism make specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that which are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38

Doesn't what you were saying earlier about the contingency of technology imply that, even on their worldview of doing whatever has had the most impact in the past, if what's had the most impact in the past is changing values, then that, rather than economic growth or trying to change the rate of economic growth, might be the most important thing?

Will MacAskill 43:57

I really do take seriously the argument from how people have acted in the past, especially people trying to make a long-lasting impact: which of the things they did made sense and which didn't.
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and therefore future generations being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground probably would have been harmful, given that Britain at the time was burning amounts of coal so small that the climate change effect is negligible at that level.

But we can look at other things that John Stuart Mill did, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to have been pretty good, and it seems to have stood the test of time. That's one historical data point. But potentially, we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36

Do you think the growing ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; but on the negative side, it prevents things like human challenge trials and could cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54

On the question of global integration, you're absolutely right: it's double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most of the possible value that way.

The solution is keeping the good bits and not having the bad bits. For example, in a liberal constitution, you can have a country that is bound in certain ways by its constitution and by certain laws, yet still enables a flourishing diversity of moral thought and different ways of life. Similarly, in the world, you can have very strong regulation and treaties that deal only with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34

Yeah, though it seems the historical trend is that when you have a federated political body, even if the central powers are constitutionally constrained, over time they tend to gain more power. You can look at the U.S.; you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52

Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promotes moral diversity, moral change, and moral progress.
But that needn't be the case.

Dwarkesh Patel 48:06

Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean that there isn't enough time for longtermism to gain from changing moral values?

Will MacAskill 48:32

There are lots of people I know and respect fairly well who think that Artificial General Intelligence will likely lead to singularity-level technological progress, an extremely rapid rate of technological progress, within the next 10-20 years. If so, you're right: value changes are something that pay off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the Effective Altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it doesn't take that long. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, that was very fast moral change by historical standards. If you're thinking that we've got ten years till the end of history, then don't broadly try to promote better values. But we should put very significant probability mass on the idea that we will not hit some end of history this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59

Have you heard of the Slime Mold Time Mold potato diet?

Will MacAskill 50:03

I have indeed heard of the Slime Mold Time Mold potato diet, and I was tempted as a gimmick to try it. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25

Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things that you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41

It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or being on the boards of organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long term. I've had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34

You mentioned in the book and elsewhere that there's a scarcity of people thinking about big picture questions—How contingent is history? Are people generally happy?—Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54

I just think there are many issues that are enormously important but are not incentivized anywhere in the world. Companies don't incentivize work on them because they're too big picture.
Some of these questions are: Is the future good, rather than bad? If there were a global civilizational collapse, would we recover? How likely is a long stagnation? There's almost no work done on any of these topics. Companies aren't interested; the questions are too grand in scale.

Academia has developed a culture where you don't tackle such problems. Partly, that's because they fall through the cracks of different disciplines. Partly, it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It wasn't always that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology. Probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20

Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25

I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussions about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they currently exist. It's extremely hard to break in, or to create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10

Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the most popular questions? Big picture thinking, but also looking at very specific questions and issues that come up. A super interesting read.

Will MacAskill 54:34

Great. Well, thank you so much!

Dwarkesh Patel 54:38

Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39

Yeah, sure. What We Owe The Future is out on August 16 in the US and September 1 in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good. It has a list of recommended charities. 80,000 Hours—if you want to use your career to do good—is a place to go for advice on what careers have the biggest impact. They provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33

Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill 54:39

Thanks so much, I loved it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

Deep Dive with Ali Abdaal
Moral Philosopher Will MacAskill on What We Owe The Future

Deep Dive with Ali Abdaal

Play Episode Listen Later Aug 11, 2022 172:44


How can we do the most good with our careers, money and lives? And what are the things that we can do right now to positively impact future generations to come? This is the mission of the Effective Altruism (EA) movement co-founded by Will MacAskill, Associate Professor in Philosophy at the University of Oxford and co-founder of the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours. In the conversation, Will and I talk about the fundamentals of EA, his brand new book 'What We Owe The Future', the idea of 'longtermism', the most pressing existential threats humanity is facing and what we can do about them, why giving away your income will make you happier, why your career choice is the biggest choice you'll make in your life, and much more. 

The Nonlinear Library
EA - Announcing the Longtermism Fund by Michael Townsend

The Nonlinear Library

Play Episode Listen Later Aug 11, 2022 8:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Longtermism Fund, published by Michael Townsend on August 11, 2022 on The Effective Altruism Forum.

Longview Philanthropy and Giving What We Can would like to announce a new fund for donors looking to support longtermist work: the Longtermism Fund. In this post, we outline the motivation behind the fund, reasons you may (or may not) choose to donate using it, and some questions we expect donors may have.

What work will the Longtermism Fund support? The fund supports work that:

- Reduces existential and catastrophic risks, such as those coming from misaligned artificial intelligence, pandemics, and nuclear war.
- Promotes, improves, and implements key longtermist ideas.

The Longtermism Fund aims to be a strong donation option for a wide range of donors interested in longtermism. The fund focuses on organisations that:

- Have a compelling and transparent case in favour of their cost effectiveness that most donors interested in longtermism will understand; and/or
- May benefit from being funded by a large number of donors (rather than one specific organisation or donor) — for example, organisations promoting longtermist ideas to the broader public may be more effective if they have been democratically funded.

There are other funders supporting longtermist work in this space, such as Open Philanthropy and the FTX Future Fund. The Longtermism Fund's grantmaking is managed by Longview Philanthropy, which works closely with these other organisations and is well positioned to coordinate with them to efficiently direct funding to the most cost-effective organisations. The fund will make grants approximately once each quarter.

To give donors a sense of the kind of work within the fund's scope, here are some examples of organisations the fund would likely give grants to if funds were disbursed today:

- The Johns Hopkins Center for Health Security (CHS) — CHS is an independent research organisation working to improve organisations, systems, and tools used to prevent and respond to public health crises, including pandemics.
- Council on Strategic Risks (CSR) — CSR analyses and addresses core systemic risks to security. They focus on how different risks intersect (for example, how nuclear and climate risks may exacerbate each other) and seek to address them by working with key decision-makers.
- Centre for Human-Compatible Artificial Intelligence (CHAI) — CHAI is a research organisation aiming to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.
- Centre for the Governance of AI (GovAI) — GovAI is a policy research organisation that aims to build "a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI."

The vision behind the Longtermism Fund

We think that longtermism as an idea and movement is likely to become significantly more mainstream — especially with Will MacAskill's soon-to-be-released book, What We Owe The Future, and popular creators becoming more involved in promoting longtermist ideas. But what's the call to action? For many who want to contribute to longtermism, focusing on their careers (perhaps by pursuing one of 80,000 Hours' high-impact career paths) will be their best option. 
But for many others — and perhaps for most people — the most straightforward and accessible way to contribute is through donations. Our aim is for the Longtermism Fund to make it easier for people to support highly effective organisations working to improve the long-term future. Not only do we think that the money this fund will move will have significant impact, we also think the fund will provide another avenue for the broader community to engage with and implement these...

The Lunar Society
36: Will MacAskill - Longtermism, Altruism, History, & Technology

The Lunar Society

Play Episode Listen Later Aug 9, 2022 56:07


Will MacAskill is one of the founders of the Effective Altruism movement and the author of the upcoming book, What We Owe The Future. We talk about improving the future, risk of extinction & collapse, technological & moral change, problems of academia, who changes history, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Follow Will on Twitter. Follow me on Twitter for updates on future episodes. Subscribe to find out about future episodes!

Timestamps

(00:23) - Effective Altruism and Western values
(07:47) - The contingency of technology
(12:02) - Who changes history?
(18:00) - Longtermist institutional reform
(25:56) - Are companies longtermist?
(28:57) - Living in an era of plasticity
(34:52) - How good can the future be?
(39:18) - Contra Tyler Cowen on what's most important
(45:36) - AI and the centralization of power
(51:34) - The problems with academia

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Dwarkesh Patel 0:06

Okay, today I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement and, most recently, the author of the upcoming book, What We Owe The Future. Will, thanks for coming on the podcast.

Will MacAskill 0:20

Thanks so much for having me on.

Effective Altruism and Western values

Dwarkesh Patel 0:23

My first question is: What is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill 0:32

Yeah, I think it is contingent. Maybe not on the order of, "this would never have happened," but at least on the order of decades. Evidence that Effective Altruism is somewhat contingent is that similar ideas have been promoted many times during history and not taken on.

We can go back to ancient China, where the Mohists defended an impartial view of morality and took very strategic actions to help all people, in particular providing defensive assistance to cities under siege. Then there were the early utilitarians. Effective Altruism is broader than utilitarianism, but has some similarities. Even Peter Singer in the 70s had been promoting the idea that we should be giving most of our income to help the very poor — and didn't get a lot of traction until early 2010, after GiveWell and Giving What We Can launched.

What explains the rise of it? I think it was a good idea waiting to happen. At some point, the internet helped to gather together a lot of like-minded people, which wasn't possible otherwise. There were some particularly lucky events, like Alex meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.

Dwarkesh Patel 1:49

If it's true, as you say in the book, that moral values are very contingent, then shouldn't that make us suspect that modern Western values aren't that good? They're mediocre, or worse, because ex ante you would expect to end up with a median of all the values we could have had at this point. Obviously, we'd be biased in favor of whatever values we were brought up in.

Will MacAskill 2:09

Absolutely. Taking history seriously means appreciating the contingency of values: appreciating that if the Nazis had won the World War, we would all be thinking, "wow, I'm so glad that moral progress happened the way it did, and we don't have Jewish people around anymore. What huge moral progress we had then!" That's a terrifying thought. 
I think it should make us take seriously the fact that we're very far away from the moral truth.

One of the lessons I draw in the book is that we should not think we're at the end of moral progress. We should not think, "Oh, we should lock in the Western values we have." Instead, we should spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than whichever happened to win out.

Dwarkesh Patel 2:56

So that makes a lot of sense. But I'm asking a slightly separate question. Not only are there possible values that could be better than ours, but should we expect our values - we have the sense that we've made moral progress (things are better than they were before, or better than most possible other worlds in 2100 or 2200) - should we not expect that to be the case? Should our priors be that these are 'meh' values?

Will MacAskill 3:19

Our priors should be that our values are as good as expected on average. Then you can make an assessment like, "Are the values of today going particularly well?" There are some arguments you could make for saying no. Perhaps if the Industrial Revolution had happened in India rather than in Western Europe, then perhaps we wouldn't have wide-scale factory farming—which I think is a moral atrocity. Having said that, my view is that we're doing better than average.

If civilization were just a redraw, then things would look worse in terms of our moral beliefs and attitudes. The abolition of slavery, the feminist movement, liberalism itself, democracy—these are all things that we could have lost and that are huge gains.

Dwarkesh Patel 4:14

If that's true, does that make the prospect of a long reflection dangerous? If moral progress is a random walk, and we've ended up with a lucky lottery, then you're possibly reversing it. Maybe you're risking regression to the mean if you just have 1,000 years of reflection.

Will MacAskill 4:30

Moral progress isn't a random walk in general. There are many forces that act on culture and on what people believe. One of them is: what's right, morally speaking? What do the best arguments support? I think it's a weak force, unfortunately.

The idea of the long reflection is getting society into a state where, before we take any drastic actions that might lock in a particular set of values, we allow this force of reason and empathy and debate and good-hearted moral inquiry to guide which values we end up with.

Are we unwise?

Dwarkesh Patel 5:05

In the book, you make this interesting analogy where humans at this point in history are like teenagers. But another common impression that people have of teenagers is that they disregard wisdom and tradition and the opinions of adults too early and too often. So do you think it makes sense to extend the analogy this way, and suggest that we should be Burkean longtermists and reject these inside-view esoteric threats?

Will MacAskill 5:32

My view goes the opposite way to the Burkean view. We are cultural creatures by nature, and we are very inclined to agree with what other people think even if we don't understand the underlying mechanisms. That works well in a low-change environment. The environment we evolved in didn't change very much. We were hunter-gatherers for hundreds of thousands of years.

Now, we're in this period of enormous change, where the economy is doubling every 20 years and new technologies arrive every single year. That's unprecedented. 
It means that we should be trying to figure things out from first principles.

Dwarkesh Patel 6:34

But at current margins, do you think that's still the case? If a lot of EA and longtermist thought is first principles, do you think that more history would be better than the marginal first-principles thinker?

Will MacAskill 6:47

Two things. If it's about an understanding of history, then I'd love EA to have a better historical understanding. The most important subjects, if you want to do good in the world, are philosophy and economics. But we've got those in abundance, compared to there being very little historical knowledge in the EA community.

Should there be even more first-principles thinking? First-principles thinking paid off pretty well in the course of the coronavirus pandemic. From January 2020, my Facebook wall was completely saturated with people freaking out, or taking it very seriously, in a way that the existing institutions weren't. The existing institutions weren't properly updating to a new environment and new evidence.

The contingency of technology

Dwarkesh Patel 7:47

In your book, you point out several examples of societies that went through hardship. Hiroshima after the bombings, Europe after the Black Death—they seem to have rebounded relatively quickly. Does this make you think that perhaps the role of contingency in history, especially economic history, is not that large? And that it implies a Solow model of growth? That even if bad things happen, you can rebound and it really didn't matter?

Will MacAskill 8:17

In economic terms, that's the big difference between economic or technological progress and moral progress. In the long run, economic or technological progress is very non-contingent. The Egyptians had an early version of the steam engine; semaphore was only developed very late, yet it could have been invented thousands of years earlier.

But in the long run, the instrumental benefits of tech progress, and the incentives towards tech progress and economic growth, are so strong that we get there in a wide array of circumstances. Imagine there are thousands of different societies, and none are growing except for one. In the long run, that one becomes the whole economy.

Dwarkesh Patel 9:10

It seems that the particular example you gave of the Egyptians having some ancient form of a steam engine points towards there being more contingency? Perhaps because the steam engine comes up in many societies, but it only gets turned into an industrial revolution in one?

Will MacAskill 9:22

In that particular case, there's a big debate about whether the quality of metalwork made it actually possible to build a proper steam engine at that time. I mentioned those to share some amazing examples of contingency prior to the Industrial Revolution.

It's still contingency on the order of centuries to thousands of years. In the post-Industrial Revolution world, there's much less contingency. It's much harder to see technologies that wouldn't have happened within decades if they hadn't been developed when they were.

Dwarkesh Patel 9:57

The model here is, "These general-purpose changes in the state of technology are contingent, and it'd be very important to try to engineer one of those. But other than that, it's going to get done by some guy creating a start-up anyway"?

Will MacAskill 10:11

Even in the case of the steam engine, which seemed contingent, it gets developed in the long run. If the Industrial Revolution hadn't happened in Britain in the 18th century, would it have happened at some point?
Would similar technologies that were vital to the industrial revolution developed? Yes, there are very strong incentives for doing so.If there’s a culture that's into making textiles in an automated way as opposed to England in the 18th century, then that economy will take over the world. There's a structural reason why economic growth is much less contingent than moral progress.Dwarkesh Patel 11:06When people think of somebody like Norman Borlaug and the Green Revolution. It's like, “If you could have done something that, you'd be the greatest person in the 20th century.” Obviously, he's still a very good man, but would that not be our view? Do you think the green revolution would have happened anyways?Will MacAskill 11:22Yes. Norman Borlaug is sometimes credited with saving a billion lives. He was huge. He was a good force for the world. Had Norman Borlaug not existed, I don’t think a billion people would have died. Rather, similar developments would have happened shortly afterwards.Perhaps he saved tens of millions of lives—and that's a lot of lives for a person to save. But, it's not as many as simply saying, “Oh, this tech was used by a billion people who would have otherwise been at risk of starvation.” In fact, not long afterwards, there were similar kinds of agricultural development.Who changes history?Dwarkesh Patel 12:02What kind of profession or career choice tends to lead to the highest counterfactual impact? Is it moral philosophers?Will MacAskill 12:12Not quite moral philosophers, although there are some examples. Sticking on science technology, if you look at Einstein, theory of special relativity would have been developed shortly afterwards. However, theory of general relativity was plausibly decades in advance. Sometimes, you get surprising leaps. But, we're still only talking about decades rather than millennia. Moral philosophers could make long-term difference. Marx and Engels made an enormous, long-run difference. Religious leaders like Mohammed, Jesus, and Confucius made enormous and contingent, long-run difference. Moral activists as well.Dwarkesh Patel 13:04If you think that the changeover in the landscape of ideas is very quick today, would you still think that somebody like Marx will be considered very influential in the long future? Communism lasted less than a century, right?Will MacAskill 13:20As things turned out, Marx will not be influential over the long term future. But that could have gone another way. It's not such a wildly different history. Rather than liberalism emerging dominant in the 20th century, it was communism. The better technology gets, the better the ruling ideology is to cement its ideology and persist for a long time. You can get a set of knock-on effects where communism wins the war of ideas in the 20th century.Let’s say a world-government is based around those ideas, then, via anti-aging technology, genetic-enhancement technology, cloning, or artificial intelligence, it's able to build a society that possesses forever in accordance with that ideology.Dwarkesh Patel 14:20The death of dictators is especially interesting when you're thinking about contingency because there are huge changes in the regime. It makes me think the actual individual there was very important and who they happened to be was contingent and persistent in some interesting ways.Will MacAskill 14:37If you've got a dictatorship, then you've got single person ruling the society. 
That means it's heavily contingent on the views, values, beliefs, and personality of that person.Scientific talentDwarkesh Patel 14:48Going back to the second nation, in the book, you're very concerned about fertility. It seems your model about scientific and technological progress happens is number of people times average researcher productivity. If resource productivity is declining and the number of people isn't growing that fast, then that's concerning.Will MacAskill 15:07Yes, number of people times fraction of the population devoted to R&D.Dwarkesh Patel 15:11Thanks for the clarification. It seems that there have been a lot of intense concentrations of talent and progress in history. Venice, Athens, or even something like FTX, right? There are 20 developers making this a multibillion dollar company—do these examples suggest that organization and congregation of researchers matter more than the total amount?Will MacAskill 15:36The model works reasonably well. Throughout history, you start from a very low technological baseline compared to today. Most people aren't even trying to innovate. One argument for why Baghdad lost its Scientific Golden Age is because the political landscape changed such that what was incentivized was theological investigation rather than scientific investigation in the 10th/11th century AD.Similarly, one argument for why Britain had a scientific and industrial revolution rather than Germany was because all of the intellectual talent in Germany was focused on making amazing music. That doesn't compound in the way that making textiles does. If you look at like Sparta versus Athens, what was the difference? They had different cultures and intellectual inquiry was more rewarded in Athens.Because they're starting from a lower base, people trying to do something that looks like what we now think of as intellectual inquiry have an enormous impact.Dwarkesh Patel 16:58If you take an example like Bell Labs, the low-hanging fruit is gone by the late 20th century. You have this one small organization that has six Nobel Prizes. Is this a coincidence?Will MacAskill 17:14I wouldn't say that at all. The model we’re working with is the size of the population times the fraction of the population doing R&D. It's the simplest model you can have. Bell Labs is punching above its weight. You can create amazing things from a certain environment with the most productive people and putting them in an environment where they're ten times more productive than they would otherwise be.However, when you're looking at the grand sweep of history, those effects are comparatively small compared to the broader culture of a society or the sheer size of a population.Longtermist institutional reformDwarkesh Patel 18:00I want to talk about your paper on longtermist institutional reform. One of the things you advocate in this paper is that we should have one of the houses be dedicated towards longtermist priorities. Can you name some specific performance metrics you would use to judge or incentivize the group of people who make up this body?Will MacAskill 18:23The thing I'll caveat with longtermist institutions is that I’m pessimistic about them. If you're trying to represent or even give consideration to future people, you have to face the fact that they're not around and they can't lobby for themselves. However, you could have an assembly of people who have some legal regulatory power. How would you constitute that? My best guess is you have a random selection from the population? 
How would you ensure that incentives are aligned?In 30-years time, their performance will be assessed by a panel of people who look back and assess the policies’ effectiveness. Perhaps the people who are part of this assembly have their pensions paid on the basis of that assessment. Secondly, the people in 30-years time, both their policies and their assessment of the previous 30-years previous assembly get assessed by another assembly, 30-years after that, and so on. Can you get that to work? Maybe in theory—I’m skeptical in practice, but I would love some country to try it and see what happens.There is some evidence that you can get people to take the interests of future generations more seriously by just telling them their role. There was one study that got people to put on ceremonial robes, and act as trustees of the future. And they did make different policy recommendations than when they were just acting on the basis of their own beliefs and self-interest.Dwarkesh Patel 20:30If you are on that board that is judging these people, is there a metric like GDP growth that would be good heuristics for assessing past policy decisions?Will MacAskill 20:48There are some things you could do: GDP growth, homelessness, technological progress. I would absolutely want there to be an expert assessment of the risk of catastrophe. We don't have this yet, but imagine a panel of super forecasters predicting the chance of a war between great powers occurring in the next ten years that gets aggregated into a war index.That would be a lot more important than the stock market index. Risk of catastrophe would be helpful to feed into because you wouldn't want something only incentivizing economic growth at the expense of tail risks.Dwarkesh Patel 21:42Would that be your objection to a scheme like Robin Hanson’s about maximizing the expected future GDP using prediction markets and making decisions that way?Will MacAskill 21:50Maximizing future GDP is an idea I associate with Tyler Cowen. With Robin Hanson’s idea of voting on values but betting on beliefs, if people can vote on what collection of goods they want, GDP and unemployment might be good metrics. Beyond that, it's pure prediction markets. It's something I'd love to see tried. It’s an idea of speculative political philosophy about how a society could be extraordinarily different in structure that is incredibly neglected.Do I think it'll work in practice? Probably not. Most of these ideas wouldn't work. Prediction markets can be gamed or are simply not liquid enough. There hasn’t been a lot of success in prediction markets compared to forecasting. Perhaps you can solve these things. You have laws about what things can be voted on or predicted in the prediction market, you could have government subsidies to ensure there's enough liquidity. Overall, it's likely promising and I'd love to see it tried out on a city-level or something.Dwarkesh Patel 23:13Let’s take a scenario where the government starts taking the impact on the long-term seriously and institutes some reforms to integrate that perspective. As an example, you can take a look at the environmental movement. There're environmental review boards that will try to assess the environmental impact of new projects and repeal any proposals based on certain metrics.The impact here, at least in some cases, has been that groups that have no strong, plausible interest in the environment are able to game these mechanisms in order to prevent projects that would actually help the environment. 
With longtermism, it takes a long time to assess the actual impact of anything, yet policymakers would be tasked with evaluating the long-term impacts of their decisions. Are you worried that it'd be a system that's easy for malicious actors to game? And what do you think went wrong with the way that environmentalism was codified into law?

Will MacAskill 24:09
It's potentially a devastating worry. You create something to represent future people, but they're not around to lobby for themselves, so it can just be co-opted. My understanding of environmental impact statements is similar. Similarly, it's not like the environment can represent itself—it can't say what its interests are. What is the right answer there? Maybe it's the speculative proposals about having a representative body whose performance is assessed by people in 30 years' time. That's the best we've got at the moment, but we need a lot more thought to see if any of these proposals would be robust over the long term.

The alternative is things that are narrowly focused. Regulation requiring liability insurance for dangerous bio labs is not about trying to represent the interests of future generations, but it's very good for the long term. At the moment, if longtermists are trying to change the government, let's focus on a narrow set of institutional changes that are very good for the long term even if they're not in the game of representing the future. That's not to say I'm opposed to all such things, but there are major problems with implementation for any of them.

Dwarkesh Patel 25:35
If we don't know how we would do it correctly, do you have an idea of how environmentalism could have been codified better? Why was it not a success in some cases?

Will MacAskill 25:46
Honestly, I don't have a good understanding of that. I don't know if it's intrinsic to the matter or if you could've had some system that wouldn't have been co-opted in the long term.

Are companies longtermist?

Dwarkesh Patel 25:56
Theoretically, the incentive of our most long-term U.S. institutions, corporations, is to maximize future cash flow. Explicitly and theoretically, they should have an incentive to do the most good they can for their own company—which implies caring about existential risk, since the company can't be around if there's an existential catastrophe…

Will MacAskill 26:18
I don't think so. Different institutions have different rates of decay associated with them. A corporation in the top 200 biggest companies has a half-life of only ten years. It's surprisingly short-lived. Whereas if you look at universities, Oxford and Cambridge are 800 years old, and the University of Bologna is even older. These are very long-lived institutions.

For example, Corpus Christi College at Oxford was making a decision about having a new tradition that would occur only every 400 years. It makes that kind of decision because it is such a long-lived institution. Similarly, religions can be even longer-lived again. That type of natural half-life really affects the decisions a company would make versus a university versus a religious institution.

Dwarkesh Patel 27:16
Does that suggest that there's something fragile and dangerous about trying to make your institution last for a long time—if companies try to do that and are not able to?

Will MacAskill 27:24
Companies are composed of people. Is it in the interest of a company to last for a long time? Is it in the interests of the people who constitute the company (like the CEO, the board, and the shareholders) for that company to last a long time?
No, they don't particularly care. Some of them do, but most don't. Other institutions go both ways. This is the issue of lock-in that I talked about at length in What We Owe The Future: you get moments of plasticity during the formation of a new institution.

Whether that's the Christian church or the Constitution of the United States, you lock in a certain set of norms. That can be really good. Looking back, the U.S. Constitution seems miraculous as the first democratic constitution. As I understand it, it was created over a period of four months and seems to have stood the test of time. Alternatively, locked-in norms can be extremely dangerous. There were horrible proposals too, like the constitutional amendment that would have entrenched the legal right to slavery. If that had locked in, it would have been horrible. It's hard to answer in the abstract because it depends on the thing that's persisting for a long time.

Living in an era of plasticity

Dwarkesh Patel 28:57
You say in the book that you expect our current era to be a moment of plasticity. Why do you think that is?

Will MacAskill 29:04
We're at a moment of plasticity for two reasons. One is that the world is unified in a way that's historically unusual. You can communicate with anyone instantaneously, and there's a great diversity of moral views. We can have arguments, like people coming on your podcast debating what's morally correct. It's plausible to me that one of many different sets of moral views could ultimately become the most popular.

Secondly, we're at a period where things can really change. But it's a moment of plasticity because it could plausibly come to an end — and the moral change that we're used to could end in the coming decades. If there were a single global culture or world government that preferred ideological conformity, combined with the right technology, it becomes unclear why that would ever end. The key technology here is artificial intelligence. At the point in time (which may be sooner than we think) when the rulers of the world are digital rather than biological, that ideological conformity could persist.

Once you've got that and a global hegemony of a single ideology, there's not much reason for that set of values to change over time. You've got immortal leaders and no competition. What are the other sources of value change over time? I think they can be accounted for too.

Dwarkesh Patel 30:46
Isn't the fact that we're in a time of interconnectedness that won't last if we settle space a reason for thinking that lock-in is not especially likely? If your overlords are millions of light years away, how well can they control you?

Will MacAskill 31:01
The question is whether the lock-in happens before the point of space settlement. If we take to space one day, and there are many different settlements in different solar systems pursuing different visions of the good, then you're going to maintain diversity for a very long time (given the physics of the matter).

Once a solar system has been settled, it's very hard for other civilizations to come along and conquer you—at least if we're at a period of technological maturity where there aren't groundbreaking technologies left to be discovered. But I'm worried that the lock-in will happen earlier. I'm worried it might happen this century, within our lifetimes. I don't think it's very likely, but it's seriously on the table - 10% or something?

Dwarkesh Patel 31:53
Hm, right.
Going back to the long term of the longtermism movement: there are many instructive foundations that were set up about a century ago, like the Rockefeller Foundation and the Carnegie Foundation. But they don't seem to be especially creative or impactful today. What do you think went wrong? Why was there, if not value drift, some decay of competence and leadership and insight?

Will MacAskill 32:18
I don't have strong views about those particular examples, but I have two natural thoughts. First, organizations that want to persist and keep having an influence for a long time have historically specified their goals in far too narrow terms. One fun example is Benjamin Franklin. He invested a thousand pounds for each of the cities of Philadelphia and Boston, to pay out different fractions of the amount invested after 100 years and then 200 years. But he specified it to help blacksmith apprentices. You might think this doesn't make much sense when you're in the year 2000. He could have invested more generally, for the prosperity of people in Philadelphia and Boston, and it would plausibly have had more impact.

The second is a 'regression to the mean' argument. You have some new foundation and it's doing an extraordinary amount of good, as the Rockefeller Foundation did. Over time, if it's exceptional in some dimension, it's probably going to get closer to average on that dimension. This is because the people involved change. If you've picked exceptionally competent and farsighted people, the next generation are statistically going to be less so.

Dwarkesh Patel 33:40
Going back to that dead-hand problem: if you specify your mission too narrowly and it doesn't make sense in the future, is there a trade-off? If you're too broad, do you make space for future actors, malicious or uncreative, to take the movement in ways that you would not approve of? With regard to doing good for Philadelphia, what if it turned into something that Ben Franklin would not have thought was good for Philadelphia?

Will MacAskill 34:11
It depends on what your values and views are. If Benjamin Franklin only cared about blacksmith apprentices, then he was correct to specify it. But my own values tend to be quite a bit broader than that. Secondly, I expect people in the future to be smarter and more capable; it's certainly the trend over time. In which case, if we share similar broad goals and they're implementing them in a different way, then have at it.

How good can the future be?

Dwarkesh Patel 34:52
Let's talk about how good we should expect the future to be. Have you come across Robin Hanson's argument that we'll end up being subsistence-level ems, because there'll be a lot of competition and minimizing compute per digital person will create a barely-worth-living experience for every entity?

Will MacAskill 35:11
Yeah, I'm familiar with the argument. But we should distinguish the idea that ems are at subsistence level from the idea that they would have bad lives. Subsistence means that you get a balance of income per capita and population growth such that being any poorer would cause deaths to outweigh additional births.

That doesn't tell you about their well-being. You could be very poor as an emulated being but be in bliss all the time. That's perfectly consistent with the Malthusian theory. It might seem far away from the best possible future, but it could still be very good.
At subsistence, those ems could still have lives that are thousands of times better than ours.

Dwarkesh Patel 36:02
Speaking of being poor and happy, there was a very interesting section in the chapter where you mentioned the study you had commissioned: you were trying to find out if people in the developing world find life worth living. It turns out that 19% of Indians would not want to relive every moment of their life. But 31% of Americans said that they would not want to relive every moment of their life? So why are Indians seemingly much happier at less than a tenth of the GDP per capita?

Will MacAskill 36:29
I think the numbers are lower than that, from memory at least. From memory, it's something more like 9% of Indians wouldn't want to live their lives again if they had the option, and 13% of Americans said they wouldn't. You are right on the happiness metric, though. The Indians we surveyed were more optimistic about their lives, and happier with their lives, than the people in the US were. Honestly, I don't want to generalize too far from that, because we were comparing comparatively poor Americans with comparatively well-off Indians. Perhaps it's just a sample effect.

There are also weird interactions with Hinduism and the belief in reincarnation that could mess up the generalizability of this. So on one hand, I don't want to draw any strong conclusions from it. But it is pretty striking as a piece of information, given that, on average, people in richer countries are considerably happier with their lives than people in poorer countries.

Dwarkesh Patel 37:41
I guess you do generalize in the sense that you use it as evidence that most lives today are worth living, right?

Will MacAskill 37:50
Exactly. I put together various bits of evidence, where approximately 10% of people in the United States and 10% of people in India seem to think that their lives are net negative. They think their lives contain more suffering than happiness, and they wouldn't want to be reborn and live the same life if they could.

There's another study that looks at people in the United States and other wealthy countries and asks them how much of their conscious life they'd want to skip if they could. Skipping here means that a blink would take you to the end of whatever activity you're engaged in. For example, perhaps I hate this podcast so much that I would rather be unconscious than be talking to you. In that case, I'd have the option of skipping, and it would be over after 30 minutes.

If you look at that, and then also ask people about the trade-offs they'd be willing to make as a measure of how intensely they're enjoying a certain experience, you reach the conclusion that a little over 10% of people regarded their life on the day they were surveyed as worse than if they'd been unconscious the entire day.

Contra Tyler Cowen on what's most important

Dwarkesh Patel 39:18
Jumping topics here a little bit: on the 80,000 Hours Podcast, you said that scientists who are explicitly trying to maximize their impact might actually have an adverse impact, because they might be ignoring foundational research that wouldn't be obvious in this way of thinking but might be more important.

Do you think this could be a general problem with longtermism? If you're trying to find the things that matter most for the long term, you might be missing things that wouldn't be obvious when thinking this way?

Will MacAskill 39:48
Yeah, I think that's a risk.
Among the ways people could argue against my general set of views: I argue that we should be doing fairly specific and targeted things, like trying to make AI safe, governing the rise of AI well, reducing worst-case pandemics that could kill us all, preventing a Third World War, ensuring that good values are promoted, and avoiding value lock-in. But some people could argue (and people like Tyler Cowen and Patrick Collison do) that it's very hard to predict the future impact of your actions.

On that view, it's a mug's game to even try. Instead, you should look at the things that have done loads of good consistently in the past and try to do the same things. In particular, they might argue that means technological progress or boosting economic growth. I dispute that. It's not something I can give a completely knock-down argument against, because we don't know when we will find out who's right. Maybe in a thousand years' time. But one piece of evidence is the success of forecasters in general. This was also true of Tyler Cowen, but people in effective altruism realized early on that the coronavirus pandemic was going to be a big deal; they had been worrying about pandemics far in advance. There are some things that are actually quite predictable.

For example, Moore's Law has held up for over 70 years. The idea that AI systems are going to get much larger and that leading models are going to get more powerful is on trend. Similarly, the idea that we will soon be able to develop viruses of unprecedented destructive power doesn't feel too controversial. Even though it's hard to predict loads of things and there are going to be tons of surprises, there are some things, especially fairly long-standing technological trends, about which we can make reasonable predictions — at least about the range of possibilities that are on the table.

Dwarkesh Patel 42:19
It sounds like you're saying that we know which things are important now. But if something from a thousand years ago didn't turn out, looking back, to be very important, it wouldn't be salient to us now?

Will MacAskill 42:31
On the question of me versus Patrick Collison and Tyler Cowen, who is correct? We will only get that information in a thousand years' time, because we're talking about impactful strategies for the long term. We might get suggestive evidence earlier, though. If I and others engaged in longtermism make specific, measurable forecasts about what is going to happen with AI or advances in biotechnology, and are then able to take action such that we are clearly reducing certain risks, that's pretty good evidence in favor of our strategy.

Whereas if they're doing all sorts of stuff without making firm predictions about what's going to happen, but things pop out of that which are good for the long term (say we measure this in ten years' time), that would be good evidence for their view.

Dwarkesh Patel 43:38
Doesn't what you were saying earlier about contingency in technology imply that, even on their worldview of doing what's had the most impact in the past, if what's had the most impact in the past is changing values, then that, rather than economic growth or trying to change the rate of economic growth, might be the most important thing?

Will MacAskill 43:57
I really do take seriously the argument of looking at how people have acted in the past, especially people trying to make a long-lasting impact: which of the things they did made sense, and which didn't.
So, towards the end of the 19th century, John Stuart Mill and the other early utilitarians had this longtermist wave, where they started taking the interests of future generations very seriously. Their main concern was Britain running out of coal, and future generations therefore being impoverished. It's pretty striking, because they had a very bad understanding of how the economy works. They hadn't predicted that we would be able to transition away from coal with continued innovation.

Secondly, they had enormously wrong views about how much coal and fossil fuel there was in the world. So that particular action didn't make any sense given what we know now. In fact, that particular action of trying to keep coal in the ground probably would have been harmful: Britain at the time was using amounts of coal so small that the climate change effect would have been negligible.

But we can look at other things John Stuart Mill did, such as promoting better values. He campaigned for women's suffrage. He was the first British MP, and in fact the first politician in the world, to promote women's suffrage. That seems to have been pretty good, and it has stood the test of time. That's one historical data point, but potentially we can learn a more general lesson there.

AI and the centralization of power

Dwarkesh Patel 45:36
Do you think the ability of global policymakers to come to a consensus is, on net, a good or a bad thing? On the positive side, maybe it helps stop some dangerous tech from taking off; on the negative side, it can block things like human challenge trials and cause some lock-in in the future. On net, what do you think about that trend?

Will MacAskill 45:54
You're absolutely right that the question of global integration is double-sided. On one hand, it can help us reduce global catastrophic risks. The fact that the world was able to come together and ban chlorofluorocarbons was one of the great events of the last 50 years, allowing the hole in the ozone layer to repair itself. But on the other hand, if it means we all converge to one monoculture and lose out on diversity, that's potentially bad. We could lose out on most of the possible value that way.

The solution is having the good bits without the bad bits. For example, under a liberal constitution, a country can be bound in certain ways by its constitution and laws yet still enable a flourishing diversity of moral thought and ways of life. Similarly, in the world as a whole, you can have very strong regulation and treaties that deal only with certain global public goods, like mitigation of climate change and prevention of the development of the next generation of weapons of mass destruction, without having some very strong-arm global government that implements a particular vision of the world. Which way are we going at the moment? It seems to me we've been going in a pretty good and not too worrying direction. But that could change.

Dwarkesh Patel 47:34
Yeah, though it seems the historical trend is that when you have a federated political body, the central powers tend to gain more power over time, even if they're constitutionally constrained. You can look at the U.S.; you can look at the European Union. That seems to be the trend.

Will MacAskill 47:52
Depending on the culture that's embodied there, it's potentially a worry. It might not be, if the culture itself is liberal and promotes moral diversity, moral change, and moral progress.
But that needn't be the case.

Dwarkesh Patel 48:06
Your theory of moral change implies that after a small group starts advocating for a specific idea, it may take a century or more before that idea gains common purchase. To the extent that you think this is a very important century (I know you have disagreements about that with others), does that mean there isn't enough time for longtermism to gain by changing moral values?

Will MacAskill 48:32
There are lots of people I know and respect fairly well who think that artificial general intelligence will likely lead to singularity-level technological progress, an extremely rapid rate of technological progress, within the next 10-20 years. If so, you're right: value changes are something that pays off slowly over time.

I talk about moral change taking centuries historically, but it can be much faster today. The growth of the effective altruism movement is something I know well. If that's growing at something like 30% per year, compound returns mean that it doesn't take long to get very large. That's not change that happens on the order of centuries.

If you look at other moral movements, like the gay rights movement, you see very fast moral change by historical standards. If you think we've got ten years until the end of history, then sure, don't broadly try to promote better values. But we should put a very significant probability mass on the idea that we will not hit some end of history this century. In those worlds, promoting better values could pay off very well.

Dwarkesh Patel 49:59
Have you heard of the Slime Mold Time Mold Potato Diet?

Will MacAskill 50:03
I have indeed heard of the Slime Mold Time Mold Potato Diet, and I was tempted to try it as a gimmick. As I'm sure you know, the potato is close to a superfood, and you could survive indefinitely on buttered mashed potatoes if you occasionally supplement with something like lentils and oats.

Dwarkesh Patel 50:25
Hm, interesting. A question about your career: why are you still a professor? Does it still allow you to do the things you would otherwise have been doing, like converting more SBFs and making moral philosophy arguments for EA? Curious about that.

Will MacAskill 50:41
It's fairly open to me what I should do, but I do spend significant amounts of time co-founding organizations or sitting on the boards of the organizations I've helped to set up. More recently, I've been working closely with the Future Fund, SBF's new foundation, helping them do as much good as possible. That being said, if there's a single best guess for what I want to do longer term, and certainly something that plays to my strengths better, it's developing ideas, trying to get the big picture roughly right, and then communicating them in a way that's understandable and gets more people to get off their seats and start to do a lot of good for the long term. I've had a lot of impact that way. From that perspective, having an Oxford professorship is pretty helpful.

The problems with academia

Dwarkesh Patel 51:34
You mentioned in the book and elsewhere that there's a scarcity of people thinking about big-picture questions, like how contingent history is or how happy people generally are. Are these questions too hard for other people? Or do they not care enough? What's going on? Why are there so few people talking about this?

Will MacAskill 51:54
I just think there are many issues that are enormously important but are not incentivized anywhere in the world. Companies don't incentivize work on them because they're too big-picture.
Some of these questions are: Is the future good, rather than bad? If there were a global civilizational collapse, would we recover? How likely is a long stagnation? There's almost no work done on any of these topics. Companies aren't interested; the questions are too grand in scale.

Academia has developed a culture where you don't tackle such problems. Partly that's because they fall through the cracks of different disciplines. Partly it's because they seem too grand or too speculative. Academia is much more in the mode of making incremental gains in our understanding. It didn't always used to be that way.

If you look back before the institutionalization of academic research, you weren't a real philosopher unless you had some grand unifying theory of ethics, political philosophy, metaphysics, logic, and epistemology, and probably the natural sciences and economics too. I'm not saying that all of academic inquiry should be like that. But should there be some people whose role is to really think about the big picture? Yes.

Dwarkesh Patel 53:20
Will I be able to send my kids to MacAskill University? What's the status of that project?

Will MacAskill 53:25
I'm pretty interested in the idea of creating a new university. There is a project that I've been in discussion about with another person who's fairly excited about making it happen. Will it go ahead? Time will tell. I think you can do both research and education far better than they currently exist. It's extremely hard to break in or to create something that's very prestigious, because the leading universities are hundreds of years old. But maybe it's possible. I think it could generate enormous amounts of value if we were able to pull it off.

Dwarkesh Patel 54:10
Excellent, alright. So the book is What We Owe The Future. I understand pre-orders help a lot, right? It was such an interesting read. How often does somebody write a book about the questions they consider to be the most important, even if they're not the questions others consider most important? Big-picture thinking, but also looking at very specific questions and issues that come up. Super interesting read.

Will MacAskill 54:34
Great. Well, thank you so much!

Dwarkesh Patel 54:38
Anywhere else they can find you? Or any other information they might need to know?

Will MacAskill 54:39
Yeah, sure. What We Owe The Future is out on August 16 in the US and the first of September in the United Kingdom. If you want to follow me on Twitter, I'm @willmacaskill. If you want to use your time or money to do good, Giving What We Can is an organization that encourages people to take a pledge to give a significant fraction of their income (10% or more) to the charities that do the most good, and it has a list of recommended charities. If you want to use your career to do good, 80,000 Hours is the place to go for advice on which careers have the biggest impact; they provide one-on-one coaching too.

If you're feeling inspired and want to do good in the world, if you care about future people and want to help make their lives go better, then, as well as reading What We Owe The Future, Giving What We Can and 80,000 Hours are the places you can go to get involved.

Dwarkesh Patel 55:33
Awesome, thanks so much for coming on the podcast! It was a lot of fun.

Will MacAskill
Thanks so much, I loved it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The Tim Ferriss Show
#612: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change

The Tim Ferriss Show

Play Episode Listen Later Aug 2, 2022 104:35 Very Popular


Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change | Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below. William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will. His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, "No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book." Please enjoy!

*This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns.

*This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199.

*This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.

*For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books.

Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr.
Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, and many more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Nonlinear Library
EA - The Operations team at CEA transforms by Josh Axford

The Nonlinear Library

Play Episode Listen Later Aug 2, 2022 8:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Operations team at CEA transforms, published by Josh Axford on August 2, 2022 on The Effective Altruism Forum.

The Operations team at CEA — or "Ops" — provides the financial, legal, administrative, grantmaking, logistical, and fundraising support that enables many high-impact organisations to grow. These organisations include CEA, 80,000 Hours, the Forethought Foundation, EA Funds, Giving What We Can, the Centre for the Governance of AI, Longview Philanthropy, Asterisk, Wytham Abbey, and Non-trivial.

Summary

The last six months have been the most transformational period for Ops so far. We've nearly tripled our capacity, from 7 to 20 FTEs. Increasing capacity was our primary focus over this period, because it lets us sustain high-quality operational support while meeting the rising demands on our systems. We've been thrilled with the results of our recent hiring rounds, including the team's approach to onboarding new members, and the quality of the new hires is a strong indication of the sustainability of future growth.

Our increased capacity has also allowed us to support more organisations. So far this year we've fiscally sponsored four additional organisations, including Longview Philanthropy, Asterisk, and Non-trivial.

We've added a Property team within Ops. This team creates and manages offices and accommodation spaces that are optimised for productivity, creativity, and wellbeing. The team has been evaluating the impact of the Oxford office while exploring creating more office spaces on the US East Coast.

Looking ahead, we've been working on a rebrand for Ops to minimise brand entanglement with CEA. This change reflects the fact that Ops now supports an increasing number of organisations beyond CEA, and we expect to announce an update in Q3. We've also decided to appoint a Head of Fiscal Sponsorship in the second half of the year to manage the rising demand for the fiscal sponsorship service.

What does Ops look like? Here's the current structure of Ops:

Highlights of the year so far

Executive

Summer internship: We trialled a three-month summer internship programme. We received 135 applicants and hired 5 interns. The strength of the pool was very high, and we were able to recommend candidates to partner organisations. In September, we'll evaluate the programme and share a write-up on the Forum. So far the results look positive.

Restructure and rebrand: We've been exploring legal structure options which will give us the greatest amount of flexibility going forward. We've also appointed a team to work on a new website and selected a name for the legal entity.

Fiscal sponsorship: We began fiscally sponsoring Longview Philanthropy, Asterisk, and Non-trivial.

Sara Elsholz (Executive Assistant) and Susan Shi (General Counsel) joined the team.

Property

Trajan office survey: A survey of users at Trajan House (the EA hub workspace in Oxford) suggests that the workspace produces a ~12% counterfactual increase in productivity for users. We're exploring opening further office spaces in Oxford.

Offices in the US: Buoyed by the Trajan House data, we're also exploring the possibility of opening office spaces in Boston and New York. Work on the Boston offices has begun, while a New York office is still being evaluated.
Visitor accommodation in Oxford: We've acquired some property in Oxford to let visitors stay the night while avoiding high hotel costs. The property is called Lakeside, and rooms will be available for booking soon.

Jonathan Michel was promoted to Head of Property. Bethany Lacey-Page and Tom Hempstock (Office Assistants for the Trajan office) and Kaleem Ahmid (Project Manager) joined the team.

Staff Support

Automated onboarding: We optimised the onboarding process by automating common tasks — like sending contracts, writing welcome emails, and requesting feedbac...

The Astral Hustle with Cory Allen
Staying Grounded with Gratitude

The Astral Hustle with Cory Allen

Play Episode Listen Later Aug 1, 2022 17:34


People often say that focusing on gratitude makes our lives better. But what exactly is gratitude practice? In this episode, I share what gratitude is, why it's easy to overlook what's going right in our lives, and how feeling grateful can ground us. This episode is sponsored by Giving What We Can. Click to learn more.

Coaching with Cory: I'm now offering One-to-One coaching to help you build a path to the next level.
Please support the show by joining our Patreon Community.
Sign up for my newsletter to receive new writing on Friday morning.
My new meditation course Coming Home is now available.
Now Is the Way is out now in paperback!
Use Astral for 15% off Binaural Beats, Guided Meditations, and my Meditation Course.
Please rate The Astral Hustle on iTunes. ★★★★★

Connect with Cory:
Home: http://www.cory-allen.com
IG: https://www.instagram.com/heycoryallen
Twitter: https://twitter.com/HeyCoryAllen
Facebook: https://www.facebook.com/HeyCoryAllen
© CORY ALLEN 2022

The Nonlinear Library
EA - Why I view effective giving as complementary to direct work by JulianHazell

The Nonlinear Library

Play Episode Listen Later Jul 31, 2022 8:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I view effective giving as complementary to direct work, published by JulianHazell on July 31, 2022 on The Effective Altruism Forum.

Epistemic status: Feelings, anecdotes, and impressions.

Declaration: I used to work at Giving What We Can, but in this capacity I'm writing about my personal experience with the Pledge.

Introduction

I have the impression that, at least among effective altruists who are directly using their careers to do good, donating is somewhat falling out of style. I feel neither happy nor sad about this. Perhaps ironically, my stance on whether someone doing directly impactful work should donate is that it's a personal decision, to borrow an eye-roll-inducing platitude from the philanthropic field. I hold this belief because I view the decision of whether to donate as vastly different from the decision of where to donate. The former can often hinge on personal circumstances, such as where one is at in their career, what their financial situation looks like, and whether they have any dependents. I view the claim that the choice of where to donate is a personal decision as much more tenuous; I'm preaching to the choir here, but one ought to support charities that do more good rather than less, all else being equal.

I understand why some of my peers who work directly on the world's most pressing problems choose not to take the Giving What We Can Pledge. At the end of the day, I don't care how the good is done, so long as it is done. My default is to trust the judgement of people who are making a good-faith effort to think carefully about what they can do to help others as much as possible. If you feel that donating isn't the best road to impact, I have faith in the reasoning behind your belief. Yet as someone who is working directly on what I view as one of the world's most pressing problems, I still feel that effective giving is a core part of my plan to do what I can to make the world as good as possible. Here's why.

Nothing can take donating away from me, not even a bad day

I'm currently spending my time researching ways to steer the development of transformative artificial intelligence in a positive direction. This means that I work in a field with little-to-no clear feedback loops — at least ones that can concretely indicate whether or not the things I'm doing are actually improving others' lives. Like many, I have struggled with imposter syndrome, and this has been exacerbated by the messy and opaque causal links between the things I do on a day-to-day basis and actual downstream improvements in others' lives. Questions like "Am I smart enough to belong in this community?", "Am I actually doing any good?" and "Will this paper I'm writing help anyone?" pop into my head more than I'd like to admit.

These kinds of thoughts and feelings suck. They aren't helpful, and I recognise that. But sometimes they're hard to avoid, and they can be debilitating when they strike. I want to help others so badly, and the thought of failing to do that is agonising. Being able to donate makes me hopeful. No matter how rough of a day I have, or how unclear the impact of my work is, nothing can take donating away from me. There is no imposter: I can literally see a number on my Giving What We Can dashboard, and I can feel proud knowing those funds are going to help others.
Hitting a brick wall on figuring out AI governance, or having colossal uncertainties about what I ought to do with my career, won't change that. In fact, nothing will, and the sense of agency that brings me is incredibly motivating. To be clear, I believe the vast majority of my positive impact on the world will come from using my career to work on problems that could negatively impact humanity's long-run potential. But in my mind, that has little bearing on the reality that $3,500 can save someone's life...

The Giving What We Can Podcast
Member Story: James Montavon

The Giving What We Can Podcast

Play Episode Listen Later Jul 25, 2022 3:05


New member James Montavon shares why he joined Giving What We Can, tells us about living his beliefs and giving gradually, and offers his advice on making a small commitment to give effectively! This interview was filmed at Effective Altruism Global London 2022. Thank you to James for taking the time to share your story. CHAPTERS: 00:00 - What inspired you to take the pledge? 00:42 - How does it feel to give effectively? 01:02 - Do you have any advice for giving effectively? 01:52 - Where are you giving? 02:32 - Knowing you're having an impact!

The Nonlinear Library
EA - If you're unhappy, consider leaving by Justis

The Nonlinear Library

Play Episode Listen Later Jul 20, 2022 9:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you're unhappy, consider leaving, published by Justis on July 20, 2022 on The Effective Altruism Forum. When I talk to random people around my town and ask how they're doing, a large fraction of them open with "well, the world sucks, but..." They do this whether or not they, personally, are about to report that they're doing well. It feels like a necessary caveat to them. To me, this is a sign that they're in a social context that cuts against their thriving. Does the world suck? Very hard to say with confidence. The world is very big, and there's no clear objective standard. But carrying around "the world sucks" is heavy, and makes it harder to enjoy life. So I'd like to tell them to consider just... not thinking that anymore? The belief probably isn't paying rent. It's probably not rigorously considered. Just this cached thing, sitting around, making life slightly worse. But I don't, because they're random acquaintances and their world models are their business. Same with you, reader of this post. But I'm going to make a recommendation to you, anyway. Consider leaving the organized EA movement for a while. Wait, what? If your reaction is "but this movement is great! I'm really happy and energized and ready to make the biggest difference I can with all my new exciting friends" then yeah, keep doing your thing! I probably don't have anything to offer you. And for what it's worth, I also agree the movement is great, and I am personally somewhat involved right now. But I think it's pretty clear that lots of people in the EA orbit are persistently unhappy. And the solution to being persistently unhappy with a social arrangement or memeplex, usually, is to leave. Famously, this often doesn't occur to the person suffering for a long time, if ever, even if it looks like the obvious correct choice from the outside. What do I mean by leaving? I don't mean "stop caring about doing good". If you've taken the Giving What We Can pledge, for example, I'd definitely keep it. What I mean is something more like "stop expecting EA to provide for any of your core needs." This includes, at least: Social - have non-EA friends. Ideally have some be local. Talk about other stuff with them, mostly. Financial - do not rely on EA funding sources for income that you couldn't do without. Don't apply for EA jobs. Emotional - do not have a unidimensional sense of self-worth that boils down to "how am I scoring on a vague, amorphous impact scale as envisioned in EA terms". Actually, that's probably about it. If you're persistently unhappy, and don't feel fulfilled socially, financially, or emotionally, stop doing those three things. What are the upsides? First, consider yourself. It's easy to take this idea too far, but the person you have the greatest responsibility for - except perhaps your children or in some niche situations your spouse - is yourself. You have direct access to your emotional state. If you're not flourishing, it is first and foremost your responsibility to address that. And addressing it is worth something, even in the EA framework. Second, consider bias. I think there's a common story which goes something like: Well, I need to make sure I make a good impression on people at orgs X and Y, since I might want to work for them. But they have close relationships with Z and A. So I need to be rubbing elbows with those people, too.
To do that, it'll help for me to live in the most expensive city in the world, which makes it really important I don't irritate C and D, because I'll need substantial funding just to not burn through savings. I didn't properly appreciate this for a long time, but I keep seeing people be nervous to, like, seriously criticize EA, because their (aspirational) livelihoods depend on it. This is very bad for epistemology. It may be the single worst thing for ep...

The Nonlinear Library
EA - EA Infrastructure Fund: September–December 2021 grant recommendations by Max Daniel

The Nonlinear Library

Play Episode Listen Later Jul 12, 2022 31:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infrastructure Fund: September–December 2021 grant recommendations, published by Max Daniel on July 12, 2022 on The Effective Altruism Forum.

Introduction

The EA Infrastructure Fund made the following grants between September and December 2021:

Total grants: $2,445,941
Number of grants: 68
Acceptance rate: 78.5%. This was somewhat, though not massively, higher than usual; e.g., in both the previous and subsequent 4-month periods the acceptance rate was just under 71%. We suspect the unusually high acceptance rate during the period covered by this report was mostly due to an unusually large fraction of applications from EA university groups, which tend to have a higher acceptance rate than other applications we receive.
Payout date: September–December 2021
Report authors: Max Daniel (chair), Michelle Hutchinson, Buck Shlegeris, Emma Williamson (assistant fund manager), Michael Aird (guest manager), Chi Nguyen (guest manager)

In addition, we referred 5 grants totalling an additional $261,530 to private funders. See the "Referred Grants" section below.

For capacity reasons, we only provide an abbreviated strategic update that doesn't include all relevant recent and upcoming developments at the EAIF and EA Funds more generally. A more comprehensive update will follow within the next few months. The EA Funds donation platform is moving to Giving What We Can. We are working on a new process for sharing information about the grants we are making that will reduce the delay between our grant decisions and their public reporting. Therefore, this is likely going to be the last EAIF payout report in the familiar format.

We appointed Peter Wildeford as fund manager. Like other fund managers, Peter may evaluate any grant we receive, but we are especially excited about his expertise and interest in global health and wellbeing and animal welfare, which we think complements the more longtermism-focused profile of the other fund managers. Ashley Lin and Olivia Jimenez joined the EAIF as new guest managers. Chi Nguyen and Michael Aird left after their guest manager tenure, though Michael will still occasionally handle applications he is especially well placed to evaluate (for instance because he evaluated an application by the same applicant in the past). Olivia has since left as well to focus on other EA-related projects. We appointed Joan Gass and Catherine Low as fund advisers to increase coordination with the CEA Groups and Community Health teams, respectively. Joan Gass has since resigned from her role as adviser to focus on other EA-related projects, and we added Rob Gledhill and Jesse Rothman (both at CEA) as advisers instead.

Would you like to get funded? Apply for funding.

Highlights

We continue to be very excited about the Global Challenges Project (GCP) led by Emma Abele, James Aung, and Henry Sleight. We supported GCP with $174,000 to cover the salaries of their core team as well as salaries and expenses for university groups managed by GCP. We first funded the GCP team in May 2021 and are impressed by their achievements to date, which include running an Oxford-based summer program for university group organizers, taking on EA Books Direct, and advice and other services for university groups.
We were impressed with the track record, plans, and team of the Norwegian effective giving website Gi Effektivt, and supported them with a grant of $270,500 that exceeded their original ask. We funded Luca Parodi ($20,000), creator of the Instagram account School of Thinking, to produce additional Italian-language content on EA, longtermism, and rationality. We haven't reviewed the content Luca created since we made the grant, but we were excited to fund experiments with content creation on platforms with thus far little EA-related content.

Grant Recipients

In addition to the grants d...

The Giving What We Can Podcast
#7 - Bruce Friedrich: Why plant based meat is a scalable solution to feed the world

The Giving What We Can Podcast

Play Episode Listen Later Jul 7, 2022 49:03


We were lucky enough to sit down with Bruce Friedrich, the founder and CEO of the Good Food Institute (GFI), which is pioneering open-access science, policy, and corporate engagement to make plant-based meat taste as good as and cost less than its traditional counterparts, as part of a solution to feed future populations and reduce the impacts on climate, health, and poverty. This was an inspiring interview, and we are really grateful to Bruce for sharing his time with us, and for the work that he has spearheaded at GFI. You can donate to GFI and support all their excellent work via Giving What We Can: https://www.givingwhatwecan.org/charities/good-food-institute WANT TO LEARN MORE ABOUT GFI?

The Giving What We Can Podcast
Member Story: John Yan

The Giving What We Can Podcast

Play Episode Listen Later Jul 4, 2022 2:50


John Yan, who lives in NYC, was kind enough to let us interview him about his experiences with Giving What We Can and taking a public giving pledge. CHAPTERS: 00:00 - How did you first learn about GWWC? 00:40 - What motivated you to take the GWWC pledge? 01:06 - What has giving effectively brought to your life? 01:34 - Do you have any advice for someone considering taking the pledge? 02:01 - The GWWC community 02:29 - Making it more normal to give! OUR RESOURCES: ✍️ Take a giving pledge: https://givingwhatwecan.org/pledge

The Giving What We Can Podcast
Member Story: Catherine Low

The Giving What We Can Podcast

Play Episode Listen Later Jun 30, 2022 6:40


We really enjoyed chatting to Catherine about her experiences with effective giving and Giving What We Can. Thank you, Catherine, for giving up your time to speak to us! CHAPTERS: 00:00 - How did you discover GWWC? 00:51 - What motivated you to take the GWWC pledge? 03:11 - What does giving effectively bring to your life? 04:23 - Do you have any advice for someone considering taking the pledge? 05:44 - Being a part of a community OUR RESOURCES: ✍️ Take a giving pledge: https://givingwhatwecan.org/pledge

Financial Independence Podcast
Podcast Takeover: Financial Independence Europe - Effective Altruism

Financial Independence Podcast

Play Episode Listen Later Jun 27, 2022 51:05 Very Popular


Welcome to another podcast-takeover episode of the Financial Independence Podcast! This time, Mathias from the Financial Independence Europe podcast takes over the show to talk to Luke Freeman (from Giving What We Can) and Rebecca Herbst (from Yield and Spread) about effective altruism!

Highlights
What is effective altruism
How to give efficiently and effectively
Benefits of joining the effective altruism community
How giving is evolving
Differences between the US and Europe with regards to philanthropy
How to give to charity while pursuing FI

Show Links
Financial Independence Europe - https://financial-independence.eu/
Yield and Spread - http://yieldandspread.com
Giving What We Can - https://www.givingwhatwecan.org
GiveWell - https://www.givewell.org
Original Show Link - https://financial-independence.eu/uncategorized/episode-147-how-to-give-effectively-luke-freeman-and-rebecca-herbst/

The Nonlinear Library
EA - User-Friendly Intro Post by James Odene [User-Friendly]

The Nonlinear Library

Play Episode Listen Later Jun 23, 2022 10:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: User-Friendly Intro Post, published by James Odene [User-Friendly] on June 23, 2022 on The Effective Altruism Forum.

Introducing User-Friendly

User-Friendly is an EA-aligned marketing agency. We provide marketing expertise to impact-driven organisations, multiplying efforts towards solving the world's biggest challenges. We have supported EA clients such as Giving What We Can, 80K, High Impact Athletes, WANBAM, Rethink Priorities, the Happier Lives Institute, and Carrick Flynn's recent campaign for Congress, with a wide range of strategic and creative marketing services. We had been operating User-Friendly as a freelance side-hustle up until early May this year, when we increased our capacity following funding from the EA Infrastructure Fund. The grant is to give User-Friendly operational support whilst we continue to assess the worth of, and desire for, our work in this space.

The Pitch

Effective communication is a key component of the success of many organisational aims. As a movement, core components of our work are unavoidably entwined with the way in which we communicate with our intended audiences. Whether you are aiming to change opinions, change behaviours, drive others towards a more impactful path, or encourage donations or pledges, the way in which your message is constructed and disseminated plays a crucial role in how effective your organisation can be. As an indication of the scale for impact, it's worth considering the Kantar research suggesting that the creative quality of marketing is one of the best multipliers of impact (x12). This indicates the substantial difference in impact available between the best and worst marketing attempts on quality alone. We want to support EA-aligned organisations to achieve their strategic objectives by utilising our expertise and skills, ensuring that their marketing is effectively (and cost-effectively) implemented.

The Service

User-Friendly brings together the international marketing and behavioural science expertise of James Odene and the substantial global animal movement campaigns experience of Amy Odene. We also collaborate with external freelancers for specific expertise that we do not hold ourselves. Getting the best multiplying effect from marketing requires not only creative skill, but sound strategic underpinning honed through years of experience and training. We offer strategic services including strategic consultation, messaging development, brand development and management, and marketing auditing, as well as creative execution services including graphic design, branding, social media management and asset development, campaign development, and ad creative.

The Scoping

In order to assess the current landscape of the movement's marketing and communications resource, we disseminated a scoping survey earlier this year. 35 organisations responded, and the results were assessed alongside our top preconceptions about the desire for, and worth of, such an offering in the EA space, informing our strategy moving forward. Here you will find a list of our initial preconceptions, a summary of the responses in relation to these, and our agency's solution. We identified 6 challenges that we felt the EA movement faced in relation to such a service. You can review the full scoping survey here.
Internal resource

Organisations are looking for a specific service or task to be completed but do not have the resource, funding, or full-time capacity needed to hire in-house for this particular project. Survey result: Throughout the survey results it is clear that the demand for one-off, project-based work is high, and growing, with seemingly no short-term intention to build this capacity in-house. With the current lack of resource, skill deficit and predicted increase in spend, it is reasonable to suggest that a service such as User-Friendly could provide a...

Impact Audio
The Opportunity Motivation: Effective Altruism and Our Impetus to Give

Impact Audio

Play Episode Listen Later Jun 7, 2022 28:54


What are the factors that encourage charitable giving, and how should people decide where to focus their efforts? Effective altruism is a social movement and research initiative with unique (and controversial) responses to these questions. In this episode of Impact Audio, you'll hear Luke Freeman, Executive Director of Giving What We Can, discussing the guiding principles behind his organization. Luke also talks about motivational psychology, CSR, impact measurement, the pandemic, and criticisms of effective altruism. Listen in to learn about:
• What actually motivates people to give back
• How businesses can support employee giving and lead by example
• Giving What We Can's process for choosing charities and measuring impact
• The movement's responses to trust-based philanthropy and criticism about causality
• How US philanthropy sets itself apart from global giving
We hope you enjoy the conversation.

Tennis IQ Podcast
Ep. 85 - Marcus Daniell and Making an Impact

Tennis IQ Podcast

Play Episode Listen Later May 29, 2022 48:41


Marcus Daniell is a professional tennis player from New Zealand and the Founder & Executive Director of High Impact Athletes (https://highimpactathletes.org/). He is an Olympic bronze medallist with 5 ATP titles, quarterfinal appearances at both Wimbledon and the Australian Open (twice), and numerous caps for the NZ Davis Cup team. He has been giving effectively since 2014. On January 4th, 2021, Marcus took the Giving What We Can pledge to donate at least 10% of his annual winnings to effective organisations for the rest of his life. Alongside his tennis career, Marcus has completed a B.A. in Psychology and Spanish from Massey University and has been awarded the Arthur Ashe Humanitarian Award for his work with HIA, joining recipients such as Nelson Mandela and Roger Federer. In this conversation, Brian and Josh speak with Marcus about his career and playing with an impactful purpose. Donate via High Impact Athletes: https://highimpactathletes.org/donate

The Nonlinear Library
EA - How Could AI Governance Go Wrong? by HaydnBelfield

The Nonlinear Library

Play Episode Listen Later May 27, 2022 26:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Could AI Governance Go Wrong?, published by HaydnBelfield on May 26, 2022 on The Effective Altruism Forum. (I gave a talk to EA Cambridge in February 2022. People have told me they found it useful as an introduction/overview, so I edited the transcript, which can be found below. If you're familiar with AI governance in general, you may still be interested in the sections on 'Racing vs Dominance' and 'What is to be done?'.)

Talk Transcript

I've been to lots of talks which hook you with a catchy title and then don't actually tell you the answer until right at the end, so I'm going to skip right to the end and answer it. How could AI governance go wrong? These are the three answers that I'm gonna give: over here you've got some paper clips, in the middle you've got some very bad men, and then on the right you've got nuclear war. That is to say, the three cases are accident, misuse, and structural or systemic risks. That's the ultimate answer to the talk, but I'm gonna take a bit longer to actually get there. I'm going to talk very quickly about my background and what CSER (the Centre for the Study of Existential Risk) is. Then I'm going to answer what this topic called AI governance is, and how AI governance could go wrong, before finally addressing what can be done, so we're not just ending on a sad, glum note but going out there realising there is useful stuff to be done.

My Background & CSER

This is an effective altruism talk, and I first heard about effective altruism back in 2009 in a lecture room a lot like this, where someone was talking about this new thing called Giving What We Can, whose members decide to give away 10% of their income to effective charities. I thought this was really cool: you can see that's me on the right (from a little while ago and without a beard). I was really taken by these ideas of effective altruism and trying to do the most good with my time and resources. So what did I do? I ended up working for the Labour Party for several years in Parliament. It was very interesting, I learned a lot, and as you can see from the fact that the UK has a Labour government and is still in the European Union, it went really well. Two of the people I worked for are no longer even MPs. After this sterling record of success down in Westminster – having campaigned in one general election, two leadership elections and two referendums – I moved up to Cambridge five years ago to work at CSER. The Centre for the Study of Existential Risk is a research group within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse. We do high-quality academic research, we develop strategies for how to reduce risk, and then we field-build, supporting a global community of people working on existential risk. We were founded by these three very nice gentlemen: on the left that's Prof Huw Price, Jaan Tallinn (founding engineer of Skype and Kazaa) and Lord Martin Rees. We've now grown to about 28 people (tripled in size since I started) - there we are hanging out on the bridge having a nice chat. 
A lot of our work falls into four big risk buckets:
• pandemics (a few years ago I had to justify why that was in the slides; now, unfortunately, it's very clear to all of us)
• AI, which is what we're going to be talking mainly about today
• climate change and ecological damage
• systemic risk from all of our intersecting vulnerable systems

Why care about existential risks?

Why should you care about this potentially small chance of the whole of humanity going extinct or civilization collapsing in some big catastrophe? One very common answer is looking at the size of all the future generations that could come if we don't mess things up. The little circle in the middle is the number of ...

The Nonlinear Library
EA - EA Funds donation platform is moving to Giving What We Can by Luke Freeman

The Nonlinear Library

Play Episode Listen Later May 23, 2022 2:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Funds donation platform is moving to Giving What We Can, published by Luke Freeman on May 23, 2022 on The Effective Altruism Forum. Since late 2021, Effective Altruism Funds (EA Funds) and Giving What We Can (GWWC) have been going through a restructure. EA Funds has moved towards focussing primarily on grantmaking, and GWWC has taken over the management of the donation platform. In April 2022 we soft-launched a rebranded GWWC donation portal. Over the coming months, the donation-specific functionality of funds.effectivealtruism.org will be retired and redirected to GWWC's version of the donation platform (the pages related to the four EA Funds and grantmaking will continue on funds.effectivealtruism.org, and the homepage will become more grantee-focused). A new GWWC website is currently under development, which will include a fully integrated donation experience (as well as improvements to our pledge dashboard and signup process). Alongside the donation platform, GWWC will continue to support the donor lottery, EffectiveCrypto.org, and the other fundraising-related activities formerly run by EA Funds, while maintaining our existing work promoting effective giving (e.g. the pledge, community, guides, talks, marketing campaigns and research). EA Funds will continue to manage the grantmaking activities of its four Funds and will at some point post an update about its plans moving forward, including some of the reasoning behind this restructure. GWWC has also recently posted an update about our strategy, which is very relevant to this decision. We will be consulting with donors, stakeholders and the broader community about the future of the donation platform and how we can best support effective giving within the community. Please don't hesitate to get in touch with any feedback, suggestions, requests, or concerns. We look forward to this next chapter and are excited to continue our mission to create a world where giving effectively and significantly is a cultural norm! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - "Big tent" effective altruism is very important (particularly right now) by Luke Freeman

The Nonlinear Library

Play Episode Listen Later May 20, 2022 7:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Big tent" effective altruism is very important (particularly right now), published by Luke Freeman on May 20, 2022 on The Effective Altruism Forum. This August, when Will MacAskill launches What We Owe The Future, we will see a spike of interest in longtermism and effective altruism more broadly. People will form their first impressions – and these will be hard to shake. After hearing of these ideas for the first time, they will be wondering things like: Who are these people? (Can I trust them? Are they like me? Do they have an ulterior agenda?) What can I do (literally right now, and also how might it affect my decisions over time)? What does this all mean for me and my life? If we're lucky, they'll investigate these questions. The answers they get matter (and so does their experience finding those answers). I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or we can become a movement of people who appear united by a shared commitment to using reason and evidence to do the most good we can. In the former case, I expect we'd become a much smaller group: easier to coordinate, but also more easily dismissed. People might see us as a bunch of nerds who have read too many philosophy papers and who are out of touch with the real world. In the latter case, I'd expect us to become a much bigger group. I'll admit that it's also a group that's harder to organise (people are coming at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I'd bet on the latter option. I don't believe we can – nor should – simply tinker on the margins forever, nor try to act as a "shadowy cabal". As we grow, we will start pushing for bigger and more significant changes, and people will notice. We've already seen this with the increased media coverage of things like political campaigns and prominent people who are seen to be EA-adjacent. A lot of these first impressions we won't be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our "shop fronts" (e.g. prominent organisations, local and university groups, conferences etc.). I recently had a pretty disheartening exchange with a new GWWC member who'd started to help run a local group and felt "discouraged and embarrassed" at an EAGx conference. They left feeling like they weren't earning enough to be "earning to give" and that they didn't belong in the community if they weren't doing direct work (or didn't have an immediate plan to drop everything and change). They said this "poisoned" their interest in EA. Experiences like this aren't always easy to prevent, but it's worth trying. At Giving What We Can, we are aware that we are one of these "shop fronts". So we're currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples: We're focusing on providing easily legible options (e.g. larger organisations with an understandable mission and strong track record, instead of more speculative small grants that foundations are better placed to make) and easier decisions (e.g. 
"I want to help people now" or "I want to help future generations"). We're also cautious about how we talk about The Giving What We Can Pledge to ensure that it's framed as an invitation for those who want it and not an admonition of those for whom it's not the right fit. We're working to ensure that people who first come across EA via effective giving can find their way to the actions that best fit them (e.g. by introducing them to the broader EA community). We often cros...

The Nonlinear Library
EA - How many people have heard of effective altruism? by David Moss

The Nonlinear Library

Play Episode Listen Later May 20, 2022 40:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people have heard of effective altruism?, published by David Moss on May 20, 2022 on The Effective Altruism Forum. This post reports the results of a survey we ran in April 2022 investigating how many people had heard of 'effective altruism' in a large (n=6130) sample, weighted to be representative of the US general population. In subsequent posts in this series, we will be reporting on findings from this survey about where people first hear about effective altruism and how positive or negative people's impressions are of effective altruism. This survey replicates and extends a survey we ran in conjunction with CEA in early 2021, which focused only on US students. Because that survey was not representative, we think that these new results offer a significant advance in estimating how many people in the US population have heard of EA, and in particular sub-groups like students and even students at top-ranked universities.

Summary

After applying a number of checks (described below), we classified individuals as having heard of effective altruism using both a 'permissive' standard and a more conservative 'stringent' standard, based on their explanations of what they understand 'effective altruism' to mean. We estimate that 6.7% of the US adult population have heard of effective altruism using our permissive standard and 2.6% according to our more stringent standard. We also identified a number of differences across groups:
• Men (7.6% permissive, 3.0% stringent) were more likely to have heard of effective altruism than women (5.8% permissive, 2.1% stringent).
• Among students specifically, we estimated 7% had heard of EA (according to the permissive standard) and 2.8% (according to the stringent standard). However, students from top-50-ranked universities seemed more likely to have heard of EA (7.9% permissive, 4.1% stringent). We also found that students at top-15 universities were even more likely to have heard of EA, though this was based on a small sample size.
• Younger (18-24) people seem somewhat less likely to have heard of effective altruism than older (25-44) people, though the pattern is complicated. The results nevertheless suggest that EA's skew towards younger people cannot simply be explained by higher rates of exposure.
• Higher levels of education were also strongly associated with being more likely to have heard of EA, with 11.7% of those with a graduate degree having heard of it (permissive standard) compared to 9.2% of college graduates and 3.7% of high school graduates.
• We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent).
• Humanities students (or graduates) seem more likely to have heard of EA than people from other areas of study.
We also estimated the percentages that had heard of various EA and EA-adjacent figures and organisations, including:
• Peter Singer: 11.2%
• William MacAskill: 2.1%
• GiveWell: 7.8%
• Giving What We Can: 4.1%
• Open Philanthropy: 3.6%
• 80,000 Hours: 1.3%

Why it matters how many people have heard of EA

Knowing how many people have already encountered EA is potentially relevant to assessing how far we should scale up (or scale down) outreach efforts. This may apply to particular target groups (e.g. 
students at top universities), as well as the total population. Knowing how exposure to effective altruism differs across groups could highlight who our outreach is missing. Our outreach could simply be failing to reach certain groups. Most people in the EA community do not seem to first hear about EA through particularly direct, targeted outreach (only 7.7% first hear from an EA group, for example), but rather through mo...

The Nonlinear Library
EA - Have You Ever Doubted Whether You're Good Enough To Pursue Your Career? by lynettebye

The Nonlinear Library

Play Episode Listen Later May 11, 2022 13:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have You Ever Doubted Whether You're Good Enough To Pursue Your Career?, published by lynettebye on May 11, 2022 on The Effective Altruism Forum. This post is cross-posted on my blog. People often look to others that they deem particularly productive and successful and come up with (often fairly ungrounded) guesses for how these people accomplish so much. Instead of guessing, I want to give a peek behind the curtain. I interviewed eleven people I thought were particularly successful, relatable, or productive. We discussed topics ranging from productivity to career exploration to self-care. The Peek behind the Curtain interview series is meant to help dispel common myths and provide a variety of takes on success and productivity from real people. To that end, I've grouped responses on common themes to showcase a diversity of opinions on these topics. This first post covers "Have you ever doubted whether you're good enough to pursue your career?" and other personal struggles. My guests include:
Abigail Olvera was a U.S. diplomat last working at the China Desk. Abi was formerly stationed at the US Embassies in Egypt and Senegal and holds a Master's in Global Affairs from Yale University. Full interview.
Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, where she worked on a framework for estimating when transformative AI may be developed, as well as various cause prioritization and worldview diversification projects. Ajeya received a B.S. in Electrical Engineering and Computer Science from UC Berkeley. Full interview.
Ben Garfinkel was a research fellow at the Future of Humanity Institute at the time of the interview. He is now the Acting Director of the Centre for the Governance of AI. Ben earned a degree in Physics and in Mathematics and Philosophy from Yale University, before deciding to study for a DPhil in International Relations at the University of Oxford. Full interview.
Daniel Ziegler researched AI safety at OpenAI. He has since left to do AI safety research at Redwood Research. Full interview.
Eva Vivalt did an Economics Ph.D. and Mathematics M.A. at the University of California, Berkeley after a master's in Development Studies at Oxford University. She then worked at the World Bank for two years and founded AidGrade before finding her way back to academia. Full interview.
Gregory Lewis is a DPhil Scholar at the Future of Humanity Institute, where he investigates long-run impacts and potential catastrophic risk from advancing biotechnology. Previously, he was an academic clinical fellow in public health medicine, and before that a junior doctor. He holds a master's in public health and a medical degree, both from Cambridge University. Full interview.
Helen Toner is Director of Strategy at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown. Full interview not available.
Jade Leung is Governance Lead at OpenAI. She was the inaugural Head of Research & Partnerships with the Centre for the Governance of Artificial Intelligence (GovAI), housed at Oxford's Future of Humanity Institute. She completed her DPhil in AI Governance at the University of Oxford and is a Rhodes scholar. Full interview. 
Julia Wise serves as a contact person for the effective altruism community and helps local and online groups support their members. She serves on the board of GiveWell and writes about effective altruism at Giving Gladly. She was president of Giving What We Can from 2017 to 2020. Before joining CEA, Julia was a social worker, and studied sociology at Bryn Mawr College. Full interview.
Michelle Hutchinson holds a PhD in Philosophy from the University of Oxford, where her thesis ...

The Nonlinear Library
EA - Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) by Julia Wise

The Nonlinear Library

Play Episode Listen Later May 5, 2022 3:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety), published by Julia Wise on May 5, 2022 on The Effective Altruism Forum. Crossposted from Otherwise. This is the story of how I started to care about AI risk. It's far from an ideal decision-making process, but I wanted to try to spell out the untidy reality. I first learned about the idea of AI risk by reading a lot of LessWrong in the summer of 2011. I didn't like the idea of directing resources toward it. I didn't spell out my reasons to myself at the time, but here's what I think was going on under the surface:
• I was already really dedicated to global health as a cause area, and didn't want competition with that.
• The concrete thing you could do about AI risk seemed to be "donate to MIRI," and I didn't understand what MIRI was doing or how it was going to help.
• These people all seemed to be California tech guys, and that wasn't my culture.
My explicit thoughts were something like:
• Well yeah, I can see how misaligned AI might be the end of everything.
• But maybe that wouldn't be so bad; seems like there's a lot of suffering in the world.
• Anyway, I don't know what we're really going to do about it.
In 2017, a coworker/friend who had worked at an early version of MIRI talked to some of her old friends and got particularly worried about short AI timelines. And seeing her level of concern clicked with me. She wasn't a California tech guy; she was a former civil rights lawyer from Detroit. She was a Giving What We Can member. She felt like My People. And I started to take it seriously. I started to feel viscerally that it could be very bad for everything I cared about if we developed superhuman AI and we weren't ready. Once I started caring about this area a lot, I took a fresh look around at what might be done about it. In the time since I'd first encountered the idea, more people had also started taking it seriously. Now there were more projects, like AI policy work, that I found easier to comprehend. Two other things shifted over time:
• My concern about people and animals having net-negative lives has been related to what's happening with my own depression. My concern is a lot stronger when I'm doing worse personally.
• Once I had children, I had a gut-level feeling that it was extremely important that they have long and healthy lives.
Changing my beliefs didn't mean there were especially good actions to take. Once I changed my view on AI safety I was more willing to donate to that area, but a lot of people had the same idea, and there wasn't/isn't a lot of obvious work that wasn't already funded. So I've continued donating to a mix of global health (which I still really value) and EA community-building. I was already doing cause-general work and didn't think I could be more useful in direct work, but I started to encourage other people to consider work on global catastrophic risks. Reflections now: What subculture you belong to doesn't mean much about how right you are about something. Subcultures / echo chambers develop ideas different from the mainstream, some of which will be valuable and many of which will be pointless or harmful. (LessWrong was also very into cryonics at the time, and I think it's right for that idea to get a lot less attention than AI safety.) 
One downside of a homogeneous culture is that other people may bounce off for tribalistic reasons:
• Because you don't share the same concerns, and don't speak to the things they care about.
• Because they're put off in some basic social or demographic way, and never seriously listen to you in the first place.
When I think about what could have alerted me that my thinking was driven by group identity more than by logic, what comes to mind is the feeling of annoyance I had about "AI people." Thanks for listening. To h...

The Nonlinear Library
EA - Notes From a Pledger by Justis

The Nonlinear Library

Play Episode Listen Later Apr 30, 2022 5:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes From a Pledger, published by Justis on April 30, 2022 on The Effective Altruism Forum. This post is a response to Jeff's post here, and the genre of post it represents. I think Jeff's post is valid and valuable, and that thinking through this sort of thing is a good idea. I also think that, necessarily, personal testimonies are going to be more common from more engaged EAs. So I'd like to give the perspective of a GWWC pledge taker more toward the periphery. N=1, so have your salt shaker ready!

EA status is unimportant at a certain distance

I've done freelance editing work for EAs and EA-related organizations for over five years, including (at various times) CEA, LessWrong, AI Impacts, BERI, PIE, and probably others I'm forgetting. There have been times - the longest was a little under a year - when this editing work was my primary source of income. I read a lot of EA content and occasionally wade into the discourse. So among non-inner-ring EAs, I think I'm probably unusually engaged by most metrics, perhaps among the most engaged 10% in the reference class of people who might skim the EA newsletter now and then. All this being said, my relative status as an EA is just not very important to me at all. I took the Giving What We Can pledge several years ago. I've been donating 10% of my income to the Against Malaria Foundation since then. I care approximately 0 if EA bigwigs think I'm a bit dim for this decision, or if they think "ok well, a direct worker is worth about 30 Justis equivalents on the margin." My grandparents were religious and tithed; it feels nice to "do my part" not in an abstract EA community cred sense but in a vague "being a morally decent person" sense, and no amount of focus on direct work by people across the country/world is likely to make me feel inadequate or rethink this. Really! I will maintain my pledge with a similar level of pride and joy no matter what the official recommendation to current Yale students in EA student groups is (not to undermine them - they are very important, just approximately irrelevant to my life). I am in Florida. Most of my friends work at restaurants or for the state government. Sure, it feels nice when people across the country want to include me or rank me well, but it's not in a crucial spot on my hierarchy of needs. I suspect the same is true for most pledge-takers.

It's possible to regard EA roles as desirable without regarding them as morally necessary

I've applied for a few jobs in EA over the years. I didn't get them. This was painful. In one case I was doing 25+ hours a week of freelance work for an org for several months; it went really well, and they put up a job posting with precisely my current duties and invited me to apply, then hired someone else. This was very painful, and strongly discouraged me from applying to full-time EA roles in the future. However, at no point did I think "oh no, now I can't have all the impact I might have had counterfactually, so I'm a bad/worthless person." Here I think it's time for another interlude: I've got moderate OCD, and a lot of that manifests as scrupulosity. My most common negative emotion is guilt. I'm confident that I worry about being somehow ineffably "bad" significantly more than the average human being. 
I read Peter Singer in high school and it rocked my world - I got on the phone with Oxfam to donate something, anything, but realized I didn't actually have any money I'd earned myself yet and hung up. I'm confident that I am among the most vulnerable people to feeling distress from the notion that I've not lived up to an ethical obligation. But no, none of my EA-role-failure pain has really been in that direction. I am giving 10% of my income. This is much more than basically anyone I know, including virtually all my friends and all my family. I will proba...

The Nonlinear Library
EA - What I learnt from attending EAGx Oxford (as someone who's new to EA) by Olivia Addy

The Nonlinear Library

Play Episode