What if there's something it's like to be a shrimp — or a chatbot?

For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we're creating?

We've pulled together clips from past conversations with researchers and philosophers who've spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.

Links to learn more and full transcript: https://80k.info/nhs

Chapters:
Cold open (00:00:00)
Luisa's intro (00:00:57)
Robert Long on what we should picture when we think about artificial sentience (00:02:49)
Jeff Sebo on what the threshold is for AI systems meriting moral consideration (00:07:22)
Meghan Barrett on the evolutionary argument for insect sentience (00:11:24)
Andrés Jiménez Zorrilla on whether there's something it's like to be a shrimp (00:15:09)
Jonathan Birch on the cautionary tale of newborn pain (00:21:53)
David Chalmers on why artificial consciousness is possible (00:26:12)
Holden Karnofsky on how we'll see digital people as... people (00:32:18)
Jeff Sebo on grappling with our biases and ignorance when thinking about sentience (00:38:59)
Bob Fischer on how to think about the moral weight of a chicken (00:49:37)
Cameron Meyer Shorb on the range of suffering in wild animals (01:01:41)
Sébastien Moro on whether fish are conscious or sentient (01:11:17)
David Chalmers on when to start worrying about artificial consciousness (01:16:36)
Robert Long on how we might stumble into causing AI systems enormous suffering (01:21:04)
Jonathan Birch on how we might accidentally create artificial sentience (01:26:13)
Anil Seth on which parts of the brain are required for consciousness (01:32:33)
Peter Godfrey-Smith on uploads of ourselves (01:44:47)
Jonathan Birch on treading lightly around the “edge cases” of sentience (02:00:12)
Meghan Barrett on whether brain size and sentience are related (02:05:25)
Lewis Bollard on how animal advocacy has changed in response to sentience studies (02:12:01)
Bob Fischer on using proxies to determine sentience (02:22:27)
Cameron Meyer Shorb on how we can practically study wild animals' subjective experiences (02:26:28)
Jeff Sebo on the problem of false positives in assessing artificial sentience (02:33:16)
Stuart Russell on the moral rights of AIs (02:38:31)
Buck Shlegeris on whether AI control strategies make humans the bad guys (02:41:50)
Meghan Barrett on why she can't be totally confident about insect sentience (02:47:12)
Bob Fischer on what surprised him most about the findings of the Moral Weight Project (02:58:30)
Jeff Sebo on why we're likely to sleepwalk into causing massive amounts of suffering in AI systems (03:02:46)
Will MacAskill on the rights of future digital beings (03:05:29)
Carl Shulman on sharing the world with digital minds (03:19:25)
Luisa's outro (03:33:43)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Introduction

I have been writing posts critical of mainstream EA narratives about AI capabilities and timelines for many years now. Compared to the situation when I wrote my posts in 2018 or 2020, LLMs now dominate the discussion, and timelines have also shrunk enormously. The ‘mainstream view' within EA now appears to be that human-level AI will be arriving by 2030, even as early as 2027. This view has been articulated by 80,000 Hours, on the forum (though see this excellent piece arguing against short timelines), and in the highly engaging science fiction scenario of AI 2027. While my article is directed generally against all such short-horizon views, I will focus on responding to relevant portions of the article ‘Preparing for the Intelligence Explosion' by Will MacAskill and Fin Moorhouse.

Rates of Growth

The authors summarise their argument as follows: Currently, total global research effort [...]

---

Outline:
(00:11) Introduction
(01:05) Rates of Growth
(04:55) The Limitations of Benchmarks
(09:26) Real-World Adoption
(11:31) Conclusion

---

First published: May 2nd, 2025

Source: https://forum.effectivealtruism.org/posts/meNrhbgM3NwqAufwj/why-i-am-still-skeptical-about-agi-by-2030

---

Narrated by TYPE III AUDIO.
"We are aiming for a place where we can decouple the scorecard from our worthiness. It's of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that's where we run into trouble." —Hannah BoettcherWhat happens when your desire to do good starts to undermine your own wellbeing?Over the years, we've heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today's episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.Check out the full transcript and links to learn more: https://80k.info/mhIf you're dealing with your own mental health concerns, here are some resources that might help:If you're feeling at risk, try this for the the UK: How to get help in a crisis, and this for the US: National Suicide Prevention Lifeline.The UK's National Health Service publishes useful, evidence-based advice on treatments for most conditions.Mental Health Navigator is a service that simplifies finding and accessing mental health information and resources all over the world — built specifically for the effective altruism communityWe recommend this summary of treatments for depression, this summary of treatments for anxiety, and Mind Ease, an app created by Spencer Greenberg.We'd also recommend It's Not Always Depression by Hilary Hendel.Some on our team have found Overcoming Perfectionism and Overcoming Low Self-Esteem very helpful.And there's even more resources listed on these episode pages: Having a successful career with depression, anxiety, and imposter syndrome, Hannah Boettcher on the mental health challenges that come with trying to have a big impact, Tim LeBon on how altruistic perfectionism is self-defeating.Chapters:Cold open (00:00:00)Luisa's intro (00:01:32)80,000 Hours' former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)Author Cal Newport on making time to be alone with our thoughts (00:36:03)80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)Mercy For Animals' CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)80,000 Hours' former CEO Howie on balancing a job and mental illness 
(01:37:12)Therapist Hannah Boettcher on how self-compassion isn't self-indulgence (01:40:39)Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)Luisa's outro (01:46:10)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore
This is a Forum Team crosspost from Substack. Matt would like to add: "Epistemic status = incomplete speculation; posted here at the Forum team's request"

When you ask prominent Effective Altruists about Effective Altruism, you often get responses like these: For context, Will MacAskill and Holden Karnofsky are arguably, literally the number one and two most prominent Effective Altruists on the planet. Other evidence of their ~spouses' personal involvement abounds, especially Amanda's. Now, perhaps they've had changes of heart in recent months or years – and they're certainly entitled to have those – but being evasive and implicitly disclaiming mere knowledge of EA is comically misleading and non-transparent. Calling these statements lies seems within bounds for most.[1] This kind of evasiveness around one's EA associations has been common since the collapse of FTX in 2022 (which, for yet more context, was a major EA funder that year and [...]

---

Outline:
(03:32) Why can't EAs talk about EA like normal humans (or even normal executives)?
(05:54) Coming of age during the Great Awokening
(07:15) Bad Comms Advice
(08:22) Not understanding how words work (coupled with motivated reasoning)
(11:05) Trauma

The original text contained 5 footnotes which were omitted from this narration.

---

First published: April 8th, 2025

Source: https://forum.effectivealtruism.org/posts/6NCYo7RFYfkEjLAtn/ea-adjacency-as-ftx-trauma

---

Narrated by TYPE III AUDIO.
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That's the future Will MacAskill — philosopher and researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

These highlights are from episode #213 of The 80,000 Hours Podcast: Will MacAskill on AI causing a “century in a decade” — and how we're completely unprepared, and include:

Rob's intro (00:00:00)
A century of history crammed into a decade (00:00:17)
What does a good future with AGI even look like? (00:04:48)
AI takeover might happen anyway — should we rush to load in our values? (00:09:29)
Lock-in is plausible where it never was before (00:14:40)
ML researchers are feverishly working to destroy their own power (00:20:07)
People distrust utopianism for good reason (00:24:30)
Non-technological disruption (00:29:18)
The 3 intelligence explosions (00:31:10)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar. We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window. More details on our website.

Why we exist

We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don't yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared. Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating [...]

---

Outline:
(00:34) Why we exist
(01:57) Research
(02:00) Research agendas
(03:13) Recent work
(03:34) Approach
(03:37) Comparison to other efforts
(04:14) Principles
(05:35) What you can do
(05:39) Engage with our research
(06:08) Apply to work with us
(06:25) Funding

The original text contained 1 footnote which was omitted from this narration.

---

First published: March 11th, 2025

Source: https://forum.effectivealtruism.org/posts/6JnTAifyqz245Kv7S/forethought-a-new-ai-macrostrategy-group

---

Narrated by TYPE III AUDIO.
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That's the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

Links to learn more, highlights, video, and full transcript.

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we'll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we'll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he's never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn't exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.

In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we'd face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:

Why leading AI safety researchers now think there's dramatically less time before AI is transformative than they'd previously thought
The three different types of intelligence explosions that occur in order
Will's list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
Ways AI could radically improve human coordination and decision making
Why we should aim for truly flourishing futures, not just avoiding extinction

Chapters:
Cold open (00:00:00)
Who's Will MacAskill? (00:00:46)
Why Will now just works on AGI (00:01:02)
Will was wrong(ish) on AI timelines and hinge of history (00:04:10)
A century of history crammed into a decade (00:09:00)
Science goes super fast; our institutions don't keep up (00:15:42)
Is it good or bad for intellectual progress to 10x? (00:21:03)
An intelligence explosion is not just plausible but likely (00:22:54)
Intellectual advances outside technology are similarly important (00:28:57)
Counterarguments to intelligence explosion (00:31:31)
The three types of intelligence explosion (software, technological, industrial) (00:37:29)
The industrial intelligence explosion is the most certain and enduring (00:40:23)
Is a 100x or 1,000x speedup more likely than 10x? (00:51:51)
The grand superintelligence challenges (00:55:37)
Grand challenge #1: Many new destructive technologies (00:59:17)
Grand challenge #2: Seizure of power by a small group (01:06:45)
Is global lock-in really plausible? (01:08:37)
Grand challenge #3: Space governance (01:18:53)
Is space truly defence-dominant? (01:28:43)
Grand challenge #4: Morally integrating with digital beings (01:32:20)
Will we ever know if digital minds are happy? (01:41:01)
“My worry isn't that we won't know; it's that we won't care” (01:46:31)
Can we get AGI to solve all these issues as early as possible? (01:49:40)
Politicians have to learn to use AI advisors (02:02:03)
Ensuring AI makes us smarter decision-makers (02:06:10)
How listeners can speed up AI epistemic tools (02:09:38)
AI could become great at forecasting (02:13:09)
How not to lock in a bad future (02:14:37)
AI takeover might happen anyway — should we rush to load in our values? (02:25:29)
ML researchers are feverishly working to destroy their own power (02:34:37)
We should aim for more than mere survival (02:37:54)
By default the future is rubbish (02:49:04)
No easy utopia (02:56:55)
What levers matter most to utopia (03:06:32)
Bottom lines from the modelling (03:20:09)
People distrust utopianism; should they distrust this? (03:24:09)
What conditions make eventual eutopia likely? (03:28:49)
The new Forethought Centre for AI Strategy (03:37:21)
How does Will resist hopelessness? (03:50:13)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
Effective altruism — the brand of philanthropy where you try to do the most good for the greatest number of people with the resources you have — has gotten a bad rap lately due to its association with Sam Bankman-Fried, the disgraced crypto wunderkind who was once hailed as the movement's poster child. But is the bad press fair? Today, we explore that question by revisiting our conversation with Will MacAskill, Oxford professor of philosophy, leading figure in the movement, and author of "What We Owe the Future."
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates on the EA catastrophic risk landscape, published by Benjamin Todd on May 6, 2024 on The Effective Altruism Forum.

Around the end of Feb 2024 I attended the Summit on Existential Risk and EAG: Bay Area (GCRs), during which I did 25+ one-on-ones about the needs and gaps in the EA-adjacent catastrophic risk landscape, and how they've changed. The meetings were mostly with senior managers or researchers in the field who I think are worth listening to (unfortunately I can't share names). Below is how I'd summarise the main themes in what was said. If you have different impressions of the landscape, I'd be keen to hear them.

There's been a big increase in the number of people working on AI safety, partly driven by a reallocation of effort (e.g. Rethink Priorities starting an AI policy think tank); and partly driven by new people entering the field after its newfound prominence.

Allocation in the landscape seems more efficient than in the past - it's harder to identify especially neglected interventions, causes, money, or skill-sets. That means it's become more important to choose based on your motivations. That said, here are a few ideas for neglected gaps:

Within AI risk, it seems plausible the community is somewhat too focused on risks from misalignment rather than mis-use or concentration of power.

There's currently very little work going into issues that arise even if AI is aligned, including the deployment problem, Will MacAskill's "grand challenges" and Lukas Finnveden's list of project ideas. If you put significant probability on alignment being solved, some of these could have high importance too; though most are at the stage where they can't absorb a large number of people. Within these, digital sentience was the hottest topic, but to me it doesn't obviously seem like the most pressing of these other issues. (Though doing field building for digital sentience is among the more shovel ready of these ideas.)

The concrete entrepreneurial idea that came up the most, and seemed most interesting to me, was founding orgs that use AI to improve epistemics / forecasting / decision-making (I have a draft post on this - comments welcome).

Post-FTX, funding has become even more dramatically concentrated under Open Philanthropy, so finding new donors seems like a much bigger priority than in the past. (It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.) In addition, donors have less money than in the past, while the number of opportunities to fund things in AI safety has increased dramatically, which means marginal funding opportunities seem higher value than in the past (as a concrete example, nuclear security is getting almost no funding). Both points mean efforts to start new foundations, fundraise and earn to give all seem more valuable compared to a couple of years ago.

Many people mentioned comms as the biggest issue facing both AI safety and EA. EA has been losing its battle for messaging, and AI safety is in danger of losing its battle too (with both a new powerful anti-regulation tech lobby and the more left-wing AI ethics scene branding it as sci-fi, doomer, cultish and in bed with labs).

People might be neglecting measures that would help in very short timelines (e.g.
transformative AI in under 3 years), though that might be because most people are unable to do much in these scenarios. Right now, directly talking about AI safety seems to get more people in the door than talking about EA, so some community building efforts have switched to that. There's been a recent influx in junior people interested in AI safety, so it seems plausible the biggest bottleneck again lies with mentoring & management, rather than recruiting more junior people. Randomly: there seems to have been a trend of former le...
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees [...]

---

Outline:
(02:21) No door to door atheists
(04:51) What went wrong here?
(08:40) Motivation gaps in AI x-risk
(10:59) EA gap analysis
(15:12) Counter-motivations
(25:49) You can't rely on ingroup criticism
(29:10) How to respond to motivation gaps

---

First published: April 22nd, 2024

Source: https://forum.effectivealtruism.org/posts/CfBNdStftKGc863o6/motivation-gaps-why-so-much-ea-criticism-is-hostile-and-lazy

Linkpost URL: https://titotal.substack.com/p/motivation-gaps-why-so-much-ea-criticism

---

Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Motivation gaps: Why so much EA criticism is hostile and lazy, published by titotal on April 22, 2024 on The Effective Altruism Forum.

Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees the whole AI safety field as racist nutjobs.

In response, @JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors? There are a few answers to this. Good writing is hard regardless of the subject matter. More inflammatory rhetoric gets more clicks, shares and discussion. EA figures have been involved in bad things (like SBF's fraud), so nasty words in response are only to be expected.

I think there's a more interesting explanation though, and it has to do with motivations. I think the average EA-critical person doesn't hate EA, although they might dislike it. But it takes a lot of time and effort to write an article and have it published in TIME magazine. If Leif Wenar didn't hate EA, he wouldn't have bothered to write the article.

In this article, I'm going to explore the concept of motivation gaps, mainly using the example of AI x-risk, because the gaps are particularly stark there. I'm going to argue that for certain causes, the critiques being hostile or lazy is the natural state of affairs, whether or not the issue is actually correct, and that you can't use the unadjusted quality of each side's critiques to judge an issue's correctness.

No door to door atheists

Disclaimer: These next sections contain an analogy between logical reasoning about religious beliefs and logical reasoning about existential risk. It is not an attempt to smear EA as a religion, nor is it an attack on religion.

Imagine a man, we'll call him Dave, who, for whatever reason, has never once thought about the question of whether God exists. One day he gets a knock on his door, and encounters two polite, well dressed and friendly gentlemen who say they are spreading the word about the existence of God and the Christian religion. They tell him that a singular God exists, and that his instructions for how to live life are contained within the Holy Bible. They have glossy brochures, well-prepared arguments and evidence, and represent a large organisation with a significant following and social backing by many respected members of society. He looks at their website and finds that, wow, a huge number of people believe this, there is a huge field called theology explaining why God exists, and some of the smartest people in history have believed it as well. Dave is impressed, but resolves to be skeptical.
He takes their information and informs them that while he finds them convincing, he wants to hear the other side of the story as well. He tells them that he'll wait for the atheist door-to-door knockers to come and make their case, so he can decide for himself. Dave waits for many months, but to his frustration, no atheists turn up. Another point for the Christians. He doesn't give up though, and looks online, and finds the largest atheist forum he can find, r/atheism. Dave is shoc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should the EA community learn from the FTX / SBF disaster? An in-depth discussion with Will MacAskill on the Clearer Thinking podcast, published by spencerg on April 16, 2024 on The Effective Altruism Forum. In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and what future directions for the Effective Altruism community and his own research Will is most enthusiastic about: https://podcast.clearerthinking.org/episode/206/will-macaskill-what-should-the-effective-altruism-movement-learn-from-the-sbf-ftx-scandal Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick Update on Leaving the Board of EV, published by Rebecca Kagan on April 3, 2024 on The Effective Altruism Forum.

A brief and belated update: When I resigned from the board of EV US last year, I was planning on writing about that decision. But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me, and I believed it wouldn't make a difference.

However, I want to make it clear that I resigned last year due to significant disagreements with the board of EV and EA leadership, particularly concerning their actions leading up to and after the FTX crisis. While I certainly support the boards' decision to pay back the FTX estate, spin out the projects as separate organizations, and essentially disband EV, I continue to be worried that the EA community is not on track to learn the relevant lessons from its relationship with FTX.

Two things that I think would help (though I am not planning to work on either myself):

EA needs an investigation, done externally and shared publicly, on mistakes made in the EA community's relationship with FTX.[1] I believe there were extensive and significant mistakes made which have not been addressed. (In particular, some EA leaders had warning signs about SBF that they ignored, and instead promoted him as a good person, tied the EA community to FTX, and then were uninterested in reforms or investigations after the fraud was revealed). These mistakes make me very concerned about the amount of harm EA might do in the future.

EA also needs significantly more clarity on who, if anyone, "leads" EA and what they are responsible for. I agree with many of Will MacAskill's points here and think confusion on this issue has indirectly resulted in a lot of harm.

CEA is a logical place to house both of these projects, though I also think leaders of other EA-affiliated orgs, attendees of the Meta Coordination Forum, and some people at Open Philanthropy would also be well-suited to do this work. I continue to be available to discuss my thoughts on why I left the board, or on EA's response to FTX, individually as needed.

[1] Although EV conducted a narrow investigation, the scope was far more limited than what I'm describing here, primarily pertaining to EV's legal exposure, and most results were not shared publicly.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Is anyone in the world being paid to do an independent investigation of how EA handled Sam Bankman-Fried, with respect to "did we screw up" and "is there stuff we should do differently going forward"? Last I heard, literally nobody was doing this and at least some EA leaders were mostly just hoping that SBF gets memoryholed — but maybe I'm out of the loop?

My understanding is that Effective Ventures completed a narrow investigation into this topic in mid-2023, purely looking at legal risk to EV and not at all trying to do a general postmortem for EA or any group of EAs. Is that correct, and have things changed since then?

I saw that Will MacAskill is planning to appear on some podcasts soon to speak about SBF, which seems like great news to me. If I recall correctly, Will previously said that he was going to [...]

---

First published: April 1st, 2024

Source: https://forum.effectivealtruism.org/posts/PAG3DJtoZeGtz488f/why-hasn-t-ea-done-an-sbf-investigation-and-postmortem

---

Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why hasn't EA done an SBF investigation and postmortem?, published by RobBensinger on April 1, 2024 on The Effective Altruism Forum.

Is anyone in the world being paid to do an independent investigation of how EA handled Sam Bankman-Fried, with respect to "did we screw up" and "is there stuff we should do differently going forward"? Last I heard, literally nobody was doing this and at least some EA leaders were mostly just hoping that SBF gets memoryholed - but maybe I'm out of the loop?

My understanding is that Effective Ventures completed a narrow investigation into this topic in mid-2023, purely looking at legal risk to EV and not at all trying to do a general postmortem for EA or any group of EAs. Is that correct, and have things changed since then?

I saw that Will MacAskill is planning to appear on some podcasts soon to speak about SBF, which seems like great news to me. If I recall correctly, Will previously said that he was going to talk about what happened with SBF once EV's narrow investigation was done, but it's now been almost a year since that investigation finished (!). I think it would have been better to speak up way, way sooner, but I'm hopeful that Will will be able to clear up some big chunks of what the heck happened, and that a bunch of other EAs will speak up with their postmortems too, now that SBF's trial and sentencing are complete?

I unfortunately don't know the full list of who should be sharing personal or org-level postmortems on this topic, so I'm forced to single out people like Will whose involvement over the years is public knowledge. Hopefully I'll know who I should be gadflying to share the remaining puzzle pieces once Will and others start sharing some of the first puzzle pieces.

To state the obvious: I'm wary of EAs performatively self-flagellating and accepting more responsibility for the FTX thing than is warranted (given, e.g., that huge numbers of people with a very direct financial incentive in spotting FTX's fraud didn't spot it, so it's obviously not weird if random EAs failed to spot it). I want a concrete understanding of what actually happened, not vague scapegoating or self-flagellation.

But the idea of a basic investigation and postmortem seems like an obvious step to me regardless, and my sense is that there are things we could have done a lot better re SBF (e.g., better spread the word about what happened in the Alameda blow-up, so more people would've been aware of some red flags), even if those things probably wouldn't have prevented the FTX debacle all on their own. So I'd like to hear what's up with all that.

See also CEA's recent piece in the Washington Post. The WaPost piece mostly just seems like EA PR, and I'll be very sad if we stay at that level of vagueness. The piece also (unless I'm misunderstanding something) implies some false things about whether CEA, EV, etc. have ever done an investigation into what happened with an eye toward reviewing (and possibly improving) EA institutions, practices, etc. This doesn't match what I've heard from talking to involved parties, and Oliver Habryka mentions that he's "been shared on documents by CEA employees where the legal investigation was explicitly called out as not being helpful for facilitating a reflection process and institutional reform". (Oliver clarified to me that the document wasn't an official CEA document.)
So the narrow "are we in legal trouble?" investigation EV did last year doesn't seem like it was ever meant to fill the "figure out what happened and whether we should do anything about it, for the sake of ethics and for the sake of furthering our EA work" role. But maybe I'm missing something here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long Reflection Reading List, published by Will Aldred on March 25, 2024 on The Effective Altruism Forum.

This is a reading list on the following cluster of notions: "the long reflection", "the deployment problem", "structural risk", "post AGI governance", "ASI governance", "reflective governance", "metaphilosophy", "AI philosophical competence", "trajectory change", "macrostrategy", "worldview investigations", "grand challenges" and "the political philosophy of AI". I claim that this area outscores regular AI safety on importance[1] while being significantly more neglected (and roughly the same in terms of tractability), making it perhaps the highest priority EA cause area.

I don't claim to be the ideal person to have made this reading list. The story behind how it came about is that two months ago, Will MacAskill wrote: "I think there's a lot of excitement about work in this broad area that isn't yet being represented in places like the Forum. I'd be keen for more people to start learning about and thinking about these issues." Intrigued, I spent some time trying to learn about the issues he was pointing to. I then figured I'd channel the spirit of "EAs should post more summaries and collections": this reading list is an attempt to make the path easier for others to follow. Accordingly, it starts at the introductory level, but by the end the reader will be at the frontier of publicly available knowledge. (The frontier at the time of writing, at least.[2])

Note: in some places where I write "the long reflection," I'm using the term as shorthand to refer to the above cluster of notions.

Intro

Quotes about the long reflection - MichaelA (2020)[3]
The Precipice - Ord (2020)
Just chapter 7, including endnotes.
Beyond Maxipok - good reflective governance as a target for action - Cotton-Barratt (2024)
New Frontiers in Effective Altruism - MacAskill (2024)
This was a talk given at EAG Bay Area 2024. It doesn't appear to be available as a recording yet, but I'll add it if and when it goes up.
Quick take on Grand Challenges - MacAskill (2024)
The part about hiring is no longer relevant, but the research projects MacAskill outlines still give a sense for what good future work on grand challenges / the long reflection might look like.

Criticism of the long reflection idea:
'Long Reflection' Is Crazy Bad Idea - Hanson (2021)
Objections: What about "long reflection" and the division of labor? - Vinding (2022)
Just the highlighted section.
A comment by Wei Dai (2019a)

What might we be aiming for?

Is there moral truth? What should we do if not? What are human values, and how do they fit in?

Moral Uncertainty and the Path to AI Alignment with William MacAskill - AI Alignment Podcast by the Future of Life Institute (2018)
See also Shah (2018)'s summary and commentary.
See also this comment exchange between Michael Aird and Lukas Gloor (2020), which zooms in on the realism vs. antirealism wager and how it relates to the long reflection.
Complexity of value - LessWrong Wiki
Moral ~realism - Cotton-Barratt (2024)
Why should ethical anti-realists do ethics? - Carlsmith (2023)
Coherent extrapolated volition - Arbital

How to think about utopia?

Hedonium and computronium - EA Forum Wiki
Terms that tend to come up in discussions of utopia.
Why Describing Utopia Goes Badly - Karnofsky (2021)
Visualizing Utopia - Karnofsky (2021)
Characterising utopia - Ngo (2020)
Actually possible: thoughts on Utopia - Carlsmith (2021)
Deep Utopia - Bostrom (2024)
(If and when someone writes a summary of this book I'll add it to this reading list.)

Ideally, I would include at this point some readings on how aggregation might work for building a utopia, since this seems like an obvious and important point. For instance, should the light cone be divided such that every person (or every moral patient more broad...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations), published by Chi on March 22, 2024 on The Effective Altruism Forum.

Many kinds of work one could do to make AI go better and a grab-bag of other career considerations

I recently found myself confused about what I'd like to work on. So, I made an overview with the possible options for what to work on to make AI go well. I thought I'd share it in case it's helpful for other people. Since I made this overview for my own career deliberations, it is tailored for myself and not necessarily complete. That said, I tried to be roughly comprehensive, so feel free to point out options I'm missing. I redacted some things but didn't edit the doc in other ways to make it more comprehensible to others. In case you're interested, I explain a lot of the areas in the "Humans in control" and the "Misalignment" worlds here and to some extent here.

What areas could one work on? What endpoints or intermediary points could one aim for?

Note that I redacted a bunch of names in "Who's working on this" just because I didn't want to bother asking them and I wasn't sure they had publicly talked about it yet, not because of anything else. "?" behind a name or org means I don't know if they actually work on the thing (but you could probably find out with a quick google!)

World it helps / The area / Biggest uncertainty / Who's working on this
(Note that this doesn't say anything about the type of work at the moment. For example, I probably should never do MechInterp myself because of personal fit. But I could still think it's good to do something that overall supports MechInterp.)

Humans in control:
ASI governance | human-control. Who is in control of AI, what's the governance structure etc. Digital sentience [...] Is this tractable and is success path-dependent? Will MacAskill, [redacted]?, indirectly: cybersec. folk?, some AI governance work?
Acausal interactions | human-control. Metacognition. Decision theory. Values of future civilisation. SPIs. [redacted]
SPIs for causal interactions | human-control. CLR

Misalignment:
Prevent sign flip and other near misses. Is this a real concern? Nobody?
Acausal interactions | misalignment. Decision theory. Value porosity. Is this tractable? [redacted]? [redacted]?
Reducing conflict-conducive preferences for causal interactions & SPIs | misalignment. CLR

Mainstream AI safety best thing to work on:
Reduction of malevolence in positions of influence through improving awareness (also goes into the "Humans in control" category). [redacted]? Nobody?
Differentially support responsible AI labs. For some of these: Would success be net good or net bad? If good: How good? How high is the penalty for being less neglected?
Influence AI timelines. [redacted], [redacted], [redacted]?, maybe misc. policy people?
AI control (and ideas like paying AIs). Redwood Research
Model capabilities evaluations. METR, Apollo?, maybe AI labs policy teams, maybe misc. other policy people?
Alignment (more comprehensive overview): MechInterp, ELK, (L)AT, Debate, COT oversight, Infrabayesianism, Natural abstractions, Understanding intelligence [...] Overview post on LessWrong
Human epistemics during early AI. ~Forecasting crowd, nobody?
Growing the AI safety and EA community or improving its branding or upskilling people in the community (e.g. fellowships). Constellation, Local groups, CEA, OpenPhilanthropy, …
Improving the AI safety and EA community and culture socially. CEA
Threat modelling, scenario forecasting etc. [redacted], …
Make it harder to steal models. Cybersecurity folk
Regulate Open Source capabilities. Policy folk? Nobody?

What types of work are there?

Which world / Type of work / Broad category of work
Can be in any of the three areas above. Offering 1-1 support (mental, operational, and debugging). Proj...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no massive differences in impact between individuals, published by Sarah Weiler on March 14, 2024 on The Effective Altruism Forum.

Or: Why aiming for the tail-end in an imaginary social impact distribution is not the most effective way to do good in the world

"It is very easy to overestimate the importance of our own achievements in comparison with what we owe others."
attributed to Dietrich Bonhoeffer, quoted in Tomasik 2014(2017)

Summary

In this essay, I argue that it is not useful to think about social impact from an individualist standpoint. I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across all the actors that contribute to the outcomes before any individual action is taken, all the actors that contribute to the outcomes after any individual action is taken, and all the actors that shape the taking of any individual action in the first place.

I raise some concerns around adverse effects of thinking about impact as an attribute that follows a power law distribution and that can be apportioned to individual agents:
Such a narrative discourages actions and strategies that I consider highly important, including efforts to maintain and strengthen healthy communities;
Such a narrative may encourage disregard for common-sense virtues and moral rules;
Such a narrative may negatively affect attitudes and behaviours among elites (who aim for extremely high impact) as well as common people (who see no path to having any meaningful impact); and
Such a narrative may disrupt basic notions of moral equality and encourage a differential valuation of human lives in accordance with the impact potential an individual supposedly holds.

I then reflect on the sensibility and usefulness of apportioning impact to individual people and interventions in the first place, and I offer a few alternative perspectives to guide our efforts to do good effectively. In the beginning, I give some background on the origin of this essay, and in the end, I list a number of caveats, disclaimers, and uncertainties to paint a fuller picture of my own thinking on the topic.

I highly welcome any feedback in response to the essay, and would also be happy to have a longer conversation about any or all of the ideas presented - please do not hesitate to reach out in case you would like to engage in greater depth than a mere Forum comment :)!

Context

I have developed and refined the ideas in the following paragraphs at least since May 2022 - my first notes specifically on the topic were taken after I listened to Will MacAskill talk about "high-impact opportunities" at the opening session of my first EA Global, London 2022. My thoughts on the topic were mainly sparked by interactions with the effective altruism community (EA), either in direct conversations or through things that I read and listened to over the last few years. However, I have encountered these arguments outside EA as well, among activists, political strategists, and "regular folks" (colleagues, friends, family). My journal contains many scattered notes, attesting to my discomfort and frustration with the - in my view, misguided - belief that a few individuals can (and should) have massive amounts of influence and impact by acting strategically.
This text is an attempt to pull these notes together, giving a clear structure to the opposition I feel and turning it into a coherent argument that can be shared with and critiqued by others.

Impact follows a power law distribution: The argument as I understand it

"[T]he cost-effectiveness distributions of the most effective interventions and policies in education, health and climate change, are close to power-laws [...] the top intervention is 2 or almost 3 orders of magni...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI things that are perhaps as important as human-controlled AI (Chi version), published by Chi on March 3, 2024 on The Effective Altruism Forum.

Topic of the post: I list potential things to work on other than keeping AI under human control.

Motivation

The EA community has long been worried about AI safety. Most of the efforts going into AI safety are focused on making sure humans are able to control AI. Regardless of whether we succeed at this, I think there's a lot of additional value on the line.

First of all, if we succeed at keeping AI under human control, there are still a lot of things that can go wrong. My perception is that this has recently gotten more attention, for example here, here, here, and at least indirectly here (I haven't read all these posts, and have chosen them to illustrate that others have made this point purely based on how easily I could find them). Why controlling AI doesn't solve everything is not the main topic of this post, but I want to at least sketch my reasons to believe this.

Which humans get to control AI is an obvious and incredibly important question and it doesn't seem to me like it will go well by default. It doesn't seem like current processes put humanity's wisest and most moral at the top. Humanity's track record at not causing large-scale unnecessary harm doesn't seem great (see factory farming). There is reasonable disagreement on how path-dependent epistemic and moral progress is but I think there is a decent chance that it is very path-dependent.

While superhuman AI might enable great moral progress and new mechanisms for making sure humanity stays on "moral track", superhuman AI also comes with lots of potential challenges that could make it harder to ensure a good future. Will MacAskill talks about "grand challenges" we might face shortly after the advent of superhuman AI here. In the longer-term, we might face additional challenges. Enforcement of norms, and communication in general, might be extremely hard across galactic-scale distances. Encounters with aliens (or even merely humanity thinking they might encounter aliens!) threaten conflict and could change humanity's priorities greatly. And if you're like me, you might believe there's a whole lot of weird acausal stuff to get right. Humanity might make decisions that influence these long-term issues already shortly after the development of advanced AI. It doesn't seem obvious to me at all that a future where some humans are in control of the most powerful earth-originating AI will be great.

Secondly, even if we don't succeed at keeping AI under human control, there are other things we can fight for and those other things might be almost as important or more important than human control. Less has been written about this (although not nothing). My current and historically very unstable best guess is that this reflects an actual lower importance of influencing worlds where humans don't retain control over AIs although I wish there was more work on this topic nonetheless. Justifying why I think influencing uncontrolled AI matters isn't the main topic of this post, but I would like to at least sketch my motivation again.

If there is alien life out there, we might care a lot about how future uncontrolled AI systems treat them. Additionally, perhaps we can prevent uncontrolled AI from having actively terrible values.
And if you are like me, you might believe there are weird acausal reasons to make earth-originating AIs more likely to be a nice acausal citizen. Generally, even if future AI systems don't obey us, we might still be able to imbue them with values that are more similar to ours. The AI safety community is aiming for human control, in part, because this seems much easier than aligning AIs with "what's morally good". But some properties that result in moral good...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being an EA for decades, published by Michelle Hutchinson on February 12, 2024 on The Effective Altruism Forum.

A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come.

Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord both made good on their plans to write books and donate their salaries above £20k[1] a year. And Holly Morgan is who I turned to a couple of weeks ago when I needed help thinking through work stress.

Here's what I wrote speculating about why I might drift away from EA. Note that the email below was written quickly and just trying to gesture at things I might worry about in future, I was paying very little attention to the details. The partner referenced became my husband a year and a half later, and we now have a four year old.

On 10 February 2012 18:14, Michelle Hutchinson wrote:

Writing this was even sadder than I expected it to be.
1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.
2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.
3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).
4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p
5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.
6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.
7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.
8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.
9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.
10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.
11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.

When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...
A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come. Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord [...]

The original text contained 3 footnotes which were omitted from this narration.

---

First published: February 12th, 2024

Source: https://forum.effectivealtruism.org/posts/zEMvHK9Qa4pczWbJg/on-being-an-ea-for-decades

---

Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating Benjamin Lay (died on this day 265 years ago), published by Lizka on February 8, 2024 on The Effective Altruism Forum. Quaker abolitionist Benjamin Lay died exactly 265 years ago today (on February 8, 1759). I'm using the anniversary of his death to reflect on his life and invite you to join me by sharing your thoughts sometime this week. Lay was a radical anti-slavery advocate and an important figure in the Quaker abolitionist movement. He's been described as a moral weirdo; besides viewing slavery as a great sin, he opposed the death penalty, was vegetarian, believed that men and women were equal in the eyes of God, and more. He didn't hide his views and was known for his "guerrilla theater" protests, which included splashing fake blood on slave-owners and forcing people to step over him as they exited a meeting. Expulsion from various communities, ridicule for his beliefs or appearance (he had dwarfism), and the offended sensibilities of those around him didn't seem to seriously slow him down. Consider sharing your thoughts this week (February 8-15)! You could share a post, a Quick Take, or simply comment here. (If you post something, you could also link to this post and invite readers to share their own thoughts.[1]) Here are a few discussion prompts, in case they help (feel free to write about whatever comes to mind, though!): How can we develop the courage to be " morally weird"? How can we avoid missing potential ongoing moral catastrophes (or get more moral clarity)? When are disruptive approaches to moral change or advocacy more useful than "polite" or collaborative ones? (When are they less useful?) In the rest of this post, I share a brief overview of Benjamin Lay's famous protests , life and partnership with Sarah Lay (a respected Quaker minister and fellow abolitionist) , and how their work fits into the broader history of slavery . I should flag that I'm no expert in Lay's life or work - just compiling info from ~a day of reading. Protests against slavery: shocking people into awareness "Over the course of the twenty-seven years that he lived in Pennsylvania, Lay harangued the Philadelphia Quakers about the horrors of slavery at every opportunity, and he did so in dramatic style." Will MacAskill in Chapter Three of What We Owe the Future Lay's famous protests illustrate his "dramatic style" (and how little he cared about the opinion of others). Here are some examples: 1738: At the biggest event of the Philadelphia Yearly Meeting, Lay showed up in a great coat and waited his turn to speak. When the time came, Lay rose and announced in a "booming" voice: "Oh all you Negro masters who are contentedly holding your fellow creatures in a state of slavery, . . . you might as well throw off the plain coat as I do." He then threw off his coat, revealing that he was dressed in a military uniform and holding a sword and a book: "It would be as justifiable in the sight of the Almighty, who beholds and respects all nations and colours of men with an equal regard, if you should thrust a sword through their hearts as I do through this book!" When Lay plunged his sword through the book, it started gushing red liquid. In preparation for the event, Lay had hollowed out the book and inserted an animal bladder filled with bright red pokeberry juice. 
As he finished speaking, he splattered the fake blood on the slave owners present. (Smithsonian and WWOTF) One Sunday morning he stood at a gateway to the Quaker meetinghouse, knowing all Friends would pass his way. He left "his right leg and foot entirely uncovered" and thrust them into the snow. Like the ancient philosopher Diogenes, who also trod barefoot in snow, he again sought to shock his contemporaries into awareness. One Quaker after another took notice and urged him not to expose himself to the freezing col...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Niel Bowerman as the next CEO of 80,000 Hours, published by 80000 Hours on January 31, 2024 on The Effective Altruism Forum. We're excited to announce that the boards of Effective Ventures US and Effective Ventures UK have approved our selection committee's choice of Niel Bowerman as the new CEO of 80,000 Hours. I (Rob Wiblin) was joined on the selection committee by Will MacAskill, Hilary Greaves, Simran Dhaliwal, and Max Daniel. 80,000 Hours is a project of EV US and EV UK, though under Niel's leadership, it expects to be spinning out and creating an independent legal structure, which will involve selecting a new board. We want to thank Brenton Mayer, who has served as 80,000 Hours interim CEO since late 2022, for his dedication and thoughtful management. Brenton expressed enthusiasm about the committee's choice, and he expects to take on the role of chief operations officer, where he will continue to work closely with Niel to keep 80,000 Hours running smoothly. By the end of its deliberations, the selection committee agreed that Niel was the best candidate to be 80,000 Hours' long-term CEO. We think Niel's drive and attitude will help him significantly improve the organisation and shift its strategy to keep up with events in the world. We were particularly impressed by his ability to use evidence to inform difficult strategic decisions and lay out a clear vision for the organisation. Niel started his career as a climate physicist and activist, and he went on to co-found and work at the Centre for Effective Altruism and served as assistant director of the Future of Humanity Institute before coming to 80,000 Hours. The selection committee believes that in the six years since he joined the organisation, Niel developed a deep understanding of its different programmes and the impact they have. He has a history of initiating valuable projects and delegating them to others, a style we think will be a big strength in a CEO. For example, in his role as director of special projects, Niel helped oversee the impressive growth of the 80,000 Hours job board team. It now features about 400 jobs a month aimed at helping people increase the impact of their careers and receives around 75,000 clicks a month; it's helped fill at least 200 roles that we're aware of. Niel has also made substantial contributions to the website, publishing an ahead-of-its-time article about working in US AI policy in January 2019, helping launch the new collection of articles on impactful skills, and authoring a recent newsletter on how to build better habits. In addition, Niel helped the 1on1 team nearly double the typical number of calls completed per year and aided in developing quantitative lead metrics to inform its decisions each week. And he's run organisation-wide projects, such as leading the two-year review for 2021 and 2022. Niel was very forthcoming and candid with the committee about his weaknesses. His focus on getting frank feedback and using it to drive a self-improvement cycle really impressed the selection committee. The committee considered three internal candidates for the CEO role, as well as dozens of other candidates suggested to us and dozens of others who applied for the position. 
In the end, we scored the top candidates on good judgement, inspiringness, social skills, leveraging people well, industriousness and drive, adaptability and resilience, commitment to the mission of 80,000 Hours, and deep understanding of the organisation. Among other things, these scores were based on: input from 80,000 Hours directors on the organisation's general situation, staff surveys, 'take-home' work tests, and a self-assessment of their biggest successes and mistakes. We also consulted, among others, Michelle Hutchinson, the current director of the 1on1 programme, as well as former 8...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Howdy, I'm Elliot, published by Elliot Billingsley on January 6, 2024 on The Effective Altruism Forum. Hi EA community, I'd like to formally introduce myself to this forum, where I've been lurking for a while but have been too timid to post until now, thanks to the encouragement of some. I first heard about EA through the Tim Ferriss Podcast in 2016. I still remember standing on the ferryboat crossing the Bosphorous while listening to Will MacAskill say things that were incredibly obvious, at least after they were heard. In the couple years that followed, I organized a local EA workshop, attended EAGx Berlin, and flew to San Francisco to attend EAG. I got involved with Students for High-Impact Charity, helping out on the periphery. I enjoyed lively conversation with EA Vancouver. And increased the usage of the phrase 'expected value' in daily conversation. That's about it. That was my EA Life Phase I. Half a decade later, I sat down with my wife and child during a Pentathlon in which every day you ask yourself the question: "What is the Most Important Work I can do today?" All of a sudden, it all came back to me. The most important things I can possibly do have quite clearly been described in EA. So I resolved in early 2022 to buckle up and take EA seriously. I honestly wasn't sure what my best option was, so I went with the most inspiring recent topic on the 80k podcast: Andrew Yang's Forward Party. I basically reached out and got named State Lead.I feel my experience with Forward may be a whole 'nuther post so I'll leave it at that. I also engaged in a lot of other ways, in large part thanks to EA Virtual Programs, which I really appreciate. But there's one person who had a huge role in my transition from an EA sleeper cell to a stupidly engaged one. That's Dr. Ben Smith. I swallowed my Ninja Lurker EA Forum personality (Never posts, always votes strongly) in order to write this post, for a specific reason, which I'll share now. Last fall, I launched a coaching practice with the intention of supporting the EA community. I asked some friends and acquaintances to take a chance and try my coaching out, and thank them very much. I now know my coaching helps people. So If I help EAs, I'm helping better, in theory, right? I want to test this theory! I'm going to EAG next month and even have a special cohort designed for attendees. If you're going to EAG, do consider applying, we'd love to have you. So that's my shameless plug. For any of you still reading, I'd like to say thanky (I'm from Texas, that's kind of how my dad used to say 'thank you'). I hope to write here more and learn in this incredible community. Elliot Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Intro: At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts. What I believe: In spite of the name of the episode and the best of my knowledge/understanding a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here's my basic meta-ethical take: * Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but * Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true. Of course the label per se doesn't really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best. Some definitions. Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way). Wikipedia: The robust model of moral realism commits moral realists to three theses: * The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on). * The alethic thesis: Some moral propositions are in fact true. * The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties. And to settle the matter, here's what some LLMs have to say. My prompt: Suppose a person believes the following: 1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but 2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true. Is this person best categorized as a moral realist? ChatGPT: Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do).
However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.Great, so much for trying to fit things into nice little buckets.Claude, fortunately, is more definitive:No, the view described is not best categorized as moral realism. Here's why:Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2). By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.I'm gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems…wrong? Like all the definitions could replace the term “moral fact” or “moral claim” with “normative fact” and “normative claim” but they do not!So I really don't have a great answer to which label works best. One of two ~human level chatbots giving me a definitive-sounding “anti-realism” answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the “realist label,” as well as for “neither” (i.e., ‘secret third thing'). In fact all of these seem pretty similar in terms of argument convincingness/correctness. So, in sum,
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum. Concisely, I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others. Main argument is that AI capabilities are increasing rapidly, we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful. Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue that is written for those without any exposure to the issue. (Including those with no science background.) This book is meant to fill that gap and could be useful outreach or introductory materials. If you have already been following the AI safety issue, there likely isn't a lot that is new for you. So, this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue. (although, you may still like the framing) It's available on numerous Amazon marketplaces. Audiobook and Hardcover options to follow. It was a hard journey. I hope it is of value to the community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
In this episode, John and Lawrence respond to philosopher Will MacAskill, for many the leading figure of Effective Altruism, on the subject of antinatalism. Is antinatalism worth taking seriously? Would human extinction be bad? And are antinatalists welcome in the EA community? Listen to find out! TIMESTAMPS00:00 Intro02:04 Purpose of this episode05:29 Our thoughts on Will MacAskill13:04 "Too nihilistic and divorced from humane values" comment32:03 "Positively glad that the world exists" comment38:09 Question 1: Does MacAskill take human extinction seriously enough? 56:01 Question 2: Are antinatalists welcome in the EA community?1:10:36 A positive note to end on1:18:05 OutroANTINATALIST ADVOCACYNewsletter: https://antinatalistadvocacy.org/news...Website: https://antinatalistadvocacy.org/ Twitter / X: https://twitter.com/AN_advocacyInstagram: https://www.instagram.com/an_advocacyCheck out the links below!- What We Owe The Future: https://amzn.eu/d/bO5Wqpo- Doing Good Better: https://amzn.eu/d/cyPlhZY- Reluctant Prophet of Effective Altruism: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism- The dismal dismissal of suffering-focused views: https://magnusvinding.com/2022/06/17/dismal-dismissal/- Utilitarianism.net: https://www.utilitarianism.net/population-ethics/#person-affecting-views-and-the-procreative-asymmetry- The Problem with Antinatalism: https://www.youtube.com/watch?v=zxuohL8Lx1o
Chris Williamson is an English podcaster and YouTuber. Since 2018 he has hosted the Modern Wisdom podcast, which has been downloaded over 200 million times. He discusses current events, politics, philosophy, science and hobbies with a variety of guests including Jordan Peterson, Steven Pinker, Neil deGrasse Tyson, Will MacAskill, Robin Dunbar, Michael Malice, Sam Harris, Eric Weinstein, and Patrick Moore. SPONSORED BY: AG1. Go to https://www.drinkAG1.com/triggernometry/ to get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase! Become a Premium Member to receive exclusive benefits https://triggernometry.supercast.com/ OR Support TRIGGERnometry Here: Bitcoin: bc1qm6vvhduc6s3rvy8u76sllmrfpynfv94qw8p8d5 Music by: Xentric | info@xentricapc.com | https://www.xentricapc.com/ YouTube: @xentricapc Buy Merch Here: https://www.triggerpod.co.uk/shop/ Advertise on TRIGGERnometry: marketing@triggerpod.co.uk Join the Mailing List: https://www.triggerpod.co.uk/sign-up/ Find TRIGGERnometry on Social Media: https://twitter.com/triggerpod https://www.facebook.com/triggerpod/ https://www.instagram.com/triggerpod/ About TRIGGERnometry: Stand-up comedians Konstantin Kisin (@konstantinkisin) and Francis Foster (@francisjfoster) make sense of politics, economics, free speech, AI, drug policy and WW3 with the help of presidential advisors, renowned economists, award-winning journalists, controversial writers, leading scientists and notorious comedians.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: They Studied Dishonesty. Was Their Work a Lie?, published by Linch on October 2, 2023 on LessWrong. This is a linkpost for Gideon Lewis-Kraus's New Yorker article on the (alleged) Ariely and Gino data fraud scandals. I've been following this situation off-and-on for a while (and even more so after the original datacolada blog posts). The basic story is that multiple famous professors in social psychology (specializing in dishonesty) have been caught with blatant data fraud. The field to a large extent tried to "protect their own," but in the end the evidence became too strong. The suspects have since retreated to attempting to sue datacolada (the investigators). Despite the tragic nature of the story, I consider this material hilarious high entertainment, in addition to being quite educational. The writing is also quite good, as I've come to expect from Gideon Lewis-Kraus (who locals might have heard of from his in-depth profiles on Slate Star Codex, Will MacAskill, and the FTX crash). Some quotes: If you tortured the data long enough, as one grim joke went, it would confess to anything. They called such techniques "p-hacking." As they later put it, "Everyone knew it was wrong, but they thought it was wrong the way it's wrong to jaywalk." In fact, they wrote, "it was wrong the way it's wrong to rob a bank." Ziani [a young grad student] found Gino's results implausible, and assumed that they had been heavily p-hacked. She told me, "This crowd is used to living in a world where you have enough degrees of freedom to do whatever you want and all that matters is that it works beautifully." But an adviser strongly suggested that Ziani "build on" the paper, which had appeared in a top journal. When she expressed her doubts, the adviser snapped at her, "Don't ever say that!" Members of Ziani's dissertation committee couldn't understand why this nobody of a student was being so truculent. In the end, two of them refused to sign off on her degree if she did not remove criticisms of Gino's paper from her dissertation. One warned Ziani not to second-guess a professor of Gino's stature in this way. In an e-mail, the adviser wrote, "Academic research is like a conversation at a cocktail party. You are storming in, shouting 'You suck!' " A former senior researcher at the lab told me, "He assured us that the effect was there, that this was a true thing, and I was convinced he completely believed it." The former senior researcher said, "How do you swim through that murky area of where is he lying? Where is he stretching the truth? What is he forgetting or misremembering? Because he does all three of those things very consistently. So when it really matters - like with the auto insurance - which of these three things is it?" (Meme made by myself) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
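The "p-hacking" described in the quotes above is worth making concrete: run enough analyses on pure noise and something will cross the conventional p < 0.05 line by chance. Here is a minimal sketch of that effect (my own illustration, not taken from the article), assuming twenty independent tests of true-null hypotheses:

```python
import math
import random

alpha, n_tests = 0.05, 20

# Analytic version: with 20 independent tests on pure noise, the chance
# that at least one clears p < 0.05 is about 64%.
print(f"analytic: {1 - (1 - alpha) ** n_tests:.2f}")

# Monte Carlo check: each "study" runs 20 two-sample comparisons of
# standard-normal noise and reports whether any looked significant.
def p_value(n=30):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)                             # known unit variance
    return math.erfc(abs(diff / se) / math.sqrt(2))   # two-sided p-value

studies = 2000
hits = sum(any(p_value() < alpha for _ in range(n_tests)) for _ in range(studies))
print(f"simulated: {hits / studies:.2f}")
```

Pre-registering a single analysis, or correcting for the number of tests actually run, removes exactly this degree of freedom.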
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will MacAskill has stepped down as trustee of EV UK, published by lincolnq on September 21, 2023 on The Effective Altruism Forum. Earlier today, Will MacAskill stepped down from the board of Effective Ventures UK[1], having served as a trustee since its founding more than a decade ago. Will has been intending to step down for several months and announced his intention to do so earlier this year. Will had initially planned to remain on the board until we brought on additional trustees to replace him. However, given that our trustee recruitment process has taken longer than anticipated, and given also that Will continues to be recused from a significant proportion of board business[2], he felt that it didn't make sense for him to stay on any longer. Will announced his resignation today. As a founding board member of EV UK (then called CEA), Will played a vital role in getting EV and its constituent projects off the ground, including co-founding Giving What We Can and 80,000 Hours. We are very grateful to Will for everything he's contributed to the effective altruism movement to date and look forward to his future positive impact; we wish him the best of luck with his future work. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Earlier today, Will MacAskill stepped down from the board of Effective Ventures UK[1], having served as a trustee since its founding more than a decade ago. Will has been intending to step down for several months and announced his intention to do so earlier this year. Will had initially planned to remain on the board until we brought on additional trustees to replace him. However, given that our trustee recruitment process has taken longer than anticipated, and given also that Will continues to be recused from a significant proportion of board business[2], he felt that it didn't make sense for him to stay on any longer. Will announced his resignation today. As a founding board member of EV UK (then called CEA), Will played a vital role in getting EV and its constituent projects off the ground, including co-founding Giving What We Can and 80,000 Hours. We are very grateful [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: September 21st, 2023 Source: https://forum.effectivealtruism.org/posts/mArisdpuQiFtTNWw3/will-macaskill-has-stepped-down-as-trustee-of-ev-uk --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on far-UVC after working in the field for 8 months, published by Max Görlitz on July 31, 2023 on The Effective Altruism Forum. Views expressed in this article are my own and do not necessarily reflect those of my employer SecureBio. Summary Far-UVC has great promise, but a lot of work still needs to be done There still are many important open research questions that need to be answered before the technology can become widely adopted Right now, a key priority is to grow the research field and improve coordination The main reason far-UVC is so promising is that widespread installation could passively suppress future pandemics before we even learn that an outbreak has occurred Higher doses mean more rapid inactivation of airborne pathogens but also more risk for harm to skin, eyes, and through indoor air chemistry. Therefore, the important question in safety is, "How high can far-UVC doses go while maintaining a reasonable risk profile?" Existing evidence for skin safety within current exposure guidelines seems pretty robust, and I expect that skin safety won't be the bottleneck for far-UVC deployment at higher doses. Current evidence around eye safety is much more sparse than for skin safety. Eye safety seems like it could be the bottleneck to what doses of far-UVC can be reasonably used. Undoubtedly, far-UVC has a substantial impact on indoor air chemistry by producing ozone, which oxidizes volatile organic compounds in the air that can result in harmful products such as particulate matter. Little research has been done on methods to mitigate this issue. This might turn out to be a bottleneck to what doses of far-UVC can be reasonably used, but I am really uncertain here. There is no doubt that far-UVC can dramatically reduce the amount of airborne pathogens within a room (inactivation of ~98% of aerosolized bacteria within 5 minutes). Crucially, we don't know how well this translates into an actual reduction in the total number of infections. Very few people have thought about how the adoption of far-UVC could be driven and what a widespread deployment of the technology could look like So far, there is little to no regulation of far-UVC. In the US, (potential) regulation of far-UVC seems quite messy, as no authority has clear jurisdiction over it. Introduction Far-UVC (200-235 nm) has received quite a bit of attention in EA-adjacent biosecurity circles as a technology to reduce indoor airborne disease spread and is often discussed in the context of indoor air quality (IAQ). Notably, Will MacAskill mentioned it often throughout various media appearances in 2022. I have been working on research around far-UVC for the past 8 months. More specifically, we wrote an extensive literature review on skin and eye safety (submitted & soon™ to be published as an academic paper). We also coordinated with many researchers in the field to lay out a plan for the studies that still need to be done to get a more comprehensive understanding of the technology's safety & efficacy. Although far-UVC has been discussed on the forum, the existing information is relatively shallow, and most in-depth knowledge is either buried in technical research papers or not publicly available since a lot of intricacies are mostly discussed informally within the research community. 
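One way to put the ~98%-in-5-minutes figure above in context is to convert it into an equivalent ventilation rate, assuming first-order inactivation in well-mixed air. This is a back-of-the-envelope sketch under those assumptions, not a calculation from the post:

```python
import math

surviving_fraction = 0.02   # ~98% of aerosolized bacteria inactivated (figure from the post)
minutes = 5.0

# Assumed first-order decay: S(t) = exp(-k * t)  =>  k = -ln(S) / t
k_per_minute = -math.log(surviving_fraction) / minutes
k_per_hour = k_per_minute * 60

print(f"inactivation rate: {k_per_minute:.2f} per minute")
print(f"equivalent air changes per hour: {k_per_hour:.0f}")  # roughly 47
```

On that assumption, the reported efficacy corresponds to several dozen equivalent air changes per hour, far more than mechanical ventilation alone typically delivers, which is why the dose-versus-safety trade-off the post focuses on matters so much.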
In this post, I will first offer high-level thoughts and then go over different categories of information around far-UVC (safety, efficacy, indoor air chemistry, adoption, and regulation) to provide my current perspectives & takes. Please note that I am much more familiar with safety aspects than with the other categories. Also, this is not a general overview of far-UVC, what it is, and how it works. For a relatively recent and comprehensive introduction, I recommend "Far UV-C radiation: An emerging tool for pandemic co...
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we're also looking for additional regrantors and donors to join.What is regranting?Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor.This model was pioneered by the FTX Future Fund; in a 2022 retro they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding.What is Manifund?Manifund is the charitable arm of Manifold Markets. Some of our past work:Impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews ContestForecasting tournaments, with Charity Entrepreneurship and Clearer ThinkingDonating prediction market winnings to charity, funded by the Future FundHow does regranting on Manifund work?Our website makes the process simple, transparent, and fast:A donor contributes money to Manifold for Charity, our registered 501c3 nonprofitThe donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor's values.Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made.We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks.Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter.The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient's Manifund account.The recipient withdraws money from their Manifund account to be used for their project.Differences from the Future Fund's regranting programAnyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor—there's no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund's regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking.All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor's writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records.Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections.We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil).We're starting with less money. 
The Future [...]--- First published: July 5th, 2023 Source: https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants Linkpost URL:https://manifund.org/rounds/regrants --- Narrated by TYPE III AUDIO. Share feedback on this narration.
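For readers who think in code, here is a purely illustrative sketch of the regranting flow described above (a donor delegates a budget to a regrantor, who then makes public, written-up grants). The class names and fields are assumptions made for the sketch, not Manifund's actual data model or API:

```python
from dataclasses import dataclass, field

@dataclass
class Regrantor:
    name: str
    budget: float
    grants: list = field(default_factory=list)

    def grant(self, project: str, amount: float, writeup: str) -> None:
        # Regrantors spend only the budget a donor has delegated to them.
        if amount > self.budget:
            raise ValueError("grant exceeds regrantor budget")
        self.budget -= amount
        # In the real program, grants are screened and published; here we
        # simply record the public writeup alongside the grant.
        self.grants.append({"project": project, "amount": amount, "writeup": writeup})

# A donor delegates a budget to a regrantor they trust...
alice = Regrantor("Alice", budget=50_000)
# ...who makes a transparent grant decision with a short writeup.
alice.grant("Forecasting tooling project", 20_000, "Strong team, clear plan.")
print(alice.budget, alice.grants)
```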
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three mistakes in the moral mathematics of existential risk (David Thorstad), published by Global Priorities Institute on July 4, 2023 on The Effective Altruism Forum. Abstract: Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism. Introduction: Suppose you are an altruist. You want to do as much good as possible with the resources available to you. What might you do? One option is to address pressing short-term challenges. For example, GiveWell (2021) estimates that $5,000 spent on bed nets could save a life from malaria today. Recently, a number of longtermists (Greaves and MacAskill 2021; MacAskill 2022b) have argued that you could do much more good by acting to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, you might work to regulate chemical and biological weapons, or to reduce the threat of nuclear conflict (Bostrom and Ćirković 2011; MacAskill 2022b; Ord 2020). Many authors argue that efforts to mitigate existential risk have enormous value. For example, Nick Bostrom (2013) argues that even on the most conservative assumptions, reducing existential risk by just one-millionth of one percentage point would be as valuable as saving a hundred million lives today. Similarly, Hilary Greaves and Will MacAskill (2021) estimate that early efforts to detect potentially lethal asteroid impacts in the 1980s and 1990s had an expected cost of just fourteen cents per life saved. If this is right, then perhaps an altruist should focus on existential risk mitigation over short-term improvements. There are many ways to push back here. Perhaps we might defend population-ethical assumptions such as neutrality (Narveson 1973; Frick 2017) that cut against the importance of creating happy people. Alternatively, perhaps we might introduce decision-theoretic assumptions such as risk aversion (Pettigrew 2022), ambiguity aversion (Buchak forthcoming) or anti-fanaticism (Monton 2019; Smith 2014) that tell against risky, ambiguous and low-probability gambles to prevent existential catastrophe. We might challenge assumptions about aggregation (Curran 2022; Heikkinen 2022), personal prerogatives (Unruh forthcoming), and rights used to build a deontic case for existential risk mitigation.
We might discount the well-being of future people (Lloyd 2021; Mogensen 2022), or hold that pressing current duties, such as reparative duties (Cordelli 2016), take precedence over duties to promote far-future welfare. These strategies set themselves a difficult task if they accept the longtermist's framing on which existential risk mitigation is not simply better, but orders of magnitude better than competing short-termist interventions. Is it really so obvious ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism, published by BrownHairedEevee on July 3, 2023 on The Effective Altruism Forum. This is the first part of "Mistakes in the moral mathematics of existential risk", a series of blog posts by David Thorstad that aims to identify ways in which estimates of the value of reducing existential risk have been inflated. I've made this linkpost part of a sequence. Even if we use … conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. Nick Bostrom, “Existential risk prevention as global priority” 1. Introduction. This is Part 1 of a series based on my paper, “Mistakes in the moral mathematics of existential risk.” (Almost) everyone agrees that human extinction would be a bad thing, and that actions which reduce the chance of human extinction have positive value. But some authors assign quite high value to extinction mitigation efforts. For example: Nick Bostrom argues that even on the most conservative assumptions, reducing existential risk by just one millionth of one percentage point would be as valuable as saving a hundred million lives today. Hilary Greaves and Will MacAskill estimate that early asteroid-detection efforts saved lives at an expected cost of fourteen cents per life. These numbers are a bit on the high side. If they are correct, then on many philosophical views the truth of longtermism will be (nearly) a foregone conclusion. I think that these, and other similar estimates, are inflated by many orders of magnitude. My paper and blog series “Existential risk pessimism and the time of perils” brought out one way in which these numbers may be too high: they will be overestimates unless the Time of Perils Hypothesis is true. My aim in this paper is to bring out three novel ways in which many leading estimates of the value of existential risk mitigation have been inflated. (The paper should be online as a working paper within a month.) I'll introduce the mistakes in detail throughout the series, but it might be helpful to list them now. Mistake 1: Focusing on cumulative risk rather than per-unit risk. Mistake 2: Ignoring background risk. Mistake 3: Neglecting population dynamics. I show how many leading estimates make one, or often more than one of these mistakes. Correcting these mistakes in the moral mathematics of existential risk has two important implications. First, many debates have been mislocated, insofar as factors such as background risk and population dynamics are highly relevant to the value of existential risk mitigation, but these factors have rarely figured in recent debates. Second, many authors have overestimated the value of existential risk mitigation, often by many orders of magnitude. In this series, I review each mistake in turn. Then I consider implications of this discussion for current and future debates. Today, I look at the first mistake, focusing on cumulative rather than per-unit risk.
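The arithmetic behind the quoted Bostrom figure is easy to verify: one millionth of one percentage point is a factor of 10^-8, and 10^-8 of 10^16 lives is 10^8 lives, a hundred million. A two-line check of the quoted numbers (my own illustration):

```python
lives_at_stake = 10 ** 16           # Bostrom's "conservative" count of expected future lives
# One millionth of one percentage point = 10**-6 * 10**-2 = 10**-8.
lives_saved_in_expectation = lives_at_stake // 10 ** 8
print(lives_saved_in_expectation)   # 100000000, i.e. a hundred million lives
```

The headline number therefore inherits whatever is packed into that 10^16 assumption, which is part of what the three mistakes discussed in the series are about.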
2. Bostrom's conservative scenario. Nick Bostrom (2013) considers what he terms a conservative scenario in which humanity survives for a billion years on the planet Earth, at a stable population of one billion humans. We will see throughout this series that this is far from a conservative scenario. Modeling background risk (correcting the second mistake) will put pressure on the likelihood of humanity surviving for a billion years. And modeling population dynamics (correcting the third mistake) will raise the possi...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving EA Communication Surrounding Disability, published by MHR on June 13, 2023 on The Effective Altruism Forum. Epistemic Status: Low-to-medium confidence, informed by my experience with having a disability as an EA. I think the included recommendations are reasonable best practices, but I'm uncertain as to whether they would make a tangible change to perceptions of the EA movement. Summary The EA movement has historically faced criticism from disability rights advocates, potentially reducing support for EA and limiting its ability to do good. This tension between EA and disability advocacy may be as much a matter of poor EA communication around issues of disability as a matter of fundamental philosophical disagreement. Changes to communications practices regarding disability might therefore deliver major benefits for relatively little effort. Particular recommendations for improving communications include: Avoiding unnecessarily presenting EA and disability advocacy as being in opposition Being careful to only use DALYs when appropriate and when properly contextualized Increasing the quantity and diversity of EA writing on disability Introduction The Effective Altruism movement has had a somewhat contentious relationship with the disability advocacy community. Disability advocates have critiqued EA via protests, articles, and social media posts, arguing that the movement is ableist, eugenicist, and/or insufficiently attentive to the needs of disabled individuals. Yet the EA community is often substantially more inclusive than society at large for people with many disabilities, through aspects such as availability of remote work, social acceptance of specialized dietary needs, and provision of information in a wide variety of formats. Moreover, while there are some areas in which EA's typical consequentialism may have fundamental conflicts with theories of disability justice, these areas are likely much more limited than many would assume. In fact, since people with disabilities tend to be overrepresented among those living in extreme poverty and/or experiencing severe pain, typical EA approaches that prioritize these problems are likely to be substantially net beneficial to the lives of disabled individuals. Given this context, I think it is likely that the conflict between disability advocates and effective altruists is as much a problem of poor EA communication as it is a problem of fundamental philosophical difference. This breakdown implies that conflicts between EAs and disability advocates might be substantially reduced via changes to EA communications practices. While changes to communication approaches carry some costs, I believe the benefits from improved communications around disability would probably outweigh them. There are three potential areas in which I think the status quo hurts the EA movement. First of all, it likely drives off potential donors, employees, and advocates with disabilities, reducing the resources with which the EA movement is able to do good. Second, it may prevent dialogue between the EA and disability advocacy communities that might productively identify effective interventions focused on people with disabilities. Finally, it may reduce support for the EA movement among the wider community of people who care about the interests and concerns of the disabled community. 
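Since one of the recommendations above turns on when DALYs are appropriate, it may help to recall how the metric is built: years of life lost plus years lived with disability, the latter weighted by a disability weight between 0 and 1. A minimal sketch with placeholder numbers (illustrative only, not real estimates):

```python
def dalys(deaths: int, years_lost_per_death: float,
          cases: int, disability_weight: float, duration_years: float) -> float:
    yll = deaths * years_lost_per_death               # years of life lost
    yld = cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# Hypothetical condition: 100 deaths losing 30 years each, plus 10,000
# people living 5 years with a disability weight of 0.2.
print(dalys(100, 30, 10_000, 0.2, 5))   # 13000.0 DALYs
```

The disability weight is exactly where the post's worry about context applies: it encodes a judgement about how bad a year lived with a condition is relative to a year of full health, which is why presenting DALYs without that framing can read as dismissive.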
In comparison to these harms, I think the modest efforts required to improve on current EA communications around disability issues are likely to be noticeably less costly. In the next section, I identify three practical areas in which communications could likely be substantially improved. Suggested Methods of Improving Communication Avoiding Unnecessarily Presenting EA and Disability Advocacy as being in Opposition In What We Owe the Future, Will MacAskill descri...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum. I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult. I think this is pretty common. After last year's EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they'd had similar experiences. The standard euphemism for this facet of EA conferences is ‘intense' or ‘tiring', but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room'. I want to write this post to: balance out the ‘woo EAG lfg!' hype, and help people who found it a bad or ambivalent experience to feel less alone dig into to why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experiences help people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going through Here are some reasons that EAGs might be emotionally difficult. Some of these I've experienced personally, others are based on comments I've heard, and others are plausible educated guesses. It's easy to compare oneself (negatively) to others EA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You've probably scheduled meetings with people because they're impressive to you; perhaps you're seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas. This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone's got it all figured out, while you're still stuck at Stage 2 of 80k's career planning process. Everyone expects you to have a plan to save the world, and you don't even have a plan for how to start making a plan. Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you're tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms. The stakes are high We're trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they'll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you. 
You spend a lot of time talking about depressing things This is just part of being an EA, of course, but most of us don't spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?' can be pretty discouraging. Everything is busy and frantic You're constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...
Machine Alignment Monday 3/13/23 https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer (see also Katja Grace and Will Eden's related cases) The average online debate about AI pits someone who thinks the risk is zero, versus someone who thinks it's any other number. I agree these are the most important debates to have for now. But within the community of concerned people, numbers vary all over the place: Scott Aaronson says 2%; Will MacAskill says 3%; the median machine learning researcher on Katja Grace's survey says 5-10%; Paul Christiano says 10-20%; the average person working in AI alignment thinks about 30%; top competitive forecaster Eli Lifland says 35%; Holden Karnofsky, on a somewhat related question, gives 50%; Eliezer Yudkowsky seems to think >90%. As written this makes it look like everyone except Eliezer is
In this episode of the Philosophy Bites podcast David Edmonds interviews Will MacAskill on the controversial idea that we ought to give the interests of future people substantial weight when deciding what we ought to do now.
The philosophy of Effective Altruism took a hit due to its close association with Samuel Bankman-Fried. New Yorker staff writer Gideon Lewis-Kraus profiled one of the movement's leaders, Will MacAskill, and followed up in a piece titled “Sam Bankman-Fried, Effective Altruism, and the Question of Complicity.” Plus, the East Palestine crash is a headache, but maybe not much more. And … a new survey shows the U.S. is in last place in the Trust-The-Media rankings. Produced by Joel Patterson and Corey Wara Email us at thegist@mikepesca.com To advertise on the show, visit: https://advertisecast.com/TheGist Subscribe to The Gist: https://subscribe.mikepesca.com/ Follow Mike's Substack at: Pesca Profundities | Mike Pesca | Substack Learn more about your ad choices. Visit podcastchoices.com/adchoices
For our final look at some of the best books to have hit shelves in 2022, we dive back into standout discussions from the past 12 months including Jonathan Freedland, whose book, The Escape Artist, tells the story of Auschwitz escapees Rudolf Vrba and Alfréd Wetzler. We also revisit our discussion with philosopher Will MacAskill, whose book, What We Owe The Future, claims that society needs to take a far longer-sighted view of how altruism can be effective. The book has also come under scrutiny in the latter half of 2022 due to its influence on the behaviour of billionaire philanthropists. We finish the 12 Books of Christmas with a sneak preview from our upcoming episode with author Katherine Rundell. Her latest book, The Golden Mole, celebrates the irreplaceable diversity found within the animal kingdom. ... Did you know that Intelligence Squared offers way more than podcasts? We've just launched a new online streaming platform Intelligence Squared+ and we'd love you to give it a go. It's packed with more than 20 years' worth of video debates and conversations on the world's hottest topics. Tune in to live events, ask your questions or watch back on-demand totally ad-free with hours of discussion to dive into for just £14.99 a month. Visit intelligencesquaredplus.com to start watching today. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to another episode of The Tim Ferriss Show, where it is my job to deconstruct world-class performers to tease out the routines, habits, et cetera that you can apply to your own life. This is a special inbetweenisode, which serves as a recap of the episodes from last month. It features a short clip from each conversation in one place so you can easily jump around to get a feel for the episode and guest. Based on your feedback, this format has been tweaked and improved since the first recap episode. For instance, @hypersundays on Twitter suggested that the bios for each guest can slow the momentum, so we moved all the bios to the end. See it as a teaser. Something to whet your appetite. If you like what you hear, you can of course find the full episodes at tim.blog/podcast. Please enjoy!
***This episode is brought to you by 5-Bullet Friday, my very own email newsletter that every Friday features five bullet points highlighting cool things I've found that week, including apps, books, documentaries, gadgets, albums, articles, TV shows, new hacks or tricks, and—of course—all sorts of weird stuff I've dug up from around the world. It's free, it's always going to be free, and you can subscribe now at tim.blog/friday.***
Timestamps:
Roelof Botha: 00:03:17
Will MacAskill: 00:08:46
Russ Roberts: 00:14:02
Andrew Weil: 00:23:47
Tim Q&A: 00:27:54
Full episode titles:
Roelof Botha — Investing with the Best, Ulysses Pacts, The Magic of Founder-Problem Fit, How to Use Pre-Mortems and Pre-Parades, Learning from Crucible Moments, and Daring to Dream (#618)
Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change (#612)
Russ Roberts on Lessons from F.A. Hayek and Nassim Taleb, Decision-Making Insights from Charles Darwin, The Dangers of Scientism, Wild Problems in Life and the Decisions That Define Us, Learnings from the Talmud, The Role of Prayer, and The Journey to Transcendence (#613)
Dr. Andrew Weil — The 4-7-8 Breath Method, Cannabis, The Uses of Coca Leaf, Rehabilitating Demonized Plants, Kava for Anxiety, Lessons from Wade Davis, The Psychedelic Renaissance, How to Emerge from Depression, Tales from 50+ Visits to Japan, Matcha Benefits, and More (#615)
Q&A with Tim on Wealth and Money, Book Recommendations, Advice on Taking Advice, C.S. Lewis, Relationships, Behavior Change and Self-Awareness, Why We Are All (Mostly) Making It Up as We Go, and Much More (#614)
*For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast. For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors. Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday. For transcripts of episodes, go to tim.blog/transcripts. Discover Tim's books: tim.blog/books.
Follow Tim: Twitter: twitter.com/tferriss | Instagram: instagram.com/timferriss | YouTube: youtube.com/timferriss | Facebook: facebook.com/timferriss | LinkedIn: linkedin.com/in/timferriss
Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, Margaret Atwood, Mark Zuckerberg, Peter Thiel, Dr. Gabor Maté, Anne Lamott, Sarah Silverman, Dr. Andrew Huberman, and many more.
Neuroscientist Erik Hoel talks with EconTalk host Russ Roberts about why he is not an "effective altruist." Hoel argues that the utilitarianism underlying effective altruism--a movement co-founded by Will MacAskill and Peter Singer--is a poison that inevitably leads to repugnant conclusions and thereby weakens the case for the strongest claims made by effective altruists.
Sign up for Intelligence Squared Premium here: https://iq2premium.supercast.com/ for ad-free listening, bonus content, early access and much more. See below for details. Will MacAskill is the philosopher thinking a million years into the future who is also having a bit of a moment in the present. An Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute at the University of Oxford, he is co-founder of the effective altruism movement, which uses evidence and reason to work out how we can do the most good with the resources we have. MacAskill's writing has found fans ranging from Elon Musk to Stephen Fry, and his new book is What We Owe the Future: A Million-Year View. Our host on the show is Max Roser, Director of the Oxford Martin Programme on Global Development and founder and editor of Our World in Data. … We are incredibly grateful for your support. To become an Intelligence Squared Premium subscriber, follow the link: https://iq2premium.supercast.com/ Here's a reminder of the benefits you'll receive as a subscriber: ad-free listening, because we know some of you would prefer to listen without interruption; one early episode per week; two bonus episodes per month; a 25% discount on IQ2+, our exciting streaming service, where you can watch and take part in events live at home and enjoy watching past events on demand and without ads; a 15% discount and priority access to live, in-person events in London, so you won't miss out on tickets; our premium monthly newsletter; and Intelligence Squared merch.
My guest today is Will MacAskill. Will is an associate professor of philosophy at Oxford University. He is the co-founder and president of the Centre for Effective Altruism. Will is also the director of the Forethought Foundation for Global Priorities Research. In this episode, we discuss his new book "What We Owe the Future". We talk about whether we have a moral obligation to the billions of humans that will be born over the next several thousand years, and how to weigh those obligations against our obligations to people alive today. We discuss population ethics in general, and Derek Parfit's Repugnant Conclusion thought experiment. We discuss the role of economic growth in humanity's long-term future and how to weigh that against present-day wealth inequality. We talk about the ethics of abortion, and the notion of moral progress. We also discuss the possible AI futures that lie ahead of us and much more.
-Get 20% off and free shipping with the code [20COLEMAN] at Manscaped.com. That's 20% off with free shipping at manscaped.com when you use code [20COLEMAN].
-Sign up through wren.co/coleman to make a difference in the climate crisis, and Wren will plant 10 extra trees in your name!
-Sign up today at butcherbox.com/COLEMAN to get two 10 oz New York strip steaks and 8 oz of lobster claw and knuckle meat FREE in your first order.
Ryan talks to professor and writer Will MacAskill about his book What We Owe The Future, how to create effective change in the world, the importance of gaining a better perspective on the world, and more.Will MacAskill is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute, University of Oxford. His research focuses on the fundamentals of effective altruism - the use of evidence and reason to help others as much as possible with our time and money - with a particular concentration on how to act given moral uncertainty. He is the author of the upcoming book What We Owe The Future, available for purchase on August 12. Will also wrote Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference and co-authored Moral Uncertainty.✉️ Sign up for the Daily Stoic email: https://dailystoic.com/dailyemail
1. People who exist in the future deserve some degree of moral consideration.
2. The future could be very big, very long, and/or very good.
3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
4. So trying to make the world better for future generations is a key priority of our time.
This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and co-founder of the effective altruism community, Will MacAskill. Links to learn more, summary and full transcript. From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile. But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone "indefinitely." It doesn't matter how important something might be if you can't predictably change it. This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations. The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently. In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever.
In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:
• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more
Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement. Humans understand that long-term thinking is a good idea, that we need to provide a good place for future generations to live. We try to leave the world better than when we arrived for this very reason. But what about the world in one hundred thousand years? Or 8 billion? If there are trillions of human lives still to come, how should that change the way we act right now? Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence, and much more...
Sponsors:
Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied)
Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20)
Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied)
Extra Stuff:
Buy What We Owe The Future - https://amzn.to/3PDqghm
Check out Effective Altruism - https://www.effectivealtruism.org/
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom
Get in touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact/
Q&A with Tim on Wealth and Money, Book Recommendations, Advice on Taking Advice, C.S. Lewis, Relationships, Behavior Change and Self-Awareness, Why We Are All (Mostly) Making It Up as We Go, and Much More | Brought to you by MANSCAPED premium grooming products, 80,000 Hours free career advice for high impact and doing good in the world, and Helix Sleep premium mattresses. Welcome to another episode of The Tim Ferriss Show, where it is usually my job to sit down with world-class performers of all different types to tease out the habits, routines, favorite books, and so on that you can apply and test in your own life. This time, we have a slightly different format, and I'm the guest. As some of you know, to celebrate the 15th anniversary of The 4-Hour Workweek, which was first published on April 24th, 2007 (4/24/07), I signed 424 copies of the book. As thousands of you threw your hat in the ring to win a signed copy, I wanted to do something special for all the entrants. So, I invited those folks to a private YouTube Q&A. This episode is the recording of that session. I answered many questions on a variety of topics, including startup investments, IFS therapy, my views on wealth and money, C.S. Lewis, lower-back pain, solitary time and its challenges, lessons from Kevin Kelly, books I'm currently reading, maintaining relationships, Japan travel tips, from whom to take advice (and how to evaluate advice), self-awareness, behavior change, my current alcohol consumption (or lack thereof), and much, much more. I hope you enjoy it as much as I did. *This episode is brought to you by MANSCAPED! MANSCAPED is bringing you the total package to ensure your package is the perfect package. No, I didn't come up with that line… but I tested and loved their products, so I can vouch that I'm still intact and ready for my centerfold shoot. Was it scary? Yes. Was it worth it? Also yes. My girlfriend makes the effort, so I figured it was only fair. MANSCAPED is the most effective way to keep clean, pristine, and looking like a dream through the sweatiest summer months. The Perfect Package 4.0 is the ultimate grooming kit that includes everything you need for optimal below-the-waist hygiene. Use their precision-engineered Lawn Mower 4.0—yes, “Lawn Mower”—electric trimmer to remove excess and avoid the dreaded swamp crotch. Feel confident in trimming hard-to-reach areas with MANSCAPED's SkinSafe technology, which helps reduce the risk of nicks and snags. It's time to take care of yourself; time for some basic landscaping. Get 20% off and free shipping with the code TIMTIM at Manscaped.com. *This episode is also brought to you by 80,000 Hours! You have roughly 80,000 hours in your career. That's 40 hours a week, 50 weeks a year for 40 years. They add up and are one of your biggest opportunities, if not the biggest opportunity, to make a positive impact on the world. Some of the best strategies, best research, and best tactical advice I've seen and heard come from 80,000 Hours, a nonprofit co-founded by Will MacAskill, an Oxford philosopher and a popular past guest on this podcast. If you're looking to make a big change to your direction, address pressing global problems from your current job, or if you're just starting out or maybe starting a new chapter and not sure which path to pursue, 80,000 Hours can help. Join their free newsletter, and they'll send you an in-depth guide for free that will help you identify which global problems are most pressing and where you can have the biggest impact personally.
It will also help you get new ideas for high-impact careers or directions that help tackle these issues. *This episode is also brought to you by Helix Sleep! Helix was selected as the #1 overall mattress of 2020 by GQ magazine, Wired, Apartment Therapy, and many others. With Helix, there's a specific mattress to meet each and every body's unique comfort needs. Just take their quiz—only two minutes to complete—that matches your body type and sleep preferences to the perfect mattress for you. They have a 10-year warranty, and you get to try it out for a hundred nights, risk-free. They'll even pick it up from you if you don't love it. And now, Helix is offering up to 200 dollars off all mattress orders plus two free pillows at HelixSleep.com/Tim.
Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change | Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below. William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also co-founded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will. His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book.” Please enjoy! *This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns. *This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199. *This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.