POPULARITY
Zizians, technofeudalism, Rationalist movement, COINTELPRO, Philadelphia/Wilmington suburbs, seasteading, Vassarites, PayPal mafia, Bay Area, Medieval era, Mendicants, Effective Altruism (EA), Sam Bankman-Fried (SBF), FTX, cryptocurrency, cybernetics, science fiction, techno-utopianism, the American obsession with technology/science, Extropianism, Accelerationism, AI, Roko's Basilisk, DOGE, cypherpunks, assassination politics, behavior modification, cults, ketamine, Leverage Research, ARTICHOKE/MK-ULTRA, the brain as a computer, Programmed to Kill, modern proliferation of cults, Order of Nine Angles (O9A), Maniac Murder Cult (MKY), digital Gladio, networking, decentralized finance (DeFi), digital commons
Purchase Weird Tales:
Amazon: https://www.amazon.com/Weird-Tales-Zizians-Crypto-Demiurges/dp/B0F48538C6?ref_=ast_author_dp
Ebook (KDP/PDF): https://thefarmpodcast.store/
Music by: Keith Allen Dennis (https://keithallendennis.bandcamp.com/)
Additional Music: J Money
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vassarites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, Change Healthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, MK-ULTRA-like techniques used by Leverage Research/Center for Effective Altruism, are more cults coming from the Rationalist movement?
Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe
Music by: Keith Allen Dennis (https://keithallendennis.bandcamp.com/)
Additional Music: J Money
Rethink Priorities has been conducting a range of surveys and experiments aimed at understanding how people respond to different framings of Effective Altruism (EA), Longtermism, and related specific cause areas. There has been much debate about whether people involved in EA and Longtermism should frame their efforts and outreach in terms of Effective altruism, Longtermism, Existential risk, Existential security, Global priorities research, or by only mentioning specific risks, such as AI safety and Pandemic prevention (examples can be found at the following links: 1,2,3,4,5,6,7,8). These discussions have taken place almost entirely in the absence of empirical data, even though they concern largely empirical questions.[1] In this post we report the results of three pilot studies examining responses to different EA-related terms and descriptions. Some initial findings are: Longtermism appears to be consistently less popular than other EA-related terms and concepts we examined, whether presented just as a [...]
Outline:
(01:52) Study 1. Cause area framing
(05:13) Demographics
(07:15) Study 2. EA-related concepts with and without descriptions
(10:58) Demographics
(11:31) Study 3. Preferences for concrete causes or more general ideas/movements
(15:04) Demographics
(15:29) Manifold Market Predictions
(16:43) General discussion
The original text contained 2 footnotes which were omitted from this narration. The original text contained 18 images which were described by AI.
First published: November 7th, 2024
Source: https://forum.effectivealtruism.org/posts/qagZoGrxbD7YQRYNr/testing-framings-of-ea-and-longtermism
Narrated by TYPE III AUDIO.
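Since the post compares favorability across framings, a toy sketch of the kind of analysis involved may help. Everything below is illustrative: the framings shown, the 1-7 scale, and the ratings are made-up placeholders, not data from the pilot studies.

```python
from statistics import mean, stdev

# Hypothetical 1-7 favorability ratings per framing (not the studies' data).
ratings = {
    "Effective Altruism": [5, 6, 4, 7, 5, 6, 5],
    "Longtermism":        [3, 4, 2, 5, 3, 4, 3],
    "Existential risk":   [5, 5, 6, 4, 6, 5, 4],
}

# Compare mean favorability across framings, as the pilot studies do at scale.
for framing, scores in ratings.items():
    print(f"{framing}: mean = {mean(scores):.2f}, sd = {stdev(scores):.2f}")
```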
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An invitation to the Berlin EA co-working space TEAMWORK, published by Johanna Schröder on May 24, 2024 on The Effective Altruism Forum.
TL;DR: TEAMWORK, a co-working and event space in Berlin run by Effektiv Spenden, is available for use by the Effective Altruism (EA) community. We offer up to 15 desk spaces in a co-working office for EA professionals and a workshop and event space for a broad range of EA events, all free of charge at present (and at least for the rest of 2024). A lot has changed since the space was established in 2021. After a remodeling project in September last year, there has been a notable improvement in the acoustics and soundproofing, leading to a more focused and productive work environment. Apply here if you would like to join our TEAMWORK community.
What TEAMWORK offers
TEAMWORK is a co-working space for EA professionals, operated by Effektiv Spenden and located in Berlin. Following a remodeling project in fall 2023, we were able to improve the acoustics and soundproofing significantly, fostering a more conducive atmosphere for focused work. Additionally, we transformed one of our co-working rooms into a workshop space, providing everything necessary for productive collaboration, and gave our meeting room a makeover with modern new furniture, ensuring a professional setting for discussions and presentations. Our facilities include:
Co-working Offices: One large office with 11 desks and a smaller office with four desks. The smaller office is also bookable for team retreats or team co-working, while the big office can be transformed into an event space for up to 40 people.
Workshop Room: "Flamingo Paradise" serves as a workshop room with a big sofa, a large desk, a flip chart, and a pin board. It can also be used as a small event space, complete with a portable projector. When not in use for events, it functions as a chill and social area.
Meeting Room: A meeting room for up to four people (maximum capacity six). It can also be used for calls.
Phone Booths: Four private phone booths. In addition, the "Flamingo Paradise" and the meeting room can also be used to take calls.
Community Kitchen: A kitchen with free coffee and tea. We have a communal lunch at 1 pm where members can either bring their own meals or go out to eat.
Berlin as an EA Hub
Berlin is home to a vibrant and growing (professional) EA community, making it one of the biggest EA hubs in continental Europe. It is also home to Effektiv Spenden, Germany's effective giving organization, which hosts this space. Engaging with this dynamic community provides opportunities for collaboration and networking with like-minded individuals. Additionally, working from Berlin could offer a change of scene, perhaps enhancing your productivity and inspiration (particularly in spring and summer).
Join Our Community
Our vision is to have a space where people from the EA community can not only work to make the world a better place, but can also informally engage with other members of the community during coffee breaks, lunch, or at community events. Many of the EA meetups organized by the EA Berlin community take place at TEAMWORK. You can find more information on how to engage with the EA Berlin community here. People in the TEAMWORK community are working on various cause areas.
Our members represent a range of organizations, including Founders Pledge, Future Matters, Open Philanthropy, and Kooperation Global. We frequently host international visitors from numerous EA-aligned organizations such as Charity Entrepreneurship, the Center for Effective Altruism, the Good Food Institute, Future Cleantech Architects, and the Center for the Governance of AI. Additionally, organizations like EA Germany, the Fish Welfare Initiative, One for the World, and Allfed have utilized our space for team re...
Today's episode is a debate from Bankless on the opposing viewpoints of Effective Accelerationism (e/acc) and Effective Altruism (EA). Hosted by Ryan Sean Adams and David Hoffman and featuring Haseeb Qureshi and our own Erik Torenberg, the podcast discusses whether AI progression should be cautiously regulated or aggressively pursued with minimal oversight.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Ergodicity in the Context of Longtermism, published by Arthur Jongejans on March 30, 2024 on The Effective Altruism Forum.
tl;dr: Expected value theory misrepresents ruin games and obscures the dynamics of repetition in a multiplicative environment. The ergodicity framework provides a better perspective on such problems, as it takes these dynamics into account. Incorporating the ergodicity framework into decision-making can help prevent the EA movement from inadvertently increasing existential risks by rejecting high expected value but multiplicatively risky interventions that could lead to catastrophic outcomes.
Effective Altruism (EA) has embraced longtermism as one of its guiding principles. In What We Owe the Future, MacAskill lays out the foundational principles of longtermism, urging us to expand our ethical considerations to include the well-being and prospects of future generations.
Thinking in Bets
In order to consider the changes one could make in the world, MacAskill argues one should be "thinking in bets". To do so, expected value (EV) theory is employed, on the grounds that it is the most widely accepted method. In the book, he illustrates the approach with an example involving his poker-playing friends:
"Liv and Igor are at a pub, and Liv bets Igor that he can't flip and catch six coasters at once with one hand. If he succeeds, she'll give him £3; if he fails, he has to give her £1. Suppose Igor thinks there's a fifty-fifty chance that he'll succeed. If so, then it's worth it for him to take the bet: the upside is a 50 percent chance of £3, worth £1.50; the downside is a 50 percent chance of losing £1, worth negative £0.50. Igor makes an expected £1 by taking the bet - £1.50 minus £0.50. If his beliefs about his own chances of success are accurate, then if he were to take this bet over and over again, on average he'd make £1 each time."
More theoretically, he breaks expected value theory down into three components:
1. Thinking in probabilities
2. Assigning values to outcomes (what economists call utility theory)
3. Taking a decision based on the expected value
This logic served EA well during the early neartermist days of the movement, where it was used to answer questions like: "Should the marginal dollar be used to buy bednets against malaria or deworming pills to improve school attendance?"
The Train to Crazy Town
Yet problems arise when such reasoning is followed into more extreme territory. For example, based on its consequentialist nature, EA logic prescribes pulling the lever in the Trolley Problem[1]. However, many Effective Altruists (EAs) hesitate to follow this reasoning all the way to its logical conclusion. Consider for instance whether you would be willing to take the following gamble: you're offered the chance to press a button with a 51% chance of doubling the world's happiness but a 49% chance of ending it. This problem, also known as Thomas Hurka's St Petersburg Paradox, highlights the following dilemma: maximizing expected utility suggests you should press it, as it promises a net positive outcome. However, the issue arises when pressing the button multiple times. Despite each press theoretically maximizing utility, pressing the button over and over again will inevitably lead to destruction.
This highlights the conflict between utility maximization and the catastrophic risk of repeated gambles.[2] In simpler terms, the impact of repeated bets is concealed behind the EV. In EA circles, following the theory to its logical extremes has become known as catching "the train to crazy town"[[3],[4]]. The core issue with this approach is that, while most people want to get off the train before crazy town, the consequentialist expected value framework does not al...
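A minimal simulation makes the concealed dynamics concrete. This is an illustrative sketch, not code from the post: it assumes the world's happiness starts at 1 unit and treats ruin as absorbing. The ensemble average (the EV) grows with every press, yet almost every individual history ends in ruin, which is the ergodicity point in miniature.

```python
import random

def play(presses: int) -> float:
    """Press the 51/49 button repeatedly, starting from 1 unit of happiness.

    Each press either doubles the running total (p = 0.51) or ends the
    world (p = 0.49), after which the value is 0 forever.
    """
    value = 1.0
    for _ in range(presses):
        if random.random() < 0.51:
            value *= 2
        else:
            return 0.0  # ruin is absorbing: no later press can undo it
    return value

trials, presses = 100_000, 10
outcomes = [play(presses) for _ in range(trials)]

# The ensemble average grows like 1.02**presses (the EV of a trajectory)...
print("mean outcome:", sum(outcomes) / trials)         # ≈ 1.02**10 ≈ 1.22
# ...but the typical trajectory is wiped out.
print("ruin fraction:", outcomes.count(0.0) / trials)  # ≈ 1 - 0.51**10 ≈ 0.999
```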
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we started our own EA charity (and why we decided to wrap up), published by KvPelt on February 26, 2024 on The Effective Altruism Forum.
This post shares our journey in starting an Effective Altruism (EA) charity/project focused on Mediterranean fish welfare, the challenges we faced, our key learnings, and the reasons behind our decision to conclude the project. Actual research results are published in a literature review and article.
Key points
The key points of this post are summarized as follows: We launched a project with the goal of enhancing fish welfare in Mediterranean aquaculture. We chose to limit our project to gathering information and decided against continuing our advocacy efforts after our initial six months. Our strategy, which focused on farmer-friendly outreach, was not effective in engaging farmers. The rationale behind our decision is the recognition that existing organizations are already performing excellent work, and we believe that funders should support these established organizations instead of starting a new one. The support and resources from the Effective Altruism (EA) and animal welfare communities were outstanding. Despite the project not achieving its intended outcomes, we view the overall experience positively. It's common for new charities not to succeed; the key is to quickly determine the viability of your idea, which we believe we have accomplished.
Note: Ren has recently begun working as a guest fund manager for the EA Funds Animal Welfare Fund. The views that we express in this article are our own, and we are not speaking for the fund.
Personal/Project background
Before delving into our project, we'll provide a quick background on our profiles and how we came to start this project.
Koen
During my Master's in maritime/offshore engineering (building floating things) I became interested in animal welfare. Through engagement with my EA university group (EA Delft) and by attending EAGxRotterdam, I became interested in and motivated to use my career to work on animal welfare. I hoped to apply my maritime engineering background in a meaningful way, which led me to consider aquatic animal welfare. I attended EAG London in 2023 with the goal of finding career opportunities, and surprisingly this worked! I talked to many people with backgrounds in animal welfare (AW) and engineering, and in one of my 1-on-1s I met someone who would later connect me with Ren.
Ren
As a researcher, Ren has been working at Animal Ask for the past couple of years conducting research to support the animal advocacy movement. However, Ren still feels really sad about the scale of suffering endured by animals, and this was the motivation to launch a side project.
Why work on Mediterranean fish welfare?
This project originated out of a desire to work on alleviating extreme suffering. More background on the arguments for focusing on extreme suffering is discussed in Ren's earlier forum post. When the welfare of nonhuman animals is not taken into account during slaughter, extreme suffering is likely to occur. Also, from Ren's existing work at Animal Ask, they knew that stunning before slaughter is often quite well-understood and tractable. Therefore, Ren produced a systematic spreadsheet of every farmed animal industry in developed countries (i.e., those countries where Ren felt safe and comfortable working).
This spreadsheet included information on a) the number of animals killed, and b) whether those animals were already being stunned before slaughter. Three industries emerged as sources of large-scale, intense suffering: 1. Farmed shrimp in the United States, 2. Farmed shrimp in Australia, and 3. Sea bass and sea bream in the Mediterranean. Ren actually looked at farmed shrimp initially, and work on these projects may continue in the future, but there are some technical reasons ...
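The screening logic described here, rank industries by scale and then filter on whether stunning is already standard, is simple enough to sketch. The rows and figures below are hypothetical placeholders, not data from Ren's actual spreadsheet.

```python
# Toy version of the prioritization screen described above. All figures are
# made-up placeholders, not the project's data.
industries = [
    # (industry, animals killed per year, routinely stunned before slaughter?)
    ("Sea bass & sea bream, Mediterranean", 400_000_000, False),
    ("Farmed shrimp, United States",        300_000_000, False),
    ("Broiler chickens, EU",              6_000_000_000, True),
]

# Keep industries where stunning is not yet standard, largest scale first.
candidates = sorted(
    (row for row in industries if not row[2]),
    key=lambda row: row[1],
    reverse=True,
)
for name, killed, _ in candidates:
    print(f"{name}: ~{killed:,} animals/year, no routine stunning")
```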
This post shares our journey in starting an Effective Altruism (EA) charity/project focused on Mediterranean fish welfare, the challenges we faced, our key learnings, and the reasons behind our decision to conclude the project. Actual research results are published in a Literature review and article. Key points The key points of this post are summarized as follows: We launched a project with the goal of enhancing fish welfare in Mediterranean aquaculture. We chose to limit our project to gathering information and decided against continuing our advocacy efforts after our initial six months. Our strategy, which focused on farmer-friendly outreach, was not effective in engaging farmers. The rationale behind our decision is the recognition that existing organizations are already performing excellent work, and we believe that funders should support these established organizations instead of starting a new one. The support and resources from the Effective Altruism (EA) and [...]
Outline:
(00:27) Key points
(01:48) Personal/Project background
(03:01) Why work on Mediterranean fish welfare?
(07:07) Project plans and initial work
(11:03) Initial work
(13:47) Farmer outreach
(20:45) Wrapping up the project
(22:30) Other takeaways from starting a project
(24:48) Resources for launching a new charity
The original text contained 1 footnote which was omitted from this narration.
First published: February 26th, 2024
Source: https://forum.effectivealtruism.org/posts/z59wybc56FCAysrAe/how-we-started-our-own-ea-charity-and-why-we-decided-to-wrap
Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Aspersions: How the Nonlinear Investigation Went Wrong, published by TracingWoodgrains on December 19, 2023 on The Effective Altruism Forum.
The New York Times
Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, and start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.
A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of his claims, but tell him it's too late to fix another.
The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses based on its conversation with EA that it says provides a view of the EA perspective that CEA agreed was a good summary. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.
In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.
This is not an essay about the New York Times. The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different - in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies. That essay segues neatly into my next statement, one I never imagined I would make: you are very, very lucky the New York Times does not cover you the way you cover you.
A Word of Introduction
Since this is my first post here, I owe you a brief introduction.
I am a friendly critic of EA who would join you were it not for my irreconcilable differences in fundamental values, and who thinks you are, by and large, one of the most pleasant and well-meaning groups of people in the world. I spend much more time in the ACX sphere or around its more esoteric descendants and know more than anyone ought about its history and occasional drama. Some of you know me from my adversarial collaboration in Scott's contest some years ago, others from my misadventures in "speedrunning" college, still others from my exhaustively detailed deep dives in...
The New York Times
Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday. A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of [...]
Outline:
(00:06) The New York Times
(03:08) A Word of Introduction
(07:35) The Story So Far: A Recap
(11:08) Avoidable, Unambiguous Falsehoods in Sharing Information About Nonlinear
(21:32) These Issues Were Known and Knowable By Lightcone and the Community. The EA/LW Community Dismissed Them
(27:03) Better processes are both possible and necessary
(38:44) On Lawsuits
(47:15) First Principles, Duty, and Harm
(50:43) What of Nonlinear?
The original text contained 16 footnotes which were omitted from this narration.
First published: December 19th, 2023
Source: https://forum.effectivealtruism.org/posts/bwtpBFQXKaGxuic6Q/effective-aspersions-how-the-nonlinear-investigation-went
Narrated by TYPE III AUDIO.
Molly White, Ryan Broderick, and Deepa Seetharaman join Big Technology Podcast to dive deep into the Effective Altruism (EA) vs. Effective Accelerationism (e/acc) debate in Silicon Valley that may have been at the heart of the OpenAI debacle. White is a crypto researcher and critic who writes Citation Needed on Substack, Broderick is an internet culture reporter who writes Garbage Day on Substack, and Seetharaman is a reporter at The Wall Street Journal who covers AI. Our three guests join to discuss who these groups are, how they formed, how their influence played into the OpenAI coup and counter-coup, and where they go from here.
Note: I can't seem to edit or remove the “transcript” tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below. Most importantly, follow Sarah on Twitter!
Summary (Written by chatGPT, as you can probably tell)
In this episode of Pigeon Hour, host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands.
Aaron and Sarah's conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah's unexpected viral fame as a tween. This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.
Topics discussed
Discussion on AI Safety and Personal Journeys:
* Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.
* Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.
* Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.
* Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.
AI Safety and Public Perception:
* Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.
* The discussion touches on the paradox of AI safety: it's a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.
Cultural Differences and Personal Interests:
* The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.
* Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.
Effective Altruism (EA) and Rationalist Communities:
* Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.
* The debate around the use of "P(Doom)" (probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and a potentially alienating jargon for outsiders.
Shrimp Welfare and Ethical Considerations:
* A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.
* Aaron defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.
Personal Decision-Making in Charitable Giving:
* Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.
Transcript
AARON: Whatever you want. Okay. Yeah, I feel like you said this on Twitter. The obvious thing is, how did you learn about AI safety? But maybe you've already covered that. That's boring. First of all, do you want to talk about that? Because we don't have to.
SARAH: I don't mind talking about that.
AARON: But it's sort of your call, so whatever. I don't know. Maybe briefly, and then we can branch out?
SARAH: I have a preference for people asking me things and me answering them rather than me setting the agenda. So don't ever feel bad about just asking me stuff because I prefer that.
AARON: Okay, cool. But also, it feels like the kind of thing where, of course, we have AI. Everyone already knows that this is just like the voice version of these four tweets or whatever. But regardless. Yes. So, Sarah, as Pigeon Hour guest, what was your path through life to AI safety Twitter?
SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.
SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.
SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.
SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?
AARON: You were there on Twitter before that thread, right? I'm pretty sure I was following you.
SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that.
I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I've ever had before in my life. And I was like, how far can I take this?
SARAH: I was a bit like, people obviously started following me because I'm freaking out about AI, but if I post an outfit, what's going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I'm just milking it for all its worth, and I really don't know why anybody's listening to me. I'm basically very confused about the whole thing.
AARON: I mean, I think it's kind of weird from your perspective, or it's weird in general because there aren't that many people who just do that extremely logical thing at the beginning. I don't know, maybe it's not obvious to people in every industry or whatever that AI is potentially a big deal, but there's lots of truckers or whatever. Maybe they're not the best demographic or the most conducive demographic, like, getting on Twitter or whatever, but there's other jobs that it would make sense to look into that. It's kind of weird to me that only you followed the rabbit hole all the way down.
SARAH: I know! This is what I… Because it's not that hard to complete the circle. It probably took me like a day, it took me like an afternoon to get from, I'm worried about job automation to I should stop saving for retirement. It didn't take me that long. Do you know what I mean? No one ever looks. I literally don't get it. I was talking to some people. I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I'm a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That's bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don't know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It's not like they're hiding. It's literally just on there and no one ever looks. It's just crazy.
AARON: Yeah. Honestly, I don't even know if I was in your situation, if I would have done that. It's like, in some sense, I am surprised. It's very few people, maybe like one, but at another level, it's more rationality than most humans have or something. Yeah. You regret going down that rabbit hole?
SARAH: Yeah, kind of. Although I'm enjoying the Twitter thing and it's kind of fun, and it turns out there's endless comedic material that you can get out of impending doom. The whole thing is quite funny. It's not funny, but you can make it funny if you try hard enough. But, yeah, what was I going to say? I think maybe I was more primed for doom pilling than your average person because I already knew what EA was and I already knew, you know what I mean. That stuff was on my radar.
AARON: That's interesting.
SARAH: I think had it not been on my radar, I don't think I would have followed the pipeline all the way.
AARON: Yeah. I don't know what browser you use, but it would be.
And you should definitely not only do this if you actually think it would be cool or whatever, but this could be in your browser history from that day and that would be hilarious. You could remove anything you didn't want to show, but if it's like Google Chrome, they package everything into sessions. It's one browsing session and it'll have like 10,000 links.
SARAH: Yeah, I think for non-sketchy reasons, I delete my Google history more regularly than that. I don't think I'd be able to find that. But I can remember the day and I can remember my anxiety levels just going up and up somewhere between 1:00 p.m. and 7:00 p.m. And by the evening I'm like, oh, my God.
AARON: Oh, damn, that's wild.
SARAH: It was really stressful.
AARON: Yeah, I guess props for, I don't know if props… is the right word, I guess, impressed? I'm actually somewhat surprised to hear that you said you regret it. I mean, that sucks though, I guess. I'm sorry.
SARAH: If you could unknow this, would you?
AARON: No, because I think it's worth maybe selfishly, but not overall because. Okay, yeah, I think that would plausibly be the selfish thing to do. Actually. No, actually, hold on. No, I actually don't think that's true. I actually think there's enough an individual can do selfishly such that it makes sense. Even the emotional turmoil.
SARAH: It would depend how much you thought that you were going to personally move the needle by knowing about it. I personally don't think that I'm going to be able to do very much. It's not like I was going to tip the scales. I wouldn't selfishly unknow it and sacrifice the world. But me being not particularly informed or intelligent and not having any power, I feel like if I forgot that AI was going to end the world, it would not make much difference.
AARON: You know what I mean? I agree that it's like, yes, it is unlikely for either of us to tip the scales, but.
SARAH: Maybe you can't.
AARON: No, actually, in terms of, yeah, I'm probably somewhat more technically knowledgeable just based on what I know about you. Maybe I'm wrong.
SARAH: No, you're definitely right.
AARON: It's sort of just like a probabilities thing. I do think that 'doom' - that word - is too simplified, often too simple to capture what people really care about. But if you just want to say doom versus no doom or whatever, AI doom versus no AI doom. Maybe there's like a one in 100,000 chance that one of us tips the scales. And that's important. Maybe even, like, one in 10,000. Probably not. Probably not.
SARAH: One in 10,000. Wow.
AARON: But that's what people do. People vote, even though this is old 80k material I'm regurgitating because they basically want to make the case for why even if you're not. Or in some article they had from a while ago, they made a case for why doing things that are unlikely to counterfactually matter can still be amazingly good. And the classic example, just voting if you're in a tight race, say, in a swing state in the United States, and it could go either way. Yeah. It might be pretty unlikely that you are the single swing vote, but it could be one in 100,000. And that's not crazy.
SARAH: It doesn't take very much effort to vote, though.
AARON: Yeah, sure. But I think the core justification, also, the stakes are proportionally higher here, so maybe that accounts for some. But, yes, you're absolutely right. Definitely different amounts of effort.
SARAH: Putting in any effort to saving the world from AI. I wouldn't say that. I wouldn't say that I'm sacrificing.
AARON: I don't even know if I like. No.
Maybe it doesn't feel like a sacrifice. Maybe it isn't. But I do think there's, like, a lot. There's at least something to be. I don't know if this really checks out, but I would, like, bet that it does, which is that more reasonably, at least calibrated. I wanted to say reasonably well informed. But really what it is is, like, some level of being informed and, like, some level of knowing what you don't know or whatever, and more just, like, normal. Sorry, I hope normal is not, like, a bad word. I'm saying not like tech bros, I guess, so more like non tech bros. People who are not coded as tech bros. Talking about this on a public platform just seems actually, in fact, pretty good.
SARAH: As long as we, like, literally just get people that aren't men as well. No offense.
AARON: Oh, no, totally. Yeah.
SARAH: Where are all the women? There's a few.
AARON: There's a few that are super. I don't know, like, leaders in some sense, like Ajeya Cotra and Katja Grace. But I think the last EA survey was a third. Or I could be butchering this or whatever. And maybe even within that category, there's some variation. I don't think it's 2%.
SARAH: Okay. All right. Yeah.
AARON: Like 15 or 20%, which is still pretty low.
SARAH: No, but that's actually better than I would have thought, I think.
AARON: Also, Twitter is, of all the social media platforms, especially male. I don't really know.
SARAH: Um.
AARON: I don't like Instagram, I think.
SARAH: I wonder, it would be interesting to see whether or not that's much, if it's become more male dominated since Elon Musk took over.
AARON: It's not a huge difference, but who knows?
SARAH: I don't know. I have no idea. I have no idea. It would just be interesting to know.
AARON: Okay. Wait. Also, there's no scheduled time. I'm very happy to keep talking or whatever, but as soon as you want to take a break or hop off, just like. Yeah.
SARAH: Oh, yeah. I'm in no rush.
AARON: Okay, well, I don't know. We've talked about the two obvious candidates. Do you have a take or something you want to get out to the world? It doesn't have to be about AI or obesity. Or just a story you want to share.
SARAH: These are my two pet subjects. I don't know anything else.
AARON: I don't believe you. I know you know about house plants.
SARAH: I do. A secret, which you can't tell anyone, is that I actually only know about house plants that are hard to kill, and I'm actually not very good at taking care of them.
AARON: Well, I'm glad it's house plants in that case, rather than pets. Whatever.
SARAH: Yeah. I mean, I have killed some sea monkeys, too, but that was a long time ago.
AARON: Yes. So did I, actually.
SARAH: Did you? I feel like everyone has. Everyone's got a little sea monkey graveyard in their past.
AARON: New cause area.
SARAH: Are there more shrimp or more sea monkeys? That's the question.
AARON: I don't even know what even. I mean, are they just plankton?
SARAH: No, they're not plankton.
AARON: I know what sea monkeys are.
SARAH: There's definitely a lot of them because they're small and insignificant.
AARON: Yeah, but I also think we don't. It depends if you're talking about in the world, which I guess probably like sea monkeys, or farmed for food, which is basically like. I doubt these are farmed either for food or for anything.
SARAH: Yeah, no, you're probably right.
AARON: Or they probably are farmed a tiny bit for this niche little.
SARAH: Or they're farmed to sell in aquariums for kids.
AARON: Apparently. They are a kind of shrimp, but they were bred specifically to, I don't know, be tiny or something. I'm just skimming that Wikipedia.
Here.
SARAH: Sea monkeys are tiny shrimp. That is crazy.
AARON: Until we get answers, tell me your life story in whatever way you want. It doesn't have to be like. I mean, hopefully not. Don't straight up lie, but wherever you want to take that.
SARAH: I'm not going to lie. I'm just trying to think of ways to make it spicier because it's so average. I don't know what to say about it.
AARON: Well, it's probably not that average, right? I mean, it might be average among people you happen to know.
SARAH: Do you have any more specific questions?
AARON: Okay, no. Yeah, hold on. I have a meta point, which is like, I think there are the people who have a thing on the top of their mind, and if I give any sort of open ended question whatsoever, they'll take it there and immediately just start slinging hot takes. But then other people, I think, and this category is very EA. People who aren't, especially my sister, they're like, “No, I have nothing to talk about. I don't believe that.” But they're not, I guess, as comfortable.
SARAH: No, I mean, I have. Something needs to trigger them in me. Do you know what I mean? Yeah, I need an in.
AARON: Well, okay, here's one. Is there anything you're like. Maybe I'll cut this. This is kind of, like, narcissistic. I don't know. But is there anything you want or are curious to ask? This does sound kind of weird. I don't know. But we can cut it if need be.
SARAH: What does the looking glass in your Twitter name mean? Because I've seen a bunch of people have this, and I actually don't know what it means, but I was like, no.
AARON: People ask this. I respond to a tweet that's like, “What does that mean?” at least, I don't know, once every month or two. You know, basically, like Spencer Greenberg. I don't know if you're familiar with him. He's like a sort of.
SARAH: I know the name.
AARON: He literally just tweeted, like a couple years ago. Put this in your bio to show that you really care about finding the truth or whatever and are interested in good faith conversations. Are you familiar with the scout mindset?
SARAH: Yeah.
AARON: Julia Galef. Yeah. That's basically, like, the short version.
SARAH: Okay.
AARON: I'm like, yeah, all right. And there's at least three of us who have both a magnifying glass. Yeah. And a pause thing, which is like, my tightest knit online community, I guess.
SARAH: I think I've followed all the pause people now. I just searched the emoji on Twitter, and I just followed everyone. Now I can't find. And I also noticed when I was doing this, that some people, if they've suspended their account or they're taking time off, then they put a pause in their thing. So I was, like, looking, and I was like, oh, these are, like, AI people. But then they were just, like, in their bio, they were, like, not tweeting until X date. This is a suspended account. And I was like, I see we have a messaging problem here. Nice. I don't know how common that actually was.
AARON: I'm glad. That was, like, a very straightforward question. Educated the masses. Max Alexander said Glee. Is that, like, the show? You can also keep asking me questions, but again, this is like.
SARAH: Wait, what did he say? Is that it? Did he just say glee? No.
AARON: Not even a question mark. Just the word glee.
SARAH: Oh, right. He just wants me to go off about Glee.
AARON: Okay. Go off about. Wait, what kind of Glee are we? Vaguely. This is like a show or a movie or something.
SARAH: Oh, my God. Have you not seen it?
AARON: No.
I mean, I vaguely remember, I think, watching some TV, but maybe, like, twelve years ago or something. I don't know.
SARAH: I think it stopped airing in, like, maybe 2015?
AARON: '16. So go off about it. I don't know what I. Yeah.
SARAH: I don't know what to say about this.
AARON: Well, why does Max think you might have a take about Glee?
SARAH: I mean, I don't have a take about. Just see the thing. See? No, not even, like, I am just transparently extremely lame. And I really like cheesy. I'm like. I'm like a musical theater kid. Not even ironically. I just like show tunes. And Glee is just a show about a glee club at a high school where they sing show tunes and there's, like, petty drama, and people burst into song in the hallways, and I just think it's just the most glorious thing on Earth. That's it. There are no hot takes.
AARON: Okay, well, that's cool. I don't have a lot to say, unfortunately, but.
SARAH: No, that's totally fine. I feel like this is not a spicy topic for us to discuss. It's just a good time.
AARON: Yeah.
SARAH: Wait.
AARON: Okay. Yeah. So I do listen to Hamilton on Spotify.
SARAH: Okay.
AARON: Yeah, that's about it.
SARAH: I like Hamilton. I've seen it three times. Oh.
AARON: Live or ever? Wow. Cool. Yeah, no, that's okay. Well, what do people get right or wrong about theater kids?
SARAH: Oh, I don't know. I think all the stereotypes are true.
AARON: I mean, that's generally true, but usually, it's either over-moralized, there's like a descriptive thing that's true, but it's over-moralized, or it's just exaggerated.
SARAH: I mean, to put this in more context, I used to be in choir. I went every Sunday for twelve years. And then every summer we do a little summer school and we go away and put on a production. So we do a musical or something. So I have been. What have I been? I was in Guys and Dolls. I think I was just in the chorus for that. I was the reverend in Anything Goes. But he does unfortunately get kidnapped in like the first five minutes. So he's not a big presence. Oh, I've been Tweedledum in Alice in Wonderland. I could go on, but right now as I'm saying this, I'm looking at my notice board and I have two playbills from when I went to Broadway in April where I saw Funny Girl and Hadestown.
SARAH: I went to New York.
AARON: Oh, cool. Oh yeah. We can talk about when you're moving to the United States. However.
SARAH: I'm not going to do that. Okay.
AARON: I know. I'm joking. I mean, I don't know.
SARAH: I don't think I'm going to do that. I don't know. It just seems like you guys have got a lot going on over there. It seems like things aren't quite right with you guys. Things aren't quite right with us either.
AARON: No, I totally get this. I think it would be cool. But also I completely relate to not wanting to. I've lived within 10 miles of one. Not even 10 miles, 8 miles in one location. Obviously gone outside of that. But my entire life.
SARAH: You've just always lived in DC.
AARON: Yeah, either in DC or. Sorry. But right now in Maryland, it's like right next to DC on the Metro, or at Georgetown University, which is in the… trying to think. Would I move to the UK? Like I could imagine situations that would make me move to the UK. But it would still be annoying. Kind of.
SARAH: Yeah, I mean, I guess it's like they're two very similar places, but there are all these little cultural things which I feel like kind of trip you up.
AARON: I don't know. Do you want to say what?
SARAH: Like I think people, I just like, I don't know.
I don't have that much experience because I've only been to America twice. But people seem a lot more sincere in a way that you don't really get here. Like people here are just never really being upfront. And in America, I just got the impression that people just have less of a veneer up, which is probably a good thing. But it's really hard to navigate if you're not used to it or something. I don't know how to describe that.
AARON: Yeah, I've definitely heard this at least. And yeah, I think it's for better and for worse.
SARAH: Yeah, I think it's generally a good thing.
AARON: Yeah.
SARAH: But it's like there's this layer of cynicism or irony or something that is removed and then when it's not there, it's just everything feels weak. I can't describe it.
AARON: This is definitely, I think, also like an EA rationalist thing. I feel like I'm pretty far on the spectrum, towards the end of: social niceties are fine, but I don't know, don't obscure what you really think unless it's a really good reason to or something. But it can definitely come across as being rude.
SARAH: Yeah. No, but I think it's actually a good rule of thumb to obscure what you. It's good to try not to obscure what you think most of the time, probably. I don't know, but I would love to go over temporarily for like six months or something and just hang out for a bit. I think that'd be fun. I don't know if I would go back to New York again. Maybe. I like the bagels there.
AARON: I should have a place. Oh yeah. Remember, I think we talked at some point. We can cut this out if you like. Don't if either of us doesn't want it in. But we discussed, oh yeah, I should be having a place. You can. I emailed the landlord like an hour before this. Hopefully, probably more than 50%. That is still an offer. Yeah, probably not for all six months, but I don't know.
SARAH: I would not come and sleep on your sofa for six months. That would be definitely impolite and very weird.
AARON: Yeah. I mean, my roommates would probably grumble.
SARAH: Yeah. They would be like.
AARON: Although I don't know. Who knows? I wouldn't be shocked if people were actually like, whatever. Somebody asked for this as a question. This is what he said. I might also be interested in hearing how different backgrounds. Wait, sorry. This is not good grammar. Let me try to parse this. Does not having a super hardcore EA AI rationalist background shape how you think, or how you view AI and rationality?
SARAH: Oh, that's a good question. I think it's more happening the other way around, the more I hang around in these circles. You guys are impacting how I think.
AARON: It's definitely true for me as well.
SARAH: Seeping into my brain and my language as well. I've started talking differently. I don't know. That's a good question, though. Yeah. One thing that I will say is that there are certain things that I find irritating about the EA style of doing things. I think one specific, I don't know, the kind of, like, hand-wringing about everything. And I know that this is kind of the point, right? But it's kind of like, you know, when someone's like, I want to take a stance on something, but then whenever they want to take a stance on something, they feel the need to write like a 10,000 word blog post where they're thinking about the second and third and fifth order effects of this thing. And maybe this thing that seems good is actually bad for this really convoluted reason.
That's just so annoying. AARON: Yeah. SARAH: I also understand that maybe that is a good thing to do sometimes, but it just seems like, I don't know how anyone ever gets anywhere. It seems like everyone must be paralyzed by indecision all the time, because they just can't commit to ever actually just saying anything. AARON: I think this kind of thing is really good if you're trying to give away a billion dollars. Oh yes, I do want the billion dollar grantor to be thinking through second and third order effects of how they give away their billion dollars. But also, no, I am super... the word's on the tip of my tongue... not overwhelmed, but intimidated when I go on the EA Forum, because none of the posts are normal five paragraph essays. One of them I looked up for fun, because I was going to make a meme about it (and still will, probably), was like 30,000 words or something. And the thing that gets me kind of, not even annoyed, but maybe kind of annoyed, is that even the short form posts, which are sort of the EA Forum version of Twitter, are way too high quality, way too intimidating. So maybe I should just suck it up and post stuff anyway more often. It just feels weird. I totally agree. SARAH: I was also talking to someone recently about how I lurked on the EA Forum and LessWrong for months and months, and I couldn't figure out the upvoting system. I was like, am I being stupid, or why are there four buttons? Eventually I had to ask someone because I couldn't figure it out, and then he explained it to me, and I was like, that is just so unnecessary. AARON: No, I do know what you mean. SARAH: I just think it's annoying. It pisses me off. I just feel like sometimes you don't need to add more things. Sometimes less is good. Yeah, that's my hot take. Nice things. AARON: Yeah, that's interesting. SARAH: But actually, a thing that I like that EAs do is the constant hedging and caveating. I find it kind of adorable. I love it, because you're having to constantly acknowledge that you probably didn't quite articulate what you really meant, and that you're not quite making contact with reality when you're talking, so you have to clarify that you were probably imprecise when you said this thing. It's unnecessary, but it's kind of amazing. AARON: No, it's definitely... I am super guilty of this, and I'll give an example in a second. I think I've been basically trained to try pretty hard, even in normal conversation with anybody, to just never say anything that's literally wrong, or at least, if I do, to caveat it. AARON: I was driving home; my parents and I had visited our grandparents and were driving back, past a cruise ship that was in a harbor. And my mom, who was driving at the time, said, "Oh, Aaron, can you see if there's anyone on there?" And I immediately responded, "Well, there's probably at least one person." Obviously, that's not what she meant. But that was my technical best guess. It's like, yes, there probably are people on there, even though I couldn't see anybody on the decks or in the rooms. There's probably a maintenance guy. Felt kind of bad. SARAH: You can't technically exclude that there are, in fact, no people. AARON: Then I corrected myself. But I guess I've been trained into giving that as my first reaction. SARAH: Yeah, I love that. I think it's a waste of words, but I find it delightful. AARON: It does go too far.
People should be more confident. I wish that, at least sometimes, people would say, "Epistemic status: want to bet?" or "I am definitely right about this." Too rarely do we hear, "I'm actually pretty confident here." SARAH: Another thing is, people are too liberal with using probabilities. The meaning of saying there is an X percent chance of something happening is getting watered down by people constantly saying things like, "I would put 30% on this claim." Obviously, there's no rigorous method that's gone into determining why it's 30 and not 35. That's a problem, and people shouldn't do that. But I kind of love it. AARON: I can defend that. People are saying upfront, "This is my best guess, but there's no rigorous methodology behind it." People should take their word for that. In some parts of society, giving a number is seen as implying that the probability came from a rigorous model. But if you say, "This is my best guess, but it's not formed from anything rigorous," people should take your word for that and accept the number at face value. SARAH: But why do you have to put a number on it? AARON: It depends on what you're talking about. Sometimes probabilities are relevant, and if you don't use numbers, it's easy to misinterpret. People will say, "It seems quite likely," but what does that mean? One person might think "quite likely" means 70%, while another thinks it means 30%. Even though it's weird to use a single number, it's less confusing. SARAH: To be fair, I get that. I've disagreed with people about what the word "unlikely" means, and someone's pulled out a scale that the government, or the intelligence services, use to determine what "unlikely" means. But everyone interprets those words differently. I see what you're saying. But then again, I think people in AI safety talking about P(doom) was making people take us less seriously, especially because people's probabilities are so vibey. AARON: Some people's are, but I take Paul Christiano's word seriously. SARAH: He's a 50/50 kind of guy. AARON: Yeah, I take that pretty seriously. Obviously, it's not as if he has a perfect understanding of the world, even after another 10,000 hours of investigation. But it's definitely not just vibes, either. SARAH: No, I came off wrong there. I don't mean that everyone's understanding is just vibes. AARON: Yeah. SARAH: But if you were looking at it from the outside, it would be really difficult to distinguish between the numbers that are vibes and the ones that are rigorous, unless you carefully parsed all of it and evaluated everyone's background, or looked at the models yourself. If you're one step removed, it looks like people just spitting out random, arbitrary numbers everywhere. AARON: Yeah. There's also the question of whether P(doom) is too weird or silly, or whether it could be easily dismissed as such. SARAH: Exactly. The moment anyone unfamiliar with this discussion sees it, they're almost definitely going to dismiss it. They won't see it as something they need to engage with. AARON: That's a very fair point. Aside from the social aspect, it's also a large oversimplification. There's a spectrum of outcomes that we lump into doom and not-doom. While this binary approach can be useful at times, it's probably overdone. SARAH: Yeah, because when some people say doom, they mean everyone dies, while others mean everyone dies plus everything is terrible. And no one specifies what they mean. It is silly. But I also find it kind of funny, and I kind of love it. AARON: I'm glad there's something like that.
So it's not perfect. The more straightforward thing would be to say P(existential risk from AI comes to pass). That's the long version, whatever. SARAH: If I were in charge, I would probably make people stop using P(doom). I think it's better to say it the long way around. But obviously I'm not in charge, and I think it's funny and kind of cute, so I'll keep using it. AARON: Maybe I'm willing to go along and try to start a new norm. Not spend my whole life on it, but say: I think this is bad for X, Y, and Z reasons; I'll use this other phrase instead and clarify when people ask. SARAH: You're going to need Twitter Premium, because you're going to need a lot more characters. AARON: I think there's a shorthand, which is like P(x-risk) or P(AI x-risk). SARAH: Maybe it's just the word doom that's a bit stupid. AARON: Yeah, that's a term out of the Bay Area rationalists. SARAH: But then I also think it kind of makes the whole thing seem less serious. People should be indignant to hear that this meme is being used to trade probabilities about the likelihood that they're going to die and their families are going to die. This has been an in-joke in this weird niche circle for years, and they didn't know about it. I'm not saying that in a way to morally condemn people, but if you explain this to people... people just go to dinner parties in Silicon Valley and talk about this weird meme thing, and what they really mean is the odds that everyone's going to prematurely die. People should be outraged by that, I think. AARON: I disagree that it's a joke. It is a funny phrase, but people really do stand by their beliefs. SARAH: No, I totally agree with that part. I'm not saying that people are not being serious when they give their numbers, but I feel like there's something... I don't know how to put this in words. There's something outrageous about the fact that, for outsiders, this conversation has been happening for years, and people have been using this tongue-in-cheek phrase to describe it, and 99.9% of people don't know that's happening. I'm not articulating this very well. AARON: I see what you're saying. I don't actually think it's... I don't know, it's just jargon. SARAH: But when I first found out about this, I was outraged. AARON: I honestly just don't share that intuition. But that's really good. SARAH: No, I don't know how to describe this. AARON: I think I was just a little bit indignant, perhaps. SARAH: Yeah, I was indignant about it. I was like, you guys have been at social events making small talk by discussing the probability of human extinction all this time, and I didn't even know. I was like, oh, that's really messed up, guys. AARON: I feel like I'm standing up for the rationalists here, because it was always out in the open. No one was stopping you from going on LessWrong or whatever. It wasn't behind closed doors. SARAH: Yeah, but no one ever told me about it. AARON: Yeah, that's a failure of outreach, I suppose. SARAH: Yeah. I think maybe the people that I'm mad at are the people who are actually working on capabilities and using this kind of jargon. Maybe I'm mad at those people. The rest are fine. AARON: Do we have more questions? I think we might have more questions. We have one more. Okay, sorry, but keep going. SARAH: No, I'm going to stop making that point now, because I don't really know what I'm trying to say and I don't want to be controversial. AARON: Controversy is good for views. Not necessarily for you. No, thank you for that. Yes, that was a good point. I think it was.
Maybe it was wrong. I think it seems right. SARAH: It was probably wrong. Shrimp Welfare: A Serious Discussion. AARON: Max asks what she thinks about shrimp welfare. Oh, yeah. I think it's a general question, but let's start with that. What do you think about shrimp? SARAH: Okay. Is this an actual cause area, or is this a joke about how, if you extrapolate utilitarianism to its natural conclusion, you would really care about shrimp? AARON: No, there's a charity called the Shrimp Welfare Initiative, or Project; I think it's the Shrimp Welfare Project. I can actually have a rant here about how it's a meme that people find amusing. It is a serious thing, but I think people like the meme more than they're willing to adjust their donations in light of it, which is kind of wrong, and at least distasteful. But there's an actual thing: if you Google Shrimp Welfare Project, it's definitely real, but it's only a couple of years old. And it's also kind of a meme because it works in both ways. It sort of shows how we're weird, but in the sense that we are willing to care about things that are very different from us. Not, like, threatening to other people. That's not a good description. SARAH: Is the extreme version of this position that we should put more resources into improving the lives of shrimp than into improving the lives of people, just because there are so many more shrimp? Are there people that actually believe that? AARON: Well, I believe some version of that, but it really depends on who the "we" is there. SARAH: Should humanity be putting more resources in? AARON: No one believes that, as far as I know. SARAH: Okay. Right. So what is the most extreme manifestation of the shrimp welfare position? AARON: Well, I feel like my position is kind of extreme, and I'm happy to discuss it. It's easier than speculating about what the more extreme ones are. I don't think any of them are that extreme, I guess, from my perspective, because I think I'm right. SARAH: Okay, so what do you believe? AARON: I think that for most people who have already decided to donate, say, $20, if they are considering where to donate it, it would be morally better if they gave it to the Shrimp Welfare Project than if they gave it to any of the commonly cited EA organizations. SARAH: Malaria nets or whatever. AARON: Yes. $20 of malaria nets versus $20 of shrimp welfare: I can easily imagine a world where it would go the other way, but given the actual situation, the $20 of shrimp welfare is much better. SARAH: Okay. Is it just purely because there are just more shrimp? How do we know how much shrimp suffering there is in the world? AARON: No, this is an excellent question. The numbers are a key factor, but it's not that simple. I definitely don't think one shrimp is worth one human. SARAH: I'm assuming it's based on the fact that there are so many more shrimp than there are people. I don't know how many shrimp there are. AARON: Yeah, that's important, but at some level, it's just about the margin. What I think is that when you're donating money, you should give to wherever it does the most good, whatever that means, whatever you think that means. But let's just leave it at that: the most good, morally best, at the margin. Which means you're not deciding how the world should expend its trillion-dollar wealth. All you're doing is adding $20 at the current level, given the actual world.
And so part of it is what you just said, and also some new research from Rethink Priorities. Measuring suffering in reasonable ranges is extremely hard to do, but I believe it's difficult to do a better job of it than Rethink Priorities has, given what I've seen. I can provide some links. There are a few things to consider here: the numbers, times the enormity of the suffering. And there are a couple of other key elements, including tractability. Are you familiar with the three-pronged framework people sometimes discuss, which covers importance, tractability, and neglectedness? SARAH: Okay. AARON: Importance is essentially what we just mentioned: huge numbers and plausible amounts of suffering. When you try to do the comparison, it seems like a significant concern. Tractability is another factor. I think the best estimates suggest that a one-dollar donation could save around 10,000 shrimp from a very painful death. SARAH: In that sense... AARON: You could imagine that even if there were a hundred times more shrimp than there actually are, we have direct control over how they live and die, because we're farming them. The industry is not dominated by wealthy players in the United States. Many individual farmers in developing nations, if educated and provided with a more humane way of killing the shrimp, would use it. There's a lot of potential for improvement here. This is partly due to the last prong, neglectedness, which is really my focus. SARAH: You're saying no one cares about the shrimp. AARON: I'm frustrated that it's not taken seriously enough. One of the reasons why the marginal cost-effectiveness is so high is that large amounts of money already flow to the well-known organizations, while individual donors often overlook their own marginal impact. If you want to see even a 1% shift towards shrimp welfare, the thing to do is to donate to shrimp welfare, not to donate $19 to human welfare and one dollar to shrimp welfare, which is perhaps what they think the overall portfolio should be. SARAH: Interesting. I don't have a good reason why you're wrong. It seems like you're probably right. AARON: Let me put the website in the chat. This isn't a fair comparison, since it's something I know more about. SARAH: Okay. AARON: On the topic of obesity, neither of us was more informed than the other. But here I could have just made stuff up or said something logically fallacious. SARAH: You could have told me that there were, like, 50 times the number of shrimp in the world than there really are, and I would have been like, sure, seems right. AARON: Yeah. And I don't know, if I were in your position, I would say, "Oh, yeah, that sounds right." But maybe there are other people who have looked into this way more than me who disagree, and I can get into why I think it's less true than you'd expect, in some sense. SARAH: I just wonder... this is like a deeply non-EA thing to say, so I don't know, maybe I shouldn't say it, but are there not any moral reasons? Is there not any good moral philosophy behind just caring more about your own species than other species? Sorry, but that's probably not right, is it? There's probably no way to actually morally justify that, but it just feels intuitively wrong.
If you've got $20 and you're donating 19 of them to shrimp and one to children with malaria, it feels like there should be something wrong with that, but I can't tell you what it is. AARON: Yeah, there is something wrong, which is that you should donate all 20 to shrimp, because you're acting on the margin, for one thing. I do think caring way more about humans doesn't fully check out morally, but basically me and everybody I know in real life do just care way more about humans. It's hard to formalize or specify what you mean by "caring about" something. But yeah, I think you can basically just be a normal human who cares a lot about other humans, and that's not negated by changing your $20 donation or whatever. Especially because there's nothing else that I do for shrimp. I think you should be, like, a kind person or something. I'm an honest person, I think. Yeah, people should be nice to other humans. I mean, you should be nice in the sense of not beating them. But if you see a pigeon on the street, you don't need to say hi or give it a pet. But yeah, you should be basically nice. SARAH: You don't stop to say hi to every pigeon that you see on the way to anywhere. AARON: I do, but I know most normal people don't. SARAH: This is why I'm so late to everything, because I have to do it. I have to stop for every single one. No exceptions. AARON: Yeah. Or how I think about it is sort of a little bit of compartmentalization, which is just sort of a way to function normally and also do what you think really checks out at the end of the day. Like, okay, 99% of the time I'm going to just be a normal person who doesn't care about shrimp. Maybe I'll refrain from eating them, but actually, even then, I could totally see a person still eating them and then doing this. But then, during the 1% of the time when you're deciding how to give money away, the beneficiaries are going to be totally out of sight either way. This is like a neutral point, I guess, but it's still worth saying: then you can be a hardcore effective altruist and give your money to the shrimp people. SARAH: Do you have this set up as a recurring donation? AARON: Oh, no. Everybody should call me out as a hypocrite, because I haven't donated much money. But I'm trying to figure it out, actually, given that I haven't had a stable income ever. And maybe, hopefully, I will soon, though even then it's still a part-time thing. I haven't been able to do the sort of standard 10% or more thing, and I'm trying to figure out how to balance, I guess, not luxury, but consumption on things that I... well, to some extent, yeah, maybe I'm just selfish by sometimes getting an Uber. That's totally true. I think I'm just a hypocrite in that respect. But mostly I think the trade-off is between saving, investing, and giving, based on the money that I have saved up from past things. So this is all sort of a defense of why I don't have a recurring donation going on. SARAH: I'm not asking you to defend yourself, because I don't do that either. AARON: I think if I were making enough money that I could give away $10,000 a year and plan on doing that indefinitely, I would still be unlikely to set up a recurring donation.
What I would really want to do is, once or twice a year, really try to prioritize deciding on how to give it away, rather than making it the default. This has a real cost for charities: if you set up a recurring donation, they have more certainty, in some sense, about their future cash flow. But that's only good to do if you're really confident that you're going to want to keep giving there in the future, and I could learn new information that says something else is better. So I don't think I would do that. SARAH: Now I'm just thinking about... how many shrimp did you say it was per dollar? AARON: Don't quote me. I didn't say an actual figure. SARAH: It was some big number, right? Because I just feel like that's such a brainworm. Imagine if you let that actually get into your head, and then every time you spend some unnecessary amount of money on something you don't really need, you think about how many shrimp you just killed by getting an Uber or buying lunch out. That is so stressful. I think I'm going to try not to think about that. AARON: I don't mean to belittle this. This is like a core, new-to-EA type of thinking. It's super natural, and also troubling when you first come upon it. Do you want me to talk about how I, or other people, deal with that and take action? SARAH: Yeah, tell me how to get the shrimp off my conscience. AARON: Well, for one thing, you don't want to totally do that. But I think the main thing is that the salience of things like this just decreases over time. I would be very surprised if, even if you're still very engaged in the EA-adjacent communities, or EA itself, in five years, it would be as emotionally potent. Brains make things less salient over time. But I think the thing to do is basically to compartmentalize, in a sort of weird sense. Decide how much you're willing to donate. It might be hard to do that, but it is sort of a process. Then you have that chunk of money, and you try to give it away the best you can, under whatever you think the best ethics are. But on the daily, you have this other set pot of money, and you are just a normal person. You spend it as you wish. You don't think about it, or you try not to. And if you notice that you have leftover money, then you can donate the rest of it. But I really do think picking how much to give should be its own project, and then you have a pile of money you can be a hardcore EA about. SARAH: So you pick a cutoff point, and then you don't agonize over anything over and above that. AARON: Yeah. The hard part is that if somebody says their cutoff point is, like, 1% of their income, and they're making, like, $200,000, I don't know, maybe their cutoff point should be higher. So there is a debate; it depends on that person's specific situation. Maybe if they have a kid or some super expensive disease, it's a different story. If you're just a random guy making $200,000, I think you should give more. SARAH: Maybe you should be giving away enough to feel the pinch. Well, not even that. I don't think I'm going to do that. This is something that I do actually want to do at some point, but I need to think about it more, and maybe get a better job. AARON: Another thing is, if you want earning to give as a path to impact, you could think and strive pretty hard. Maybe talk to people, and choose your education or professional development opportunities carefully, to see if you can get a better paying job.
That's just much more important than changing how much you give from 10% to 11% or something. You should do this macro-level optimization: how can I have more money to give? Let me spend, I don't know, it depends what life stage you're at, but if you've just graduated college, or say you're a junior in college or something, it could make sense to spend a good amount of time figuring out what that path might look like. AARON: I'm a huge hypocrite, because I definitely haven't done all this nearly as much as I should, but I still endorse it. SARAH: Yeah, I think it's fine to say what you endorse doing in an ideal world, even if you're not doing it. AARON: For anybody listening, I tweeted a while ago asking if anyone has resources on how to think about giving away wealth. I'm not very wealthy, but I have some amount of savings, more than I really need. At the same time, maybe I should be investing it, because EA orgs feel like they can't invest their own funds; there's potentially a lot of blowback if they make poor investments, even though it would be higher expected value. There's also the consideration that having some amount of savings allows me to take potentially somewhat higher risk but higher value opportunities, because I have a cushion. But I'm very confused about what I should do here. People should DM me on Twitter or anywhere if they have ideas. SARAH: I think you should calculate how much you need to cover your very basic needs. Maybe you should work out, say, if you were working 40 hours a week in a minimum wage job, how much would you make then? And then you should keep that for yourself. And then the rest should definitely all go to the shrimp. Every single penny. All of it. AARON: This is pretty plausible. Just to make it more complicated, there's also the fact that my best guesses about the best charities to give to have changed over time. So there are two competing forces. One is that I might get wiser and more knowledgeable as time goes on. The other is that, in general, giving now is better than giving later, all else equal, for a couple of reasons, the main one being that the charities don't know that you're going to give later. AARON: So they can plan for the future much better if they get money now. And there are also just higher leverage, higher value-per-dollar opportunities now than there will be later, for a couple of reasons I don't really need to get into. This is what makes it really complicated. I've donated in the past to places that I don't think, or didn't think even at the time, were the best to give to. So then there's a question of, okay, how long do I sit on this money until I'm pretty confident? Months? A year? AARON: I do think that, over the course of zero to five years or something, becoming more confident or changing your mind is the stronger effect, compared with how much better it is for the charities to get the money now instead of later. But that's also weird, because you're never committing at all. Sometimes you might decide to give it away, and maybe you won't. Maybe at that time you're like, "Oh, I want a car, I have a house, whatever." It's less salient or something. Maybe something bad happened with EA and you no longer identify that way. Yeah, there are a lot of really thorny considerations.
Sorry, I'm talking way too much. SARAH: So, are you factoring AI timelines into this? AARON: That makes it even more sketchy. But that could also go both ways. On one hand, if you don't give away your money now and you die with it, it's never going to do any good. On the other hand, it might be that especially high leverage opportunities come up in the future. Potentially, I don't know, I could make something up: OpenPhil needs as much money as it can get to do X, Y, and Z, and it's really important right now, but I won't know that until a few years down the line. So, just like everything else, it doesn't neatly wash out. SARAH: What do you think the AGI is going to do to the shrimp? I reckon it's probably pretty neat, like one shrimp per paperclip. Maybe you could get more. I wonder what the shrimp-to-paperclip conversion rate is. AARON: Has anyone looked into that? Morally, I think it's like one to zero. In terms of money, you could definitely price that; I have no idea. SARAH: I don't know. Maybe I'm not taking this as seriously as I should be. AARON: No, I mean, humor is good. When people are giving away money or deciding what to do, they should be serious, but joking and humor are good. Sorry, go ahead. SARAH: No, you go ahead. AARON: I had a half-baked idea. At EA Global, they should have a comedy show where people roast everybody, but it's a fundraiser. You'd get 100 people to attend, and they'd have a bidding contest to get into the comedy show. That was my original idea. Or they could just have a normal comedy show. I think that'd be cool. SARAH: Actually, I think that's a good idea, because you guys are funny. There is a lot of wit on this side of Twitter. I'm impressed. AARON: I agree. SARAH: So I think that's a very good idea. AARON: Okay. Dear Events team: hire Aaron Bergman, professional comedian. SARAH: You can just give them your Twitter as a source for how funny you are, and that clearly qualifies you to set this up. I love it. AARON: This is not important or related to anything, but I used to be a good juggler, for entertainment purposes. I have this video from a talent show. Maybe I should make sure the world can see it. SARAH: Juggling. You definitely should make sure the world has access to this footage. AARON: It had more views than I expected. It wasn't five views. It was 90 or something, which is still nothing. SARAH: I can tell you a secret right now if you want, one that relates to Max asking in the chat about Glee. AARON: Yes. SARAH: This bit we'll also have to edit out, but my having a public meltdown over AI was the second time that I've ever blown up on the Internet, the first time being... I can't believe I'm telling you this. I think I'm delirious right now. Were you ever in any fandoms as a teenager? AARON: No. SARAH: Okay. Were you ever on Tumblr? AARON: No. I sort of know what the cultural vibes were. I sort of know what you're referring to. There are people who like Harry Potter stuff and bands, Kpop, stuff like that. SARAH: So people would make these fan videos where they'd take clips from TV shows and edit them together to music. Sometimes people would edit the clips to make it look like something had happened in the plot of the show that hadn't actually happened. For example: what if X character had died? And then you edit the clips together to try and make it look like they've died.
And you put a sad song, "How to Save a Life" by The Fray or something, over the top. And then you put it on YouTube. AARON: Sorry, tell me what I should search, or just send the link here. I'm sending my link. SARAH: Oh, no, this doesn't exist anymore. It does not exist anymore. Right? So, say you're, like, eleven or twelve years old and you do this, and you don't even have a mechanism to download videos, because you don't know how to do technology. Instead, you take your little iPod touch, you just play a YouTube video on your laptop screen, and you literally film the screen with your iPod touch, and that's how you're getting the clips. It's kind of shaky, because you're holding the camera. Anyway. SARAH: Then you edit it together on the iMovie app of your iPod touch, and then you put it on the Internet, and then you just forget about it. Two years later, you're like, oh, I wonder what happened to that YouTube account? And you log in, and this little video that you've made with edited clips that you've filmed off the screen of your laptop, set to "How to Save a Life" by The Fray, with clips from Glee in it, has nearly half a million views. AARON: Nice. Love it. SARAH: Embarrassing, because this is, like, two years later. And all the comments were like, oh my God, this was so moving, this made me cry. And then obviously some of them were hating and being like, do you not even know how to download video clips? What? And then you're so embarrassed. AARON: I could totally see it. Creative, but also a reasonable solution. Yeah. SARAH: So that's my story of how I went viral when I was, like, twelve. AARON: It must have been kind of overwhelming. SARAH: Yeah, it was a bit. And you can tell by the time, it's like 20 to eleven at night, and now I'm starting to really go off on one and talk about weird things. AARON: It's been like an hour. So, yeah, we can wrap up. And I always say this, but it's actually true: there's a low standard, like, low stakes, low threshold, low bar for doing this again and recording some other time. SARAH: Yeah, probably. We'll have to get rid of the part about how I went viral on YouTube when I was twelve. I'll sleep on that. AARON: Don't worry. I'll send the transcription at some point soon. SARAH: Yeah, cool. AARON: Okay, lovely. Thank you for staying up late into the night for this. SARAH: It's not that late into the night. I'm just lame and go to bed early. AARON: Okay, cool. Yeah, I know. Yeah, for sure. All right, bye. Get full access to Aaron's Blog at www.aaronbergman.net/subscribe
It's our first ever AAP bonus episode! It's a recording of the first discussion John and Lawrence had on the Lawrence Anton YouTube channel in November 2022 on how antinatalists can do the most good. Back then John hadn't yet splashed out on a microphone, so if you can suffer through the low-quality audio, enjoy! TIMESTAMPS00:00 Intro03:02 How John became an antinatalist09:36 Brief intro to Effective Altruism (EA)14:03 The link between antinatalism and EA16:52 What antinatalists can learn from EA's successes22:41 What antinatalists can learn from EA's challenges27:52 Reluctant longtermism34:38 The four cause areas of antinatalist activism49:49 People working in this space53:00 Possible objections from the antinatalist community1:02:42 Possible objections from the EA community1:10:45 How academia might react to antinatalist activism1:17:30 Tangible steps to have an impact1:20:21 Final thoughts1:24:24 Outro ANTINATALIST ADVOCACYNewsletter: https://antinatalistadvocacy.org/newsletterWebsite: https://antinatalistadvocacy.org/Twitter: https://twitter.com/AN_advocacy Instagram: https://instagram.com/an_advocacy Check out the links below! Effective Altruism: https://www.effectivealtruism.org Give Well: https://www.givewell.org 80,000 Hours: https://80000hours.org Animal Charity Evaluators: https://animalcharityevaluators.org Centre for Reducing Suffering: https://centerforreducingsuffering.org Critique of MacAskill's 'Is it good to make happy people?' | Magnus Vinding (Article): https://forum.effectivealtruism.org/posts/vZ4kB8gpvkfHLfz8d/critique-of-macaskill-s-is-it-good-to-make-happy-people Strategic Considerations for Moral Antinatalists | Brian Tomasik (Article): https://reducing-suffering.org/strategic-considerations-moral-antinatalists/ Famine, Affluence & Procreation | David Benatar (Article): https://r.jordan.im/download/natalism/benatar2020.pdf
A thought-provoking conversation about Effective Altruism (EA) with technologist Ben Goldhaber, as we explore its intersections with utilitarianism and transaction costs. We'll try to navigate the tricky terrain of libertarianism and the more "directed" world of EA, balancing directional and destinationist solutions, and the role of strong leadership and community dynamics in maintaining this equilibrium. We'll question the limits of utility maximization as a framework and ponder the potential dangers it could pose if unchecked. Our discussion investigates how EA, rational thinking, and global development have influenced the field of AI alignment. And my favorite new TWEJ, from @dtarias, in this first monthly edition of TAITC. Some resources:The Reddit source for the TWEJSunday Brunch, for $195, at the BreakersEconTalk: Peter SingerEconTalk: Will MacAskill and LongtermismEconTalk: Erik Hoel and the Repugnant ConclusionKevin Munger--Everything Was Rational and Nothing VibedConsequentialism: IEPEffective Altruism ForumSBF on SBF (New York Times)If you have questions or comments, or want to suggest a future topic, email the show at taitc.email@gmail.com ! You can follow Mike Munger on Twitter at @mungowitz
In this episode, John speaks with fellow antinatalist Michael (aka Vegan Space Scientist), a Researcher and Strategy Lead at the Sentience Institute. They discuss the core ideas of Effective Altruism, the ins and outs of the EA community, and how people can use the core ideas of Effective Altruism and Antinatalism to make the world a better place. Enjoy!TIMESTAMPS00:00 Intro to the episode02:59 Welcome Michael04:32 Discovering antinatalism08:37 Going vegan11:31 Coming across Effective Altruism (EA)17:26 Core EA ideas22:36 How these ideas have evolved over time31:04 Choosing causes / charities to support40:26 Challenges / downsides of core EA ideas43:43 Intro to the EA community54:58 Challenges / downsides of the community 1:04:43 Balancing doing good with living a good life1:07:42 Relationship between EA and antinatalism1:18:37 Opportunities for impact across communities1:22:14 Opportunity cost of having children1:26:30 What Michael is currently working on1:31:41 Things to be positive about1:35:26 Outro to the episodeANTINATALIST ADVOCACYNewsletter: https://antinatalistadvocacy.org/newsletter Website: https://antinatalistadvocacy.org/Twitter: https://twitter.com/AN_advocacyFacebook: https://www.facebook.com/antinatalistadvocacyInstagram: https://instagram.com/an_advocacyCheck out the links below!Michael's website: https://www.michaeldello.com/Vegan Space Scientist: https://www.youtube.com/@VeganSpaceScientistEffective Altruism: https://www.effectivealtruism.org/EA TED Talk: Peter Singer: The why and how of effective altruism
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prototype: GPT-powered EA/LW weekly summary, published by Hamish McDoodles on August 24, 2023 on The Effective Altruism Forum. Zoe Williams used to manually do weekly summaries of the EA Forum and LessWrong, but now she doesn't. Today I strung together a bunch of Google Apps Scripts, Google Sheets expressions, GraphQL queries, and D3.js to automatically extract all the posts on EAF/LW from the last week with >50 karma, summarize them with GPT-4, and assemble the result into HTML with links and stuff. It's hard to say what the API usage cost, what with all the tinkering and experimenting, but I reckon it was about $5. There were a bunch of posts which were too long for the API message length, so as a first whack I just cut stuff out of the middle of the post until it fit (Procrustes style). Siao (co-author) was going to help, but I finished everything so fast she never got a chance, lol. I haven't spent much time sanity-checking these summaries, but I reckon they're "good enough to be useful". They often drop useful details or get the emphasis wrong, but I haven't seen any outright fabrication. Obviously, if you have a special interest in some topic, these aren't going to substitute for reading the original post. The obvious next few steps are: automate the actual posting of the summaries (does EAF/LW have an API for posting?); also summarize top comment(s) (this is kinda hard); experiment with prompts to see if other summaries are more useful; and generate a top-level summary which gives you, like, 5 bullet points of the most important things from the forums this week. Feedback that would be useful: what would you (personally) like to be different about these summaries? Should they be shorter? Longer? Bullet points? Have quotes? Fewer entries? More entries? Leave a comment or DM me or whatever with any old feedback. Oh, and: I originally also got the top posts from the AI Alignment Forum, but they were all cross-posted on LessWrong? Is that always true? Anyone know? EA Forum. Select examples of adverse selection in longtermist grantmaking, by Linch: The author, a volunteer and sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), discusses the pros and cons of diversification in longtermist EA funding. While diversification can increase financial stability, allow for a variety of worldviews, encourage accountability, and provide access to diverse networks, it can also lead to adverse selection, where projects that have been rejected by existing grantmakers are funded by new ones. The author provides examples of such cases and suggests that new grantmakers should be cautious about funding projects that have been rejected by others, but also acknowledges that grantmakers can make mistakes and that a network of independent funders could help ensure that unusual but potentially high-impact projects are not overlooked. An Elephant in the Community Building Room, by Kaleem: The author, a contractor for CEA and employee of EV Ops, shares his personal views on the strategies of community building within the Effective Altruism (EA) movement. He identifies two main strategies: Global EA, which aims to spread EA ideas as widely as possible, and Narrow EA, which focuses on influencing a small group of highly influential people.
The author argues that community builders and funders should be more explicit about their theory of change for global community building, as there could be significant trade-offs in impact between these two strategies. CE alert: 2 new interventions for February-March 2024 Incubation Program, by CE: Charity Entrepreneurship has announced two new charity interventions for its February-March 2024 Incubation Program, bringing the total to six. The new interventions include an organization focused on bringing new funding into the animal advocacy movement and an organization providing ...
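Since the post above describes its pipeline only in prose, here is a minimal sketch of the two core steps it mentions: pulling last week's high-karma posts from the forum's GraphQL endpoint, and the "Procrustes-style" middle-truncation before summarizing. This is written in Apps Script-flavored JavaScript to match the author's stack, but it is a sketch under stated assumptions, not the author's actual code: the exact GraphQL query shape and field names are guesses at the ForumMagnum API, and callGpt4 is a hypothetical helper wrapping the OpenAI chat completions endpoint.

```javascript
// Sketch of the described pipeline. Assumed: the posts(input: {terms: ...})
// query shape approximates the ForumMagnum GraphQL API; callGpt4 is a
// hypothetical wrapper around the OpenAI API.
const GRAPHQL_URL = 'https://forum.effectivealtruism.org/graphql';

function fetchRecentHighKarmaPosts(sinceIso) {
  const query = `{
    posts(input: {terms: {view: "new", limit: 200}}) {
      results { title pageUrl baseScore postedAt htmlBody }
    }
  }`;
  const res = UrlFetchApp.fetch(GRAPHQL_URL, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ query: query }),
  });
  const posts = JSON.parse(res.getContentText()).data.posts.results;
  // Keep only posts from the last week with >50 karma
  // (ISO 8601 timestamps compare correctly as strings).
  return posts.filter(p => p.baseScore > 50 && p.postedAt >= sinceIso);
}

// "Procrustes-style" fit: keep the head and tail of a long post and cut the
// middle, so it stays under the model's message-length limit.
function truncateMiddle(text, maxChars) {
  if (text.length <= maxChars) return text;
  const half = Math.floor((maxChars - 7) / 2);
  return text.slice(0, half) + '\n[...]\n' + text.slice(-half);
}

function summarizePost(post) {
  const body = truncateMiddle(post.htmlBody, 20000);
  return callGpt4('Summarize this forum post in one paragraph:\n\n' + body);
}
```

The "obvious next steps" the author lists (auto-posting, summarizing top comments, a top-level digest) would just be more calls layered onto this same fetch-truncate-summarize loop.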
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making EA more inclusive, representative, and impactful in Africa, published by Ashura Batungwanayo on August 19, 2023 on The Effective Altruism Forum. Authors: Ashura Batungwanayo (University of KwaZulu Natal) and Hayley Martin (University of Cape Town) DISCLAIMER: This paper draws from our collective experiences, perspectives, and conversations with community builders in Africa who share similar experiences and perspectives. Introduction In our engagement with Effective Altruism (EA), we noted a distinct emphasis on Existential Risks, notably concerning Artificial Intelligence (AI), alongside a focus on animal welfare and veganism - issues that carry nuanced significance, differing between Western societies and Africa due to diverse factors in animal husbandry. However, our initial allure lay in EA's focus on Global Health and Development (GH&D). GH&D holds a special significance for us as it directly confronts the realities we, as African students, encounter daily. Amidst this, the need for balance arises: acknowledging existential risks while prioritising urgent issues like poverty and education. Our vision extends to an EA Africa initiative that blends bottom-up and top-down approaches for effective, contextually attuned change. However, challenges persist, including navigating a competitive altruistic landscape and balancing immediate impact with long-term prevention. A critical thread is Africa's self-sufficiency, with EA acting as a catalyst for local partnership, co-designed interventions, and self-reliance. The path forward involves forging strategic collaborations, knowledge sharing, and empowerment, all underpinned by a commitment to inclusivity, representation, and comprehensive change. Global Health and Development's Urgent Call to Address African Realities The challenges it addresses are not abstract concepts, but tangible issues that our communities and loved ones have grappled with. As university students, we acknowledge the privilege bestowed upon us and feel a profound responsibility to address the issues that plague our homeland. Our identity is intertwined with the principles of Ubuntu, which emphasise our shared humanity and interconnectedness. This cultural ethos, coupled with the weight of the "black tax," the financial responsibilities we bear for our families and communities, amplifies our desire to contribute meaningfully to the well-being of our people. Incorporating the principles of GH&D into our personal cause area is more than a mere pursuit; it's a calling driven by the urgent need to translate our empathy into action. By understanding the nuances of diplomatic engagement and the complexities of GH&D, we can channel our aspirations into effective strategies that uplift our communities while respecting our cultural values. GH&D resonates deeply with our African identity, our educational privilege, and our unwavering commitment to making a positive impact in the places we call home. While AI Alignment and Animal Welfare remain critical concerns, we acknowledge the complexities in communicating their relevance to African audiences.
The messaging around AI Safety and Animal Welfare doesn't inherently speak to the immediate and intersecting challenges we face as a continent, although we recognise its significance in the broader global context (and acknowledge that there are people working on AI Alignment and Animal Advocacy on the continent as well). It's important to note that concentrating solely on existential risks could inadvertently diminish the urgency of current issues, such as poverty and education. Striving for a comprehensive approach that appreciates the distinctive dynamics of African contexts is paramount. It's worth contemplating the establishment of an EA Africa initiative, one that aspires to harmonise both bottom-up and...
David Edmonds is a philosopher, writer, podcaster and presenter. His most recent book is a biography of Derek Parfit: Parfit: A Philosopher and His Mission to Save Morality. "Derek was perhaps the most important philosopher of his era. This scintillating and insightful portrait of him is one of the best intellectual biographies I have read." -Tyler Cowen Other books include: The Murder of Professor Schlick, Would You Kill the Fat Man? and (with John Eidinow) the international best-seller Wittgenstein's Poker. He's a Distinguished Research Fellow at Oxford University's Uehiro Centre for Practical Ethics. With Nigel Warburton he produces the popular podcast series Philosophy Bites. For three decades, he was a multi-award-winning presenter/producer at the BBC. We start off discussing "trolley problems" and the ethical implications of choosing between lives now and in the future. Edmonds provides a nuanced perspective, discussing the argument that while a life in the future is (almost) as valuable as a life today, the decision to kill five lives today could potentially reduce future life. Would you kill five people today, or five people in 100 years? "I think I would choose five in a hundred years, but it would be a very marginal decision…on the whole, I agree with Parfit in I think that there should be no moral discounting in that I think a life in the future is as valuable as a life today. But presumably if you kill five lives today, you are affecting who gets born. So that's why I would kill five lives in the future because I might be also reducing future life as well if I take lives today." We chat about whether thought experiments are even useful at all (contra Diane Coyle, who dislikes them). I then ask about real life challenges such as NHS budgets and potentially choosing between saving pre-term babies or diabetics. I ask David about his favorite paradox (think about God and a very large breakfast) and give him the St Petersburg paradox to answer. "Can God cook a breakfast so big that He can't eat it?" We discuss the life of Derek Parfit, his personality and obsessions. Whether he might have been a good historian (vs philosopher), the pros and cons of All Souls College and if an autistic cognitive profile mattered. David gives his view on why Derek's second book was (and is) considered inferior to his first. We also touch on Effective Altruism (EA) and Derek's influence on longtermism and possible foundational philosophical roots to the EA movement. We end on what chess opening David would use against Magnus Carlsen, what countries David would like to visit, current projects and life advice David has. Transcript and video available here.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presentación Introductoria al Altruismo Eficaz, published by davidfriva on June 26, 2023 on The Effective Altruism Forum. TL;DR: Spanish-Speaking Introduction to Effective Altruism covering key concepts like EA itself, 80,000 Hours, Longtermism, and the ITN Framework. Message to the English-Speaking Community: Hey everyone! I'm David, a 21-year-old Computer Science student at the University of Buenos Aires (Argentina). I recently delivered an introductory talk on Effective Altruism (EA), drawing inspiration from Ajeya Cotra's style and targeting young adults. During the talk, I covered various important concepts, such as Effective Altruism itself, the idea of 80,000 hours, Longtermism, and the ITN Framework (translated into Spanish), after sharing my personal journey of discovering these concepts and why they hold significance for me. As part of my ongoing efforts to address the lack of Spanish content on the EA Forum, I am sharing a link to the talk and the accompanying transcript in the form of an article. Spanish, being the second most widely spoken language in the world and extensively used on the internet, deserves greater representation within the EA community. My hope is that this initiative will help bridge the gap and make EA concepts more accessible to Spanish speakers. I. Hi, I'm David. I'm 21 years old, and I study Computer Science at the University of Buenos Aires. Today I want to talk to you about a concept that radically transformed my life: Effective Altruism. Effective Altruism is, basically, a project that aims to find the best ways to help others and put them into practice. It is both a field of research, which seeks to identify the world's most pressing problems and the best solutions to them, and a practical community, which seeks to use those findings to do good. But for you to understand why it is so important to me, and to dig deeper into this, I first need to tell you a bit of my story. It was March 2020 when, having just turned 18, I finished high school. That same month, just before starting university, I became homeless when my father kicked me out of the room we shared in a collective house. I ended up staying at a friend's place and began looking for work. They were hard times, pandemic times; economic activity had ground to a halt. Luckily (bad luck and good), I was hired as a cleaner at a hospital. In my time working there, I witnessed how overwhelmed the healthcare system was. I saw anxious people waiting in the emergency room, patients suffering from serious illnesses, and the health workers in charge of treating them completely exhausted. With the health emergency adding to the chaos, it was quite a stressful environment, even a terrifying one. That was the environment I worked in. And on the way home, I would go to a soup kitchen, run by a church in the Constitución neighborhood, to ask for food. It was there that I realized that the hunger, cold, and desperation I was feeling were everyday life for a significant part of our society. During this period, there were nights when I simply cried, feeling completely powerless.
I couldn't understand how the world could be so unjust, or how the people able to help, with resources to spare, could be so indifferent to the tragedy of others. Eventually I reached a comfortable position: having found work as a programmer, I could work from the comfort of an apartment in the most expensive neighborhood of Buenos Aires, far from the soup kitchens and the hospitals. Living in a bubble, I slowly forgot about the people I used to meet in those places, the poor and the sick, the most disadvantaged. Altruismo Efica...
Are you feeling that all of your good deeds go unnoticed or aren't really making a big difference in the grand scheme of things? Then this episode is for you! OverShare welcomed Stephanie M. Casey, of Dallas Love Bugs, and Casey Switzer, established TikToker & animal advocate, to the show. We discussed the movement of Effective Altruism (EA) and how Dallas Love Bugs is changing the culture of animal rescue and education. Y'all, this was such a good episode, with loads of insight into how to make effective changes within your community. Animal rescue and education is such an important topic, since Dallas is dealing with sky-high numbers of animal intake and euthanasia rates. How do we make a change? How do we change the mindset of individuals who decide to take on the huge responsibility of owning and caring for an animal? This episode has tons of good tools to ensure that you are doing the most good, whether that be via donations, volunteering, or adopting. Lovely listeners, tell me what you thought about this episode! OverShare loves to hear from y'all, so make sure to subscribe and leave me some feedback.
Pablo Melchor is a former entrepreneur and the founder of Fundación Ayuda Efectiva. We talk about the origins of the Effective Altruism (EA) movement and the philosophical framework it sits within. Zrive is an education and career-guidance platform for university students and young professionals. If you want to understand how the professional world really works, maximize your career opportunities, and learn useful things they didn't teach you at university, visit our website (https://www.zriveapp.com/) and follow us on social media: - Twitter: https://twitter.com/Zrive_ - Instagram: https://www.instagram.com/zrive_/ - LinkedIn: https://es.linkedin.com/school/zrive/ - Email: hola@zriveapp.com. 0:00 - Pablo's professional path up to Ayuda Efectiva 10:15 - First steps in creating Ayuda Efectiva 15:00 - Reasons to start a venture 17:45 - The difference between value and impact 23:05 - How can you generate impact with your professional career? 29:55 - Does it make sense to work for a foundation or NGO? 34:45 - How do you measure impact? 43:20 - What are the philosophical principles of Effective Altruism? Does it align with any political current? 53:40 - Does it make sense to sponsor a child? 57:25 - How much, how, and to whom should you donate? The Ayuda Efectiva project. 1:12:00 - Examples of bad and good ways to donate 1:15:08 - Is altruism a solution or a patch? 1:18:37 - What are the biggest risks to humanity? 1:26:17 - Where does the moral responsibility to help others come from? 1:29:05 - How to collaborate with Ayuda Efectiva
Today on the podcast we dig into the philosophy and practice of Effective Altruism (EA) and how it permeates and influences the animal rights movement. Krista Hiddema, Executive Director of For The Greater Good, has written a chapter in the new anthology The Good It Promises, The Harm It Does: Critical Essays on Effective Altruism, edited by Carol Adams, Alice Crary, and Lori Gruen. Krista offers a broad introduction to EA, how in the last decade it has informed and now enveloped the animal advocacy movement's strategy and tactics, and why this may be a detrimental path for the animals. She shares stories of how campaigns that are unquantifiable can have profound impact and should not be pushed aside by the EA trend. Krista holds a doctorate in social sciences, where her research focused on the need to utilize ecofeminist principles in matters of board governance within the animal rights movement. She holds five other degrees in areas of leadership, human resources, and organizational development; she teaches strategic planning and board governance; she is a fellow with the Animals & Society Research Initiative and a reviewer for the Journal of Critical Animal Studies; and much more. She resides outside Toronto, Canada. Resources:Book: The Good It Promises, The Harm It Does: Critical Essays on Effective Altruism edited by Carol Adams, Alice Crary, and Lori GruenKrista: · www.DrKristaHiddema.com· https://drkristahiddema.com/blog· https://drkristahiddema.com/blog/2022/12/14/effective-altruism-the-impact-is-fear-corruption-and-it-is-also-not-good-for-animals· https://www.facebook.com/Krista.Hiddema
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum. This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We'd be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help us both identify the most pressing next steps for enhancing mental flourishing within the EA community and learn which interventions and resources you'd prefer, whether psychological, physiological, or lifestyle. Why this survey? The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living. Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity. Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health, and productivity in the EA community. By filling out this form, you will help us with that. Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023. Survey information: Please complete the survey by March 17th. We recommend you take the survey on your computer, since the format doesn't work well on cell phones. All responses will be kept confidential, and we will not use the data you provide for any other purposes. Thank you! We are deeply grateful to all participants! Feel free to reach out to us if you have any questions or feedback. Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga Grossmann. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you prioritise is mostly moral intuition, published by James Ozden on December 24, 2022 on The Effective Altruism Forum. Note: A linkpost from my blog. Epistemic status: A confused physicist trying to grapple with philosophy, and losing. Effective Altruism is about doing as much good as possible with a given amount of resources, using reason and evidence. This sounds very appealing, but there's the age-old problem: what counts as "the most good"? Despite the large focus on careful reasoning and rigorous evidence within Effective Altruism, I speculate that many people decide what is "the most good" based largely on moral intuitions. I should also point out that these moral dilemmas don't just plague Effective Altruists or those who subscribe to utilitarianism. These thorny moral issues apply to everyone who wants to help others or "do good". As Richard Chappell neatly puts it, these are puzzles for everyone. If you want to cop out, you could reject any philosophy where you try to rank how bad or good things are, but this seems extremely unappealing. Surely if we're given the choice between saving ten people from a terrible disease like malaria and ten people from hiccups, we should be able to easily decide that one is worse than the other? Anything else seems very unhelpful to the world around us, where we allow grave suffering to continue because we don't think comparing "bads" is possible. To press on, let's take one concrete example of a moral dilemma: How important is extending lives relative to improving lives? Put more simply, given limited resources, should we focus on averting deaths from easily preventable diseases or on increasing people's quality of life? This is not a question that one can easily answer with randomised controlled trials, meta-analyses, and other traditional forms of evidence! Despite this, it might strongly affect what you dedicate your life to working on, or the causes you choose to support. Happier Lives Institute have done some great research looking at this exact question and, no surprises, your view on this moral question matters a lot. When looking at charities that might help people alive today, they find that it matters a lot whether you prioritise the youngest people (deprivationism), prioritise older children over infants (TRIA), or hold the view that death isn't necessarily bad, but that living a better life is what matters most (Epicureanism). For context, the graph below shows the relative cost-effectiveness of various charities under different philosophical assumptions, using the metric of WELLBYs, which takes into account people's subjective experiences. So, we have a problem. If this is a question that could affect what classifies as "the most good", and I think it's definitely up there, then how do we proceed? Do we just do some thought experiments, weigh up our intuitions against other beliefs we hold (using reflective equilibrium, potentially), incorporate moral uncertainty, and go from there? For a movement based on doing "the most good", this seems very unsatisfying! But sadly, I think this problem rears its head in several important places.
To quote Michael Plant (a philosopher from the University of Oxford and director of Happier Lives Institute): “Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics!” To note, I think this is very different from empirical disagreements about doing “the most good”. For example, Effective Altruism (EA) is pretty good at using data and evidence to get to the bottom of how to do a certain kind of good. One great example is GiveWell, who have an extensive research process, drawing mostly on high-quality randomised control trials (see this spreadsheet for the cost-effectiveness of the Against Malaria Foundation) to find the most effective ways to hel...
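To make the stakes of that philosophical choice concrete, here is a minimal illustrative sketch in Python of how a WELLBY-style comparison can flip under different views of the badness of death. All charity names and numbers below are hypothetical, invented purely for illustration; they are not Happier Lives Institute figures.

```python
# Illustrative sketch only: hypothetical WELLBYs per $1,000 donated under
# three views of the badness of death. All numbers are invented for this
# example and are NOT Happier Lives Institute figures.

charities = {
    # name: (WELLBYs from improving lives, deaths averted per $1,000)
    "life-improving charity": (8.0, 0.00),
    "life-saving charity":    (2.0, 0.25),
}

# Hypothetical WELLBYs credited per death averted under each view.
views = {
    "deprivationism": 60,  # early death forfeits many years of wellbeing
    "TRIA":           30,  # infant deaths weighted less than older children's
    "Epicureanism":    0,  # death is not bad for the person who dies
}

for view, wellbys_per_death in views.items():
    print(view)
    for name, (improving, deaths_averted) in charities.items():
        total = improving + deaths_averted * wellbys_per_death
        print(f"  {name}: {total:.1f} WELLBYs per $1,000")
```

On these made-up inputs, the life-saving charity comes out ahead under deprivationism (17.0 vs 8.0 WELLBYs) while the life-improving charity wins under Epicureanism (8.0 vs 2.0), which is exactly the kind of reversal the post describes.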
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Africa's 2022 Review - Our Achievements and 2023 Strategy, published by AnimalAdvocacyAfrica on December 23, 2022 on The Effective Altruism Forum. 1. Summary Animal Advocacy Africa (AAA) works to empower animal advocates who are interested in or are working to reduce farmed animal suffering in African countries. AAA shares knowledge, provides connections, and helps advocates build the skills to run an impactful animal advocacy organisation. This year, we: Helped 17 partner organisations raise a total of ~US$83,000. We discuss how much of this we think may have happened counterfactually without AAA's support in Section 3. Provided strategic advice and feedback to 15 organisations, and influenced at least 2 of our partners to adopt high-impact interventions, with one working to prevent or slow the growth of industrial animal agriculture in Uganda and one leading a cage-free campaign in Ghana. Helped connect at least 6 of our partners to influential figures and organisations in the global Effective Altruism (EA) and animal advocacy movements, with the aim of improving the visibility of the farmed animal advocacy movement in Africa. Released two research reports on farmed animal advocacy in the African and Asian contexts. Next year we intend to: Continue our current capacity-building programme, with changes and improvements made based on feedback and monitoring & evaluation findings. Start regranting funds to especially promising African advocates and organisations to encourage effective programming and the interventions that we think are most likely to help farmed animals in African countries. Identify evidence-based strategies and work with local advocates to mitigate the rise of intensive animal farming in Africa as much as possible. Our primary bottleneck, and that of our partner organisations, remains a lack of funding. Our total funding gap for 2023 is $290,000. We intend to mitigate this by: Better emphasising the importance, neglectedness, and tractability of farmed animal advocacy work in Africa. This includes improving the overall visibility of the movement in Africa by highlighting the work that organisations are doing to funders and the wider international movement, and facilitating networking and connections between African organisations and international advocates. Better demonstrating our added value to African groups and to the African movement more broadly. Consistently tracking our progress and showcasing monitoring & evaluation findings more clearly for potential donors. Relatedly, better highlighting our theory of change, with key cruxes validated. Hiring a full-time Fundraising & Communications staff member. Increasing and improving outreach to high-net-worth individuals who may be interested in supporting farmed animal welfare in Africa. Registering in the United States to qualify for various matching opportunities offered during Giving Season. 2. Why farmed animal advocacy in Africa? We believe that farmed animal advocacy in Africa is a highly impactful project to work on for several reasons: The human population of Africa is expected to nearly triple by 2100. Meat production in Africa has nearly doubled since 2000, and this rate is expected to increase to match the growing population and growing wealth of the continent.
Of all continents, Africa has the highest growth rate in aquatic farming. Farmed animal advocacy is incredibly neglected in Africa — in 2019, Open Philanthropy estimated that only $1 million went towards farmed animal advocacy work in Africa per year. Building the farmed animal advocacy movement in Africa before intensive animal farming is locked-in may improve the lives of millions of animals in the near- and long-term future and potentially prevent millions of animals from being born in factory fa...
In this episode we take a look at a few of the biggest stories from what has been a notably newsworthy couple of weeks for philanthropy, focusing on the fallout from the spectacular implosion of crypto billionaire and high-profile Effective Altruist Sam Bankman-Fried. We also take a look at a big philanthropy pledge from Jeff Bezos and the latest on MacKenzie Scott's radical no-strings-attached big giving. Including: SBF: What the hell has happened in the SBF story? What impact might this have on wider efforts to promote the idea of cryptophilanthropy? Will SBF's downfall lead to further calls to clamp down on big money donations in politics, given his prominent support for the Democrats in recent years? Is it likely to mean more skepticism about philanthropic funding for journalism, given that some feel SBF's significant donations to news outlets led to him receiving less critical coverage? Does his downfall present an existential crisis for Effective Altruism (EA)? Should we distinguish between different ways of understanding EA: EA-as-movement, EA-as-ideology, EA-as-academic-field? What is the likely impact on each of these? Do EA movement leaders have questions to answer about whether they were complicit in what was going on at FTX, or just naive? And what are the ramifications of either? Did SBF's EA beliefs lead him to adopt a radical "end justifies the means" view that allowed him to justify bad behaviour? Is this situation a killer blow for EA's "earn to give" idea? Scott & Bezos: How excited should we be about pledges to give big in the future? What details do we have about what Bezos is actually planning to do? Why does the idea that "giving money away is hard" have such a long history? How is MacKenzie Scott challenging this idea? How should we understand "effectiveness" when it comes to philanthropy? Why has Bezos given $100m to Dolly Parton...? Related Content: Rhod's Alliance Magazine piece, "Effectively over: What does Sam Bankman-Fried's downfall mean for philanthropy and Effective Altruism?"; Vox, "Effective altruism gave rise to Sam Bankman-Fried. Now it's facing a moral reckoning"; SBF's ill-advised interview with Vox's Kelsey Piper; Evan Hubinger's EA Forum post, "We must be very clear: fraud in the service of effective altruism is unacceptable"; the Why Philanthropy Matters article, "Why Am I Not an Effective Altruist?"; the Philanthropisms podcast on cryptophilanthropy; the Philanthropisms podcast on the philosophy of philanthropy; Rhod's piece on "Marcus Rashford, Dolly Parton and public perceptions of Philanthropy".
Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom."Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay.Original article:https://michaelnotebook.com/eanotes/Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samo Burja: What the collapse of FTX means for effective altruism, published by peterhartree on November 17, 2022 on The Effective Altruism Forum. Samo Burja has the best analysis I've seen so far. CONTENT WARNING: Samo's analysis may be upsetting and demoralising. If you're feeling low, anxious, or otherwise in a bad way, I strongly recommend you bookmark this post and come back when you're on better form. If you're ready for a calm attempt to understand what is happening, and what this all means, read on. This post is written in a personal capacity: I am not speaking on behalf of any current or past employer. I asked a couple of people for their thoughts before I posted, but the decision to post is mine alone. Disclosure: I personally received ~$60K funding from FTX senior staff (Nishad & Claire) in 2020, to pursue a year of independent study. (We had no pre-existing social or professional relationship before they made the grant.) Two projects I run, Radio Bostrom and Parfit Archive, were funded via the FTX Future Fund Regrantors Programme. 80,000 Hours, a previous employer for whom I currently serve as a freelance advisor, also received significant donations from SBF and Alameda. I don't follow crypto closely, but I did put about $10K into FTX wallets over the past couple of years. I've not logged into these accounts for months, so I've no idea what they are/were worth. Personal comment: I don't follow crypto closely. My understanding of things over at FTX and Alameda is entirely based on what is being reported on Twitter and in the newspapers. I can't verify all of the factual claims that I've excerpted below. The ones I actually know about all seem correct, to the best of my knowledge. I am personally feeling calm and fine about things. Part of me is devastated, another part is angry. But I am good at compartmentalising. My capability for questionable gallows humor remains in evidence. I agree with (put high credence on) roughly all of Samo's takes that are excerpted below. I semi-independently arrived at most of these takes between Wednesday 9 and Friday 11 November, and they've been fairly confident and robust since then. My thinking was supported by many conversations over those days (mostly not with staff from Effective Ventures, CEA or 80K, who are all extremely busy). For me, this analysis is a big positive update on Samo and his team's general capability for insight and analysis. I have followed Samo for a while, read his Great Founder Theory, and listened to at least 10 interviews. I always have a difficult time assessing claims about history and historical forces, though, especially when they have fluent, charismatic and media-savvy proponents. Selected excerpts: The moral authority and intellectual legitimacy of EA will be reduced. Most importantly, the collapse of FTX will reduce the moral authority and intellectual legitimacy of the "Effective Altruism" (EA) movement, which has become a rising cultural force in Silicon Valley and a rising financial force in global philanthropy. Sam Bankman-Fried was a highly visible example of an "Effective Altruist." He was a major donor both to the movement and to causes favored by the movement, for example committing $160 million in grants through the FTX Future Fund.
Bankman-Fried also frequently boosted its ideology in press interviews and reportedly only earned money so that he could donate more of it to the cause, an ethos known as "earn to give" that was invented by the EA movement. Good summary of SBF political activity: Bankman-Fried has also shown a serious interest in engaging with mainstream U.S. politics. He has funded political candidates aligned with his own views in congressional primary races and ballot initiatives in California. In 2020, he became President Joseph Bide...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A subjective account of what it's like to join an EA-aligned org without previous EA knowledge, published by lukasj10 on August 30, 2022 on The Effective Altruism Forum. TL;DR We (Healthier Hens (HH), a CE-incubated, EA-aligned animal welfare charity) have hired a mid-career Country Manager for our operation in Kenya. He had no previous EA knowledge or experience. We asked him 15 questions about his experience entering the EA world and learning about the key concepts and the community in general. This post is an overview of his subjective experiences, including overarching themes such as what seemed common and different, some of the benefits he felt upon entering EA, and his perspective on EA awareness and community. Finally, we reflect on the above from the organisation's standpoint. Introduction: After having chosen our pilot country of operations, we hired a Country Manager (CM) based in Kenya to lead our efforts on the ground. Despite having previous experience working in the humanitarian and animal welfare sectors, he had no previous knowledge of Effective Altruism (EA). We thought that learning about his experience jumping into the EA world could be useful for other EA-aligned orgs hiring externally, and for upcoming ones, given the challenges it could bring. We carried out a semi-structured interview, asking 15 questions (plus follow-ups) to understand what his experience was like as an applicant and employee. This post is a summary of the responses that our questions elicited, presented under the umbrella of several overarching themes that came up. Readers are advised that this is a highly subjective account of early professional engagement with EA concepts and working principles. We are, however, very keen on encouraging other orgs that have gone through similar hiring processes to engage their new employees with similar inquiries, to better understand how we can make such transitions smoother, since the focus on attracting talent is definitely here to stay. Differences and similarities - the before and after: Among the notable differences between regular non-profit programs and those stemming from EA, the focus on positive impact over the (at least relatively) long term stood out. Having seen several short-term interventions come and go in the region, our CM is inspired by the community's attempts to seek measurable and quantifiable ways to achieve change. In the case of Animal Welfare (AW), he was yearning for solutions going beyond one-dimensional, 5-year "band-aid solution" interventions. EA thinking about how to have a long-lasting, multigenerational effect with positive flow-through effects made sense, even when considering the experiences and frustrations of conventional NGO stakeholders. Using the ITN framework was also a breath of fresh air: our CM found regular interventions reliant on emotions and public opinion far too often. Data should permit arriving at better decisions. This was a major motivator when considering applying for the job. Despite counterfactuals being a new concept to the entire animal welfare community in Kenya, the more he learnt about them, the more he wanted to help lead an organisation that could potentially have a positive impact on millions of hens worldwide. A surprising discovery for him was to find out that the Open Wing Alliance (OWA) is part of the EA community.
It then made great sense why the group advocates for the cage-free transition, and how impactful that can be. Regarding the recruitment process, our CM found it surprisingly thought-provoking and useful - each part of it taught him even more about the organisational concept and ultimate final impact that we seek. In past recruitment experiences, there were practically no stages based on theoretical or project management skills, with most focus on face-to-face discussions. He experie...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Nietzschean Challenge to Effective Altruism, published by Richard Y Chappell on August 29, 2022 on The Effective Altruism Forum. In ‘The Strange Shortage of Moral Optimizers', I noted that it's difficult to criticize Effective Altruism in a thoroughgoing way, since the foundational idea of beneficentrism (roughly: utilitarianism minus all the controversial bits) seems so indisputable. That leaves plenty of room for superficial/empirical/internal critiques of the form “The EA movement as it actually exists isn't fully living up to its admittedly excellent values/potential; here's how it could do better.” But is there space for a more fundamental, philosophical critique of EA's core values? In this post, I'll play Devil's Advocate and try to set out what I think is the most philosophically pressing critique of EA's beneficentrism, drawing on the classic critique of utilitarianism as a “philosophy for swine” (developed, in its most sophisticated form, in Andrew Huddleston's interpretation of Nietzsche's perfectionism). The idea, in a nutshell, is that we go wrong in thinking that anything resembling happiness (or the avoidance of suffering) is what ultimately matters for a good life. We are lazy creatures, drawn to creature comforts. But that isn't what's truly good for us. What truly gives our lives dignity and meaning is to contribute, whether directly or indirectly, to cultural excellence. Better to be a Socrates—or his servant—dissatisfied, than to be a pig satisfied. (Unless Socrates eats the pig. Then you're good either way.) The upshot: I'll argue that there's some (limited) overlap between the practical recommendations of Effective Altruism (EA) and Nietzschean perfectionism, or what we might call Effective Aesthetics (EÆ). To the extent that you give Nietzschean perfectionism some credence, this may motivate (i) prioritizing global talent scouting over mere health interventions alone, (ii) giving less priority to purely suffering-focused causes, such as animal welfare, (iii) wariness towards traditional EA rhetoric that's very dismissive of funding for art museums and opera houses, and (iv) greater support for longtermism, but with a strong emphasis on futures that continue to build human capacities and excellences, and concern to avoid hedonistic traps like “wireheading”. The Meaningful Life In the final chapter of Practical Ethics, Peter Singer addresses the question: ‘Why Act Morally?' One answer he's drawn towards invokes the common wisdom that our lives are more meaningful insofar as we contribute to something larger than ourselves. Universal altruism—in a world as full of unmet needs as ours is—provides us with a suitably monumental goal to meet this deep human need of our own. To illustrate this motivation, Singer asked Henry Spira (an accomplished twentieth-century animal- and civil rights activist), as his death from cancer drew near, “what had driven him to spend his life working for others.” Spira answered: I guess basically one wants to feel that one's life has amounted to more than just consuming products and producing garbage. I think that one likes to look back and say that one's done the best one can to make this a better place for others. [W]hat greater motivation can there be than doing whatever one possibly can to reduce pain and suffering? This sounds compelling! 
But it's in this context that the Nietzschean challenge looms large, as advancing human civilization is also monumental—sometimes literally!—and arguably feels “deeper” than merely promoting comfort. (It may also prove more legible than chasing the drab shadows of distant strangers in accordance with traditional welfarism.) We appreciate the enduring magnificence of the Great Pyramids, while the suffering of the slaves who built them is lost to history. Contributing to a lasting...
Kat Woods is an effective altruist and the co-founder of Nonlinear, which incubates longtermist nonprofits by connecting founders with ideas, funding, and mentorship. Gianluca and Kat discuss brain hacks for curing imposter syndrome and being more agentic, infohazards, the simulation hypothesis, why you don't need permission to do things, “passive impact” via automation, and Kat's exciting new projects at Nonlinear. -------- Shownotes: -------- Kat Woods on Twitter: www.twitter.com/Kat__Woods Gianluca on Twitter: www.twitter.com/QVagabond Kat's blog: www.katwoods.org/ Nonlinear: www.nonlinear.org/ Effective Altruism (EA): https://www.effectivealtruism.org/ The Effective Altruism Handbook: https://forum.effectivealtruism.org/handbook Harry Potter and the Methods of Rationality: https://www.goodreads.com/book/show/10016013-harry-potter-and-the-methods-of-rationality Replacing Guilt: https://anchor.fm/guilt SMBC cartoon on compatibilism: www.smbc-comics.com/comic/compatibilism Nonlinear Library (podcast): https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library Kat's post on text-to-speech automation: https://forum.effectivealtruism.org/posts/tAWK33eNXZKMckPhn/how-and-why-to-turn-everything-into-audio EA Houses: https://forum.effectivealtruism.org/posts/4zHWQNzCusaTfD7jz/ea-houses-live-or-stay-with-eas-around-the-world Nonlinear support fund: www.nonlinear.org/productivity-fund.html Nonlinear bounty programme: https://super-linear.org/ EA hiring agency: https://second-bellflower-54f.notion.site/EA-Hiring-Agency-0d6d75a0f5934455be9003fd7886d537 Nonlinear newsletter: www.nonlinear.org/subscribe.html Bit of a Tangent on Twitter: www.twitter.com/podtangent Bit of a Tangent on Instagram: instagram.com/podtangent/
How can we do the most good with our careers, money, and lives? And what are the things that we can do right now to positively impact future generations to come? This is the mission of the Effective Altruism (EA) movement co-founded by Will MacAskill, Associate Professor in Philosophy at the University of Oxford and co-founder of the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours. In the conversation, Will and I talk about the fundamentals of EA, his brand new book 'What We Owe The Future', the idea of 'longtermism', the most pressing existential threats humanity is facing and what we can do about them, why giving away your income will make you happier, why your career choice is the biggest choice you'll make in your life, and much more.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Funding New Long-Term Randomized Controlled Trials of Deworming, published by MHR on August 4, 2022 on The Effective Altruism Forum. Summary Despite significant uncertainty in the cost-effectiveness of mass deworming, GiveWell has directed over a hundred million dollars in donations to deworming initiatives since 2011. Almost all the data underlying GiveWell's cost-effectiveness estimate comes from a single 1998 randomized trial of deworming in 75 Kenyan schools. Errors in GiveWell's estimate of cost-effectiveness (in either direction) could be driving an impactful misallocation of funding in the global health and development space, reducing the total welfare created by Effective Altruism (EA)-linked donations. A randomized controlled trial replicating the 1998 Kenya deworming trial could provide a substantial improvement in the accuracy of cost-effectiveness estimates, with a simplified model indicating the expected value of such a trial is in the millions of dollars per year. Therefore, EA-aligned donors may have made an error by not performing replication studies on the long-run economic impact of deworming and should prioritize running them in the future. More generally, this finding suggests that EA organizations may be undervaluing the information that could be gained from running experiments to replicate existing published results. Introduction Chronic parasitic infections are common in many regions of the world, including sub-Saharan Africa and parts of East Asia. Two common types of parasitic disease are schistosomiasis, which is transmitted by contaminated water, and the soil-transmitted helminth infections (STHs) trichuriasis, ascariasis, and hookworm. Mass deworming is the process of treating these diseases in areas of high prevalence by administering antiparasitic medications to large groups of people without first testing each individual for infection. The antiparasitic medications involved, praziquantel for schistosomiasis and albendazole for STHs, are cheap, have relatively few side effects, and are considered safe to administer on a large scale. There is strong evidence that deworming campaigns reduce the prevalence of parasitic disease, as well as weaker evidence that deworming campaigns improve broader life outcomes. GiveWell has included charities working on deworming in its top charities list for over a decade, with the SCI Foundation (formerly the Schistosomiasis Control Initiative) and Evidence Action's Deworm the World Initiative being the top recipients of GiveWell-directed deworming donations. As of 2020, GiveWell has directed $163 million to charities working on deworming, with this funding coming from individual donors giving to deworming organizations based on GiveWell's recommendation, GiveWell funding deworming organizations directly via its Maximum Impact Fund, and Open Philanthropy donating to deworming organizations based on GiveWell's research. GiveWell's recommendation of deworming-focused charities is based almost entirely on the limited evidence linking deworming to long-term economic benefits, particularly increases in income and consumption. Regarding impacts on health, the GiveWell brief on deworming states “evidence for the impact of deworming on short-term general health is thin. 
We would guess that deworming has small impacts on weight, but the evidence for its impact on other health outcomes is weak.” So-called “supplemental factors” other than the effect on income change GiveWell's overall cost-effectiveness estimate for Deworm the World by 7%. GiveWell's estimate of the long-term economic benefit produced by deworming comes from “Twenty-Year Economic Impacts of Deworming” (2021), by Joan Hamory, Edward Miguel, Michael Walker, Michael Kremer, and Sarah Baird. This paper is a 20-year follow-up to “Worms: Ident...
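To illustrate the kind of expected-value-of-information reasoning the deworming post describes, here is a deliberately simplified sketch in Python. The structure (fund by default, redirect if a trial shows deworming underperforms) follows the post's logic, but every numeric parameter below is a hypothetical placeholder, not a figure from the post, from GiveWell, or from the underlying trial.

```python
# Hypothetical value-of-information sketch for a deworming replication trial.
# Every number below is a placeholder chosen only for illustration.

annual_funding = 20e6        # $/yr directed to deworming (placeholder)
p_effective = 0.5            # prior: deworming delivers the assumed income gains
value_if_effective = 1.0     # normalized value per dollar if it works
value_if_not = 0.2           # value per dollar if it doesn't (placeholder)
value_next_best = 1.0        # value per dollar of the next-best charity,
                             # assumed comparable to effective deworming

# No trial: we keep funding deworming either way.
ev_no_trial = annual_funding * (
    p_effective * value_if_effective + (1 - p_effective) * value_if_not
)

# Perfectly informative trial (a simplification): redirect funds to the
# next-best charity whenever deworming turns out to underperform.
ev_with_trial = annual_funding * (
    p_effective * value_if_effective + (1 - p_effective) * value_next_best
)

gain_per_year = ev_with_trial - ev_no_trial
print(f"Expected gain from the trial: ${gain_per_year / 1e6:.1f}M per year")
# With these placeholders: 20e6 * 0.5 * (1.0 - 0.2) = $8.0M per year.
```

On placeholders like these, the information value reaches the millions-per-year range the post mentions; a real model would also discount for imperfectly informative trials and subtract the trial's own cost.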
Larry Temkin is a moral philosopher. He has major works on inequality (book: Inequality); on transitivity and social choices (if A > B and B > C, is A > C?; book: Rethinking the Good); and recently on the philosophies of doing good, critiquing some aspects of Effective Altruism, long-termism, international aid, and utilitarianism (book: Being Good in a World of Need). As of 2022, he was Distinguished Professor of Philosophy at Rutgers University. The podcast is in two parts. The second part focuses on Effective Altruism (EA) ideas. The first part looks at transitivity and other debates in philosophy through a pluralist lens. This is part 2, on EA ideas. The whole conversation is 3 hours long, so please feel free to dip in and out of it, and if you are intrigued, go and look at Larry's original works. There is a link to a transcript and commentary in a blogpost at the end. In the podcast: I ask how Larry comes up with such unique ideas as his work on inequality and transitivity, and hear the story of how he was rejected by three great philosophers when he first proposed his idea. (In part 1) Larry explains consequentialist notions of personhood, especially with respect to a question I had on Singer's view on disability, even though our general views are more pluralist. (In part 1) I pose a dilemma I have about the art of a friend who has done awful things, and Larry explains the messiness of morals. (In part 1) (In part 2) Larry recounts the dinner with Derek Parfit and Angus Deaton, along with a billionaire and other philosophers. This dinner gave Larry bad dreams and led to him thinking up many disanalogies to Peter Singer's classic pond analogy. We discuss the pond analogy and how it may or may not be a good analogy for doing good in foreign places, especially the disaster that was Goma. Larry discusses how he changed his mind on whether international aid may be doing more harm than good, and the philosophical and practical reasons behind it. Larry also discusses some concerns about the possible overfocus on long-termism. We barely touch on Larry's work on inequality, but I will mention that it has been influential in how the World Health Organisation, and perhaps ultimately China, has viewed access to healthcare. The work has also highlighted the complexity around equality: that it may be more individualistic and more complicated than often assumed. Throughout all of this is the strong sense of a pluralistic view of the world, where we may value many attributes, such as fairness, justice, and health, and where a focus on only one value may lead us astray. Larry ends with life advice: "I've taught many students over the years. I'm coming to the end of my career. I'm retiring. I've had countless students in my office over the years who are struggling with the question of, 'How should I lead my life?' This is extremely controversial, but being the pluralist that I am, I believe in a balanced life. Now, you can find balance in a number of ways. But just as I'm a pluralist about my moral values, I'm a pluralist about what's involved in being a good person and what's involved in leading a worthwhile human life. I'm signed up in the camp of, 'We only have one life to lead.'" Transcript and video, plus blog posts here: https://www.thendobetter.com/arts/2022/7/24/larry-temkin-transitivity-critiques-of-effective-altruism-international-aid-pluralism-podcast
Nadia Asparouhova (previously writing under Nadia Eghbal) is an independent researcher with widely read essays on a range of topics, most recently philanthropic funding (including effective altruism and idea machines) and recent ideas in funding science. She's written books about the open source community. She has worked in startups and venture. She set up and ran Helium Grants, a microgrant programme, and she is an Emergent Ventures fellow. We speak about what she learned from microgranting and reviewing thousands of applications. We discuss what she thinks about EA-influenced philanthropy, and why she is personally pro-pluralism. Nadia talks about why she doesn't consider herself a creator, and the downsides and upsides of the creator economy as currently formed. We discuss parallels with the open source community. We chat about Nadia's work as an independent researcher versus her work at start-ups, and how they are fulfilling in different ways. Nadia examines what faith means to her now. We chat about the importance of intuition and the messiness of creative science and learning. We talk about science funding and how we might be on the cusp of something new. Nadia expresses optimism about the future as we discuss possible progress stagnation. On a more personal note, we chat about how Nadia was a vegetarian, and how and why she changed her mind, but also why she could not be a complete carnivore either. We discuss the importance of family stories that shape us and the role the stories of her grandmother played in her life. We play overrated/underrated: Effective Altruism, Miami, Crowdfunding, Toulouse, Newsletters, Katy Perry. Nadia talks briefly about a seed of an idea around anti-memetics. Nadia ends with her advice to others: follow your curiosities. Transcript is available here. How are crypto billionaires most likely to change charitable giving, Effective Altruism (EA) aside? "Broadly my worldview or thesis around how we think about philanthropy is that it moves in these sorts of wealth generations. And so, right now we're kind of seeing the dawn of the people who made a lot of money in the 2010s with startups. It's the "trad tech" or startup kind of cohort. Before then you had people who made a lot of money in investment banking and finance, and the early tech pioneers, who all formed their own cohort. And then you might say crypto is the next generation after that, which will eventually break down into smaller sub-components for sure, but we don't really know what those things are yet, I think, because crypto is still so early and they've sort of made money in their own way. ...When you have a group of people that have made money in a certain way, that is almost by definition a new wealth boom. They made their money in a way that's distinctly different from previous generations. And so, that becomes sort of like a defining theory of change or worldview. All the work that they are doing in this sort of philanthropic sense is finding a way to impose that worldview. ...what will crypto's contribution to that be? ...I think in the crypto kind of generation you might see, instead of thinking about the power of top talent, I think they're more about giving people tools to kind of build their own worlds..."
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Michael Nielsen's "Notes on effective altruism", published by Pablo on June 3, 2022 on The Effective Altruism Forum. Quantum physicist Michael Nielsen has published an impressive critical essay on EA. Summary: Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom. Some passages I highlighted: I have EA friends who donate a large fraction of their income to charitable causes. In some cases it's all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That's a simple sentence, but an extraordinary one, so I'll repeat it: they've directly helped save many lives. As extraordinary as my friend's generosity was, there is something further still going on here. Kravinsky's act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut, inventing new forms of moral experience. Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn't realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they "know" is a good cause, but in fact doing harm. I'm cautiously enthusiastic about EA's moral pioneering. But it is potentially a minefield, something to also be cautious about. when EA judo is practiced too much, it's worth looking for more fundamental problems. The basic form of EA judo is: "Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good." This is perhaps true in some God's-eye, omniscient, in-principle philosopher's sense. But EA community and organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren't enough to ensure effective decisions about effectiveness. And the reason many people are bothered by EA is not that they think it's a bad idea to "do good better". 
But rather that they doubt the ability of EA institutions and community to live up to the aspirations. These critiques can come from many directions. From people interested in identity politics I've heard: "Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter." From libertarian...
In what is hopefully the last installment of Vaden and Ben debate Effective Altruism, we ask if EA lies on the cultishness (yes, that's a word) spectrum. We discuss: The potential pitfall of having goodness as a core value Aspects of Effective Altruism (EA) that put it on the cultishness spectrum Does EA focus on good over truth? Ben's experience with EA Making criticism a core value How does one resist the allure of groupthink? How to (mis)behave at parties How would one create a movement which doesn't succumb to cult-like dynamics? Weird ideas as junk food Error Correction intro segment - Scott Alexander pointing out that Ivermectin works indirectly via: There's a reason the most impressive ivermectin studies came from parts of the world where worms are prevalent, he says. Parasites suppress the immune system, making it more difficult for the human body to fight off viruses. Thus, getting rid of worm infections makes it easier for COVID-19 patients to bounce back from the virus. See full post below and summary news article here (https://www.msn.com/en-us/health/medical/everyone-was-wrong-about-ivermectin/ar-AAQRURP) Czechoslovakia was not a part of the USSR @lukeconibear pointing out some climate models and data are publicly available. See for instance Goddard Earth Observing System (GEOS) Chem model: https://github.com/geoschem/geos-chem Community Earth System Model (CESM): https://github.com/ESCOMP/CESM Energy Exascale Earth System model: https://github.com/E3SM-Project/E3SM @PRyan pointing out we were confused about the difference between economic growth, division of labour, and free trade Join the movement at incrementspodcast@gmail.com. Follow us on twitter at @IncrementsPod (https://twitter.com/IncrementsPod) and on Youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ).
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A case for the effectiveness of protest, published by James Ozden. What I want this post to achieve: My main goal with this post is to start a discussion about the effectiveness of different forms of political advocacy. Specifically, whether, and how, social movement tactics such as nonviolent protest should be used for EA causes. Disclaimer and epistemic certainty: This is a somewhat speculative post and I'm not fully confident in some of the numbers used for the cost-effectiveness estimates. I've been working for Extinction Rebellion and Animal Rebellion for the past two years, as well as studying social movement theory, so I will naturally bring in some degree of bias and motivated reasoning. I also think it's important to note that, due to concerns around The Sunrise Movement expressed here, I have significantly weakened my belief in the effectiveness of certain social movements. Reading time: 30-60+ minutes. You can also read it in a Google Doc. Summary: Social movements are broad alliances of people who are connected through their shared interest in social change. This research focuses on social movements that use civil resistance as a theory of change, as I believe this is under-represented within Effective Altruism (EA). Civil resistance can be defined as political action that relies on the use of nonviolent resistance by civil groups to challenge a particular power, force, policy or regime. In practice, this looks like nonviolent protests and direct action. Extinction Rebellion (XR) has highlighted the potential for social movements using nonviolent protest to create positive societal change. However, there has been little quantitative analysis of the exact impact that XR or other social movements have had on shifting public opinion, creating policy change or, in this case, reducing carbon emissions. In this research project, I attempted to quantify the cost-effectiveness of XR's protests and other activities in reducing greenhouse gas (GHG) emissions and influencing government spending on climate-related activities. These findings suggest that XR has abated 16 tonnes of GHGs per pound spent on advocacy, using the median estimates for cost-effectiveness. Relative to the top Effective Altruist (EA) recommended climate change charity, the Clean Air Task Force (CATF), this is more effective by a factor of 12x. If true, this indicates that nonviolent protests can be highly effective in achieving positive outcomes and social movement objectives. This leads to the conclusion that social movement theory should be a focus area for impact-focused researchers, advocates and philanthropists, to determine when these opportunities might arise and how to best utilise them. Throughout this research, I argue for the following claims, which I believe to be strong: Nonviolent protest is an effective tool to influence public opinion and policy around a certain issue. Public opinion is a significant factor in policy change. To date, Effective Altruists have devoted too little consideration to social movements and civil resistance. I'm also arguing for the following claims, but I believe them to be weaker: The most impactful Social Movement Organisations (SMOs) using nonviolent protest can be more cost-effective than existing EA-funded interventions.
My cost-effectiveness analysis of Extinction Rebellion indicates that they were more cost-effective, by a factor of 0.4 - 32x, than current EA recommendations for tackling climate change, using a variety of metrics. A two-person-year research project studying the use of social movements and civil resistance for certain cause areas could discover more cost-effective interventions than those that already exist; I estimate there's a 30% likelihood of this happening. We should allocate a greater proportion of funds towards early...
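As a quick sanity check on the comparison in this summary, here is a minimal sketch in Python that recomputes the implied numbers from the two figures quoted above (16 tonnes of GHG abated per pound for XR, and a 12x median multiple over CATF). The CATF figure below is derived from those two quoted numbers, not independently sourced.

```python
# Recomputing the cost-effectiveness comparison from the summary's own numbers.
xr_tonnes_per_pound = 16     # median estimate quoted for Extinction Rebellion
xr_vs_catf_multiple = 12     # quoted median multiple over Clean Air Task Force

# Implied CATF cost-effectiveness (derived from the figures above, not sourced).
catf_tonnes_per_pound = xr_tonnes_per_pound / xr_vs_catf_multiple
print(f"Implied CATF: {catf_tonnes_per_pound:.2f} tonnes GHG abated per pound")

# The quoted uncertainty range (0.4x to 32x) spans both sides of the CATF
# baseline, so XR could plausibly be worse or far better than CATF.
for multiple in (0.4, 12, 32):
    print(f"at {multiple:>4}x: XR abates "
          f"{catf_tonnes_per_pound * multiple:.1f} tonnes per pound")
```

The width of that range is the point: the median comparison favours XR, but the 0.4x lower bound means the estimate is uncertain enough that XR could also fall below the CATF baseline.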
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of Core Feedback Collected by CEA in Spring/Summer 2019, published by Ben_West (Centre for Effective Altruism) on The Effective Altruism Forum. Introduction: The Centre for Effective Altruism (CEA) aims to grow and maintain the Effective Altruism (EA) movement. As part of that work, it is important for us to understand the needs, values, and concerns of members of the EA community. CEA collects feedback from community members in a variety of ways (see "CEA's Feedback Process" below). In the spring and summer of 2019, we reached out to about a dozen people who work in senior positions in EA-aligned organizations to solicit their feedback. We were particularly interested to get their take on execution, communication, and branding issues in EA. Despite this focus, the interviews were open-ended and tended to cover the areas each person felt were important. This document is a summary of their feedback. The feedback is presented "as is," without any endorsement by CEA. This feedback represents a small (albeit influential) portion of the EA community, and should be considered in context with other sources of feedback. This post is the first in a series of upcoming posts where we aim to share summaries of the feedback we have received. The second is here. CEA's Feedback Process: CEA has, historically, been much better at collecting feedback than at publishing the results of what we collect. This post is part of our attempt to address that shortcoming and publish more feedback, but, until more can be published, we want to share more details about the types of feedback we collect. As some examples of other sources of feedback CEA has collected this year: We have received about 2,000 questions, comments and suggestions via Intercom (a chat widget on many of CEA's websites) so far this year. We hosted a group leaders retreat (27 attendees) and a community builders retreat (33 attendees), and had calls with organizers from 20 EA groups asking about what's currently going on in their groups and how CEA can be helpful. We had calls with 18 of our most prolific EA Forum users, to ask how the Forum can be made better. We ran a "medium-term events" survey, where we asked everyone who had attended an Individual Outreach retreat how the retreat impacted them 6-12 months later (53 responses). EA Global has an advisory board of ~25 people who are asked for opinions about content, conference size, format, etc., and we receive 200-400 responses to the EA Global survey from attendees each time. The feedback summarized in this document sometimes agrees with other feedback we have received, and sometimes disagrees. This document generally presents feedback "as is" in an attempt to give an accurate summary of people's responses, even if the feedback here disagreed with opinions we have received from other data sources. Solutions Mentioned in this Document: In addition to examples of concerns respondents raised, this document describes efforts CEA has implemented which may address each concern. Efforts are ongoing, so these ideas are not intended to be final solutions, and we will continue to iterate as we gather more information about how things are working. Additionally, the solutions were not necessarily triggered by this feedback: many of these projects were started before this feedback round was run, were inspired by other feedback, etc.
Executive Summary: Things Which Are Going Well. CEA's Community Health and Events Projects: Respondents felt that the Community Health team does important work to keep the community safe, and there is a strong argument for a central entity like CEA to oversee community health. EA Global is the "flagship" event of the community, and smaller events run by CEA were also positively regarded. EA Community Members are Smart, Talented, and Thoughtful: Respondents frequently mentioned ...