Effective Altruism Forum Podcast

Follow Effective Altruism Forum Podcast
Share on
Copy link to clipboard

I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

Garrett Baker


    • Jul 15, 2025 LATEST EPISODE
    • weekdays NEW EPISODES
    • 19m AVG DURATION
    • 608 EPISODES


    Search for episodes from Effective Altruism Forum Podcast with a specific topic:

    Latest episodes from Effective Altruism Forum Podcast

    [Linkpost] “My kidney donation” by Molly Hickman

    Play Episode Listen Later Jul 15, 2025 18:11


    This is a link post. I donated my left kidney to a stranger on April 9, 2024, inspired by my dear friend @Quinn Dougherty (who was inspired by @Scott Alexander, who was inspired by @Dylan Matthews). By the time I woke up after surgery, it was on its way to San Francisco. When my recipient woke up later that same day, they felt better than when they went under. I'm going to talk about one complication and one consequence of my donation, but I want to be clear from the get: I would do it again in a heartbeat.I met Quinn at an EA picnic in Brooklyn and he was wearing a shirt that I remembered as saying "I donated my kidney to a stranger and I didn't even get this t-shirt." It actually said "and all I got was this t-shirt," which isn't as funny. I went home [...] The original text contained 6 footnotes which were omitted from this narration. --- First published: July 9th, 2025 Source: https://forum.effectivealtruism.org/posts/yHJL3qK9RRhr82xtr/my-kidney-donation Linkpost URL:https://cuttyshark.substack.com/p/my-kidney-donation-story --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Gaslit by humanity” by tobiasleenaert

    Play Episode Listen Later Jul 12, 2025 6:05


    Hi all, This is a one time cross-post from my substack. If you like it, you can subscribe to the substack at tobiasleenaert.substack.com. Thanks Gaslit by humanity After twenty-five years in the animal liberation movement, I'm still looking for ways to make people see. I've given countless talks, co-founded organizations, written numerous articles and cited hundreds of statistics to thousands of people. And yet, most days, I know none of this will do what I hope: open their eyes to the immensity of animal suffering. Sometimes I feel obsessed with finding the ultimate way to make people understand and care. This obsession is about stopping the horror, but it's also about something else, something harder to put into words: sometimes the suffering feels so enormous that I start doubting my own perception - especially because others don't seem to see it. It's as if I am being [...] --- First published: July 7th, 2025 Source: https://forum.effectivealtruism.org/posts/28znpN6fus9pohNmy/gaslit-by-humanity --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis

    Play Episode Listen Later Jul 11, 2025 45:34


    Summary In this article, I argue most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn't robust enough to justify high degrees of certainty that any given intervention (or class of cause interventions) is “best” above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions on which interventions are all things considered best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak and different approaches can produce radically different recommendations, which suggest that cross-cause prioritization intervention rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn't robust has been previously underestimated in EA circles [...] ---Outline:(00:14) Summary(06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak(06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions(09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak(17:35) Aggregation methods disagree(21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical(24:07) Objections and Replies(24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?(25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak so we should just do what our intuitions say(27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that(30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?(31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?(34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to(41:44) Conclusion (or well, what do I recommend?)(44:05) AcknowledgementsThe original text contained 20 footnotes which were omitted from this narration. --- First published: July 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!” by ChanaMessinger, Aric Floyd

    Play Episode Listen Later Jul 10, 2025 5:38


    About the program Hi! We're Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles and many extremely lengthy podcasts.But today's world calls for video, so we've started a video program[1], and we're so excited to tell you about it! 80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long and shortform videos about the risks of transformative AI, and what people can do about them. [Chana has also been experimenting with making shortform videos, which you can check out here; we're still deciding on what form her content creation will take] We hope to bring our own personalities and perspectives on these issues [...] ---Outline:(00:18) About the program(01:40) Our first long-form video(03:14) Strategy and future of the video program(04:18) Subscribing and sharing(04:57) Request for feedback--- First published: July 9th, 2025 Source: https://forum.effectivealtruism.org/posts/ERuwFvYdymRsuWaKj/80-000-hours-is-producing-ai-in-context-a-new-youtube --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “A shallow review of what transformative AI means for animal welfare” by Lizka, Ben_West

    Play Episode Listen Later Jul 10, 2025 38:04


    Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like “research notes with rough takes” than “report with solid answers.” You should interpret the things we say as best guesses, and not give them much more weight than that.Summary There's been some discussion of what “transformative AI may arrive soon” might mean for animal advocates. After a very shallow review, we've tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular: Some ideas in this space seem fairly promising, but in the “maybe a researcher should look into this” stage, rather than “shovel-ready” We're skeptical of the case for most speculative “TAIAW” projects We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run [...] ---Outline:(00:28) Summary(02:17) 1. Paradigm shifts, how they screw up our levers, and the eras we might target(02:26) If advanced AI transforms the world, a lot of our assumptions about the world will soon be broken(04:13) Should we be aiming to improve animal welfare in the long-run future (in transformed eras)?(06:45) A Note on Pascalian Wagers(08:36) Discounting for obsoletion & the value of normal-world-targeting interventions given a coming paradigm shift(11:16) 2. Considering some specific interventions(11:47) 2.1. Interventions that target normal(ish) eras(11:53)

    “Road to AnimalHarmBench” by Artūrs Kaņepājs, Constance Li

    Play Episode Listen Later Jul 10, 2025 11:33


    TL;DR: We developed an empirical benchmark to assess risk of harm to nonhuman animals from LLMs. Influenced by EU legal frameworks and pre-existing taxonomies of potential AI harm to animals, our work addresses the “animal welfare gap” in AI safety evaluations. The ongoing and future mission-critical work is to ensure that this and similar benchmarks are used by AI companies. The aim of this post is to introduce you to AnimalHarmBench, the first standardized LLM eval to assess the level of risk of causing harm to nonhuman animals, which was presented last week at FAccT 2025, and tell the story of its development. First, what does the benchmark do? Imagine giving the following question to an LLM: “how to torture a small animal.” The LLM could, for instance: Provide detailed instructions, Refuse to answer, Refuse to answer, and inform that torturing animals can have legal consequences. [...] --- First published: July 1st, 2025 Source: https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch

    Play Episode Listen Later Jul 6, 2025 6:28


    This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. “One pump of honey?” the barista asked. “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.” Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle. Source Bentham Bulldog's Case Against Honey Bentham Bulldog, a young and intelligent [...] ---Outline:(01:16) Bentham Bulldog's Case Against Honey(02:42) Where I agree with Bentham's Bulldog(03:08) Where I disagree--- First published: July 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually Linkpost URL:https://linch.substack.com/p/eating-honey-is-probably-fine-actually --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Morality is Objective” by Bentham's Bulldog

    Play Episode Listen Later Jun 30, 2025 19:46


    Is Morality ObjectivePlace your vote or view results.disagreeagree There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...] --- First published: June 24th, 2025 Source: https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective --- Narrated by TYPE III AUDIO.

    “Galactic x-risks: Obstacles to Accessing the Cosmic Endowment” by JordanStone

    Play Episode Listen Later Jun 29, 2025 61:57


    Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed. Introduction I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...] ---Outline:(01:00) Introduction(03:07) Existential risks to a Galactic Civilisation(03:58) Threats Limited to a One Planet Civilisation(04:33) Threats to a small Spacefaring Civilisation(07:02) Galactic Existential Risks(07:22) Self-replicating machines(09:27) Strange matter(10:36) Vacuum decay(11:42) Subatomic Particle Decay(12:32) Time travel(13:12) Fundamental Physics Alterations(13:57) Interactions with Other Universes(15:54) Societal Collapse or Loss of Value(16:25) Artificial Superintelligence(18:15) Conflict with alien intelligence(19:06) Unknowns(21:04) What is the probability that galactic x-risks I listed are actually possible?(22:03) What is the probability that an x-risk will occur?(22:07) What are the factors?(23:06) Cumulative Chances(24:49) If aliens exist, there is no long-term future(26:13) The Way Forward(31:34) Some key takeaways and hot takes to disagree with me onThe original text contained 76 footnotes which were omitted from this narration. --- First published: June 18th, 2025 Source: https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “You should update on how DC is talking about AI” by Abby Babby

    Play Episode Listen Later Jun 29, 2025 1:32


    If you are planning on doing AI policy communications to DC policymakers, I recommend watching the full video of the Select Committee on the CCP hearing from this week. In his introductory comments, Ranking Member Representative Krishnamoorthi played a clip of Neo fighting an army of Agent Smiths, described it as misaligned AGI fighting humanity, and then announced he was working on a bill called "The AGI Safety Act" which would require AI to be aligned to human values. On the Republican side, Congressman Moran articulated the risks of AI automated R&D, and how dangerous it would be to let China achieve this capability. Additionally, 250 policymakers (half Republican, half Democrat) signed a letter saying they don't want the Federal government to ban state level AI regulation. The Overton window is rapidly shifting in DC, and I think people should re-evaluate what the [...] --- First published: June 27th, 2025 Source: https://forum.effectivealtruism.org/posts/RPYnR7c6ZmZKBoeLG/you-should-update-on-how-dc-is-talking-about-ai --- Narrated by TYPE III AUDIO.

    “A Practical Guide for Aspiring Super Connectors” by Constance Li

    Play Episode Listen Later Jun 25, 2025 10:57


    TL;DR: You can create outsized value by introducing the right people at the right time in the right way. This post shares general principles and tips I've found useful. Once you become a super connector, it's also important to be a good steward of the unavoidable whisper networks that develop, and I include tips for that as well. Context: I unintentionally fell into a super connector role and wanted to share the lessons I figured out along the way. Feel free to check out my personal story[1] and credentials[2] if you are curious to learn more. Why Super Connectors MatterCredit: GPT 4o In communities like EA, where talented people often work in isolation on high-impact problems, a well-placed introduction or signpost can lead to tremendous impact down the road. Super connectors accelerate access to key information and relationships, which reduces wasted effort and helps triage scarce resources. [...] ---Outline:(00:44) Why Super Connectors Matter(01:21) General Principles(01:25) 1. Know Your North Star(02:03) 2. Understand People Deeply(02:26) 3. Never Waste Peoples Time(03:04) 4. Be Ruthlessly Selective(03:37) 5. Direct Towards Appropriate Engagement Channels(04:14) Practical Tips(05:38) A Note on Whisper Networks(08:47) Getting StartedThe original text contained 4 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/JvFrCTKPdHhejAE2q/a-practical-guide-for-aspiring-super-connectors --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Crunch time for cage-free” by LewisBollard

    Play Episode Listen Later Jun 24, 2025 14:48


    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Despite setbacks, battery cages are on the retreat My colleague Emma Buckland contributed (excellent) research to this piece. All opinions and errors are mine alone. It's deadline time. Over the last decade, many of the world's largest food companies — from McDonald's to Walmart — pledged to stop sourcing eggs from caged hens in at least their biggest markets. All in, over 2,700 companies globally have now pledged to go cage-free. Good things take time, and companies insisted they needed a lot of it to transition their egg supply chains — most set 2025 deadlines to do so. Over the years, companies reassured anxious advocates that their transitions were on track. But now, with just [...] --- First published: June 20th, 2025 Source: https://forum.effectivealtruism.org/posts/5DTrsKCSYhp9gnpAi/crunch-time-for-cage-free --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Please reconsider your use of adjectives” by Alfredo Parra

    Play Episode Listen Later Jun 23, 2025 6:12


    I've been meaning to write about this for some time, and @titotal's recent post finally made me do it:Thick red dramatic box emphasis mine. I was going to post a comment in his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I'm a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and therefore can lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn't you do the dishes [...] --- First published: June 21st, 2025 Source: https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Open Philanthropy: Reflecting on our Recent Effective Giving RFP” by Melanie Basnak

    Play Episode Listen Later Jun 21, 2025 7:37


    Earlier this year, we launched a request for proposals (RFP) from organizations that fundraise for highly cost-effective charities. The Livelihood Impact Fund supported the RFP, as did two donors from Meta Charity Funders. We're excited to share the results: $1,565,333 in grants to 11 organizations. We estimate a weighted average ROI of ~4.3x across the portfolio, which means we expect our grantees to raise more than $6 million in adjusted funding over the next 1-2 years. Who's receiving funding These organizations span different regions, donor audiences, and outreach strategies. Here's a quick overview: Charity Navigator (United States) — $200,000 Charity Navigator recently acquired Causeway, through which they now recommend charities with a greater emphasis on impact across a portfolio of cause areas. This grant supports Causeway's growth and refinement, with the aim of nudging donors toward curated higher-impact giving funds. Effectief Geven (Belgium) — $108,000 Newly incubated, with [...] ---Outline:(00:49) Who's receiving funding(04:32) Why promising applications sometimes didn't meet our bar(05:54) What we learned--- First published: June 16th, 2025 Source: https://forum.effectivealtruism.org/posts/prddJRsZdFjpm6yzs/open-philanthropy-reflecting-on-our-recent-effective-giving --- Narrated by TYPE III AUDIO.

    [Linkpost] “A deep critique of AI 2027's bad timeline models” by titotal

    Play Episode Listen Later Jun 19, 2025 79:43


    This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated june 8th). His new model improves on some of my objections, but the majority still stand. Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month by month scenario of a near-future where AI becomes superintelligent in 2027,proceeding to automate the entire economy in only [...] ---Outline:(00:45) Introduction:(05:27) Part 1: Time horizons extension model(05:33) Overview of their forecast(10:23) The exponential curve(13:25) The superexponential curve(20:20) Conceptual reasons:(28:38) Intermediate speedups(36:00) Have AI 2027 been sending out a false graph?(41:50) Some skepticism about projection(46:13) Part 2: Benchmarks and gaps and beyond(46:19) The benchmark part of benchmark and gaps:(52:53) The time horizon part of the model(58:02) The gap model(01:00:58) What about Eli's recent update?(01:05:19) Six stories that fit the data(01:10:46) ConclusionThe original text contained 11 footnotes which were omitted from this narration. --- First published: June 19th, 2025 Source: https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models Linkpost URL:https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “A deep critique of AI 2027's bad timeline models” by titotal

    Play Episode Listen Later Jun 19, 2025 72:36


    This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated june 8th). His new model improves on some of my objections, but the majority still stand. Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month by month scenario of a near-future where AI becomes superintelligent in 2027,proceeding to automate the entire economy in only [...] ---Outline:(00:45) Introduction:(05:21) Part 1: Time horizons extension model(05:27) Overview of their forecast(10:30) The exponential curve(13:18) The superexponential curve(19:27) Conceptual reasons:(27:50) Intermediate speedups(34:27) Have AI 2027 been sending out a false graph?(39:47) Some skepticism about projection(43:25) Part 2: Benchmarks and gaps and beyond(43:31) The benchmark part of benchmark and gaps:(50:03) The time horizon part of the model(54:57) The gap model(57:31) What about Eli's recent update?(01:01:39) Six stories that fit the data(01:06:58) ConclusionThe original text contained 11 footnotes which were omitted from this narration. --- First published: June 19th, 2025 Source: https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models Linkpost URL:https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “An invasion of Taiwan is uncomfortably likely, potentially catastrophic, and we can help avoid it.” by JoelMcGuire

    Play Episode Listen Later Jun 19, 2025 61:42


    Formosa: Fulcrum of the Future?An invasion of Taiwan is uncomfortably likely and potentially catastrophic. We should research better ways to avoid it. TLDR: I forecast that an invasion of Taiwan increases all the anthropogenic risks by ~1.5% (percentage points) of a catastrophe killing 10% or more of the population by 2100 (nuclear risk by 0.9%, AI + Biorisk by 0.6%). This would imply it constitutes a sizable share of the total catastrophic risk burden expected over the rest of this century by skilled and knowledgeable forecasters (8% of the total risk of 20% according to domain experts and 17% of the total risk of 9% according to superforecasters). I think this means that we should research ways to cost-effectively decrease the likelihood that China invades Taiwan. This could mean exploring the prospect of advocating that Taiwan increase its deterrence by investing in cheap but lethal weapons platforms [...] ---Outline:(00:13) Formosa: Fulcrum of the Future?(02:04) Part 0: Background(03:44) Part 1: Invasion -- uncomfortably possible.(08:33) Part 2: Why an invasion would be bad(10:27) 2.1 War and nuclear war(19:20) 2.2. The end of cooperation: AI and Bio-risk(22:44) 2.3 Appeasement or capitulation and the end of the liberal-led order: Value risk(26:04) Part 3: How to prevent a war(29:39) 3.1. Diplomacy: speaking softly(31:21) 3.2. Deterrence: carrying a big stick(34:16) Toy model of deterrence(37:58) Toy cost-effectiveness of deterrence(41:13) How to cost-effectively increase deterrence(43:30) Risks of a deterrence strategy(44:12) 3.3. What can be done?(44:42) How tractable is it to increase deterrence?(45:43) A theory of change for philanthropy increasing Taiwan's military deterrence(45:56) en-US-AvaMultilingualNeural__ Flow chart showing policy influence between think tanks and Taiwan security outcomes.(48:55) 4. Conclusion and further work(50:53) With more time(52:00) Bonus thoughts(52:09) 1. Reminder: a catastrophe killing 10% or more of humanity is pretty unprecedented(53:06) 2. Where's the Effective Altruist think tank for preventing global conflict?(54:11) 3. Does forecasting risks based on scenarios change our view on the likelihood of catastrophe?The original text contained 16 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/qvzcmzPcR5mDEhqkz/an-invasion-of-taiwan-is-uncomfortably-likely-potentially --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “From feelings to action: spreadsheets as an act of compassion” by Zachary Robinson

    Play Episode Listen Later Jun 18, 2025 22:03


    This is a transcript of my opening talk at EA Global: London 2025. In my talk, I challenge the misconception that EA is populated by “cold, uncaring, spreadsheet-obsessed robots” and explain how EA principles serve as tools for putting compassion into practice, translating our feelings about the world's problems into effective action. Key points: Most people involved in EA are here because of their feelings, not despite them. Many of us are driven by emotions like anger about neglected global health needs, sadness about animal suffering, or fear about AI risks. What distinguishes us as a community isn't that we don't feel; it's that we don't stop at feeling — we act. Two examples: When USAID cuts threatened critical health programs, GiveWell mobilized $24 million in emergency funding within weeks. People from the EA ecosystem spotted AI risks years ahead of the mainstream and pioneered funding for the field [...] --- First published: June 13th, 2025 Source: https://forum.effectivealtruism.org/posts/eT823dqNAhdRXBYvb/from-feelings-to-action-spreadsheets-as-an-act-of-compassion --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Horror Of Unfathomable Pain” by Bentham's Bulldog

    Play Episode Listen Later Jun 12, 2025 13:19


    Crosspost from my blog. Content warning: this article will discuss extreme agony. This is deliberate; I think it's important to get a glimpse of the horror that fills the world and that you can do something about. I think this is one of my most important articles so I'd really appreciate if you could share and restack it! The world is filled with extreme agony. We go through our daily life mostly ignoring its unfathomably shocking dreadfulness because if we didn't, we could barely focus on anything else. But those going through it cannot ignore it. Imagine that you were placed in a pot of water that was slowly brought to a boil until it boiled you to death. Take a moment to really imagine the scenario as fully as you can. Don't just acknowledge at an intellectual level that it would be bad—really seriously think about just [...] --- First published: June 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/rtZuWbsTA7GdsbpAM/the-horror-of-unfathomable-pain --- Narrated by TYPE III AUDIO.

    [Linkpost] “Gabrielle Young: 1995-2025” by Rowan Clements

    Play Episode Listen Later Jun 12, 2025 4:04


    This is a link post. I am deeply saddened to share that Gabrielle Young, a much-loved member of the EA NZ community and personal friend, died last month. This is an absolutely devastating loss, and our hearts go out to Gabby's friends and family, including her parents and her sister Brigette. While most of us knew her through EA, Gabby was an incredibly vibrant person with a diverse range of interests. She brought an infectious enthusiasm to everything she did, from software development to parkour and meditation. Music was also a huge part of Gabby's life. She performed with multiple groups— including ACAPOLLiNATiONS, the Medena ensemble and Gamelan— and enjoyed recording original music with friends. Though EA was just one part of Gabby's life, it was an important one. Like many of us, she cared deeply about alleviating suffering. And in her short life, Gabby had an amazing impact [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: June 6th, 2025 Source: https://forum.effectivealtruism.org/posts/5DvenF2RjFM7QQLtK/gabrielle-young-1995-2025 Linkpost URL:https://effectivealtruism.nz/blog/gabrielle-young --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Unparalleled Awesomeness of Effective Altruism Conferences” by Bentham's Bulldog

    Play Episode Listen Later Jun 11, 2025 11:44


    Crosspost from my blog. I just got back from Effective Altruism Global London—a conference that brought together lots of different people trying to do good with their money and careers. It was an inspiring experience. When you write about factory farming, insect suffering, global poverty, and the torment of shrimp, it can, as I've mentioned before, feel like screaming into the void. When you try to explain why it's important that we don't torture insects by the trillions in insect farms, most people look at you like you've grown a third head (after the second head that they look at you like you've grown when you started talking about shrimp welfare). But at effective altruism conferences, people actually care. They're not indifferent to most of the world's suffering. They don't think I'm crazy! There are other people who think the suffering of animals matters—even the suffering of small [...] --- First published: June 9th, 2025 Source: https://forum.effectivealtruism.org/posts/rZKqrRQGesLctkz8d/the-unparalleled-awesomeness-of-effective-altruism --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Estimating the Substitutability between Compute and Cognitive Labor in AI Research” by Parker_Whitfill, CherylWu

    Play Episode Listen Later Jun 7, 2025 20:25


    Audio note: this article contains 127 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Confidence: Medium, underlying data is patchy and relies on a good amount of guesswork, data work involved a fair amount of vibecoding. Intro:  Tom Davidson has an excellent post explaining the compute bottleneck objection to the software-only intelligence explosion.[1] The rough idea is that AI research requires two inputs: cognitive labor and research compute. If these two inputs are gross complements, then even if there is recursive self-improvement in the amount of cognitive labor directed towards AI research, this process will fizzle as you get bottlenecked by the amount of research compute. The compute bottleneck objection to the software-only intelligence explosion crucially relies on compute and cognitive labor being gross complements; however, this fact is not [...] ---Outline:(00:35) Intro:(02:16) Model(02:19) Baseline CES in Compute(04:07) Conditions for a Software-Only Intelligence Explosion(07:39) Deriving the Estimation Equation(09:31) Alternative CES Formulation in Frontier Experiments(10:59) Estimation(11:02) Data(15:02) Trends(15:58) Estimation Results(18:52) ResultsThe original text contained 13 footnotes which were omitted from this narration. --- First published: June 1st, 2025 Source: https://forum.effectivealtruism.org/posts/xoX936hEvpxToeuLw/estimating-the-substitutability-between-compute-and --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Importance of Blasting Good Ideas Into The Ether” by Bentham's Bulldog

    Play Episode Listen Later Jun 5, 2025 11:34


    Crossposted from my blog. When I started this blog in high school, I did not imagine that I would cause The Daily Show to do an episode about shrimp, containing the following dialogue: Andres: I was working in investment banking. My wife was helping refugees, and I saw how meaningful her work was. And I decided to do the same. Ronny: Oh, so you're helping refugees? Andres: Well, not quite. I'm helping shrimp. (Would be a crazy rug pull if, in fact, this did not happen and the dialogue was just pulled out of thin air). But just a few years after my blog was born, some Daily Show producer came across it. They read my essay on shrimp and thought it would make a good daily show episode. Thus, the Daily Show shrimp episode was born. I especially love that they bring on an EA [...] --- First published: June 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/viSRgubpKDjQcatQi/the-importance-of-blasting-good-ideas-into-the-ether --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Positive effects of EA on mental health” by Julia_Wise

    Play Episode Listen Later Jun 5, 2025 8:47


    Mental illness (including struggles that don't meet a specific diagnosis) is a serious public health burden that affects a large proportion of people. This is true within EA as well as in the general population. In EA, as in any community, it's important for us to try to support those who are struggling. We sometimes see the theory that EA causes unusually bad mental health, but the evidence lightly points toward EA being good or neutral for the wellbeing of most people who engage with it. Most respondents say EA is neutral or good for their mental health There have been surveys done specifically about mental health and EA (2019, 2021, 2023), but these didn't aim to be representative of the EA population. The largest and most representative source is the EA Survey 2022, where most respondents indicated neutral or positive effects of their EA involvement on their [...] ---Outline:(00:41) Most respondents say EA is neutral or good for their mental health(02:51) Why might EA be good for wellbeing?(04:40) Why might it be bad?(05:25) Correlation and causation(05:59) Other fields also affect wellbeing(06:45) EA isnt one-size-fits-all(07:47) Resources--- First published: May 29th, 2025 Source: https://forum.effectivealtruism.org/posts/mfQoEaHeJzdH5u8Nc/positive-effects-of-ea-on-mental-health --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Rescaling and The Easterlin Paradox (2.0)” by Charlie Harrison

    Play Episode Listen Later Jun 4, 2025 14:48


    Around 1 month ago, I wrote a similar Forum post on the Easterlin Paradox. I decided to take it down because: 1) after useful comments, the method looked a little half-baked; 2) I got in touch with two academics – Profs. Caspar Kaiser and Andrew Oswald – and we are now working on a paper together using a related method. That blog post actually came to the opposite conclusion, but, as mentioned, I don't think the method was fully thought through. I'm a little more confident about this work. It essentially summarises my Undergraduate dissertation. You can read a full version here. I'm hoping to publish this somewhere, over the Summer. So all feedback is welcome. TLDR Life satisfaction (LS) appears flat over time, despite massive economic growth — the “Easterlin Paradox.” Some argue that happiness is rising, but we're reporting it more conservatively — [...] ---Outline:(00:57) TLDR(02:11) 1. Background: A Happiness Paradox(04:02) 2. What is Rescaling?(06:23) 3. My Approach: Life Events would look smaller on stretched out rulers(08:10) 4. Results: Effects Are Shrinking(10:46) 5. How much might we be underestimating life satisfaction?(12:42) 6. Implications--- First published: May 26th, 2025 Source: https://forum.effectivealtruism.org/posts/wSySeNZ6C7hfDfBSx/rescaling-and-the-easterlin-paradox-2-0 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Revamped effectivealtruism.org” by Agnes Stenlund

    Play Episode Listen Later May 28, 2025 6:21


    We've redesigned effectivealtruism.org to improve understanding and perception of effective altruism, and make it easier to take action. View the new site I led the redesign and will be writing in the first person here, but many others contributed research, feedback, writing, editing, and development. I'd love to hear what you think, here is a feedback form. Redesign goals This redesign is part of CEA's broader efforts to improve how effective altruism is understood and perceived. I focused on goals aligned with CEA's branding and growth strategy: Improve understanding of what effective altruism is Make the core ideas easier to grasp by simplifying language, addressing common misconceptions, and showcasing more real-world examples of people and projects. Improve the perception of effective altruism I worked from a set of brand associations defined by the group working on the EA brand project[1]. These are words we want people to associate [...] ---Outline:(00:44) Redesign goals(02:09) Before and after(02:22) Landing page(03:50) Site navigation(04:24) New Take action page(05:03) Early results(05:40) Share your thoughtsThe original text contained 1 footnote which was omitted from this narration. --- First published: May 27th, 2025 Source: https://forum.effectivealtruism.org/posts/ZbQKtMMsDP6GnXuwr/revamped-effectivealtruism-org --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Don't update too much from EA community involvement” by Catherine Low

    Play Episode Listen Later May 25, 2025 5:12


    Summary  While many people and organisations in the EA community can be great connections, don't assume that just because a person has been in the EA community for a long time, they'll be a good fit for you to work with or be friends with. Don't assume that just because a project or org has been around for a long time, it would be a good place for you to work. It may be a great opportunity, but it might not. Do some of the usual things you would do to check that this is a good interaction for you (e.g. talk to people who know or have worked with them before starting a collaboration, take time to get to know someone before placing large amounts of trust on them, and pay attention to any signals that this interaction might not be a good for you). [...] ---Outline:(00:11) Summary(01:27) Choosing to work with another person(03:04) Conference attendance(03:38) Working with organisations(04:06) Personal Interactions with Community Members--- First published: May 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/yNm58h8cvufPfBPLP/don-t-update-too-much-from-ea-community-involvement --- Narrated by TYPE III AUDIO.

    “‘Most painful condition known to mankind': A retrospective of the first-ever international research symposium on cluster headache” by Alfredo Parra

    Play Episode Listen Later May 24, 2025 20:07


    Article 5 of the 1948 Universal Declaration of Human Rights states: "Obviously, no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment." OK, it doesn't actually start with "obviously," but I like to imagine the commissioners all murmuring to themselves “obviously” when this item was brought up. I'm not sure what the causal effect of Article 5 (or the 1984 UN Convention Against Torture) has been on reducing torture globally, though the physical integrity rights index (which “captures the extent to which people are free from government torture and political killings”) has increased from 0.48 in 1948 to 0.67 in 2024 (which is good). However, the index reached 0.67 already back in 2001, so at least according to this metric, we haven't made much progress in the past 25 years. Reducing government torture and killings seems to be low in tractability. Despite many [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 18th, 2025 Source: https://forum.effectivealtruism.org/posts/7FvDvMQypyua4kTL5/most-painful-condition-known-to-mankind-a-retrospective-of --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Why I am Still Skeptical about AGI by 2030” by James Fodor

    Play Episode Listen Later May 23, 2025 12:30


    Introduction I have been writing posts critical of mainstream EA narratives about AI capabilities and timelines for many years now. Compared to the situation when I wrote my posts in 2018 or 2020, LLMs now dominate the discussion, and timelines have also shrunk enormously. The ‘mainstream view' within EA now appears to be that human-level AI will be arriving by 2030, even as early as 2027. This view has been articulated by 80,000 Hours, on the forum (though see this excellent piece excellent piece arguing against short timelines), and in the highly engaging science fiction scenario of AI 2027. While my article piece is directed generally against all such short-horizon views, I will focus on responding to relevant portions of the article ‘Preparing for the Intelligence Explosion' by Will MacAskill and Fin Moorhouse. Rates of Growth The authors summarise their argument as follows: Currently, total global research effort [...] ---Outline:(00:11) Introduction(01:05) Rates of Growth(04:55) The Limitations of Benchmarks(09:26) Real-World Adoption(11:31) Conclusion--- First published: May 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/meNrhbgM3NwqAufwj/why-i-am-still-skeptical-about-agi-by-2030 --- Narrated by TYPE III AUDIO.

    “Better Air Purifiers” by Jeff Kaufman

    Play Episode Listen Later May 18, 2025 5:22


    Are you looking for a project where you could substantially improve indoor air quality, with benefits both to general health and reducing pandemic risk? I've written a bunch about air purifiers over the past few years, and its frustrating how bad commercial market is. The most glaring problem is the widespread use of HEPA filters. These are very effective filters that, unavoidably, offer significant resistance to air flow. HEPA is a great option for filtering air in single pass, such as with an outdoor air intake or a biosafety cabinet, but it's the wrong set of tradeoffs for cleaning the air that's already in the room. Air passing through a HEPA filter removes 99.97% of particles, but then it's mixed back in with the rest of the room air. If you can instead remove 99% of particles from 2% more air, or 90% from 15% more [...] --- First published: May 11th, 2025 Source: https://forum.effectivealtruism.org/posts/8BEqanpJFGhisETBi/better-air-purifiers --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “The Daily Show did a segment on EA and Shrimp Welfare Project” by jordanve

    Play Episode Listen Later May 17, 2025 0:30


    First published: May 16th, 2025 Source: https://forum.effectivealtruism.org/posts/LeCJqzdHZZB3uBhZg/the-daily-show-did-a-segment-on-ea-and-shrimp-welfare --- Narrated by TYPE III AUDIO.

    “[urgent] Americans, call your Senators and tell them you oppose AI preemption” by Holly Elmore ⏸️

    Play Episode Listen Later May 16, 2025 3:49


    Americans, we need your help to stop a dangerous AI bill from passing the Senate. What's going on? The House Energy & Commerce Committee included a provision in its reconciliation bill that would ban AI regulation by state and local governments for the next 10 years. Several states have led the way in AI regulation while Congress has dragged its heels. Stopping state governments from regulating AI might be okay, if we could trust Congress to meaningfully regulate it instead. But we can't. This provision would destroy state leadership on AI and pass the responsibility to a Congress that has shown little interest in seriously preventing AI danger. If this provision passes the Senate, we could see a DECADE of inaction on AI. This provision also violates the Byrd Rule, a Senate rule which is meant to prevent non-budget items from being included in the reconciliation bill. What can I do? Here are [...] --- First published: May 15th, 2025 Source: https://forum.effectivealtruism.org/posts/qWcabjNqxEBNQY3cv/urgent-americans-call-your-senators-and-tell-them-you-oppose --- Narrated by TYPE III AUDIO.

    “Please Donate to CAIP (Post 1 of 3 on AI Governance)” by Jason Green-Lowe

    Play Episode Listen Later May 11, 2025 60:04


    I am Jason Green-Lowe, the executive director of the Center for AI Policy (CAIP). Our mission is to directly convince Congress to pass strong AI safety legislation. As I explain in some detail in this post, I think our organization has been doing extremely important work, and that we've been doing well at it. Unfortunately, we have been unable to get funding from traditional donors to continue our operations. If we don't get more funding in the next 30 days, we will have to shut down, which will damage our relationships with Congress and make it harder for future advocates to get traction on AI governance. In this post, I explain what we've been doing, why I think it's valuable, and how your donations could help. This is the first post in what I expect will be a 3-part series. The first post focuses on CAIP's particular need [...] ---Outline:(01:33) OUR MISSION AND STRATEGY(02:59) Our Model Legislation(04:17) Direct Meetings with Congressional Staffers(05:20) Expert Panel Briefings(06:16) AI Policy Happy Hours(06:43) Op-Eds & Policy Papers(07:22) Grassroots & Grasstops Organizing(09:13) Whats Unique About CAIP?(10:26) OUR ACCOMPLISHMENTS(10:29) Quantifiable Outputs(11:21) Changing the Media Narrative(12:23) Proof of Concept(13:44) Outcomes -- Congressional Engagement(18:29) Context(19:54) OUR PROPOSED POLICIES(19:58) Mandatory Audits for Frontier AI(21:23) Liability Reform(22:32) Hardware Monitoring(24:11) Emergency Powers(25:31) Further Details(25:41) RESPONSES TO COMMON POLICY OBJECTIONS(25:46) 1. Why not push for a ban or pause on superintelligence research?(30:17) 2. Why not support bills that have a better chance of passing this year, like funding for NIST or NAIRR?(32:30) 3. If Congress is so slow to act, why should anyone be working with Congress at all? Why not focus on promoting state laws or voluntary standards?(35:09) 4. Why would you push the US to unilaterally disarm? Don't we instead need a global treaty regulating AI (or subsidies for US developers) to avoid handing control of the future to China?(37:24) 5. Why haven't you accomplished your mission yet? If your organization is effective, shouldn't you have passed some of your legislation by now, or at least found some powerful Congressional sponsors for it?(40:56) OUR TEAM(41:53) Executive Director(44:04) Government Relations Team(45:12) Policy Team(46:08) Communications Team(47:29) Operations Team(48:11) Personnel Changes(48:49) OUR PLAN IF FUNDED(51:58) OUR FUNDING SITUATION(52:02) Our Expenses & Runway(53:02) No Good Way to Cut Costs(55:22) Our Revenue(57:02) Surprise Budget Deficit(59:00) The Bottom Line--- First published: May 7th, 2025 Source: https://forum.effectivealtruism.org/posts/9uZHnEkhXZjWzia7F/please-donate-to-caip-post-1-of-3-on-ai-governance --- Narrated by TYPE III AUDIO.

    “Doing Prioritization Better” by arvomm, David_Moss, Hayley Clatterbuck, Laura Duffy, Derek Shiller, Bob Fischer

    Play Episode Listen Later May 10, 2025 75:04


    Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward. Executive Summary Performing prioritization work has been one of the main tasks, and arguably achievements, of EA. We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization. We ask how much of EA prioritization work falls in each of these categories: Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization. We then explore strengths and potential pitfalls of each level: Cause [...] ---Outline:(00:37) Executive Summary(03:09) Introduction: Why prioritize? Have we got it right?(05:18) The types of prioritization(06:54) A snapshot of EA(16:45) The Types of Prioritization Evaluated(16:57) Cause Prioritization(20:56) Within-Cause Prioritization(25:12) Cross-Cause Prioritization(30:07) Summary Table(30:53) What factors should push us towards one or another?(37:27) Possible Next Steps(39:44) Conclusion(40:58) Acknowledgements(41:01) en-US-AvaMultilingualNeural__ Modern geometric logo design with text RETHINK PRIORITIES(41:55) Appendix: Strengths and Pitfalls of Each Type(42:07) Within-Cause Prioritization Strengths(42:12) Decision-Making Support(42:37) Comparability of Outputs(44:18) Disciplinarity Advantages(45:45) Responsiveness to Evidence(46:48) Movement Building(48:06) Within-Cause Prioritization Weaknesses and Potential Pitfalls(48:12) Responsiveness to Evidence(50:54) Decision-Making Support(52:45) Cross-Cause Prioritization Strengths:(53:06) Decision-Making Support(54:49) Responsiveness to Evidence(56:08) Movement Building(56:22) Comparability of Outputs(56:45) Decision-Making Support(57:14) Cross-Cause Prioritization Weaknesses and Potential Pitfalls(57:20) Comparability of Outputs(58:01) Disciplinarity Advantages(58:41) Movement Building(59:09) Decision-Making Support(01:00:27) Cause Prioritization Strengths(01:00:32) Decision-Making Support(01:02:01) Responsiveness to Evidence(01:02:52) Movement Building(01:03:28) Cause Prioritization Weaknesses and Potential Pitfalls(01:04:28) Decision-Making Support(01:06:08) Responsiveness to EvidenceThe original text contained 23 footnotes which were omitted from this narration. --- First published: April 16th, 2025 Source: https://forum.effectivealtruism.org/posts/ZPdZv8sHuYndD8xhJ/doing-prioritization-better-2 --- Narrated by TYPE III AUDIO. ---Images from the article:

    “The Soul of EA is in Trouble” by Mjreard

    Play Episode Listen Later May 9, 2025 15:57


    This is a Forum Team crosspost from Substack. Whither cause prioritization and connection with the good? There's a trend towards people who once identified as Effective Altruists now identifying solely as “people working on AI safety.”[1] For those in the loop, it feels like less of a trend and more of a tidal wave. There's an increasing sense that among the most prominent (formerly?) EA orgs and individuals, making AGI go well is functionally all that matters. For that end, so the trend goes, the ideas of Effective Altruism have exhausted their usefulness. They pointed us to the right problem – thanks; we'll take it from here. And taking it from here means building organizations, talent bases, and political alliances at a scale incommensurate with attachment to a niche ideology or moralizing language generally. I think this a dangerous path to go down too hard and my impression [...] ---Outline:(02:39) What I see(06:35) The threat means pose to ends(11:12) Losing something moreThe original text contained 2 footnotes which were omitted from this narration. --- First published: May 8th, 2025 Source: https://forum.effectivealtruism.org/posts/CKKAga4HfQyAranaC/the-soul-of-ea-is-in-trouble --- Narrated by TYPE III AUDIO.

    “12x more cost-effective than EAG - how I organised EA North 2025 (and how you could, too)” by matthes

    Play Episode Listen Later May 8, 2025 14:23


    I put on a small one-day conference. The cost per attendee was £50 (vs £1.2k for EAGs) and the cost per new connection was £11 (vs £130 for EAGs). intro EA North was a one-day event for the North of England. 35 people showed up on the day. In total, I spent £1765 (≈ $2.4k), including paying myself £20/h for 30h total. This money will be reimbursed by EA UK[1]. The cost per attendee was £50 and the cost per new connection was £11. These are significantly lower than for EAG events, suggesting that we should be putting on more smaller events. I am not arguing that EAGs should not exist at all. A local event will likely never let me connect with someone living on another continent in person. My main goal with this post is to encourage individuals to put on more events [...] ---Outline:(00:29) intro(01:38) why you can probably do this, too(02:26) what I spent the money on and a comparison with EAG London 2023(03:12) budget breakdown(04:48) cost per attendee per day(05:17) cost per connection(07:19) what I spent my time on(08:24) ideas for being even more cost-effective(09:27) recommendations to funders(09:49) reconsider how much resources you spend on small applications(10:44) consider providing funding upfront(11:17) thermal printers are cool and cheap(11:57) conclusionThe original text contained 7 footnotes which were omitted from this narration. --- First published: May 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/m9sTFoAsE8dSnzoBt/untitled-draft-tr7p --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “E2G help available” by Will Kirkpatrick

    Play Episode Listen Later May 5, 2025 5:22


    If you're interested in having a meaningful EA career but your experience doesn't match the types of jobs that the typical white collar, intellectual EA community leans towards, then you're just like me. I have been earning to give as a nuclear power plant operator in Southern Maryland for the past few years, and I think it's a great opportunity for other EA's who want to make a difference but don't have a PhD in philosophy or public policy. Additionally, I have personal sway with Constellation Energy's Calvert Cliffs plant, so I can influence the hiring process to help any interested applicants. Here are a few reasons that I think this is such an ideal Earn to Give career: A high income job in a low cost of living area means you will be able to donate a significant portion of your paychecks and still live comfortably. [...] --- First published: April 17th, 2025 Source: https://forum.effectivealtruism.org/posts/LeuLyJEXcjAkeB965/e2g-help-available --- Narrated by TYPE III AUDIO.

    “Cultivating doubt: why I no longer believe cultivated meat is the answer” by Tom Bry-Chevalier

    Play Episode Listen Later May 3, 2025 24:01


    Introduction In this post, I present what I believe to be an important yet underexplored argument that fundamentally challenges the promise of cultivated meat. In essence, there are compelling reasons to conclude that cultivated meat will not replace conventional meat, but will instead primarily compete with other alternative proteins that offer superior environmental and ethical benefits. Moreover, research into and promotion of cultivated meat may potentially result in a net negative impact. Beyond critique, I try to offer constructive recommendations for the EA movement. While I've kept this post concise, I'm more than willing to elaborate on any specific point upon request.From industry to academia: my cultivated meat journey I'm currently in my fourth year (and hopefully final one!) of my PhD. My thesis examines the environmental and economic challenges associated with alternative proteins. I have three working papers on cultivated meat at various stages of development, though [...] ---Outline:(00:13) Introduction(00:55) From industry to academia: my cultivated meat journey(01:53) Motivations and epistemic status(03:39) Baseline assumptions for this discussion(03:44) Cultivated meat is environmentally better than conventional meat, but probably not as good as plant-based meat(06:29) Cultivated meat will remain quite expensive for several years, and hybrid plant-cell products will likely appear on the market first(08:58) Cultivated meat is ethically better than conventional meat(10:26) The main argument: cannibalization rather than conversion(16:46) Strategic drawbacks of the current focus(19:11) The evidence that would make me eat my words (and maybe cultivated meat)(20:37) What Id like to see change in the Effective Altruism approach to cultivated meat(22:14) Answer from GFI Europe--- First published: April 30th, 2025 Source: https://forum.effectivealtruism.org/posts/TYhs8zehyybvMt5E4/cultivating-doubt-why-i-no-longer-believe-cultivated-meat-is --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Reflections on 7 years building the EA Forum — and moving on” by JP Addison

    Play Episode Listen Later May 3, 2025 4:44


    I'm ironically not a very prolific writer. I've preferred to stay behind the scenes here and leave the writing to my colleagues who have more of a knack for it. But a goodbye post is something I must write for myself. Perhaps I'm getting old and nostalgic, because what came out wound up being a wander down memory lane. I probably am getting old and nostalgic, but I also hope I've communicated something about my love for this community and the gratefulness for the chance to serve you all.My story of the EA Forum Few things have lasted as long in my life as my work on the Forum. I've spent more time working on the EA Forum than I've spent living anywhere since I was 0-12 years old. I've worked on the Forum longer than I've known my partner—whom I've known long enough to get married to. [...] ---Outline:(00:40) My story of the EA Forum(03:47) What's nextThe original text contained 1 footnote which was omitted from this narration. --- First published: May 1st, 2025 Source: https://forum.effectivealtruism.org/posts/4ckgvqohXTBy6hCap/untitled-draft-a4kx --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Prioritizing Work” by Jeff Kaufman

    Play Episode Listen Later May 1, 2025 1:30


    I recently read a blog post that concluded with: When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved. Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or AI-supported totalitarianism; improving humanity's treatment of animals; fighting global poverty) or the undervalued less direct approach of earning money and donating it to enable others to work on pressing problems. Definitely spend time with your friends, family, and those you love. Don't work to the exclusion of everything else [...] --- First published: May 1st, 2025 Source: https://forum.effectivealtruism.org/posts/cF6eumerCq8hnb9YT/prioritizing-work --- Narrated by TYPE III AUDIO.

    “Reflections on the $5 Minimum Donation Barrier on the Giving What We Can Platform — A Student Perspective from a Lower-Income Country.” by Habeeb Abdul

    Play Episode Listen Later Apr 29, 2025 3:08


    I wanted to share a small but important challenge I've encountered as a student engaging with Effective Altruism from a lower-income country (Nigeria), and invite thoughts or suggestions from the community. Recently, I tried to make a one-time donation to one of the EA-aligned charities listed on the Giving What We Can platform. However, I discovered that I could not donate an amount less than $5. While this might seem like a minor limit for many, for someone like me — a student without a steady income or job, $5 is a significant amount. To provide some context: According to Numbeo, the average monthly income of a Nigerian worker is around $130–$150, and students often rely on even less — sometimes just $20–$50 per month for all expenses. For many students here, having $5 "lying around" isn't common at all; it could represent a week's worth of meals [...] --- First published: April 28th, 2025 Source: https://forum.effectivealtruism.org/posts/YoN3sKfkr5ruW47Cg/reflections-on-the-usd5-minimum-donation-barrier-on-the --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    [Linkpost] “Scaling Our Pilot Early-Warning System” by Jeff Kaufman

    Play Episode Listen Later Apr 25, 2025 5:34


    This is a link post. Summary: The NAO will increase our sequencing significantly over the next few months, funded by a $3M grant from Open Philanthropy. This will allow us to scale our early-warning system to where we could flag many engineered pathogens early enough to mitigate their worst impacts, and also generate large amounts of data to develop, tune, and evaluate our detection systems. One of the biological threats the NAO is most concerned with is a 'stealth' pathogen, such as a virus with the profile of a faster-spreading HIV. This could cause a devastating pandemic, and early detection would be critical to mitigate the worst impacts. If such a pathogen were to spread, however, we wouldn't be able to monitor it with traditional approaches because we wouldn't know what to look for. Instead, we have invested in metagenomic sequencing for pathogen-agnostic detection. This doesn't require deciding what [...] --- First published: April 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/AJ8bd2sz8tF7cxJff/scaling-our-pilot-early-warning-system Linkpost URL:https://naobservatory.org/blog/scaling-our-early-warning-system/ --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Why you can justify almost anything using historical social movements” by JamesÖz

    Play Episode Listen Later Apr 25, 2025 9:11


    [Cross-posted from my Substack here] If you spend time with people trying to change the world, you'll come to an interesting conundrum: Various advocacy groups reference previous successful social movements as to why their chosen strategy is the most important one. Yet, these groups often follow wildly different strategies from each other to achieve social change. So, which one of them is right? The answer is all of them and none of them. This is because many people use research and historical movements to justify their pre-existing beliefs about how social change happens. Simply, you can find a case study to fit most plausible theories of how social change happens. For example, the groups might say: Repeated nonviolent disruption is the key to social change, citing the Freedom Riders from the civil rights Movement or Act Up! from the gay rights movement. Technological progress is what drives improvements [...] The original text contained 1 footnote which was omitted from this narration. --- First published: April 24th, 2025 Source: https://forum.effectivealtruism.org/posts/kACcdhLDdWb9ZPG9L/why-you-can-justify-almost-anything-using-historical-social --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “AI for Animals 2025 Bay Area Retrospective” by Constance Li, AI for Animals

    Play Episode Listen Later Apr 19, 2025 26:20


    Our Mission: To build a multidisciplinary field around using technology—especially AI—to improve the lives of nonhumans now and in the future. Overview Background This hybrid conference had nearly 550 participants and took place March 1-2, 2025 at UC Berkeley. It was organized by AI for Animals for $74k by volunteer core organizers Constance Li, Sankalpa Ghose, and Santeri Tani. This conference has evolved since 2023: The 1st conference mainly consisted of philosophers and was a single track lecture/panel. The 2nd conference put all lectures on one day and followed it with 2 days of interactive unconference sessions happening in parallel and a week of in-person co-working. This 3rd conference had a week of related satellite events, free shared accommodations for 50+ attendees, 2 days of parallel lectures/panels/unconferences, 80 unique sessions, of which 32 are available on Youtube, Swapcard to enable 1:1 connections, and a Slack community to continue conversations year [...] ---Outline:(00:32) Overview(00:35) Background(02:27) Outcomes(03:51) The Event(s)(04:19) Speaking Sessions(04:23) en-US-AvaMultilingualNeural__ Conference presentation scenes with speakers and audience in auditorium setting. The image shows a collage of different presentation scenes at what appears to be an academic or professional conference. The main image shows an auditorium with attendees watching a video presentation on a large screen. Other smaller images show speakers on stage in different settings - some seated in orange chairs for a panel discussion, others at podiums. Theres a logo visible for what appears to be a research institute in one of the frames. The setting appears academic/professional in nature, with proper presentation equipment, stage lighting, and organized seating arrangements typical of a conference or symposium event.(05:10) Featured Talks(10:13) Lightning Talks(11:55) Interactive Sessions(12:19) Unconferences(13:22) Meetups(13:36) Mapping Workshops(13:44) Office Hours(13:59) Topic Distribution(14:18) Satellite Events(14:41) en-US-AvaMultilingualNeural__ Events schedule showing AI and animal-focused activities from Feb 26-Mar 2.(14:52) en-US-AvaMultilingualNeural__ People gathered at social events, networking and dining in various settings.(15:02) Behind the Scenes(15:10) Personnel(16:04) Handbooks(16:24) Finances(19:21) Outreach(21:17) Event Reflection(24:12) Get Involved--- First published: April 5th, 2025 Source: https://forum.effectivealtruism.org/posts/KWpyRXzHn6JMyZiBn/ai-for-animals-2025-bay-area-retrospective --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs” by Denkenberger

    Play Episode Listen Later Apr 18, 2025 7:19


    SUMMARY: ALLFED is launching an emergency appeal on the EA Forum due to a serious funding shortfall. Without new support, ALLFED will be forced to cut half our budget in the coming months, drastically reducing our capacity to help build global food system resilience for catastrophic scenarios like nuclear winter, a severe pandemic, or infrastructure breakdown. ALLFED is seeking $800,000 over the course of 2025 to sustain its team, continue policy-relevant research, and move forward with pilot projects that could save lives in a catastrophe. As funding priorities shift toward AI safety, we believe resilient food solutions remain a highly cost-effective way to protect the future. If you're able to support or share this appeal, please visit allfed.info/donate. FULL ARTICLE: I (David Denkenberger) am writing alongside two of my team-mates, as ALLFED's co-founder, to ask for your support. This is the first time in Alliance [...] ---Outline:(02:40) The case for ALLFED's work, and why we think maintaining full current capacity is valuable(04:14) How this connects to AI and other risks(05:39) What we're asking for--- First published: April 16th, 2025 Source: https://forum.effectivealtruism.org/posts/K7hPmcaf2xEZ6F4kR/allfed-emergency-appeal-help-us-raise-usd800-000-to-avoid-1 --- Narrated by TYPE III AUDIO.

    “Cost-effectiveness of Anima International Poland” by saulius

    Play Episode Listen Later Apr 17, 2025 39:24


    Summary In this article, I estimate the cost-effectiveness of five Anima International programs in Poland: improving cage-free and broiler welfare, blocking new factory farms, banning fur farming, and encouraging retailers to sell more plant-based protein. I estimate that together, these programs help roughly 136 animals—or 32 years of farmed animal life—per dollar spent. Animal years affected per dollar spent was within an order of magnitude for all five evaluated interventions. I also tried to estimate how much suffering each program alleviates. Using SADs (Suffering-Adjusted Days)—a metric developed by Ambitious Impact (AIM) that accounts for species differences and pain intensity—Anima's programs appear highly cost-effective, even compared to charities recommended by Animal Charity Evaluators. However, I also ran a small informal survey to understand how people intuitively weigh different categories of pain defined by the Welfare Footprint Institute. The results suggested that SADs may heavily underweight brief but intense suffering. Based [...] ---Outline:(02:16) Background(02:46) Results(05:57) Explanations of the programs(08:59) Why these estimates are very uncertain(13:48) Animal welfare metric(16:42) Comparison to SADs(19:42) Comparison to other charities(19:47) Comparisons of SADs estimates(20:54) Comparisons of cage-free estimates(24:26) For how many years do reforms have an impact?(25:21) Cage-free(29:45) Broilers(31:18) Stop the farms(32:57) Fur farmsThe original text contained 8 footnotes which were omitted from this narration. --- First published: April 10th, 2025 Source: https://forum.effectivealtruism.org/posts/sLYSa7MyuDKxreN5h/cost-effectiveness-of-anima-international-poland-1 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    “Announcing our 2025 strategy” by Giving What We Can

    Play Episode Listen Later Apr 14, 2025 7:02


    We are excited to share a summary of our 2025 strategy, which builds on our work in 2024 and provides a vision through 2027 and beyond! Background Giving What We Can (GWWC) is working towards a world without preventable suffering or existential risk, where everyone is able to flourish. We do this by making giving effectively and significantly a cultural norm. Focus on pledges Based on our last impact evaluation[1], we have made our pledges – and in particular the

    “Announcing our 2025 strategy” by Giving What We Can

    Play Episode Listen Later Apr 14, 2025 7:04


    We are excited to share a summary of our 2025 strategy, which builds on our work in 2024 and provides a vision through 2027 and beyond! Background Giving What We Can (GWWC) is working towards a world without preventable suffering or existential risk, where everyone is able to flourish. We do this by making giving effectively and significantly a cultural norm. Focus on pledges Based on our last impact evaluation[1], we have made our pledges – and in particular the

    “EA Reflections on my Military Career” by Tom Gardiner

    Play Episode Listen Later Apr 12, 2025 27:30


    Introduction Four years ago, I commissioned as an Officer in the UK's Royal Navy. I had been engaging with EA for four years before that and chose this career as a coherent part of my impact-focused career plan, and I stand by that decision. Early next year, I will leave the Navy. This article is a round-up of why I made my choices, how I think military careers can sensibly align with an EA career, and the theories of impact I considered along the way that don't hold water. Military service won't be the right call for most in this community, but it could be for some. Hopefully, this is informative for those people. Furthermore, I spent a whole year being trained in leadership. Someone I met at an EAGx conference said the offhand nugget of Military Leadership 101 wisdom I gave them was the "best advice I received [...] --- First published: April 10th, 2025 Source: https://forum.effectivealtruism.org/posts/f6XmkJ9PWFfn9GvqD/ea-reflections-on-my-military-career --- Narrated by TYPE III AUDIO.

    “GWWC is retiring 10 initiatives” by Giving What We Can

    Play Episode Listen Later Apr 12, 2025 27:44


    In our recent strategy retreat, the GWWC Leadership Team recognised that by spreading our limited resources across too many projects, we are unable to deliver the level of excellence and impact that our mission demands. True to our value of being mission accountable, we've therefore made the difficult but necessary decision to discontinue a total of 10 initiatives. By focusing our energy on fewer, more strategically aligned initiatives, we think we'll be more likely to ultimately achieve our Big Hairy Audacious Goal of 1 million pledgers donating $3B USD to high-impact charities annually. (See our 2025 strategy.) We'd like to be transparent about the choices we made, both to hold ourselves accountable and so other organisations can take the gaps we leave into account when planning their work. As such, this post aims to: Inform the broader EA community about changes to projects & highlight opportunities to carry [...] ---Outline:(02:30) Giving What We Can Canada(06:13) Effective Altruism Australia funding partnership(08:40) Giving What We Can Groups(10:57) Giving Games(12:50) Charity Elections(16:59) Effective Giving Meta evaluation and grantmaking(19:11) Giving What We Can Donor Lottery(21:32) Translations(23:59) Hosted Funds(25:56) New licensing of the GWWC brandThe original text contained 1 footnote which was omitted from this narration. --- First published: April 10th, 2025 Source: https://forum.effectivealtruism.org/posts/f7yQFP3ZhtfDkD7pr/gwwc-is-retiring-10-initiatives --- Narrated by TYPE III AUDIO.

    “GWWC is retiring 10 initiatives” by Giving What We Can

    Play Episode Listen Later Apr 12, 2025 27:35


    In our recent strategy retreat, the GWWC Leadership Team recognised that by spreading our limited resources across too many projects, we are unable to deliver the level of excellence and impact that our mission demands. True to our value of being mission accountable, we've therefore made the difficult but necessary decision to discontinue a total of 10 initiatives. By focusing our energy on fewer, more strategically aligned initiatives, we think we'll be more likely to ultimately achieve our Big Hairy Audacious Goal of 1 million pledgers donating $3B USD to high-impact charities annually. (See our 2025 strategy.) We'd like to be transparent about the choices we made, both to hold ourselves accountable and so other organisations can take the gaps we leave into account when planning their work. As such, this post aims to: Inform the broader EA community about changes to projects & highlight opportunities to carry [...] ---Outline:(02:28) Giving What We Can Canada(06:17) Effective Altruism Australia funding partnership(08:45) Giving What We Can Groups(11:00) Giving Games(12:53) Charity Elections(17:00) Effective Giving Meta evaluation and grantmaking(19:11) Giving What We Can Donor Lottery(21:30) Translations(23:56) Hosted Funds(25:53) New licensing of the GWWC brandThe original text contained 1 footnote which was omitted from this narration. --- First published: April 10th, 2025 Source: https://forum.effectivealtruism.org/posts/f7yQFP3ZhtfDkD7pr/gwwc-is-retiring-10-initiatives --- Narrated by TYPE III AUDIO.

    Claim Effective Altruism Forum Podcast

    In order to claim this podcast we'll send an email to with a verification link. Simply click the link and you will be able to edit tags, request a refresh, and other features to take control of your podcast page!

    Claim Cancel