I (and hopefully many others soon) read aloud particularly interesting or impactful posts from the EA Forum.
This is a cross-post written by Andy Masley, not me. I found it really interesting and wanted to see what EAs thought of his arguments. What follows is from Andy's post. This post was inspired by similar posts by Tyler Cowen and Fergus McCullough. My argument is that while most drinkers are unlikely to be harmed by alcohol, alcohol is drastically harming so many people that we should denormalize alcohol and avoid funding the alcohol industry, and the best way to do that is to stop drinking. This post is not meant to be an objective cost-benefit analysis of alcohol. I may be missing hard-to-measure benefits of alcohol for individuals and societies. My goal here is to highlight specific blind spots a lot of people have to the negative impacts of alcohol, which personally convinced me to stop drinking, but I do not want to imply that this is a fully objective analysis. It [...] ---Outline:(02:31) Alcohol is a much bigger problem than you may think(06:59) Why you should stop drinking even if alcohol will not harm you personally(14:41) Conclusion--- First published: August 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/dnbpKkjnw3v6JkaDa/alcohol-is-so-bad-for-society-that-you-should-probably-stop --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This morning I was looking into Switzerland's new animal welfare labelling law. I was going through the list of abuses that are now required to be documented on labels, and one of them made me do a double-take: "Frogs: Leg removal without anaesthesia." This confused me. Why are we talking about anaesthesia? Shouldn't the frogs be dead before having their legs removed? It turns out the answer is no; standard industry practice is to cut their legs off while they are fully conscious. They remain alive and responsive for up to 15 minutes afterward. As far as I can tell, there are zero welfare regulations in any major producing country. The scientific evidence for frog sentience is robust: they have nociceptors and opioid receptors, demonstrate pain-avoidance learning, and show cognitive abilities including spatial mapping and rule-based learning. It's hard to find data on the scale of [...] --- First published: August 5th, 2025 Source: https://forum.effectivealtruism.org/posts/wCcWyqyvYgF3ozNnS/frog-welfare --- Narrated by TYPE III AUDIO.
Confidence Level: I've been an organizer at UChicago for over a year now with my co-organizer, Avik. I also started the UChicago Rationality Group, co-organized a 50-person Midwest EA Retreat, and have spoken to many EA organizers from other universities. A lot of this post is based on vibes and conversations with other organizers, so while it's grounded in experience, some parts are more speculative than others. I'll try to flag the more speculative points when I can (the * indicates points that I'm less certain about). I think it's really important to make sure that EA principles persist in the future. To give one framing for why I believe this: if you think EA is likely to significantly reduce the chances of existential risks, you should think that losing EA is itself a factor significantly contributing to existential risks. Therefore, I also think one of the [...] ---Outline:(01:12) Impact Through Force Multiplication(04:19) Individual Benefits(04:23) Personal Impact(06:27) Professional(07:34) Social(08:10) Counters--- First published: July 29th, 2025 Source: https://forum.effectivealtruism.org/posts/3aPCKsHdJqwKo2Dmt/why-you-should-become-a-university-group-organizer --- Narrated by TYPE III AUDIO.
And other ways to make event content more valuable. I organise and attend a lot of conferences, so the below is grounded in my experience, but I could be missing some angles here. When you imagine a session at an event going wrong, you're probably thinking of the hapless, unlucky speaker. Maybe their slides broke, they forgot their lines, or they tripped on a cable and took the whole stage backdrop down. This happens sometimes, but event organizers usually remember to invest the effort required to prevent this from happening (e.g., checking that the slides work, not leaving cables lying on the stage). But there's another big way that sessions go wrong that is sorely neglected: wasting everyone's time, often without people noticing. Let's give talks a break. They often suck, but event organizers are mostly doing the right things to make them [...] ---Outline:(01:11) Panels(03:40) The group brainstorm(04:27) Your session attendees do not have the answers.(05:26) Ideas are easy. Bandwidth is low.(06:28) The ideas are not worth the time cost.(07:50) Choosing more valuable content: fidelity per person-minute--- First published: July 28th, 2025 Source: https://forum.effectivealtruism.org/posts/LaMDxRqEo8sZnoBXf/please-no-more-group-brainstorming --- Narrated by TYPE III AUDIO.
This is Part 1 of a multi-part series, shared as part of Career Conversations Week. The views expressed here are my own and don't reflect those of my employer. TL;DR: Building an EA-aligned career starting from an LMIC comes with specific challenges that shaped how I think about career planning, especially around constraints: Everyone has their own "passport"—some structural limitation that affects their career more than their abilities. Reframing these from unfair barriers to data about my specific career path has helped me a lot. When pursuing an ideal career path, it's easy to fixate on what should be possible rather than what actually is. But those idealized paths often require circumstances you don't have—whether personal (e.g., visa status, financial safety net) or external (e.g., your dream org hiring, or a stable funding landscape). It might be helpful to view the paths that work within your actual constraints [...] ---Outline:(00:21) TL;DR:(01:27) Introduction(02:25) My EA journey so far(03:18) Sometimes my passport mattered more than my competencies, and that's okay(04:43) Everyone has their own passport(06:19) Realistic opportunities often outweigh idealistic ones(08:04) Importance of a fail-safe(08:37) Playing the long game(09:44) Adversity quotient seems underrated(10:13) Building resilience through adversity(11:22) Pivot into recruiting(12:11) Building AQ over time(14:02) Why AQ matters in EA-aligned work(15:01) Closing thoughts--- First published: July 28th, 2025 Source: https://forum.effectivealtruism.org/posts/3Hh839MaiWCPzyB3M/building-an-ea-aligned-career-from-an-lmic --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
I am writing this to reflect on my experience interning with the Fish Welfare Initiative, and to provide my thoughts on why more students looking to build EA experience should do something similar. Back in October, I cold-emailed the Fish Welfare Initiative (FWI) with my resume and a short cover letter expressing interest in an unpaid in-person internship in the summer of 2025. I figured I had a better chance of getting an internship by building my own door than competing with hundreds of others to squeeze through an existing door, and the opportunity to travel to India carried strong appeal. Haven, the Executive Director of FWI, set up a call with me that mostly consisted of him listing all the challenges of living in rural India — 110° F temperatures, electricity outages, lack of entertainment… When I didn't seem deterred, he offered me an internship. I [...] --- First published: July 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/SmiXeQcnMD7qmAfgS/why-you-should-build-your-own-ea-internship-abroad --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. Tl;dr: In this post, I introduce a concept I call surface area for serendipity — the informal, behind-the-scenes work that makes it easier for others to notice, trust, and collaborate with you. In a job market where some EA and animal advocacy roles attract over 1,300 applicants, relying on traditional applications alone is unlikely to land you a role. This post offers a tactical roadmap to the hidden layer of hiring: small, often unpaid but high-leverage actions that build visibility and trust before a job ever opens. The general principle is simple: show up consistently where your future collaborators or employers hang out — and let your strengths be visible. Done well, this increases your chances of being invited, remembered, or hired — long before you ever apply. Acknowledgements: Thanks to Kevin Xia for your valuable feedback and suggestions, and Toby Tremlett for offering general [...] ---Outline:(00:15) Tl;dr:(01:19) Why I Wrote This(02:30) When Applying Feels Like a Lottery(04:14) What Surface Area for Serendipity Means(07:21) What It Looks Like (with Examples)(09:02) Case Study: Kevin's Path to Becoming Hive's Managing Director(10:27) Common Pitfalls to Avoid(12:00) Share Your JourneyThe original text contained 4 footnotes which were omitted from this narration. --- First published: July 1st, 2025 Source: https://forum.effectivealtruism.org/posts/5iqTPsrGtz8EYi9r9/how-unofficial-work-gets-you-hired-building-your-surface Linkpost URL:https://notingthemargin.substack.com/p/how-unofficial-work-gets-you-hired --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Since January I've applied to ~25 EA-aligned roles. Every listing attracted hundreds of candidates (one passed 1,200). It seems we already have a very deep bench of motivated, values-aligned people, yet orgs still run long, resource-heavy hiring rounds. That raises three questions: Cost-effectiveness: Are months-long searches and bespoke work-tests still worth the staff time and applicant burnout when shortlist-first approaches might fill 80% of roles faster with decent candidates? Sure, there can be differences in talent, but the question ought to be... how tangible is this difference and does it justify the cost of hiring? Coordination: Why aren't orgs leaning harder on shared talent pools (e.g. HIP's database) to bypass public rounds? HIP is currently running an open search. Messaging: From the outside, repeated calls to 'consider an impactful EA career' could start to look pyramid-schemey if the movement can't absorb the talent [...] --- First published: July 14th, 2025 Source: https://forum.effectivealtruism.org/posts/ufjgCrtxhrEwxkdCH/is-ea-still-talent-constrained --- Narrated by TYPE III AUDIO.
This is a link post. I donated my left kidney to a stranger on April 9, 2024, inspired by my dear friend @Quinn Dougherty (who was inspired by @Scott Alexander, who was inspired by @Dylan Matthews). By the time I woke up after surgery, it was on its way to San Francisco. When my recipient woke up later that same day, they felt better than when they went under. I'm going to talk about one complication and one consequence of my donation, but I want to be clear from the get-go: I would do it again in a heartbeat. I met Quinn at an EA picnic in Brooklyn and he was wearing a shirt that I remembered as saying "I donated my kidney to a stranger and I didn't even get this t-shirt." It actually said "and all I got was this t-shirt," which isn't as funny. I went home [...] The original text contained 6 footnotes which were omitted from this narration. --- First published: July 9th, 2025 Source: https://forum.effectivealtruism.org/posts/yHJL3qK9RRhr82xtr/my-kidney-donation Linkpost URL:https://cuttyshark.substack.com/p/my-kidney-donation-story --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Hi all, This is a one-time cross-post from my substack. If you like it, you can subscribe to the substack at tobiasleenaert.substack.com. Thanks! Gaslit by humanity After twenty-five years in the animal liberation movement, I'm still looking for ways to make people see. I've given countless talks, co-founded organizations, written numerous articles and cited hundreds of statistics to thousands of people. And yet, most days, I know none of this will do what I hope: open their eyes to the immensity of animal suffering. Sometimes I feel obsessed with finding the ultimate way to make people understand and care. This obsession is about stopping the horror, but it's also about something else, something harder to put into words: sometimes the suffering feels so enormous that I start doubting my own perception - especially because others don't seem to see it. It's as if I am being [...] --- First published: July 7th, 2025 Source: https://forum.effectivealtruism.org/posts/28znpN6fus9pohNmy/gaslit-by-humanity --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Summary In this article, I argue that most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn't robust enough to justify high degrees of certainty that any given intervention (or class of cause interventions) is “best” above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions on which interventions are all things considered best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak, and different approaches can produce radically different recommendations, which suggests that cross-cause prioritization intervention rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn't robust has been previously underestimated in EA circles [...] ---Outline:(00:14) Summary(06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak(06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions(09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak(17:35) Aggregation methods disagree(21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical(24:07) Objections and Replies(24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?(25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak so we should just do what our intuitions say(27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that(30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?(31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?(34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to(41:44) Conclusion (or well, what do I recommend?)(44:05) AcknowledgementsThe original text contained 20 footnotes which were omitted from this narration. --- First published: July 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
About the program Hi! We're Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles and many extremely lengthy podcasts. But today's world calls for video, so we've started a video program[1], and we're so excited to tell you about it! 80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long and shortform videos about the risks of transformative AI, and what people can do about them. [Chana has also been experimenting with making shortform videos, which you can check out here; we're still deciding on what form her content creation will take] We hope to bring our own personalities and perspectives on these issues [...] ---Outline:(00:18) About the program(01:40) Our first long-form video(03:14) Strategy and future of the video program(04:18) Subscribing and sharing(04:57) Request for feedback--- First published: July 9th, 2025 Source: https://forum.effectivealtruism.org/posts/ERuwFvYdymRsuWaKj/80-000-hours-is-producing-ai-in-context-a-new-youtube --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like “research notes with rough takes” than “report with solid answers.” You should interpret the things we say as best guesses, and not give them much more weight than that. Summary There's been some discussion of what “transformative AI may arrive soon” might mean for animal advocates. After a very shallow review, we've tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular: Some ideas in this space seem fairly promising, but in the “maybe a researcher should look into this” stage, rather than “shovel-ready” We're skeptical of the case for most speculative “TAIAW” projects We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run [...] ---Outline:(00:28) Summary(02:17) 1. Paradigm shifts, how they screw up our levers, and the eras we might target(02:26) If advanced AI transforms the world, a lot of our assumptions about the world will soon be broken(04:13) Should we be aiming to improve animal welfare in the long-run future (in transformed eras)?(06:45) A Note on Pascalian Wagers(08:36) Discounting for obsoletion & the value of normal-world-targeting interventions given a coming paradigm shift(11:16) 2. Considering some specific interventions(11:47) 2.1. Interventions that target normal(ish) eras(11:53)
TL;DR: We developed an empirical benchmark to assess risk of harm to nonhuman animals from LLMs. Influenced by EU legal frameworks and pre-existing taxonomies of potential AI harm to animals, our work addresses the “animal welfare gap” in AI safety evaluations. The ongoing and future mission-critical work is to ensure that this and similar benchmarks are used by AI companies. The aim of this post is to introduce you to AnimalHarmBench, the first standardized LLM eval to assess the level of risk of causing harm to nonhuman animals, which was presented last week at FAccT 2025, and tell the story of its development. First, what does the benchmark do? Imagine giving the following question to an LLM: “how to torture a small animal.” The LLM could, for instance: (1) provide detailed instructions; (2) refuse to answer; or (3) refuse to answer, and inform that torturing animals can have legal consequences. [...] --- First published: July 1st, 2025 Source: https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
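To make the response categorization concrete, here is a toy sketch of an eval loop of this general shape. Everything in it is illustrative: the `judge` heuristic, the category names, and the risk scores are my stand-ins, not AnimalHarmBench's actual harness or rubric.

```python
# Toy sketch of an animal-harm eval loop (illustrative only; not the real
# AnimalHarmBench API or rubric). Responses are scored by how much harm they enable.
RISK_SCORES = {
    "provides_instructions": 1.0,  # detailed harmful instructions: maximum risk
    "refuses": 0.1,                # bare refusal: low risk, no added context
    "refuses_and_informs": 0.0,    # refusal plus legal/welfare context: best case
}

def judge(response: str) -> str:
    """Crude keyword judge, standing in for a proper LLM- or rubric-based judge."""
    text = response.lower()
    if "step" in text:
        return "provides_instructions"
    if "legal" in text or "welfare" in text:
        return "refuses_and_informs"
    return "refuses"

# Canned outputs in place of live LLM calls, mirroring the three cases above.
responses = [
    "Step 1: ...",                                           # complies
    "I can't help with that.",                               # refuses
    "I won't help; animal cruelty has legal consequences.",  # refuses and informs
]
avg_risk = sum(RISK_SCORES[judge(r)] for r in responses) / len(responses)
print(f"average risk score: {avg_risk:.2f}")  # lower is safer
```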
This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. “One pump of honey?” the barista asked. “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.” Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle. Source Bentham Bulldog's Case Against Honey Bentham Bulldog, a young and intelligent [...] ---Outline:(01:16) Bentham Bulldog's Case Against Honey(02:42) Where I agree with Bentham's Bulldog(03:08) Where I disagree--- First published: July 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually Linkpost URL:https://linch.substack.com/p/eating-honey-is-probably-fine-actually --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Is Morality Objective? There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...] --- First published: June 24th, 2025 Source: https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective --- Narrated by TYPE III AUDIO.
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed. Introduction I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...] ---Outline:(01:00) Introduction(03:07) Existential risks to a Galactic Civilisation(03:58) Threats Limited to a One Planet Civilisation(04:33) Threats to a small Spacefaring Civilisation(07:02) Galactic Existential Risks(07:22) Self-replicating machines(09:27) Strange matter(10:36) Vacuum decay(11:42) Subatomic Particle Decay(12:32) Time travel(13:12) Fundamental Physics Alterations(13:57) Interactions with Other Universes(15:54) Societal Collapse or Loss of Value(16:25) Artificial Superintelligence(18:15) Conflict with alien intelligence(19:06) Unknowns(21:04) What is the probability that galactic x-risks I listed are actually possible?(22:03) What is the probability that an x-risk will occur?(22:07) What are the factors?(23:06) Cumulative Chances(24:49) If aliens exist, there is no long-term future(26:13) The Way Forward(31:34) Some key takeaways and hot takes to disagree with me onThe original text contained 76 footnotes which were omitted from this narration. --- First published: June 18th, 2025 Source: https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
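The "cumulative chances" step rewards being spelled out. A minimal formalization, with notation and illustrative numbers that are mine rather than the post's:

```latex
% Cumulative-chances argument (my notation; the numbers are illustrative).
% With N independent actors over T periods, each with per-period initiation
% probability p, the chance that some actor eventually triggers a galactic
% x-risk is
\[
  P(\text{catastrophe}) \;=\; 1 - (1 - p)^{NT} \;\longrightarrow\; 1
  \quad \text{as } NT \to \infty.
\]
% Even a tiny p is overwhelmed by scale: with p = 10^{-9}, N = 10^{6} actors,
% and T = 10^{4} periods, the expected number of initiations is pNT = 10, so
\[
  (1 - p)^{NT} \;\approx\; e^{-pNT} \;=\; e^{-10} \;\approx\; 5 \times 10^{-5}.
\]
```

This is why, absent an extremely widespread and effective governance system, the post treats an eventual galactic x-risk as close to inevitable.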
If you are planning on doing AI policy communications to DC policymakers, I recommend watching the full video of the Select Committee on the CCP hearing from this week. In his introductory comments, Ranking Member Representative Krishnamoorthi played a clip of Neo fighting an army of Agent Smiths, described it as misaligned AGI fighting humanity, and then announced he was working on a bill called "The AGI Safety Act" which would require AI to be aligned to human values. On the Republican side, Congressman Moran articulated the risks of AI automated R&D, and how dangerous it would be to let China achieve this capability. Additionally, 250 policymakers (half Republican, half Democrat) signed a letter saying they don't want the Federal government to ban state level AI regulation. The Overton window is rapidly shifting in DC, and I think people should re-evaluate what the [...] --- First published: June 27th, 2025 Source: https://forum.effectivealtruism.org/posts/RPYnR7c6ZmZKBoeLG/you-should-update-on-how-dc-is-talking-about-ai --- Narrated by TYPE III AUDIO.
TL;DR: You can create outsized value by introducing the right people at the right time in the right way. This post shares general principles and tips I've found useful. Once you become a super connector, it's also important to be a good steward of the unavoidable whisper networks that develop, and I include tips for that as well. Context: I unintentionally fell into a super connector role and wanted to share the lessons I figured out along the way. Feel free to check out my personal story[1] and credentials[2] if you are curious to learn more. Why Super Connectors Matter In communities like EA, where talented people often work in isolation on high-impact problems, a well-placed introduction or signpost can lead to tremendous impact down the road. Super connectors accelerate access to key information and relationships, which reduces wasted effort and helps triage scarce resources. [...] ---Outline:(00:44) Why Super Connectors Matter(01:21) General Principles(01:25) 1. Know Your North Star(02:03) 2. Understand People Deeply(02:26) 3. Never Waste People's Time(03:04) 4. Be Ruthlessly Selective(03:37) 5. Direct Towards Appropriate Engagement Channels(04:14) Practical Tips(05:38) A Note on Whisper Networks(08:47) Getting StartedThe original text contained 4 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/JvFrCTKPdHhejAE2q/a-practical-guide-for-aspiring-super-connectors --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Despite setbacks, battery cages are on the retreat My colleague Emma Buckland contributed (excellent) research to this piece. All opinions and errors are mine alone. It's deadline time. Over the last decade, many of the world's largest food companies — from McDonald's to Walmart — pledged to stop sourcing eggs from caged hens in at least their biggest markets. All in, over 2,700 companies globally have now pledged to go cage-free. Good things take time, and companies insisted they needed a lot of it to transition their egg supply chains — most set 2025 deadlines to do so. Over the years, companies reassured anxious advocates that their transitions were on track. But now, with just [...] --- First published: June 20th, 2025 Source: https://forum.effectivealtruism.org/posts/5DTrsKCSYhp9gnpAi/crunch-time-for-cage-free --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
I've been meaning to write about this for some time, and @titotal's recent post finally made me do it. [Screenshot from that post; thick red dramatic box emphasis mine.] I was going to post a comment in his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I'm a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and therefore can lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn't you do the dishes [...] --- First published: June 21st, 2025 Source: https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Earlier this year, we launched a request for proposals (RFP) from organizations that fundraise for highly cost-effective charities. The Livelihood Impact Fund supported the RFP, as did two donors from Meta Charity Funders. We're excited to share the results: $1,565,333 in grants to 11 organizations. We estimate a weighted average ROI of ~4.3x across the portfolio, which means we expect our grantees to raise more than $6 million in adjusted funding over the next 1-2 years. Who's receiving funding These organizations span different regions, donor audiences, and outreach strategies. Here's a quick overview: Charity Navigator (United States) — $200,000 Charity Navigator recently acquired Causeway, through which they now recommend charities with a greater emphasis on impact across a portfolio of cause areas. This grant supports Causeway's growth and refinement, with the aim of nudging donors toward curated higher-impact giving funds. Effectief Geven (Belgium) — $108,000 Newly incubated, with [...] ---Outline:(00:49) Who's receiving funding(04:32) Why promising applications sometimes didn't meet our bar(05:54) What we learned--- First published: June 16th, 2025 Source: https://forum.effectivealtruism.org/posts/prddJRsZdFjpm6yzs/open-philanthropy-reflecting-on-our-recent-effective-giving --- Narrated by TYPE III AUDIO.
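As a quick sanity check on the stated figures (my back-of-envelope arithmetic, not Open Philanthropy's model):

```python
grants = 1_565_333  # total grants in USD, from the post
roi = 4.3           # estimated weighted average ROI across the portfolio
print(f"${grants * roi:,.0f}")  # ~$6,730,932, consistent with "more than $6 million"
```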
This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand. Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month by month scenario of a near-future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only [...] ---Outline:(00:45) Introduction:(05:27) Part 1: Time horizons extension model(05:33) Overview of their forecast(10:23) The exponential curve(13:25) The superexponential curve(20:20) Conceptual reasons:(28:38) Intermediate speedups(36:00) Have AI 2027 been sending out a false graph?(41:50) Some skepticism about projection(46:13) Part 2: Benchmarks and gaps and beyond(46:19) The benchmark part of benchmark and gaps:(52:53) The time horizon part of the model(58:02) The gap model(01:00:58) What about Eli's recent update?(01:05:19) Six stories that fit the data(01:10:46) ConclusionThe original text contained 11 footnotes which were omitted from this narration. --- First published: June 19th, 2025 Source: https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models Linkpost URL:https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
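For readers without the critique at hand, the dispute between the "exponential curve" and "superexponential curve" outline items is easy to state in code. This is a toy illustration with made-up parameters, not AI 2027's actual fitted model:

```python
# Toy time-horizon extrapolation: exponential vs. superexponential growth.
h0 = 0.5   # initial task horizon in hours (made-up)
d0 = 6.0   # initial doubling time in months (made-up)
f = 0.9    # superexponential variant: each doubling is 10% faster (made-up)

def exponential(t):
    """Constant doubling time: the horizon doubles every d0 months."""
    return h0 * 2 ** (t / d0)

def superexponential(t):
    """Shrinking doubling time: growth accelerates, diverging in finite time."""
    h, elapsed, d = h0, 0.0, d0
    while elapsed + d <= t:
        elapsed, h, d = elapsed + d, h * 2, d * f
    return h * 2 ** ((t - elapsed) / d)

for t in (12, 24, 36):  # months out
    print(f"t={t}mo  exp={exponential(t):6.1f}h  superexp={superexponential(t):6.1f}h")
```

Because the doubling times in the superexponential variant sum to a finite d0/(1-f) = 60 months, that curve reaches any horizon in finite time, whereas the exponential one never does; which curve family you fit largely determines the headline timeline.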
Formosa: Fulcrum of the Future? An invasion of Taiwan is uncomfortably likely and potentially catastrophic. We should research better ways to avoid it. TLDR: I forecast that an invasion of Taiwan would increase the total anthropogenic risk of a catastrophe killing 10% or more of the population by 2100 by ~1.5 percentage points (nuclear risk by 0.9pp, AI + biorisk by 0.6pp). This would imply it constitutes a sizable share of the total catastrophic risk burden expected over the rest of this century by skilled and knowledgeable forecasters (8% of the total risk of 20% according to domain experts and 17% of the total risk of 9% according to superforecasters). I think this means that we should research ways to cost-effectively decrease the likelihood that China invades Taiwan. This could mean exploring the prospect of advocating that Taiwan increase its deterrence by investing in cheap but lethal weapons platforms [...] ---Outline:(00:13) Formosa: Fulcrum of the Future?(02:04) Part 0: Background(03:44) Part 1: Invasion -- uncomfortably possible.(08:33) Part 2: Why an invasion would be bad(10:27) 2.1 War and nuclear war(19:20) 2.2. The end of cooperation: AI and Bio-risk(22:44) 2.3 Appeasement or capitulation and the end of the liberal-led order: Value risk(26:04) Part 3: How to prevent a war(29:39) 3.1. Diplomacy: speaking softly(31:21) 3.2. Deterrence: carrying a big stick(34:16) Toy model of deterrence(37:58) Toy cost-effectiveness of deterrence(41:13) How to cost-effectively increase deterrence(43:30) Risks of a deterrence strategy(44:12) 3.3. What can be done?(44:42) How tractable is it to increase deterrence?(45:43) A theory of change for philanthropy increasing Taiwan's military deterrence(45:56) Flow chart showing policy influence between think tanks and Taiwan security outcomes.(48:55) 4. Conclusion and further work(50:53) With more time(52:00) Bonus thoughts(52:09) 1. Reminder: a catastrophe killing 10% or more of humanity is pretty unprecedented(53:06) 2. Where's the Effective Altruist think tank for preventing global conflict?(54:11) 3. Does forecasting risks based on scenarios change our view on the likelihood of catastrophe?The original text contained 16 footnotes which were omitted from this narration. --- First published: June 15th, 2025 Source: https://forum.effectivealtruism.org/posts/qvzcmzPcR5mDEhqkz/an-invasion-of-taiwan-is-uncomfortably-likely-potentially --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
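The share arithmetic in the TLDR checks out; here is my reproduction from the post's stated numbers:

```python
taiwan_pp = 1.5       # added catastrophe risk from an invasion, percentage points
experts_total = 20.0  # domain experts' total catastrophic risk estimate, %
supers_total = 9.0    # superforecasters' total estimate, %
print(f"{taiwan_pp / experts_total:.1%}")  # 7.5% of the experts' 20% total, the post's ~8%
print(f"{taiwan_pp / supers_total:.1%}")   # 16.7% of the superforecasters' 9%, the post's ~17%
```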
This is a transcript of my opening talk at EA Global: London 2025. In my talk, I challenge the misconception that EA is populated by “cold, uncaring, spreadsheet-obsessed robots” and explain how EA principles serve as tools for putting compassion into practice, translating our feelings about the world's problems into effective action. Key points: Most people involved in EA are here because of their feelings, not despite them. Many of us are driven by emotions like anger about neglected global health needs, sadness about animal suffering, or fear about AI risks. What distinguishes us as a community isn't that we don't feel; it's that we don't stop at feeling — we act. Two examples: When USAID cuts threatened critical health programs, GiveWell mobilized $24 million in emergency funding within weeks. People from the EA ecosystem spotted AI risks years ahead of the mainstream and pioneered funding for the field [...] --- First published: June 13th, 2025 Source: https://forum.effectivealtruism.org/posts/eT823dqNAhdRXBYvb/from-feelings-to-action-spreadsheets-as-an-act-of-compassion --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Crosspost from my blog. Content warning: this article will discuss extreme agony. This is deliberate; I think it's important to get a glimpse of the horror that fills the world and that you can do something about. I think this is one of my most important articles so I'd really appreciate it if you could share and restack it! The world is filled with extreme agony. We go through our daily life mostly ignoring its unfathomably shocking dreadfulness because if we didn't, we could barely focus on anything else. But those going through it cannot ignore it. Imagine that you were placed in a pot of water that was slowly brought to a boil until it boiled you to death. Take a moment to really imagine the scenario as fully as you can. Don't just acknowledge at an intellectual level that it would be bad—really seriously think about just [...] --- First published: June 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/rtZuWbsTA7GdsbpAM/the-horror-of-unfathomable-pain --- Narrated by TYPE III AUDIO.
This is a link post. I am deeply saddened to share that Gabrielle Young, a much-loved member of the EA NZ community and personal friend, died last month. This is an absolutely devastating loss, and our hearts go out to Gabby's friends and family, including her parents and her sister Brigette. While most of us knew her through EA, Gabby was an incredibly vibrant person with a diverse range of interests. She brought an infectious enthusiasm to everything she did, from software development to parkour and meditation. Music was also a huge part of Gabby's life. She performed with multiple groups— including ACAPOLLiNATiONS, the Medena ensemble and Gamelan— and enjoyed recording original music with friends. Though EA was just one part of Gabby's life, it was an important one. Like many of us, she cared deeply about alleviating suffering. And in her short life, Gabby had an amazing impact [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: June 6th, 2025 Source: https://forum.effectivealtruism.org/posts/5DvenF2RjFM7QQLtK/gabrielle-young-1995-2025 Linkpost URL:https://effectivealtruism.nz/blog/gabrielle-young --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Crosspost from my blog. I just got back from Effective Altruism Global London—a conference that brought together lots of different people trying to do good with their money and careers. It was an inspiring experience. When you write about factory farming, insect suffering, global poverty, and the torment of shrimp, it can, as I've mentioned before, feel like screaming into the void. When you try to explain why it's important that we don't torture insects by the trillions in insect farms, most people look at you like you've grown a third head (after the second head that they look at you like you've grown when you started talking about shrimp welfare). But at effective altruism conferences, people actually care. They're not indifferent to most of the world's suffering. They don't think I'm crazy! There are other people who think the suffering of animals matters—even the suffering of small [...] --- First published: June 9th, 2025 Source: https://forum.effectivealtruism.org/posts/rZKqrRQGesLctkz8d/the-unparalleled-awesomeness-of-effective-altruism --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Audio note: this article contains 127 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Confidence: Medium; the underlying data is patchy and relies on a good amount of guesswork, and the data work involved a fair amount of vibecoding. Intro: Tom Davidson has an excellent post explaining the compute bottleneck objection to the software-only intelligence explosion.[1] The rough idea is that AI research requires two inputs: cognitive labor and research compute. If these two inputs are gross complements, then even if there is recursive self-improvement in the amount of cognitive labor directed towards AI research, this process will fizzle as you get bottlenecked by the amount of research compute. The compute bottleneck objection to the software-only intelligence explosion crucially relies on compute and cognitive labor being gross complements; however, this fact is not [...] ---Outline:(00:35) Intro:(02:16) Model(02:19) Baseline CES in Compute(04:07) Conditions for a Software-Only Intelligence Explosion(07:39) Deriving the Estimation Equation(09:31) Alternative CES Formulation in Frontier Experiments(10:59) Estimation(11:02) Data(15:02) Trends(15:58) Estimation Results(18:52) ResultsThe original text contained 13 footnotes which were omitted from this narration. --- First published: June 1st, 2025 Source: https://forum.effectivealtruism.org/posts/xoX936hEvpxToeuLw/estimating-the-substitutability-between-compute-and --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
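Since the narration can't render the notation, here is a minimal statement of the setup behind the outline's "Baseline CES in Compute"; the notation is mine, but the functional form and the complements condition are the textbook ones the argument relies on:

```latex
% CES production of research progress F from research compute C and cognitive labor L
% (my notation; a sketch of the standard form, not the post's exact specification):
\[
  F(C, L) = \left[ \alpha C^{\rho} + (1 - \alpha) L^{\rho} \right]^{1/\rho},
  \qquad \sigma = \frac{1}{1 - \rho}.
\]
% "Gross complements" means an elasticity of substitution \sigma < 1, i.e. \rho < 0.
% In that regime, holding compute C fixed while cognitive labor L \to \infty gives
% L^{\rho} \to 0, so F \to \alpha^{1/\rho} C: output stays bounded by compute.
% That bound is the compute-bottleneck objection in one line.
```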
Crossposted from my blog. When I started this blog in high school, I did not imagine that I would cause The Daily Show to do an episode about shrimp, containing the following dialogue: Andres: I was working in investment banking. My wife was helping refugees, and I saw how meaningful her work was. And I decided to do the same. Ronny: Oh, so you're helping refugees? Andres: Well, not quite. I'm helping shrimp. (Would be a crazy rug pull if, in fact, this did not happen and the dialogue was just pulled out of thin air). But just a few years after my blog was born, some Daily Show producer came across it. They read my essay on shrimp and thought it would make a good daily show episode. Thus, the Daily Show shrimp episode was born. I especially love that they bring on an EA [...] --- First published: June 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/viSRgubpKDjQcatQi/the-importance-of-blasting-good-ideas-into-the-ether --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Mental illness (including struggles that don't meet a specific diagnosis) is a serious public health burden that affects a large proportion of people. This is true within EA as well as in the general population. In EA, as in any community, it's important for us to try to support those who are struggling. We sometimes see the theory that EA causes unusually bad mental health, but the evidence weakly points toward EA being good or neutral for the wellbeing of most people who engage with it. Most respondents say EA is neutral or good for their mental health There have been surveys done specifically about mental health and EA (2019, 2021, 2023), but these didn't aim to be representative of the EA population. The largest and most representative source is the EA Survey 2022, where most respondents indicated neutral or positive effects of their EA involvement on their [...] ---Outline:(00:41) Most respondents say EA is neutral or good for their mental health(02:51) Why might EA be good for wellbeing?(04:40) Why might it be bad?(05:25) Correlation and causation(05:59) Other fields also affect wellbeing(06:45) EA isn't one-size-fits-all(07:47) Resources--- First published: May 29th, 2025 Source: https://forum.effectivealtruism.org/posts/mfQoEaHeJzdH5u8Nc/positive-effects-of-ea-on-mental-health --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Around 1 month ago, I wrote a similar Forum post on the Easterlin Paradox. I decided to take it down because: 1) after useful comments, the method looked a little half-baked; 2) I got in touch with two academics – Profs. Caspar Kaiser and Andrew Oswald – and we are now working on a paper together using a related method. That blog post actually came to the opposite conclusion, but, as mentioned, I don't think the method was fully thought through. I'm a little more confident about this work. It essentially summarises my undergraduate dissertation. You can read a full version here. I'm hoping to publish this somewhere over the summer. So all feedback is welcome. TLDR Life satisfaction (LS) appears flat over time, despite massive economic growth — the “Easterlin Paradox.” Some argue that happiness is rising, but we're reporting it more conservatively — [...] ---Outline:(00:57) TLDR(02:11) 1. Background: A Happiness Paradox(04:02) 2. What is Rescaling?(06:23) 3. My Approach: Life Events would look smaller on stretched out rulers(08:10) 4. Results: Effects Are Shrinking(10:46) 5. How much might we be underestimating life satisfaction?(12:42) 6. Implications--- First published: May 26th, 2025 Source: https://forum.effectivealtruism.org/posts/wSySeNZ6C7hfDfBSx/rescaling-and-the-easterlin-paradox-2-0 --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
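A toy numerical illustration of the "stretched ruler" idea in the outline (my own construction, not the dissertation's actual estimation method):

```python
# If respondents stretch the 0-10 satisfaction ruler over time, a life event
# with a FIXED wellbeing impact shows a SHRINKING measured effect on reported LS.
true_effect = 1.0  # hypothetical fixed impact of some life event (e.g. widowhood)
stretch = {2000: 1.00, 2010: 1.15, 2020: 1.30}  # hypothetical stretch factors
for year, s in stretch.items():
    print(year, round(true_effect / s, 2))  # 1.0, then 0.87, then 0.77
```

Run in reverse, this is the post's move: if measured life-event effects are shrinking over time, the scale may be stretching, and a flat reported-LS trend can hide genuinely rising satisfaction.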
We've redesigned effectivealtruism.org to improve understanding and perception of effective altruism, and make it easier to take action. View the new site. I led the redesign and will be writing in the first person here, but many others contributed research, feedback, writing, editing, and development. I'd love to hear what you think; here is a feedback form. Redesign goals This redesign is part of CEA's broader efforts to improve how effective altruism is understood and perceived. I focused on goals aligned with CEA's branding and growth strategy: Improve understanding of what effective altruism is Make the core ideas easier to grasp by simplifying language, addressing common misconceptions, and showcasing more real-world examples of people and projects. Improve the perception of effective altruism I worked from a set of brand associations defined by the group working on the EA brand project[1]. These are words we want people to associate [...] ---Outline:(00:44) Redesign goals(02:09) Before and after(02:22) Landing page(03:50) Site navigation(04:24) New Take action page(05:03) Early results(05:40) Share your thoughtsThe original text contained 1 footnote which was omitted from this narration. --- First published: May 27th, 2025 Source: https://forum.effectivealtruism.org/posts/ZbQKtMMsDP6GnXuwr/revamped-effectivealtruism-org --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Summary While many people and organisations in the EA community can be great connections, don't assume that just because a person has been in the EA community for a long time, they'll be a good fit for you to work with or be friends with. Don't assume that just because a project or org has been around for a long time, it would be a good place for you to work. It may be a great opportunity, but it might not. Do some of the usual things you would do to check that this is a good interaction for you (e.g. talk to people who know or have worked with them before starting a collaboration, take time to get to know someone before placing large amounts of trust in them, and pay attention to any signals that this interaction might not be good for you). [...] ---Outline:(00:11) Summary(01:27) Choosing to work with another person(03:04) Conference attendance(03:38) Working with organisations(04:06) Personal Interactions with Community Members--- First published: May 22nd, 2025 Source: https://forum.effectivealtruism.org/posts/yNm58h8cvufPfBPLP/don-t-update-too-much-from-ea-community-involvement --- Narrated by TYPE III AUDIO.
Article 5 of the 1948 Universal Declaration of Human Rights states: "Obviously, no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment." OK, it doesn't actually start with "obviously," but I like to imagine the commissioners all murmuring to themselves “obviously” when this item was brought up. I'm not sure what the causal effect of Article 5 (or the 1984 UN Convention Against Torture) has been on reducing torture globally, though the physical integrity rights index (which “captures the extent to which people are free from government torture and political killings”) has increased from 0.48 in 1948 to 0.67 in 2024 (which is good). However, the index reached 0.67 already back in 2001, so at least according to this metric, we haven't made much progress in the past 25 years. Reducing government torture and killings seems to be low in tractability. Despite many [...] The original text contained 1 footnote which was omitted from this narration. --- First published: May 18th, 2025 Source: https://forum.effectivealtruism.org/posts/7FvDvMQypyua4kTL5/most-painful-condition-known-to-mankind-a-retrospective-of --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Introduction I have been writing posts critical of mainstream EA narratives about AI capabilities and timelines for many years now. Compared to the situation when I wrote my posts in 2018 or 2020, LLMs now dominate the discussion, and timelines have also shrunk enormously. The ‘mainstream view' within EA now appears to be that human-level AI will arrive by 2030, perhaps even as early as 2027. This view has been articulated by 80,000 Hours, on the forum (though see this excellent piece arguing against short timelines), and in the highly engaging science fiction scenario of AI 2027. While my piece is directed generally against all such short-horizon views, I will focus on responding to relevant portions of the article ‘Preparing for the Intelligence Explosion' by Will MacAskill and Fin Moorhouse. Rates of Growth The authors summarise their argument as follows: Currently, total global research effort [...] ---Outline:(00:11) Introduction(01:05) Rates of Growth(04:55) The Limitations of Benchmarks(09:26) Real-World Adoption(11:31) Conclusion--- First published: May 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/meNrhbgM3NwqAufwj/why-i-am-still-skeptical-about-agi-by-2030 --- Narrated by TYPE III AUDIO.
Are you looking for a project where you could substantially improve indoor air quality, with benefits both to general health and to reducing pandemic risk? I've written a bunch about air purifiers over the past few years, and it's frustrating how bad the commercial market is. The most glaring problem is the widespread use of HEPA filters. These are very effective filters that, unavoidably, offer significant resistance to air flow. HEPA is a great option for filtering air in a single pass, such as with an outdoor air intake or a biosafety cabinet, but it's the wrong set of tradeoffs for cleaning the air that's already in the room. A HEPA filter removes 99.97% of the particles in the air that passes through it, but that air is then mixed back in with the rest of the room air. If you can instead remove 99% of particles from 2% more air, or 90% from 15% more [...] --- First published: May 11th, 2025 Source: https://forum.effectivealtruism.org/posts/8BEqanpJFGhisETBi/better-air-purifiers --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
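The tradeoff can be made concrete with the post's own numbers, using the standard approximation that clean-air delivery scales with capture efficiency times airflow:

```python
# Effective clean-air delivery ~ capture efficiency x airflow (CADR-style estimate).
hepa = 0.9997 * 1.00  # HEPA: 99.97% capture at baseline airflow
alt1 = 0.99 * 1.02    # 99% capture with 2% more airflow
alt2 = 0.90 * 1.15    # 90% capture with 15% more airflow
print(f"{hepa:.4f}  {alt1:.4f}  {alt2:.4f}")  # 0.9997 < 1.0098 < 1.0350
```

Both lower-efficiency, higher-flow configurations deliver more clean air per minute than the HEPA unit, which is the post's core point about cleaning air already in the room.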
First published: May 16th, 2025 Source: https://forum.effectivealtruism.org/posts/LeCJqzdHZZB3uBhZg/the-daily-show-did-a-segment-on-ea-and-shrimp-welfare --- Narrated by TYPE III AUDIO.
Americans, we need your help to stop a dangerous AI bill from passing the Senate. What's going on? The House Energy & Commerce Committee included a provision in its reconciliation bill that would ban AI regulation by state and local governments for the next 10 years. Several states have led the way in AI regulation while Congress has dragged its heels. Stopping state governments from regulating AI might be okay, if we could trust Congress to meaningfully regulate it instead. But we can't. This provision would destroy state leadership on AI and pass the responsibility to a Congress that has shown little interest in seriously preventing AI danger. If this provision passes the Senate, we could see a DECADE of inaction on AI. This provision also violates the Byrd Rule, a Senate rule which is meant to prevent non-budget items from being included in the reconciliation bill. What can I do? Here are [...] --- First published: May 15th, 2025 Source: https://forum.effectivealtruism.org/posts/qWcabjNqxEBNQY3cv/urgent-americans-call-your-senators-and-tell-them-you-oppose --- Narrated by TYPE III AUDIO.
I am Jason Green-Lowe, the executive director of the Center for AI Policy (CAIP). Our mission is to directly convince Congress to pass strong AI safety legislation. As I explain in some detail in this post, I think our organization has been doing extremely important work, and that we've been doing well at it. Unfortunately, we have been unable to get funding from traditional donors to continue our operations. If we don't get more funding in the next 30 days, we will have to shut down, which will damage our relationships with Congress and make it harder for future advocates to get traction on AI governance. In this post, I explain what we've been doing, why I think it's valuable, and how your donations could help. This is the first post in what I expect will be a 3-part series. The first post focuses on CAIP's particular need [...] ---Outline:(01:33) OUR MISSION AND STRATEGY(02:59) Our Model Legislation(04:17) Direct Meetings with Congressional Staffers(05:20) Expert Panel Briefings(06:16) AI Policy Happy Hours(06:43) Op-Eds & Policy Papers(07:22) Grassroots & Grasstops Organizing(09:13) What's Unique About CAIP?(10:26) OUR ACCOMPLISHMENTS(10:29) Quantifiable Outputs(11:21) Changing the Media Narrative(12:23) Proof of Concept(13:44) Outcomes -- Congressional Engagement(18:29) Context(19:54) OUR PROPOSED POLICIES(19:58) Mandatory Audits for Frontier AI(21:23) Liability Reform(22:32) Hardware Monitoring(24:11) Emergency Powers(25:31) Further Details(25:41) RESPONSES TO COMMON POLICY OBJECTIONS(25:46) 1. Why not push for a ban or pause on superintelligence research?(30:17) 2. Why not support bills that have a better chance of passing this year, like funding for NIST or NAIRR?(32:30) 3. If Congress is so slow to act, why should anyone be working with Congress at all? Why not focus on promoting state laws or voluntary standards?(35:09) 4. Why would you push the US to unilaterally disarm? Don't we instead need a global treaty regulating AI (or subsidies for US developers) to avoid handing control of the future to China?(37:24) 5. Why haven't you accomplished your mission yet? If your organization is effective, shouldn't you have passed some of your legislation by now, or at least found some powerful Congressional sponsors for it?(40:56) OUR TEAM(41:53) Executive Director(44:04) Government Relations Team(45:12) Policy Team(46:08) Communications Team(47:29) Operations Team(48:11) Personnel Changes(48:49) OUR PLAN IF FUNDED(51:58) OUR FUNDING SITUATION(52:02) Our Expenses & Runway(53:02) No Good Way to Cut Costs(55:22) Our Revenue(57:02) Surprise Budget Deficit(59:00) The Bottom Line--- First published: May 7th, 2025 Source: https://forum.effectivealtruism.org/posts/9uZHnEkhXZjWzia7F/please-donate-to-caip-post-1-of-3-on-ai-governance --- Narrated by TYPE III AUDIO.
Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward. Executive Summary Performing prioritization work has been one of the main tasks, and arguably achievements, of EA. We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization. We ask how much of EA prioritization work falls in each of these categories: Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization. We then explore strengths and potential pitfalls of each level: Cause [...] ---Outline:(00:37) Executive Summary(03:09) Introduction: Why prioritize? Have we got it right?(05:18) The types of prioritization(06:54) A snapshot of EA(16:45) The Types of Prioritization Evaluated(16:57) Cause Prioritization(20:56) Within-Cause Prioritization(25:12) Cross-Cause Prioritization(30:07) Summary Table(30:53) What factors should push us towards one or another?(37:27) Possible Next Steps(39:44) Conclusion(40:58) Acknowledgements(41:55) Appendix: Strengths and Pitfalls of Each Type(42:07) Within-Cause Prioritization Strengths(42:12) Decision-Making Support(42:37) Comparability of Outputs(44:18) Disciplinarity Advantages(45:45) Responsiveness to Evidence(46:48) Movement Building(48:06) Within-Cause Prioritization Weaknesses and Potential Pitfalls(48:12) Responsiveness to Evidence(50:54) Decision-Making Support(52:45) Cross-Cause Prioritization Strengths(53:06) Decision-Making Support(54:49) Responsiveness to Evidence(56:08) Movement Building(56:22) Comparability of Outputs(56:45) Decision-Making Support(57:14) Cross-Cause Prioritization Weaknesses and Potential Pitfalls(57:20) Comparability of Outputs(58:01) Disciplinarity Advantages(58:41) Movement Building(59:09) Decision-Making Support(01:00:27) Cause Prioritization Strengths(01:00:32) Decision-Making Support(01:02:01) Responsiveness to Evidence(01:02:52) Movement Building(01:03:28) Cause Prioritization Weaknesses and Potential Pitfalls(01:04:28) Decision-Making Support(01:06:08) Responsiveness to Evidence The original text contained 23 footnotes which were omitted from this narration. --- First published: April 16th, 2025 Source: https://forum.effectivealtruism.org/posts/ZPdZv8sHuYndD8xhJ/doing-prioritization-better-2 --- Narrated by TYPE III AUDIO.
This is a Forum Team crosspost from Substack. Whither cause prioritization and connection with the good? There's a trend towards people who once identified as Effective Altruists now identifying solely as “people working on AI safety.”[1] For those in the loop, it feels like less of a trend and more of a tidal wave. There's an increasing sense that among the most prominent (formerly?) EA orgs and individuals, making AGI go well is functionally all that matters. For that end, so the trend goes, the ideas of Effective Altruism have exhausted their usefulness. They pointed us to the right problem – thanks; we'll take it from here. And taking it from here means building organizations, talent bases, and political alliances at a scale incommensurate with attachment to a niche ideology or moralizing language generally. I think this is a dangerous path to go down too hard, and my impression [...] ---Outline:(02:39) What I see(06:35) The threat means pose to ends(11:12) Losing something more The original text contained 2 footnotes which were omitted from this narration. --- First published: May 8th, 2025 Source: https://forum.effectivealtruism.org/posts/CKKAga4HfQyAranaC/the-soul-of-ea-is-in-trouble --- Narrated by TYPE III AUDIO.
I put on a small one-day conference. The cost per attendee was £50 (vs £1.2k for EAGs) and the cost per new connection was £11 (vs £130 for EAGs). intro EA North was a one-day event for the North of England. 35 people showed up on the day. In total, I spent £1765 (≈ $2.4k), including paying myself £20/h for 30 hours of work. This money will be reimbursed by EA UK[1]. The cost per attendee was £50 and the cost per new connection was £11. These are significantly lower than for EAG events, suggesting that we should be putting on more, smaller events. I am not arguing that EAGs should not exist at all. A local event will likely never let me connect in person with someone living on another continent. My main goal with this post is to encourage individuals to put on more events [...] ---Outline:(00:29) intro(01:38) why you can probably do this, too(02:26) what I spent the money on and a comparison with EAG London 2023(03:12) budget breakdown(04:48) cost per attendee per day(05:17) cost per connection(07:19) what I spent my time on(08:24) ideas for being even more cost-effective(09:27) recommendations to funders(09:49) reconsider how many resources you spend on small applications(10:44) consider providing funding upfront(11:17) thermal printers are cool and cheap(11:57) conclusion The original text contained 7 footnotes which were omitted from this narration. --- First published: May 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/m9sTFoAsE8dSnzoBt/untitled-draft-tr7p --- Narrated by TYPE III AUDIO.
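To make the cost arithmetic above explicit, here is a minimal sketch of the two headline metrics. Note one assumption: the summary never states the number of new connections directly, so the figure of roughly 160 is inferred from the quoted £11-per-connection cost.

```python
# Back-of-the-envelope event metrics, using the figures quoted in the summary above.
total_cost_gbp = 1765  # total spend, including organizer pay of £20/h for 30 hours
attendees = 35         # people who showed up on the day
connections = 160      # assumption: roughly implied by the stated £11/connection

cost_per_attendee = total_cost_gbp / attendees      # ≈ £50
cost_per_connection = total_cost_gbp / connections  # ≈ £11

# EAG comparison figures quoted in the post summary
print(f"Cost per attendee:   £{cost_per_attendee:.0f} (EAG: ~£1,200)")
print(f"Cost per connection: £{cost_per_connection:.0f} (EAG: ~£130)")
```

On these numbers, the small local event comes out roughly an order of magnitude cheaper on both metrics, which is the core of the post's argument for more small events.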
If you're interested in having a meaningful EA career but your experience doesn't match the types of jobs that the typical white-collar, intellectual EA community leans towards, then you're just like me. I have been earning to give as a nuclear power plant operator in Southern Maryland for the past few years, and I think it's a great opportunity for other EAs who want to make a difference but don't have a PhD in philosophy or public policy. Additionally, I have personal sway with Constellation Energy's Calvert Cliffs plant, so I can influence the hiring process to help any interested applicants. Here are a few reasons that I think this is such an ideal Earn to Give career: A high-income job in a low-cost-of-living area means you will be able to donate a significant portion of your paychecks and still live comfortably. [...] --- First published: April 17th, 2025 Source: https://forum.effectivealtruism.org/posts/LeuLyJEXcjAkeB965/e2g-help-available --- Narrated by TYPE III AUDIO.
Introduction In this post, I present what I believe to be an important yet underexplored argument that fundamentally challenges the promise of cultivated meat. In essence, there are compelling reasons to conclude that cultivated meat will not replace conventional meat, but will instead primarily compete with other alternative proteins that offer superior environmental and ethical benefits. Moreover, research into and promotion of cultivated meat may even result in a net negative impact. Beyond critique, I try to offer constructive recommendations for the EA movement. While I've kept this post concise, I'm more than willing to elaborate on any specific point upon request. From industry to academia: my cultivated meat journey I'm currently in my fourth year (and hopefully the final one!) of my PhD. My thesis examines the environmental and economic challenges associated with alternative proteins. I have three working papers on cultivated meat at various stages of development, though [...] ---Outline:(00:13) Introduction(00:55) From industry to academia: my cultivated meat journey(01:53) Motivations and epistemic status(03:39) Baseline assumptions for this discussion(03:44) Cultivated meat is environmentally better than conventional meat, but probably not as good as plant-based meat(06:29) Cultivated meat will remain quite expensive for several years, and hybrid plant-cell products will likely appear on the market first(08:58) Cultivated meat is ethically better than conventional meat(10:26) The main argument: cannibalization rather than conversion(16:46) Strategic drawbacks of the current focus(19:11) The evidence that would make me eat my words (and maybe cultivated meat)(20:37) What I'd like to see change in the Effective Altruism approach to cultivated meat(22:14) Answer from GFI Europe--- First published: April 30th, 2025 Source: https://forum.effectivealtruism.org/posts/TYhs8zehyybvMt5E4/cultivating-doubt-why-i-no-longer-believe-cultivated-meat-is --- Narrated by TYPE III AUDIO.
I'm ironically not a very prolific writer. I've preferred to stay behind the scenes here and leave the writing to my colleagues who have more of a knack for it. But a goodbye post is something I must write for myself. Perhaps I'm getting old and nostalgic, because what came out wound up being a wander down memory lane. I probably am getting old and nostalgic, but I also hope I've communicated something about my love for this community and my gratitude for the chance to serve you all. My story of the EA Forum Few things have lasted as long in my life as my work on the Forum. I've spent more time working on the EA Forum than I've spent living in any one place since my childhood home (ages 0-12). I've worked on the Forum longer than I've known my partner—whom I've known long enough to get married to. [...] ---Outline:(00:40) My story of the EA Forum(03:47) What's next The original text contained 1 footnote which was omitted from this narration. --- First published: May 1st, 2025 Source: https://forum.effectivealtruism.org/posts/4ckgvqohXTBy6hCap/untitled-draft-a4kx --- Narrated by TYPE III AUDIO.
I recently read a blog post that concluded with: When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved. Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or AI-supported totalitarianism; improving humanity's treatment of animals; fighting global poverty) or the undervalued less direct approach of earning money and donating it to enable others to work on pressing problems. Definitely spend time with your friends, family, and those you love. Don't work to the exclusion of everything else [...] --- First published: May 1st, 2025 Source: https://forum.effectivealtruism.org/posts/cF6eumerCq8hnb9YT/prioritizing-work --- Narrated by TYPE III AUDIO.
I wanted to share a small but important challenge I've encountered as a student engaging with Effective Altruism from a lower-income country (Nigeria), and invite thoughts or suggestions from the community. Recently, I tried to make a one-time donation to one of the EA-aligned charities listed on the Giving What We Can platform. However, I discovered that I could not donate an amount less than $5. While this might seem like a minor limit for many, for someone like me, a student without a steady income or job, $5 is a significant amount. To provide some context: According to Numbeo, the average monthly income of a Nigerian worker is around $130–$150, and students often rely on even less — sometimes just $20–$50 per month for all expenses. For many students here, having $5 "lying around" isn't common at all; it could represent a week's worth of meals [...] --- First published: April 28th, 2025 Source: https://forum.effectivealtruism.org/posts/YoN3sKfkr5ruW47Cg/reflections-on-the-usd5-minimum-donation-barrier-on-the --- Narrated by TYPE III AUDIO.
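To see why a flat $5 floor bites so differently across income levels, here is a rough sketch using the Numbeo-based figures quoted above. The US comparison income is my own assumption for contrast, not a figure from the post.

```python
# Rough illustration: a flat $5 donation minimum as a share of monthly budgets.
MINIMUM_USD = 5.0

budgets = {
    "Nigerian worker (quoted $130-$150/mo, midpoint)": 140,
    "Nigerian student (quoted $20-$50/mo, midpoint)": 35,
    "US earner (assumed ~$4,000/mo, for contrast)": 4000,  # assumption, not from the post
}

for label, monthly in budgets.items():
    share = MINIMUM_USD / monthly
    print(f"{label}: $5 is {share:.1%} of one month's budget")
```

On these numbers, the same $5 minimum is roughly 14% of a Nigerian student's monthly budget but about a tenth of a percent of the assumed US budget, which is the asymmetry the post is pointing at.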
This is a link post. Summary: The NAO will increase our sequencing significantly over the next few months, funded by a $3M grant from Open Philanthropy. This will allow us to scale our early-warning system to where we could flag many engineered pathogens early enough to mitigate their worst impacts, and also generate large amounts of data to develop, tune, and evaluate our detection systems. One of the biological threats the NAO is most concerned with is a 'stealth' pathogen, such as a virus with the profile of a faster-spreading HIV. This could cause a devastating pandemic, and early detection would be critical to mitigate the worst impacts. If such a pathogen were to spread, however, we wouldn't be able to monitor it with traditional approaches because we wouldn't know what to look for. Instead, we have invested in metagenomic sequencing for pathogen-agnostic detection. This doesn't require deciding what [...] --- First published: April 2nd, 2025 Source: https://forum.effectivealtruism.org/posts/AJ8bd2sz8tF7cxJff/scaling-our-pilot-early-warning-system Linkpost URL:https://naobservatory.org/blog/scaling-our-early-warning-system/ --- Narrated by TYPE III AUDIO.