Audio narrations of particularly interesting or impactful posts from the EA Forum, read by me (and hopefully many others soon).

No-one knows when AI will begin having transformative impacts upon the world. People aren't sure and shouldn't be sure: there just isn't enough evidence to pin it down. But we don't need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines? I'll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even more implications for how we act together — for our portfolio of work aimed towards this end.

AI Timelines

By AI timelines, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as artificial general intelligence (AGI), human-level AI, transformative AI, or superintelligence. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed, even an individual's own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have [...]

Outline:
(00:58) AI Timelines
(04:38) Short vs Long Timelines
(07:05) Broad Timelines
(17:55) Implications
(19:46) Hedging
(20:58) A Different World
(24:00) Longterm Actions
(28:33) Conclusions

First published: March 19th, 2026
Source: https://forum.effectivealtruism.org/posts/HCR2AE9it279ggiZT/broad-timelines

Narrated by TYPE III AUDIO.

Crossposted from my blog

I am very fortunate to have my job in many ways – I get to talk to, learn from, and give money to amazing people and nonprofits all around the world. I get to allocate a modest amount of resources to incredible organisations that I think are doing some of the best work to improve the world. I don't have to fundraise for my or my team's salaries anymore. However, there are some things I've learned since becoming a philanthropic grantmaker that were either surprising or affected me more strongly than I expected. I will outline some of these below. These are not meant to evoke feelings of “oh poor grantmakers who have access to money and influence” but rather “oh, I never considered things from that perspective”. Hopefully, they will also lead to more productive working relationships between funders and advocacy groups.

Here, I discuss:
- How challenging the trade-offs are that funders face
- The extremely poor feedback mechanisms that nonprofits have
- How people treat you differently once you have access to funding, and how that changes you
- The weight of saying no to good groups
- Some things that make me feel cynical

Trade-offs [...]

Outline:
(01:18) Trade-offs are hard and money is scarce
(06:35) Nonprofits have bad feedback mechanisms
(13:00) How people treat you differently (and how that changes you)
(14:52) It's hard to say no to people
(16:08) It's easy to become cynical
(19:38) Wrapping up

First published: March 11th, 2026
Source: https://forum.effectivealtruism.org/posts/umicYzuRsm6okFRKA/what-i-didn-t-expect-about-being-a-funder

Narrated by TYPE III AUDIO.

Epistemic status: A bit sad (I know that's not an epistemic status)

The best development Forum on the internet?

3 years ago a headline “FTX SBF blah blah blah” triggered my memory: “oh that's right, that effective altruism thing”. A few years earlier I had read “Doing Good Better” in our Northern Ugandan hut, and was excited by how the ideas matched my experience of seeing the BINGOs [1] on the ground here doing not-much-good at all. Soon after, my wife dragged me to Cambridge for a year and I joined an EA group. I was drawn into a beautiful crew of good, earnest people trying to do the best they could with their lives[2] - something I'd only seen before among a few people at church. I was most impressed by their veganism, practising what they preached. But after going back to Uganda I forgot about the whole EA thing. But 3 years later the FTX headlines and a Google search led me to the EA forum, which to my delight turned out to be the best place on the internet to discuss global health and development. My first foray was a not-very-good post [...]

Outline:
(00:16) The best development Forum on the internet?
(01:29) A steady decline
(02:59) Why?
(04:34) Is this fine?
(05:02) Is this less fine?
(06:29) How to Boost GHD discourse?

First published: March 15th, 2026
Source: https://forum.effectivealtruism.org/posts/4jbbjTTJ87baMrkY4/ghd-discussion-here-is-slowly-dying

Narrated by TYPE III AUDIO.

Many of us in this community are in the shocking position of thinking there's a real chance of humanity being wiped out over the next decade or two. Most of the time, we discuss that in rational terms. We talk about probabilities, and threat models, and interventions. We don't talk as much about the emotions we have about how radically our world might change and about the possibility of it ending entirely.

There are lots of reasons for not talking about those feelings. For starters, it's often hard to know how we even do feel about it. There isn't a straightforward societal script for how to feel about such radical world changes. People each have to figure it out for themselves, and feel very different ways. No one wants to sound extreme or crazy by talking about feeling very strongly about it. But they don't want to sound callous either. And opening up about your feelings and being met without understanding and similarity feels alienating, particularly when it's about something so important. But the biggest reason I don't talk about it is horror. I don't want to think about it, and I don't want to upset others. [...]

Outline:
(01:58) A range of feelings
(04:18) How I feel
(06:46) Different people are different

First published: March 7th, 2026
Source: https://forum.effectivealtruism.org/posts/ZDKkhoJoS7qgq2wqA/feelings-about-the-end-of-the-world

Narrated by TYPE III AUDIO.

I work on the capacity-building team on the Global Catastrophic Risks half of Coefficient Giving (formerly known as Open Philanthropy). Our remit is, roughly, to increase the amount of talent aiming to prevent unprecedented, globally catastrophic events. These days, we're mostly focused on AI, and we've funded a number of projects and grantees that readers of this post might be familiar with – including MATS, BlueDot Impact, Constellation, 80,000 Hours, CEA, the Curve, FAR.AI's events, university groups, and many other workshops and projects.

This post aims to make the case that, broadly, capacity-building work (including on AI risk) has been and continues to be extremely impactful, and to encourage people to consider pursuing relevant projects and careers. This post is written from my personal perspective; that said, my sense is that a number of CG staff and others in the AI safety space share my views. I include some quotes from them at the end of this post.

I'm writing this post partly out of a desire to correct what I perceive as an asymmetry in terms of how excited I and others at Coefficient Giving are about this kind of work vs. how much people in the EA and AI [...]

Outline:
(02:15) The case for capacity-building work
(04:11) Surveys
(06:49) Testimonials
(08:21) Neel Nanda (Senior Research Scientist at Google DeepMind)
(11:15) Max Nadeau (Associate Program Officer (Technical AI Safety) at Coefficient Giving)
(12:51) Rachel Weinberg (founder and former head of The Curve, currently at AI Futures Project)
(14:30) Marius Hobbhahn (CEO and founder of Apollo Research)
(16:38) Adam Kaufman (member of technical staff at Redwood Research)
(18:10) Gabriel Wu (member of technical staff (alignment) at OpenAI)
(19:37) Catherine Brewer (Senior Program Associate (AI Governance) at Coefficient Giving)
(21:12) Aric Floyd (video host for AI in Context)
(23:12) Ryan Kidd (Director of MATS)
(25:43) What tends to work?
(28:34) What's good to do now?
(29:31) Who should be doing this work?
(31:02) What would doing this work look like?
(31:13) Working at an organization doing good work in the space
(31:46) Constellation - CEO
(32:46) Kairos - various early generalist positions
(33:42) Starting or running your own capacity-building project or organization
(34:07) Working on a capacity-building project part-time
(34:30) Subscribing to Multiplier, a Substack with thoughts from our team (and other AI grantmaking staff at CG)
(34:39) Letting our team know
(35:03) Social proof
(35:25) Julian Hazell, AI governance and policy at Coefficient Giving
(36:19) Trevor Levin, AI governance and policy at Coefficient Giving
(36:51) Ryan Greenblatt, Chief Scientist at Redwood Research
(37:21) Buck Shlegeris, CEO of Redwood Research
(39:52) Appendix

First published: March 10th, 2026
Source: https://forum.effectivealtruism.org/posts/rAqKSSXankvys2Fzu/the-case-for-ai-safety-capacity-building-work

Narrated by TYPE III AUDIO.

For those not working in the space this probably isn't on your radar, but the animal advocacy movement just secured a huge win with Ahold Delhaize, convincing the fourth-largest supermarket company in the US to set the strongest cage-free policy of any large US retailer:

- A roadmap with benchmarks to fully eliminate caged egg cartons, expand cage-free offerings, and increase the percentage of cage-free sales.
- A pledge to annually report on its progress.
- At all 2,000+ locations, placing large, promotional shelf tags in front of cage-free cartons to differentiate cage-free and caged cartons for consumers.

This was a giant campaign. My understanding is that other companies were watching to see if this campaign would succeed or fail, to see if they would need to follow suit. In addition to the animals helped, this win will add pressure for competitors to do the same.

This was a coordinated effort among many groups, including: Center For Responsible Food Business; Animal Equality; International Council for Animal Welfare; The Humane League; Mercy For Animals; Compassion in World Farming; Coalition to Abolish the Fur Trade and Animal Activist Collective.

Animal Equality states that this will affect 5-7 million hens. I know a lot [...]

First published: March 4th, 2026
Source: https://forum.effectivealtruism.org/posts/2wePKArWWr4Xx6Zvf/some-good-news-ahold-delhaize-to-go-cage-free

Narrated by TYPE III AUDIO.

This is a link post.

Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open.

Sequence thinkers will be forgiven and rejoice

In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur who was the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts, though, there was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen. I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn't even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...]

Outline:
(02:37) The Retreat
(07:03) What Greatness Demands
(10:59) Effective Altruism is Good and Right

First published: March 7th, 2026
Source: https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again
Linkpost URL: https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great

Narrated by TYPE III AUDIO.

All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture: I expect some people will be upset about the move away from a “hard commitments”/”binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we've always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren't adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.) I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we're making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

Outline:
(05:32) How it started: the original goals of RSPs
(11:25) How it's going: the good and the bad
(11:51) A note on my general orientation toward this topic
(14:56) Goal 1: forcing functions for improved risk mitigations
(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard
(18:24) A mixed success/failure story: impact on information security
(20:42) ASL-4 and ASL-5 prep: the wrong incentives
(25:00) When forcing functions do and don't work well
(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)
(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)
(30:59) RSP v3's attempt to amplify the good and reduce the bad
(36:01) Do these benefits apply only to the most safety-oriented companies?
(37:40) A revised, but not overturned, vision for RSPs
(39:08) Q&A
(39:10) On the move away from implied unilateral commitments
(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?
(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?
(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?
(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?
(45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?
(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as what we will do if we invoke our escape clause?
(47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments?
(48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action.
(49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea?
(50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y.
(51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you?
(53:19) On other components of the new RSP
(53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others.
(55:29) Why is external review only required at an extreme capability level? Why not just require it now?
(58:06) The new commitments are mostly about Risk Reports and Roadmap - what stops companies from just making these really perfunctory?
(59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all?
(01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them?
(01:00:29) OK, but does that apply to other companies too? How will Roadmaps force other companies to get things done?
(01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations?
(01:02:06) What is the point of making commitments if you can revise them anytime?

First published: February 24th, 2026
Source: https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1

Narrated by TYPE III AUDIO.

TL;DR: Define your line that, if crossed, would make you consider this issue one of (if not the most) pressing issues, or at least pressing enough to warrant some of your time.

I want to start with a clarification that I learned while writing this post. In the United States, charities with 501(c)(3) tax-exempt status are permitted to discuss policy and engage in advocacy, but are prohibited from participating in partisan political campaigns. I have also read the EA Forum post Politics on the EA Forums and I believe this post is consistent with those norms. I am not advocating for or against any party, candidate, or electoral campaign. The question I want to raise is broader: whether creeping authoritarianism, anti-fascism, and authoritarian lock-in should be discussed more explicitly in EA spaces as subjects of analysis and concern. Although my own experience is local to Canada, the question is clearly relevant to the current situation in the United States and globally.

I'm asking this sincerely: why isn't anti-fascism a bigger topic at EAG events or on this forum? I was thinking about it while planning my trip to EAG San Francisco 2026. Should I be travelling to [...]

First published: March 1st, 2026
Source: https://forum.effectivealtruism.org/posts/dgnCabfdXy6jv4gDu/why-isn-t-anti-fascism-a-bigger-topic-at-eag-events-or-on

Narrated by TYPE III AUDIO.

Six years ago, as covid-19 was rapidly spreading through the US, my sister was working as a medical resident. One day she was handed an N95 and told to "guard it with her life", because there weren't any more coming.

N95s are made from meltblown polypropylene, produced from plastic pellets manufactured in a small number of chemical plants. Building more would take too long: we needed these plants producing all the pellets they could. Braskem America operated plants in Marcus Hook PA and Neal WV. If there were infections on-site, the whole operation would need to shut down, and the factories that turned their pellets into mask fabric would stall.

Companies everywhere were figuring out how to deal with this risk. The standard approach was staggering shifts, social distancing, temperature checks, and lots of handwashing. This reduced risk, but it was still significant: each shift change was an opportunity for someone to bring an infection from the community into the factory.

I don't know who had the idea, but someone said: what if we never left? About eighty people, across both plants, volunteered to move in. The plan was four weeks, twelve-hour [...]

First published: February 27th, 2026
Source: https://forum.effectivealtruism.org/posts/DBbgMgbPthABqn2No/here-s-to-the-polypropylene-makers

Narrated by TYPE III AUDIO.

I burned out badly a few years ago. I've since had several conversations with people in the EA community who are heading toward burnout themselves, and I noticed they were sometimes thinking about it in ways that I worry wouldn't help them. So I want to share what I think is actually going on, and what I wish someone had told me earlier.

A theory of burnout

There are good models of the mechanism of burnout already out there. Anna Salamon has written about willpower as a kind of internal currency: your conscious planner "earns" trust with your deeper, more visceral processes by choosing actions that nourish them, and goes "credibility-broke" when it spends that trust without replenishing it. Cate Hall describes something similar with her metaphor of the elephant and the rider: the rider promises the elephant rewards in exchange for effort, and burnout is what happens when those promises are broken too many times.

I usually explain this in terms of an energy imbalance: you're putting more into your work than you're getting back. Not just in terms of rest, but in terms of meaning, autonomy, connection, a sense of accomplishment, positive feedback. All the things that [...]

Outline:
(00:29) A theory of burnout
(02:23) Why EA culture builds effective cages
(06:11) What it actually felt like
(07:10) What I want to push back on
(08:31) What I'd encourage if you're in the grey zone
(10:50) What recovery actually looked like
(11:55) What I learned, and didn't learn

First published: February 27th, 2026
Source: https://forum.effectivealtruism.org/posts/2veCceQkhjovCfdbg/you-re-not-burning-out-because-you-re-tired

Narrated by TYPE III AUDIO.

In this piece, I discuss the sexual harassment I experienced at the Centre for Effective Altruism, the organisation's response, the outcomes of two independent legal reviews, and the final settlement. In the second part of this piece, I make cultural critiques of CEA and EA more broadly. Everything shared here reflects my own experience and perspective. I have anonymised the perpetrator, but I reference specific leadership roles where I believe this to be appropriate and necessary.

Trigger warnings: non-specific reference of rape and specific discussion of sexual harassment

TL;DR (One-page summary)

After I was raped (outside of and unrelated to work), a colleague at CEA wrote and circulated a document that included a sexualised description of my rape, speculation about my mental health, and commentary on my personal life, all without my consent. Several senior leaders, including the CEO and the now-former COO, received this document and took no safeguarding action for approximately nine months. I was never officially informed of its existence; I only learned about it informally through one of the recipients. After I filed a harassment report, the incident was independently investigated and determined to be harassment. Despite this, I was denied access to the document [...]

Outline:
(00:47) TL;DR (One-page summary)
(03:38) A more detailed account
(03:42) The sexual harassment incident
(06:42) The investigation
(10:38) The appeal and final report
(14:02) Public accountability versus internal processes
(16:59) The final settlement agreement
(18:54) I still think there is a lot of good in effective altruism
(20:33) Various cultural reflections
(20:50) 1) Sexual harassment is not the natural result of an open and high-trust culture, it is the natural result of misogyny.
(22:46) 2) The danger of EA's fixation on intent and why "he didn't mean it" is not good enough.
(24:11) 3) Cowardice and deference at CEA.
(26:30) 4) Women in EA are often encouraged to try and settle things informally or to trust their organisations -- another abuse of high-trust culture.
(28:45) 5) A harmful misunderstanding of trauma and mitigating vs. aggravating factors.
(30:27) 6) I have encountered so many EAs who believe it is easy for victims to speak publicly, or to share their experiences with other community members. And thus, if they aren't regularly hearing from victims, harassment must be rare.
(33:02) To any women who have faced something similar
(34:40) Acknowledgements

First published: February 27th, 2026
Source: https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/cea-s-response-to-sexual-harassment

Narrated by TYPE III AUDIO.

I'm Dom Jackman. I founded Escape the City in 2010 to help people leave corporate jobs and find work that matters. 16 years later, 500k+ professionals have used the platform - mostly people 5-15 years into careers at places like McKinsey, Deloitte, Google, the big banks - who feel a growing gap between what they do all day and what they actually care about.

I'm not from the EA community. I'm writing this because I think there's a real overlap between the people I work with and what the EA talent ecosystem actually needs. I want to test that before investing serious time in it.

What I've noticed

Reading through talent discussions on this forum, there's a consistent theme: the pipeline is strongest for early-career people. 80,000 Hours does great work for students and recent grads. Probably Good provides broad guidance. BlueDot, MATS, Talos build skills for specific cause areas. But mid-career professionals with real commercial experience keep coming up as underserved. The "Gaps and opportunities in the EA talent & recruiting landscape" post nails it: these people "don't have 'EA capital,' may be poorly networked and might feel alienated by current messaging." The post calls for "custom entry [...]

Outline:
(00:51) What I've noticed
(01:40) What I see every day
(02:28) What I'm thinking about building
(03:24) Honest questions
(04:39) Not looking for funding
(04:58) Artifacts

First published: February 11th, 2026
Source: https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with

Narrated by TYPE III AUDIO.

[I am a career advisor at 80,000 Hours. I've been thinking about something Will MacAskill said recently in an interview with my shrimp-friend Matt: "should people be more ambitious? I genuinely think yes. I think people systematically aren't ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we're facing—and the scale of those problems is very large indeed." This post is my reflection on these ideas.]

My last post argued that if you want to have a great career, your goal should not be to get a job. Instead, you should choose an important problem to work on, then “get good and be known.” Building skills will allow you to solve problems and reap the benefits. In the ~500 career advising calls I've hosted in the past year, the most common response I've heard has been: “Okay, how good? How well known? How many hours of practice will get me there?” Most people want to calibrate their ambitions so that the time and energy they invest feels worth it to them. I empathize with this, but when I'm honest – with myself for my own [...]

Outline:
(06:28) Jensen Huang is more ambitious than you
(12:58) Most extreme ambition is misplaced
(17:45) Okay, how can altruistic people aim higher and work harder?
(21:17) Ambition at the End of the Human Era
(24:03) Closing Caveats - Efficiency, Burnout, and Choosing What Matters

First published: February 12th, 2026
Source: https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re

Narrated by TYPE III AUDIO.

This is a link post.

I would like to thank David Thorstad for looking over this. If you spot a factual error in this article, please message me. The code used to generate the graphs in the article is available to view here.

Introduction

Say you are an organiser, tasked with achieving the best result on some metric, such as “trash picked up”, “GDP per capita”, or “lives saved by an effective charity”. There are several possible options of interventions you can take to try and achieve this. How do you choose between them? The obvious thing to do is look at each intervention in turn and make your best, unbiased estimate of how each intervention will perform on your metric, and pick the one that performs the best:

[Image taken from here]

Having done this ranking, you declare the top-ranking program to be the best intervention and invest in it, expecting that your top estimate will be the result that you get. This whole procedure is totally normal, and people all around the world, including people in the effective altruist community, do it all the time.

In actuality, this procedure is not correct. The optimiser's curse is [...]

Outline:
(00:26) Introduction
(02:17) The optimiser's curse explained simply
(04:42) Introducing a toy model
(08:45) Introducing speculative interventions
(12:15) A simple bayesian correction
(18:47) Obstacles to simple optimizer curse solutions
(22:08) How GiveWell has reacted to the optimiser's curse
(25:18) Conclusion

First published: February 11th, 2026
Source: https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the
Linkpost URL: https://open.substack.com/pub/titotal/p/the-best-cause-will-disappoint-you?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

Narrated by TYPE III AUDIO.
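The excerpt cuts off just as the optimiser's curse is introduced, so here is a minimal simulation of the effect (my own sketch, not the article's code; all parameter values are illustrative). Each intervention has a true value drawn from a shared prior, we only observe noisy but unbiased estimates, and we fund whichever intervention scores highest:

```python
# Minimal optimiser's curse simulation (my sketch; illustrative values).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_interventions = 10_000, 20
prior_mean, prior_sd, noise_sd = 1.0, 1.0, 1.0

true_values = rng.normal(prior_mean, prior_sd, (n_trials, n_interventions))
estimates = true_values + rng.normal(0.0, noise_sd, true_values.shape)

best = estimates.argmax(axis=1)                     # the intervention we pick
picked_estimate = estimates.max(axis=1)
picked_true = true_values[np.arange(n_trials), best]

print(f"mean estimate of the chosen intervention:   {picked_estimate.mean():.2f}")
print(f"mean true value of the chosen intervention: {picked_true.mean():.2f}")
# The winner's estimate systematically overstates its true value, even
# though every individual estimate was unbiased.

# A simple Bayesian correction: shrink the winning estimate toward the
# prior mean in proportion to how noisy the estimates are.
w = prior_sd**2 / (prior_sd**2 + noise_sd**2)
corrected = prior_mean + w * (picked_estimate - prior_mean)
print(f"mean corrected estimate:                    {corrected.mean():.2f}")
# The corrected estimate tracks the realised true value far better.
```

The final lines apply the kind of shrinkage that the "simple bayesian correction" section of the outline refers to.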

What is the highest form of love? According to the VascoBot Claude programmed for me: “Thanks for the great question, AgentMa

We are Melanie and Anthony, the two community builders at EA Barcelona. In this post, we share where the group stands today and reflect on key learnings from nearly three years of grant-funded community building. We hope these reflections are useful to other community builders, funders, and CEA, particularly around what it realistically takes to build and sustain EA communities over multiple years, from funding stability and feedback loops to the personal sustainability of professional community builders.

TL;DR

EA Barcelona was funded by the EA Infrastructure Fund between May 2023 and December 2025 ([...]

Note: opinions are all my own.

Following Jeff Kaufman's Front-Load Giving Because of Anthropic Donors and Jenn's Funding Conversation We Left Unfinished, I think there is a real likelihood that impactful causes will receive significantly more funding in the near future. As background on where this new funding could come from:

- Coefficient Giving announced:
- A recent NYT piece covered rumors of an Anthropic valuation at $350 billion. Many of Anthropic's cofounders and early employees have pledged to donate significant amounts of their equity, and it seems likely that an outsized share of these donations would go to effective causes.
- A handful of other sources have the potential to grow their giving:
  - Founders Pledge has secured $12.8 billion in pledged funding, and significantly scaled the amount it directs.[1]
  - The Gates Foundation has increased its giving following Bill Gates' announcement to spend down $200 billion by 2045.
  - Other aligned funders such as Longview, Macroscopic, the Flourishing Fund, the Navigation Fund, GiveWell, Project Resource Optimization, Schmidt Futures/Renaissance Philanthropy, and the Livelihood Impacts Fund have increased their staffing and dollars directed in recent years.
- The OpenAI Foundation controls a 26% equity stake in the for-profit OpenAI Group PB. This stake is currently valued at $130 billion [...]

Outline:
(02:39) Work
(03:50) Giving
(04:53) Conduct

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct

Narrated by TYPE III AUDIO.

The EA Grants Database is a new site that neatly aggregates grant data from major EA funders who publish individual or total grant information. It is intended to be easy to maintain long term, entirely piggybacking off of existing data that is likely to be maintained. The website data is updated by a script that can be run in seconds, and I anticipate doing this for the foreseeable future.

In creating the website, I tried to make things as clear and straightforward as possible. If your user experience is in any way impaired, I would appreciate hearing from you. I would also appreciate feedback on what features would actually be useful to people, although I am committed to avoiding bloat.

In a funding landscape that seems poised to grow, I hope this site can serve as a resource to help grantmakers, grantees, and other interested parties make decisions while also providing perspective on what has come before. My post on matching credits and this website are both outgrowths of my thinking on how we might best financially coordinate as EA grows and becomes more difficult to understand.[1] Relatedly, I am also interested in the sort of mechanisms that [...]

First published: February 8th, 2026
Source: https://forum.effectivealtruism.org/posts/rohYFGfiFjepLDnWC/ea-grants-database-a-new-website

Narrated by TYPE III AUDIO.

Cross-posted to LessWrong.

Summary

History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics:

- epistemic and moral certainty
- extreme tribalism dividing humanity into a sacred “us” and an evil “them”
- a willingness to use whatever means necessary, including brutal violence.

Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler. We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity.

Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding.

The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]

Outline:
(00:16) Summary
(05:19) What do we mean by ideological fanaticism?
(08:40) I. Dogmatic certainty: epistemic and moral lock-in
(10:02) II. Manichean tribalism: total devotion to us, total hatred for them
(12:42) III. Unconstrained violence: any means necessary
(14:33) Fanaticism as a multidimensional continuum
(16:09) Ideological fanaticism drove most of recent history's worst atrocities
(19:24) Death tolls don't capture all harm
(20:55) Intentional versus natural or accidental harm
(22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?
(25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types
(26:29) Authoritarianism as a risk factor
(27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy
(29:50) Terminal values may matter independently of political systems, especially with AGI
(31:02) Fanaticism's connection to malevolence (dark personality traits)
(34:22) The current influence of ideological fanaticism
(34:42) Historical perspective: it was much worse, but we are sliding back
(37:19) Estimating the global scale of ideological fanaticism
(43:57) State actors
(48:12) How much influence will ideological fanaticism have in the long-term future?
(48:57) Reasons for optimism: Why ideological fanaticism will likely lose
(49:45) A worse starting point and historical track record
(50:33) Fanatics' intolerance results in coalitional disadvantages
(51:53) The epistemic penalty of irrational dogmatism
(54:21) The marketplace of ideas and human preferences
(55:57) Reasons for pessimism: Why ideological fanatics may gain power
(56:04) The fragility of democratic leadership in AI
(56:37) Fanatical actors may grab power via coups or revolutions
(59:36) Fanatics have fewer moral constraints
(01:01:13) Fanatics prioritize destructive capabilities
(01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful
(01:03:01) Novel fanatical ideologies could emerge--or existing ones could mutate
(01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more
(01:07:15) A possible middle ground: Persistent multipolar worlds
(01:08:33) Why multipolar futures seem plausible
(01:10:00) Why multipolar worlds might persist indefinitely
(01:15:42) Ideological fanaticism increases existential and suffering risks
(01:17:09) Ideological fanaticism increases the risk of war and conflict
(01:17:44) Reasons for war and ideological fanaticism
(01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war
(01:27:00) These risks are both time-sensitive and timeless
(01:27:44) Fanatical retributivism may lead to astronomical suffering
(01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?
(01:33:53) Religious fanatical retributivism
(01:40:45) Secular fanatical retributivism
(01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment
(01:42:33) Ideological fanaticism threatens collective moral deliberation
(01:47:35) AI alignment may not solve the fanaticism problem either
(01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews
(01:55:44) Ideological fanaticism could worsen many other risks
(01:55:49) Differential intellectual regress
(01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires
(01:59:21) Apocalyptic terrorism
(02:00:05) S-risk-conducive propensities and reverse cooperative intelligence
(02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering
(02:03:00) Unknown unknowns and navigating exotic scenarios
(02:03:43) Interventions
(02:05:31) Societal or political interventions
(02:05:51) Safeguarding democracy
(02:06:40) Reducing political polarization
(02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles
(02:13:55) Growing the influence of liberal democracies
(02:15:54) Encouraging reform in illiberal countries
(02:16:51) Promoting international cooperation
(02:22:36) Artificial intelligence-related interventions
(02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics
(02:27:58) Making transformative AIs themselves less likely to be fanatical
(02:36:14) Using AI to improve epistemics and deliberation
(02:38:13) Fanaticism-resistant post-AGI governance
(02:39:51) Addressing deeper causes of ideological fanaticism
(02:41:26) Supplementary materials
(02:41:39) Acknowledgments
(02:42:22) References

First published: February 12th, 2026
Source: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism

Narrated by TYPE III AUDIO.

Context: The authors are a few EAs who currently work or have previously worked at the European Commission.

In this post, we:
- make the case that more people[1] aiming for a high impact career should consider working for the EU institutions[2], using the Importance, Tractability, Neglectedness framework; and
- briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years.

Why working at the EU can be extremely impactful

Importance

The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas.

Animal welfare[3]

The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year. The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date', following massive farmers' protests in Brussels. The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...]

Outline:
(00:43) Why working at the EU can be extremely impactful
(00:49) Importance
(05:30) Tractability
(07:22) Neglectedness
(09:00) Paths into the EU

First published: February 1st, 2026
Source: https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu

Narrated by TYPE III AUDIO.

We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA forum. This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1]. Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions.

If you haven't read the series yet, we've created a page where you can, and you can see the summaries of each post below:

- Are the Costs of AI Agents Also Rising Exponentially? Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.
- How Well Does RL Scale? I show that RL-training for LLMs scales much worse than inference or pre-training.
- Evidence that Recent AI Gains are Mostly from Inference-Scaling I show how [...]

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord

Narrated by TYPE III AUDIO.

This is a link post.

There is an extremely important question about the near-future of AI that almost no-one is asking. We've all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours. As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year.

But we are missing a key piece of information — the cost of performing this work. Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...]

Outline:
(13:02) Conclusions
(14:05) Appendix
(14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs.

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially
Linkpost URL: https://www.tobyord.com/writing/hourly-costs-for-ai-agents

Narrated by TYPE III AUDIO.
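To make the excerpt's implicit arithmetic concrete, here is a rough sketch of my own, built only on the two growth factors quoted above (the real cost trend also depends on the "massive efficiencies" the excerpt mentions): if the cost of a task scales roughly with model size times tokens generated, the naive cost growth over those 7 years compounds to eight orders of magnitude.

```python
# Rough arithmetic sketch (mine, not the post's): if cost per task scales
# roughly with (parameter count) x (tokens generated per task), the two
# growth factors quoted in the excerpt compound multiplicatively.
param_growth = 4_000     # growth in model size over ~7 years (from excerpt)
token_growth = 100_000   # growth in tokens generated per task (from excerpt)

naive_cost_growth = param_growth * token_growth
print(f"naive cost growth per task: {naive_cost_growth:.0e}x")  # 4e+08x
# Efficiency gains offset some of this, but it shows why extrapolating
# task length without tracking dollar cost can mislead.
```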

This is a link post.

In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we've seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve.

This new era involves scaling up two kinds of compute:
- the amount of compute used in RL post-training
- the amount of compute used every time the model answers a question

Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model. But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...]

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference
Linkpost URL: https://www.tobyord.com/writing/mostly-inference-scaling

Narrated by TYPE III AUDIO.

This is a link post.

The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling.

The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers.

However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...]

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models
Linkpost URL: https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning

Narrated by TYPE III AUDIO.
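One hedged way to see where a factor in that range could come from (a toy model of my own, not the article's calculation; the episode lengths are assumptions): pre-training provides a learning signal on every token processed, while RL post-training provides roughly one reward per completed episode.

```python
# Toy signal-density comparison (illustrative assumptions of mine, not
# the article's numbers): pre-training gets feedback on every token,
# while RL post-training gets roughly one reward per whole episode.
for tokens_per_episode in (1_000, 10_000, 1_000_000):
    pretraining_signals = tokens_per_episode  # one learning signal per token
    rl_signals = 1                            # one reward per episode
    ratio = pretraining_signals / rl_signals
    print(f"episode of {tokens_per_episode:>9,} tokens -> "
          f"~{ratio:,.0f}x fewer learning signals for RL")
# Assumed episode lengths of ~1k to ~1M tokens span the 1,000x to
# 1,000,000x range quoted in the excerpt.
```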

This is a link post.

The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab.

Rapid scaling of inference-at-deployment would:
- lower the importance of open-weight models (and of securing the weights of closed models),
- reduce the impact of the first human-level models,
- change the business model for frontier AI,
- reduce the need for power-intense data centres, and
- derail the current paradigm of AI governance via training compute thresholds.

Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.

The end of an era — for both training and governance

The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...]

Outline:
(01:06) The end of an era -- for both training and governance
(05:24) Scaling inference-at-deployment
(06:42) Reducing the number of simultaneously served copies of each new model
(08:45) Reducing the value of securing model weights
(09:30) Reducing the benefits and risks of open-weight models
(10:05) Unequal performance for different tasks and for different users
(12:08) Changing the business model and industry structure
(12:50) Reducing the need for monolithic data centres
(17:16) Scaling inference-during-training
(28:07) Conclusions
(30:17) Appendix. Comparing the costs of scaling pre-training vs inference-at-deployment

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance
Linkpost URL: https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance

Narrated by TYPE III AUDIO.

This is a link post.

Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work.

METR's results on the length of tasks agents can reliably complete

A recent paper by Kwa et al. (2025) from the research organisation METR has found an exponential trend in the duration of the tasks that frontier AI agents can [...]

Outline:
(05:33) Explaining these results via a constant hazard rate
(14:54) Upshots of the constant hazard rate model
(18:47) Further work
(19:25) References

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3
Linkpost URL: https://www.tobyord.com/writing/half-life

Narrated by TYPE III AUDIO.
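The model the summary describes fits in a few lines of code (a minimal sketch of my own; the half-life value is made up for illustration rather than fitted from Kwa et al.'s data). A constant failure rate per human-minute gives S(t) = exp(-hazard * t) = 2^(-t / half_life):

```python
# Minimal sketch of the constant hazard rate model (half-life chosen
# for illustration, not fitted from the paper's data).
import math

half_life_min = 60.0                   # assumed agent half-life, in minutes
hazard = math.log(2) / half_life_min   # constant failure rate per minute

def success_rate(task_minutes: float) -> float:
    """P(agent completes a task a human would take `task_minutes` to do),
    assuming a constant per-minute hazard of failure."""
    return math.exp(-hazard * task_minutes)

for t in (15, 60, 240):
    print(f"{t:>3}-minute task: {success_rate(t):.0%} success")
# 15 min: 84%; 60 min: 50% (the half-life); 240 min: 6%.
```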

This is a link post.

Improving model performance by scaling up inference compute is the next big thing in frontier AI. But the charts being used to trumpet this new paradigm can be misleading. While they initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3. I explore how to interpret these new charts and what evidence for strong scaling and progress would look like.

From scaling training to scaling inference

The dominant trend in frontier AI over the last few years has been the rapid scale-up of training — using more and more compute to produce smarter and smarter models. Since GPT-4, this kind of scaling has run into challenges, so we haven't yet seen models much larger than GPT-4. But we have seen a recent shift towards scaling up the compute used during deployment (aka ‘test-time compute' or ‘inference compute'), with more inference compute producing smarter models. You could think of this as a change in strategy from improving the quality of your employees' work via giving them more years of training in which to acquire [...]

First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart
Linkpost URL: https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart

Narrated by TYPE III AUDIO.
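A toy illustration of the general point about log-x charts (made-up numbers of mine, not the article's data): if accuracy grows only logarithmically with inference compute, the curve looks like steady linear progress on a log-scaled x-axis, even though each equal visual step costs ten times more compute.

```python
# Toy illustration of the log-x effect (made-up curve, not the
# article's data): logarithmic returns look like steady, linear
# progress when the x-axis is log-scaled.
import math

for k in range(6):
    compute = 10 ** k                         # 1x, 10x, ..., 100,000x compute
    accuracy = 30 + 10 * math.log10(compute)  # assumed logarithmic returns
    print(f"{compute:>8,}x compute -> {accuracy:.0f}% accuracy")
# Each identical-looking 10-point step costs 10x more compute: a
# straight line on a log-x chart is what brute force looks like.
```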

This is a link post.

AI capabilities have improved remarkably quickly, fuelled by the explosive scale-up of resources being used to train the leading models. But if you examine the scaling laws that inspired this rush, they actually show extremely poor returns to scale. What's going on?

AI Scaling is Shockingly Impressive

The era of LLMs has seen remarkable improvements in AI capabilities over a very short time. This is often attributed to the AI scaling laws — statistical relationships which govern how AI capabilities improve with more parameters, compute, or data. Indeed AI thought-leaders such as Ilya Sutskever and Dario Amodei have said that the discovery of these laws led them to the current paradigm of rapid AI progress via a dizzying increase in the size of frontier systems.

Before the 2020s, most AI researchers were looking for architectural changes to push the frontiers of AI forwards. The idea that scale alone was sufficient to provide the entire range of faculties involved in intelligent thought was unfashionable and seen as simplistic. A key reason it worked was the tremendous versatility of text. As Turing had noted more than 60 years earlier, almost any challenge that one could pose to [...]

First published: January 30th, 2026
Source: https://forum.effectivealtruism.org/posts/742xJNTqer2Dt9Cxx/the-scaling-paradox
Linkpost URL: https://www.tobyord.com/writing/the-scaling-paradox

Narrated by TYPE III AUDIO.
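As a concrete illustration of "extremely poor returns to scale" (the exponent here is merely in the ballpark reported by LLM scaling-law papers, not a figure from this article): if loss falls as a small negative power of compute, huge compute multipliers buy only modest loss reductions.

```python
# Illustration of poor returns to scale (an exponent of ~0.05 is in the
# ballpark reported by LLM scaling-law papers; it is used here purely
# for illustration): loss ~ compute**(-alpha).
alpha = 0.05

def loss_ratio(compute_multiplier: float) -> float:
    """Fraction of the previous loss remaining after scaling compute."""
    return compute_multiplier ** -alpha

for mult in (10, 1_000, 1_000_000):
    print(f"{mult:>9,}x compute -> loss falls to "
          f"{loss_ratio(mult):.0%} of its previous value")
# 10x -> ~89%; 1,000x -> ~71%; 1,000,000x -> ~50%. The measured returns
# look terrible, yet capabilities improved dramatically: the paradox.
```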

There is an insane amount of money being thrown around by international organizations and agreements. Nobody with any kind of power over these agreements is asking basic EA questions like: "What are the problems we're trying to solve?", "What are the most neglected aspects of those problems?", and "What is the most cost-effective way to address those neglected areas?"

For someone coming from an EA background, reading through plans for $200-700 billion in annual funding commitments that focus on unimaginative and ineffective interventions makes you want to tear your hair out. So much good could be done with that money.

EA focuses a lot on private philanthropy, earning-to-give (though less so post-SBF), and the usual pots of money. But why don't we have delegations who are knowledgeable in international diplomacy going to COPs and advocating for more investment in lab-grown meat, alternative proteins, or lithium recycling? It seems like there would be insane alpha in such a strategy.

An example: The Global Biodiversity Framework

The Kunming-Montreal Global Biodiversity Framework (GBF) was adopted in 2022 to halt biodiversity loss. It has 23 targets, commitments of $200 billion annually by 2030 and $700 billion by 2050, and near-universal adoption from [...]

Outline:
(01:12) An example: The Global Biodiversity Framework
(02:13) What Is That Money Actually Being Spent On?
(03:02) The Elephant in the Room Literally Nobody is Talking About: Beef
(04:21) The Absolutely Insane Funding Gap
(05:26) The Leverage Point We're Ignoring
(06:47) What Would EA Engagement Look Like?

First published: January 20th, 2026
Source: https://forum.effectivealtruism.org/posts/Peaq4HNhn8agsZY3z/why-isn-t-ea-at-the-table-when-usd121-billion-gets-allocated

Narrated by TYPE III AUDIO.

EA thinking is thinking on the margin. When EAs prioritise causes, they are prioritising causes given that they control only their one career, or, sometimes, given that they have some influence over a community of a few thousand people and the distribution of some millions or billions of dollars. Some critiques of EA act as if statements about cause prioritisation are absolute rather than relative: that EAs are saying that literally everyone should be working on AI Safety, or, on the flipside, that no one should be working on [insert a problem which is pressing, but not among the most urgent to commit the next million dollars to]. In conversations that sound like this, I've often turned to the idea that, if EAs controlled all the resources in the world, career advisors at the hypothetical world government's version of 80,000 Hours would be advising some people to be postal workers. Since the EA world government would long ago have filled the current areas of direct EA work, postal work could be the single most impactful thing a person could do with their skillset, given the comparative neglectedness of work in the [...] --- First published: January 16th, 2026 Source: https://forum.effectivealtruism.org/posts/MZ5g33fXuxd6bSgJW/if-ea-ruled-the-world-career-advisors-would-tell-some-people --- Narrated by TYPE III AUDIO.

I've started a substack, so a few more people might encounter my spicy takes - I'll still mostly be here. USAID is gone. Direct country aid to low income countries is down 25%. So now's a great time to share five ways I think development charity can be done better in 2026. To state the obvious... none of these ideas will be the best approach all of the time; there's plenty of grey area and nuance. I start a little playful, then get a little more serious. 1. Ditch the Cars Close your eyes and picture the first thing that comes into your head when I say “NGO”. It might be… a shiny white Landcruiser. [Image: the view from the front window of my hut] But owning cars doesn't usually make economic sense in low income countries. The ‘real' market makes this clear. Businesses rarely buy cars; instead they use public transport or motorbikes. When companies do own cars, it's more Corolla than Landcruiser as well. Cars are often more expensive dollar-for-dollar than in richer countries, fuel costs are high, and many NGOs hire drivers, all while public transport is dirt cheap. To move 100km in Uganda [...] ---Outline:(00:43) 1. Ditch the Cars(02:49) 2. Fund Solutions not Projects(07:07) 3. Fund cost effective solutions(08:06) 4. Fund Bimodal - Test and Scale(11:59) 5. Pay workers less --- First published: January 19th, 2026 Source: https://forum.effectivealtruism.org/posts/LvE3s6kCJk4Jck2ww/5-ways-to-better-charity-work-in-2026 --- Narrated by TYPE III AUDIO.

Summary In January 2025, FarmKind ran a provocative media campaign which used controversial messaging and materials to promote ‘offsetting' as an option for individuals who are concerned about factory farming but are currently unwilling or unable to change their diet. The campaign raised an estimated $16,700–$59,300 (explained in our Results section below), generated a number of media ‘hits', including TV coverage, and created some debate that many advocates have told us they found productive. However, we made mistakes in its execution and generated unproductive controversy within the EA and animal advocacy movements. This post aims to explain our theory of change, what happened, what we got wrong, and what we learned. We still believe mobilizing the meat-eating majority to take action for farmed animals requires meeting them where they're at, which sometimes means provocative framing that distinguishes us from vegan advocacy – though we understand many in the movement disagree. However, we regret specific execution failures, particularly our insufficient stakeholder consultation, which risked sparking infighting within the animal movement. Context FarmKind is a donation platform that aims to bring more money into the movement against factory farming. People donate through our platform directly to six highly effective farmed [...] ---Outline:(00:12) Summary(01:23) Context(02:06) The goals of our campaign(02:51) Primary goals(03:16) Secondary goals(04:06) How we envisaged it working(05:24) Launching the campaign(07:18) Coordination with Veganuary(07:22) Did you tell Veganuary about the campaign in advance?(08:21) Did Veganuary object to the campaign?(09:07) Is there bad blood between you and Veganuary?(09:50) Does Veganuary endorse this campaign?(10:13) What we got wrong(10:16) 1) Underestimating the risk of movement infighting(12:29) 2) Insufficient stakeholder consultation(13:11) 3) Internal coordination failures(13:52) How we responded to concerns(17:20) Results(20:12) FAQs(20:15) Are you anti-vegan?(22:35) Aren't you concerned about dissuading people from being vegan?(26:07) Have you measured whether you're dissuading people from being vegan or supporting animal advocacy?(29:03) Why not just do something much more nuanced?(29:45) Why did you pitch to tabloids and right-wing outlets?(30:50) Conclusion --- First published: January 23rd, 2026 Source: https://forum.effectivealtruism.org/posts/c2buSr3oatKQJZi6F/reflections-on-farmkind-s-january-media-campaign --- Narrated by TYPE III AUDIO.

Summary Our new book, All the Lives You Can Change: Effective Altruism for Christians, will be published on April 28th, 2026. The book introduces effective altruism–style thinking to a Christian audience, framing effectiveness, cause prioritization, and evidence-based action as expressions of loving God and loving one's neighbor (Matt. 22:37–39). Authored by @dominicroser, @DavidZhang and me (JD). You can best support this project by pre-ordering a copy or getting the free intro here. Praise for All the Lives You Can Change “Effective altruism asks us to extend our empathy beyond our immediate circle to include distant strangers and future generations. All the Lives You Can Change argues powerfully that this ‘radical empathy' is at the very core of the Christian faith. Inspiring, intellectually rigorous, and deeply practical, this is an essential guide for Christians who want to ensure their compassion translates into the greatest possible impact for the world's most vulnerable people. It's a beautiful, moving book.” — @William_MacAskill, author of What We Owe the Future and Doing Good Better “I couldn't put this book down. It manages to be both inspiring and practical. It blends cutting-edge research with careful theological discussion. . . . Essential reading for Christians who are [...] ---Outline:(00:12) Summary(00:49) Praise for All the Lives You Can Change(02:50) Longer Summary(04:30) About the book(04:55) Table of Contents (Overview)(06:31) Why This Might Be Relevant to the (Secular) EA Community --- First published: January 14th, 2026 Source: https://forum.effectivealtruism.org/posts/E7RqRc3fLNm2syzAh/announcing-all-the-lives-you-can-change --- Narrated by TYPE III AUDIO.

TL;DR When surveyed, the EA community and leaders think ~18-24% of resources should go towards animal advocacy. The actual figure is about 7%. We as the EA ecosystem are putting fewer resources (money and time) into animal advocacy than the movement, when surveyed, thinks we should. This disparity could be because of loss of message fidelity, the difficulty of pitching the cause area to donors, or the role of large funders, but I'm honestly not too sure. My job at Senterra Funders involves making the case to EA/EA-adjacent prospective donors that they can do a tonne of good by donating to animal advocacy charities. As part of this work I've noticed a certain level of inconsistency in the EA ecosystem: I encounter a lot more people who want the animal advocacy movement to 'win' than people working in or donating to the space. The numbers It turns out this intuition is backed up by survey data. Sources (see Appendix for extra details): the Meta Coordination Forum (MCF; 2024) / Talent Need Survey on ideal allocation of financial resources; EA Community survey data from 2023 on jobs by cause area, which I obtained in private correspondence with David Moss; historical EA [...] ---Outline:(01:07) The numbers(02:37) Accounting for the disparity(05:04) Appendix 1. Data Sources --- First published: January 13th, 2026 Source: https://forum.effectivealtruism.org/posts/FxZdQJXs45fTFnMEe/is-ea-underfunding-animal-advocacy-according-to-our-own --- Narrated by TYPE III AUDIO.
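The headline disparity is easy to state numerically. The sketch below simply restates the post's survey figures (ideal ~18-24% vs actual ~7%); no other data is assumed.

```python
# Restating the post's figures: surveyed ideal allocation to animal
# advocacy vs the estimated actual share of EA resources.
ideal_low, ideal_high = 0.18, 0.24  # from MCF / community surveys
actual = 0.07                        # estimated actual share

print(f"Underweight factor: {ideal_low / actual:.1f}x to {ideal_high / actual:.1f}x")
# -> animal advocacy receives roughly a third of the share the
#    community says it should, i.e. a 2.6x-3.4x shortfall
```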

Thank you At EAGx Amsterdam, I shared most of this as a talk. I was afraid I'd run out of time, so I decided to do things backwards and start with the thank you. I did not want to miss the most important thing. Since I might lose you halfway through reading this long and personal piece, I decided to keep this order. The EA community creates a space that makes it easier to donate and to live my values—and to be okay with living in this world. It normalizes caring about effectiveness and spreadsheets, provides frameworks and research and feedback. This community makes me feel less alone in trying to navigate the absurdity and burden of existence. My Story, Not Yours I am assuming that anything I do is determined by luck and circumstance, nature and nurture. Therefore, one way to explain why I donate is to show you some of those things. This is personal; my story might not be applicable or relatable to you. I'm not sure there's anything practical you can learn from it. But maybe my experience raises questions that help you in your giving journey. First I'll tell you about my life [...] ---Outline:(00:10) Thank you(00:53) My Story, Not Yours(01:39) My life(08:01) I donate because it helps others(08:17) It's my responsibility to do something(09:23) I should do good responsibly(10:20) I donate because it helps me(10:30) Retail Therapy → Donation Therapy → Effective Giving(12:39) Convenience of effective giving(14:06) This is how I can live the lives I won't get to live(15:16) How I Donate --- First published: December 31st, 2025 Source: https://forum.effectivealtruism.org/posts/bRMQB85KXz6uzqXkf/why-i-donate-a-personal-story --- Narrated by TYPE III AUDIO.

An all too common reason I've seen to “quit EA” is disliking aspects of the community. Maybe “they're” too focused on the “wrong cause area” or are skeptical of yours. Maybe “they” annoy you. Maybe “they” publicly attacked you. I'm putting quotation marks around “they” to highlight an important thing: EA is composed of individuals. Some EAs may annoy you / focus on the "wrong cause area" / publicly attack you / [insert your reason here]. But you don't have to hang out with them! Imagine you decided you didn't like science because you didn't like some scientists. Or even most scientists! That might affect how often you go to science conferences. But that shouldn't affect your appreciation of science itself. Yes, science is a community, but it's also a practice, a goal, a method, an idea, results. EA is too. Not to mention - you don't have to interact with most scientists! Or EAs! You can just be picky. I only interact with “most EAs” when I post on the EA Forum or the EA subreddit. Otherwise I've found my favorite EAs and hang out with them regularly. I treat them as [...] --- First published: December 28th, 2025 Source: https://forum.effectivealtruism.org/posts/QPtimJrGBRyqiYzip/don-t-stop-being-an-ea-because-you-dislike-eas-you-don-t --- Narrated by TYPE III AUDIO.

Note: I used LLMs to draft different parts of this. I've checked almost everything, but there might be some mistakes remaining. Apologies for posting this on Christmas Eve. I wanted to get this out the door before the end of the year. Questions welcome, and if it's easy to pull metrics to answer them, I will. Summary 80,000 Hours launched a video program in 2025 focused on longform, cinematic, personality-driven content about AI risks. Our first two longform releases were: We're Not Ready for Superintelligence (the "AI 2027" video), with 8.9M views and ~1.4M watch hours, and If you remember one AI disaster, make it this one (the "MechaHitler" video), with 2.7M views and ~419K watch hours. Both videos significantly outperformed our expectations (we'd anticipated 15-50K views for the first). The cost per engagement hour ($0.11 and $0.39 respectively, including staff time) compares favorably to other 80,000 Hours programs. This post covers: what we spent, what we got, why we think it worked, and what we'd do differently. The numbers Costs: direct costs were ~$50K for AI 2027 and ~$64K for MechaHitler; staff hours were ~450 hrs for AI 2027 and ~450 hrs for MechaHitler (note: I'm assuming it's about the same as for AI 2027; I didn't re-ask people how much time they spent). Total cost (making some assumptions about how we should incorporate staff [...] ---Outline:(00:34) Summary(01:33) The numbers(01:36) Costs(02:16) Timing(02:40) Results(03:46) How valuable is a video watch hour?(04:24) Qualitative Feedback(04:28) AI 2027(05:51) MechaHitler(06:12) YouTube commenters like:(06:52) What the comments don't like:(07:17) Qualitative Analysis(07:21) Why we think AI 2027 did well(09:56) Why MechaHitler did less well (but still well)(10:50) Lessons Learned(10:54) Overall what we think matters(11:25) Our guess at what's less important (though we're certainly unsure, maybe if we nailed these, we'd get more success)(12:24) How our production works(12:43) The timeline(13:32) Ideation(14:06) Scripting(14:57) Shooting(15:31) Reshoots / Voiceover(15:45) Editing(16:06) Launch(17:00) What we're still figuring out(17:36) Closing thoughts --- First published: December 24th, 2025 Source: https://forum.effectivealtruism.org/posts/RCRaBYSqBaMzHzTjF/untitled-retrospective-and-learnings-from-ai-in-context-s --- Narrated by TYPE III AUDIO.
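The cost-per-engagement-hour figures can be sanity-checked with simple arithmetic. In the sketch below, the direct costs, staff hours, and watch hours come from the post; the fully loaded staff rate is my assumption, chosen only to make the example concrete.

```python
# Back-of-envelope check on cost per engagement hour.  Direct costs,
# staff hours, and watch hours are from the post; the fully loaded
# staff rate is an illustrative assumption.
STAFF_RATE = 225  # assumed $/hr including salary and overhead

videos = {
    "AI 2027":     {"direct": 50_000, "staff_hrs": 450, "watch_hrs": 1_400_000},
    "MechaHitler": {"direct": 64_000, "staff_hrs": 450, "watch_hrs": 419_000},
}

for name, v in videos.items():
    total = v["direct"] + v["staff_hrs"] * STAFF_RATE
    print(f"{name}: ${total / v['watch_hrs']:.2f} per watch hour")
# -> roughly $0.11 and $0.39, matching the post's headline figures
```

A rate around $225/hr happens to reproduce both headline figures, which at least suggests the post's totals are internally consistent.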

I really enjoyed reading the "why I donate" posts in the past week, so much so that I felt compelled to add my reflections, in case someone finds my reasons as interesting as I found theirs. 1. My money needs to be spent on something, might as well spend it on the most efficient things The core reason I give is something that I think is under-represented in the other posts: the money I have and earn will need to be spent on something, and it feels extremely inefficient and irrational to spend it on my future self when it can provide >100x as much to others. To me, it doesn't seem important whether I'm in the global top 10% or bottom 10%, or whether the money I have is due to my efforts or to the place I was born. If it can provide others 100x as much, it just seems inefficient/irrational to allocate it to myself. Honestly, the post could end here, but there are other secondary reasons/perspectives on why I personally donate that I haven't seen commonly discussed. 2. Spending money is voting on how the global economy allocates its resources In 2017, I read Wealth [...] ---Outline:(00:22) 1. My money needs to be spent on something, might as well spend it on the most efficient things(01:09) 2. Spending money is voting on how the global economy allocates its resources(04:11) 3. I don't think it's as bad as some make it out to be(07:35) 4. I donate because I'm an atheist (/s) --- First published: December 15th, 2025 Source: https://forum.effectivealtruism.org/posts/CSKob9hGmWM7f7yv8/i-give-because-it-s-the-most-rational-way-to-spend-my-money --- Narrated by TYPE III AUDIO.

This year, I have given money to a range of EA cause areas. Most of it has either been towards global health and development, or EA infrastructure I believe does or could lead to effective fundraising for global health and development. The following is a list of very selfish personal reasons why I like to do this. I feel the selfless reasons have been adequately covered elsewhere, so I'm intentionally leaving them off. I get to ignore ineffective charity adverts. In order to genuinely convince myself that I am helping, I want to see things like well-regarded cost-effectiveness metrics. I do not like heartstring-tugging advertising or vague statements of "should", particularly to do with orphanages. They make me feel a bit ill. So I am glad that donating effectively gives me a very good justification to ignore them. It is a marker of my politics. I don't believe that poor people I don't know in rich countries are 100× more worthy of my help [i.e. worthy of help that's 100× less cost-efficient] than poor people in poor countries. This is because I don't believe anyone is 100× more worthy than anyone. Choosing to donate based on the cost-effectiveness of [...] ---Outline:(00:36) I get to ignore ineffective charity adverts.(01:02) It is a marker of my politics.(01:36) Giving expresses abundance.(02:32) I've stopped valuing things by how expensive they are.(03:17) People have stopped (openly) judging me about some of my life choices.(03:56) I get to hang out with cool people and be in the cool kids club.(04:16) It helps me genuinely care about helping people.(04:37) It motivates me at my job.(05:01) By giving effectively, I can do great things. --- First published: December 12th, 2025 Source: https://forum.effectivealtruism.org/posts/84PYRzFCeqZGfgv3N/why-i-donate-some-selfish-reasons --- Narrated by TYPE III AUDIO.

Note: This post was crossposted from the Coefficient Giving Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. It can feel hard to help factory-farmed animals. We're up against a trillion-dollar global industry and its army of lobbyists, marketeers, and apologists. This industry wields vast political influence in nearly every nation and sells its products to most people on earth. Against that, we are a movement of a few thousand full-time advocates operating on a shoestring. Our entire global movement — hundreds of groups combined — brings in less funding in a year than one meat company, JBS, makes in two days. And we have the bigger task. The meat industry just wants to preserve the status quo: virtually no regulation and ever-growing demand for factory farming. We want to upend it — and place humanity on a more humane path. Yet, somehow, we're winning. After decades of installing battery cages, gestation crates, and chick macerators, the industry is now removing them. Once-dominant industries, like fur farming, are collapsing. And advocates are building momentum toward bigger reforms for all farmed animals. Here are [...] --- First published: December 16th, 2025 Source: https://forum.effectivealtruism.org/posts/qTnsqYrmSTHawTNa6/ten-big-wins-in-2025-for-farmed-animals --- Narrated by TYPE III AUDIO.

Conscious Meaning We share every moment with trillions of other conscious beings. Some are much like us, and others experience the world very differently: creatures without a language to structure their thoughts, some who see broader spectrums of light, others who might experience the world in comparative slow motion. Each conscious moment immediately slips into the past, largely unobserved and forgotten. They fall through time like snow to become frozen in the past, always to have happened just as they did. Each conscious moment is transient and one small part of a vast whole, so one could see any individual as meaningless and insignificant. But every conscious moment is imbued with meaning: happiness that need not justify itself, and pains that consume any desire but to escape them. As individuals, we are not responsible for the state of the world. You did not choose to create disease, poverty and mental illness. You can't control nature, and you can't control the society around you. Many schools of philosophy disagree on exactly what our moral obligations are to others. Given this disagreement, we could default to radical scepticism that all attempts to decide what the right way to [...] ---Outline:(00:11) Conscious Meaning(02:06) Ovarian lottery(03:49) The Good we can do(05:18) Creating Balance(06:13) Voluntary Simplicity(08:12) Setting Salary based on the World's average income(10:12) Appendix: Let he who is without sin cast the first stone --- First published: December 11th, 2025 Source: https://forum.effectivealtruism.org/posts/wd7XsSwqWCzd2uzhq/the-further-pledge-voluntary-simplicity --- Narrated by TYPE III AUDIO.

The Giving What We Can research team is excited to share the results of our 2025 round of evaluations of charity evaluators and grantmakers! In this round, we completed two evaluations that will inform our donation recommendations for the 2025 giving season. As with our previous rounds, there are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement in a landscape in which there were previously no independent evaluations of evaluators' work. In this post, we share the key takeaways from our two 2025 evaluations and link to the full reports. In our conclusion, we explain our plans for future evaluations. Please also see our website for more context on why and how we evaluate evaluators. We look forward to your questions and comments! (Note: we will respond when we return from leave on the 8th of December) Key takeaways from each of our 2025 evaluations The two evaluators included in our 2025 round of evaluating evaluators were: GiveWell (full report) and Happier Lives Institute (full report). GiveWell Based on our evaluation, we have decided to continue including GiveWell's Top Charities, Top Charities Fund and All Grants Fund in GWWC's [...] ---Outline:(01:08) Key takeaways from each of our 2025 evaluations(01:25) GiveWell(03:18) Happier Lives Institute (HLI)(06:29) Conclusion and future plans --- First published: December 1st, 2025 Source: https://forum.effectivealtruism.org/posts/sAiHYuuGGT7qvne5P/gwwc-s-2025-evaluations-of-evaluators --- Narrated by TYPE III AUDIO.

And Effective Altruism has put my faith community to shame. The Beginning When I became a Christian at age 15, my life began to transform, but sadly my first external display was proclaiming no sex before marriage and saying F#$% a bit less (I've since resumed). Two years later at premed, Tuesday was my only night with no tutorial, so I joined a church group, which was weirdly labelled “Social Justice”. I had zero clue what this was about – maybe preventing bullying at school? Our leader Jo opened with a question I'll never forget. “I'm fundraising for World Vision and I told my chain-smoking friend I'll buy him a pack of cigs if he joins the fundraising effort. Do you guys think that's OK?” As we discussed the conundrum for the next hour, my heart jumped a little. Perhaps my time, skills and money could be useful for something more than just a comfortable life in the ‘burbs… Why do I Give? “When you give…” – Jesus. Christian motivations for giving vary wildly. Some mostly give to keep their church club solvent, others to save face, but most have deeper motivations. Here are mine. [...] ---Outline:(01:06) Why do I Give?(01:24) Gratitude and Joy(02:24) Utilitarian(03:14) More to come?(03:49) Christians aren't great at Giving(04:04) Father of Earning to Give?(05:07) We're not much better(06:08) Effective Altruist Giving Impresses me --- First published: December 10th, 2025 Source: https://forum.effectivealtruism.org/posts/QrQ9jwFSNoEdd373f/i-donate-because-i-am-christian --- Narrated by TYPE III AUDIO.

I keep thinking about what kind of identity would be useful for building a powerful animal advocacy movement. Here are 3 features of veganism that I often think about which make me doubt its usefulness. Too maximalist The official definition of veganism by the inventors of the term is the following: “Veganism is a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose” This basically amounts to "avoid doing bad things as far as possible." The threshold sits right below what is impossible. I think that is way too ambitious. Doing the best possible thing in every circumstance shouldn't be the criterion for inclusion in a social movement. We don't expect human rights activists to avoid all forms of exploitation and cruelty as far as possible to qualify as human rights activists. Some activists respond: "No, veganism is the bare minimum. The 'as far as possible and practicable' part means it's not about being perfect." But when I ask for examples of gratuitously harmful actions that veganism doesn't forbid, at most I hear about instances of accidental uses [...] ---Outline:(00:22) Too maximalist(03:37) No space for believers to sin(04:29) Too behaviour-focused --- First published: November 26th, 2025 Source: https://forum.effectivealtruism.org/posts/BX8hPeye2QRcyftRk/3-doubts-about-veganism --- Narrated by TYPE III AUDIO.

People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example, possibly IPOing sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven't started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see. It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around. CitizenTen, in "The Vultures Are Circling" (April 2022), puts it this way: The message is out. There's easy money to be had. And the vultures are coming. On many internet circles, there's been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I'm not even an EA, but I can pretend, as getting a 10k grant is [...] --- First published: December 10th, 2025 Source: https://forum.effectivealtruism.org/posts/vpPee6NgMbPcdsam3/the-funding-conversation-we-left-unfinished --- Narrated by TYPE III AUDIO.

Summary: Anthropic has many employees with an EA-ish outlook, who may soon have a lot of money. If you also have that kind of outlook, money donated sooner will likely be much higher impact. It's December, and I'm trying to figure out how much to donate. This is usually a straightforward question: give 50%. But this year I'm considering dipping into savings. There are many EAs and EA-informed employees at Anthropic, which has been very successful and is reportedly considering an IPO. The Manifold market estimates a median IPO date of June 2027. With a floated $300B valuation and many EAs among the early employees, the amount of additional funding could be in the billions. Efforts I'd most want to support may become less constrained by money than by capacity: as I've experienced in running the NAO, scaling programs takes time. This means donations now seem more valuable; ones that help organizations get into a position to productively apply further funding especially so. In retrospect I wish I'd been able to support 80,000 Hours more substantially before Open Philanthropy (now Coefficient Giving) began funding them; this time, with more ability to see what's likely [...] --- First published: December 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/rRBaP7YbXfZibSn3C/front-load-giving-because-of-anthropic-donors --- Narrated by TYPE III AUDIO.
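The front-loading logic can be made concrete with a toy model (my construction, not the author's): if an organization's impact has diminishing, say logarithmic, returns to funding, a marginal dollar given before a large influx is worth far more than one given after. All numbers below are illustrative assumptions.

```python
# Toy model of front-loading under diminishing (logarithmic) returns.
# All numbers are illustrative assumptions, not figures from the post.
def marginal_value(funding: float) -> float:
    """Marginal impact of one extra dollar when impact ~ log(funding)."""
    return 1.0 / funding

org_budget_now = 2_000_000   # hypothetical org budget today
influx = 10_000_000          # hypothetical post-IPO funding surge

ratio = marginal_value(org_budget_now) / marginal_value(org_budget_now + influx)
print(f"A dollar donated now is worth ~{ratio:.0f}x a dollar donated later")  # ~6x
```

The post's stronger point is about capacity rather than arithmetic: dollars now help organizations grow to where they can productively absorb the later money at all.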

Ronny Chieng strikes again, this time featuring Peter Wildeford and the risks from AI on the Daily Show. --- First published: December 5th, 2025 Source: https://forum.effectivealtruism.org/posts/epuSKFdGD82cxZAGd/peter-wildeford-talks-about-risks-from-ai-on-the-daily-show --- Narrated by TYPE III AUDIO.

I've spoken with hundreds of entomologists at conferences the world over. While there's clearly some self-selection (not everyone wants to talk to a philosopher), my experience is consistent: most think it's reasonable to care about the welfare of insects. Entomologists don't regard it as the last stop on the crazy train; they don't worry they're getting mugged; they don't think the idea is just utilitarianism run amok. Instead, they see some concern for welfare as stemming from a common-sense commitment to being humane in our dealings with animals. Let's be clear: they embrace “some concern,” not “bugs have rights.” Entomologists generally believe it's important to do invasive studies on insects, to manage their populations, to kill them to document their diversity. But given the choice between an aversive and a non-aversive way of euthanizing insects, most prefer the latter. Given the choice between killing fewer insects and more, most prefer fewer. They don't want to end good lives unnecessarily; they don't want to cause gratuitous suffering. It wasn't always this way. But the science of sentience is evolving; attitudes are evolving too. These people work with insects every day; they constantly face choices about how to catch insects, how [...] --- First published: November 23rd, 2025 Source: https://forum.effectivealtruism.org/posts/4FncrGhQKcuFthxiR/caring-about-bugs-isn-t-weird --- Narrated by TYPE III AUDIO.

We, the AIM Board and outgoing CEO Joey Savoie, are delighted to announce that Samantha Kagel has been selected as AIM's new CEO, effective December 1, 2025. Over the last few months, we have been engaged in a highly important activity: finding AIM's next CEO. This was not an easy position to fill, as we sought someone who could lead the organization to high growth and impact while retaining the core elements that have made AIM unique. We were committed to conducting a thorough search and put out a public call for candidates, considering over 100 applicants from both public applications and referrals. We evaluated external candidates, internal team members, and past charity graduates, and ultimately identified Samantha as the candidate who we believe will best execute the next stages of AIM's development. About Samantha Samantha has served on AIM's executive team as Chief Programs Officer for the past 1.5 years, leading the strategy and delivery of our Charity Entrepreneurship function. In this role, she has demonstrated exceptional capability across multiple dimensions: building collaborative teams, driving strategic execution, and maintaining unwavering focus on impact. Before joining the executive team, Samantha successfully filled nearly every role across our organization [...] ---Outline:(01:00) About Samantha(02:13) A note from our incoming CEO, Samantha Kagel --- First published: December 1st, 2025 Source: https://forum.effectivealtruism.org/posts/r8GSGnay6scK7Jmb5/announcing-the-new-aim-ceo --- Narrated by TYPE III AUDIO.

Cross-posted from Good Structures. For impact-minded donors, it's natural to focus on doing the most cost-effective thing. Suppose you're genuinely neutral on what you do, as long as it maximizes the good. If you're donating money, you want to look for the most cost-effective opportunity (on the margin) and donate to it. But many organizations and individuals who care about cost-effectiveness try to influence the giving of others. This includes: Research organizations that try to influence the allocation or use of charitable funds. Donor advisors who work with donors to find promising opportunities. People arguing to community members on venues like the EA Forum. Charity recommenders like GiveWell and Animal Charity Evaluators. These are endeavors where you're specifically trying to influence the giving of others. And when you influence the giving of others, you don't get full credit for their decisions! You should only get credit for how much better the thing you convinced them to do is compared to what they would otherwise do. This is something that many people in EA and related communities take for granted and find obvious in the abstract. But I think the implications of this aren't always fully digested by the [...] ---Outline:(03:34) Impact is largely a function of what the donor would have done otherwise.(04:36) Is improving the use of effective or ineffective charitable dollars easier?(06:14) How do people respond to these lower impact interventions?(08:14) What are the implications of paying a lot more attention to funding counterfactuals?(10:21) Objections to this argument. --- First published: November 12th, 2025 Source: https://forum.effectivealtruism.org/posts/YrMFHJm7mbswJd7Me/the-overall-cost-effectiveness-of-an-intervention-often --- Narrated by TYPE III AUDIO.
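The post's credit rule is simple enough to write down. In the sketch below, the rule itself follows the post (credit only for the improvement over the donor's counterfactual); the dollar amounts and effectiveness multipliers are hypothetical.

```python
# Counterfactual credit for influencing a donation, per the rule in the
# post: you get credit only for the improvement over what the donor
# would have done anyway.  Amounts and multipliers are hypothetical.
def influence_credit(amount: float, new_mult: float, counterfactual_mult: float) -> float:
    return amount * (new_mult - counterfactual_mult)

# Moving $10k from a typical charity (1x) to a top charity (10x)
print(influence_credit(10_000, 10, 1))   # 90,000 units of good

# Moving $10k from an already-effective charity (8x) to a top one (10x)
print(influence_credit(10_000, 10, 8))   # 20,000 units of good
```

Note how the same $10,000 redirected to the same top charity is worth over four times as much when the counterfactual is an ineffective donation; that asymmetry is what the post's later sections turn on.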